gloomyandy wrote:The video explains why they use two cameras. It allows them to track distance to the object without having to know the size of the object. They show how a single cam can be used if you know the size of the object....
Ah, I shouldn't have been watching it without sound -- now it's all clear.
I'm now convinced the video author used stereo vision. That said, I think the demo is a bit misleading, since you could do the ball tracking in this case with just 1 cam, no compass, and no tilt sensor (by just keeping the object in the center of the cam's view and moving the motors accordingly).
Anyway, using just 1 cam and a known object size, this 3D ranging method is the same one the PlayStation Move controller uses.
Given 2 NXTcams, I'd personally do intrinsic and extrinsic camera calibration using Bouguet's Camera Calibration Toolbox for MATLAB or OpenCV's 3D calibration functions (which are based on Bouguet's work anyway). A stereo vision example can be found here (see the pic at the bottom of the page).
For calibration you need at least 4 points / trackable objects on a plane. Show this plane to both NXTcams from various positions and angles, and record the tracked marker locations. Save them to NXT memory and export them to a PC, then feed them to one of the calibration tools mentioned above.
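To make the data-collection step concrete, here is a minimal Python sketch of how the logged NXTcam readings might be paired with the known target geometry before handing them to a calibration routine. The marker spacing, units, and log format are my own assumptions, not anything from the NXTcam firmware:

```python
# Hypothetical planar target: 4 markers on a 10x10 cm square, z = 0.
TARGET_POINTS = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0),
                 (10.0, 10.0, 0.0), (0.0, 10.0, 0.0)]

def collect_views(logged_views):
    """Pair each logged view with the target geometry.

    logged_views: list of dicts like
        {"cam1": [(x, y), ...4 pixel coords...], "cam2": [(x, y), ...]}
    Returns (object_points, image_points_cam1, image_points_cam2): the three
    parallel lists a calibration engine (e.g. Bouguet's toolbox or OpenCV's
    stereo calibration) expects, one entry per usable view.
    """
    obj_pts, img1, img2 = [], [], []
    for view in logged_views:
        # Keep only views where both cams tracked all 4 markers.
        if len(view["cam1"]) == 4 and len(view["cam2"]) == 4:
            obj_pts.append(TARGET_POINTS)
            img1.append(view["cam1"])
            img2.append(view["cam2"])
    return obj_pts, img1, img2
```

The point is simply that every view contributes the same 4 known 3D target points plus the 2D pixel locations each cam reported for them.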
After that, you have a calibrated stereo system with the transformation you need to get from cam1's coordinate system to cam2's (i.e. a rigid motion transformation covering 6 degrees of freedom: 3 translation + 3 rotation). If you align your cameras precisely, there won't be any rotation, only a displacement along 1 direction (the baseline). But that doesn't really matter...
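For illustration, this is what that rigid motion looks like as code: R * p + t. To keep it short I only model a rotation about the z-axis; a real calibration gives all 3 rotation angles. The function name and units are my own:

```python
import math

def cam1_to_cam2(point, yaw_deg, translation):
    """Map a point from cam1 coordinates into cam2 coordinates.

    Rigid motion R * p + t, with R restricted here to a rotation
    about the z-axis (yaw) for brevity.
    """
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = point
    tx, ty, tz = translation
    return (c * x - s * y + tx,
            s * x + c * y + ty,
            z + tz)
```

In the precisely-aligned case above, yaw is 0 and the translation is just the baseline, e.g. `cam1_to_cam2((1, 2, 3), 0, (-10, 0, 0))` shifts the point 10 cm along x and nothing else.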
Given this calibration data, you can work out the world 3D coordinates of any object / marker that both NXTcams recognize simultaneously -- barring certain degenerate configurations...
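In the simplest (perfectly aligned, rectified) case, that 3D recovery reduces to the classic disparity formula Z = f * B / (x_left - x_right). A sketch, assuming a focal length in pixels and a baseline in cm (both values below are made up):

```python
def triangulate_rectified(x_left, x_right, y, f_px, baseline):
    """Recover (X, Y, Z) for a marker seen by both cams.

    Assumes an ideal rectified stereo pair: no rotation between cams,
    baseline along x, pixel coords measured from the principal point.
    Returns None in the degenerate case (zero or negative disparity,
    i.e. point at infinity or mismatched markers).
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return None
    Z = f_px * baseline / disparity     # depth from disparity
    X = Z * x_left / f_px               # back-project the left-cam pixel
    Y = Z * y / f_px
    return (X, Y, Z)
```

For example, with f = 500 px, a 10 cm baseline, and a 50 px disparity, the marker sits 100 cm away. The general (unaligned) case is the same idea, just with the full rotation + translation from the calibration folded in.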