Simulation and Calibration

You have a mobile robot with an active imaging system. You want to build a primitive simulator so you can test basic vision algorithms as well as more advanced issues like camera control (tracking, say). You also want to know the camera parameters of the real system, so you need to write a camera calibration program. Luckily, your simulation can help....

Forward Kinematics and Imaging

Your camera has a CCD array of pixels on square centers, 240 high by 360 wide. The array is 12 mm high and proportionally wide. The camera has a zoom lens whose focal length varies from 10 to 100 mm. It is a perfect pinhole camera with no radial distortion, etc. The LAB coordinate system is at the base of the robot. The camera's pan-tilt platform is on a rotating shaft, raised 1000 mm from the origin of LAB along the z axis. The rotating shaft supporting the camera is 20 mm long above the tilt platform. The system has a ``spherical wrist'': the axes for the platform rotation (A in the figure), the tilt (B), and the pan (C) all pass through a single point. (Does this make your job easier or harder?) Forward kinematics is the mathematics that translates the geometrical parameters (here d, e, A, B, C) into a description of the end effector (here, the camera's coordinate system, telling how it has been rotated and translated with respect to LAB).
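To make the forward kinematics concrete, here is a minimal sketch of one way to compose the LAB-to-camera transform from homogeneous rotations and translations. The axis assignments (A about the LAB z axis, B a tilt about the rotated y axis, C a pan about the rotated z axis) and the exact placement of d = 1000 mm and e = 20 mm are my assumptions, not part of the handout, so adapt them to your reading of the figure.

    % One possible LAB-to-camera transform.  Axis assignments are assumptions
    % (see above); angles are in radians, lengths in mm.
    function T = lab_to_camera(A, B, C, d, e)
      Rz = @(t) [cos(t) -sin(t) 0 0; sin(t) cos(t) 0 0; 0 0 1 0; 0 0 0 1];
      Ry = @(t) [cos(t) 0 sin(t) 0; 0 1 0 0; -sin(t) 0 cos(t) 0; 0 0 0 1];
      Tz = @(s) [1 0 0 0; 0 1 0 0; 0 0 1 s; 0 0 0 1];
      cam_in_lab = Tz(d) * Rz(A) * Ry(B) * Rz(C) * Tz(e);  % camera frame expressed in LAB
      T = inv(cam_in_lab);                                 % takes LAB points into camera coordinates
    end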

Write a simulator that shows the image produced by the camera in u-v (horizontal and vertical) discrete image space. The input to your camera is a matrix of real-number [x,y,z] (or [x,y,z,1], if you find that easier) points in LAB. Each point projects to one pixel in the camera (unless it falls outside the CCD array).
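Here is one sketch of the projection step. The 0.05 mm pixel size follows from 12 mm / 240 rows; the principal point at the array center, the 1-based indexing (u = column, v = row), and the function names are my own conventions, so pick whatever you prefer and stay consistent.

    % Project LAB points (n-by-3) to pixel coordinates.  T is the LAB-to-camera
    % transform from the sketch above, f is the focal length in mm.  Points behind
    % the camera or off the 240-by-360 array are dropped.
    function uv = project(T, f, pts)
      pixel = 12 / 240;                         % mm per pixel (square pixels)
      P = T * [pts, ones(size(pts,1), 1)]';     % points in camera coordinates, 4-by-n
      x = P(1,:); y = P(2,:); z = P(3,:);
      u = round(f * x ./ (pixel * z) + 180.5);  % column index, 1..360
      v = round(f * y ./ (pixel * z) + 120.5);  % row index, 1..240
      keep = z > 0 & u >= 1 & u <= 360 & v >= 1 & v <= 240;
      uv = [u(keep)' v(keep)'];
    end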

Write a simple (or fancy, if you want) interface to let you change camera parameter values (rotate, pan, tilt, zoom) and show the resulting image. Zooming in should make the image bigger, panning left should make the scene move to the right, etc.

Just in case, write your simulator so that it can be ``called'' in a loop by a controlling program that can change parameters (in the camera: rotate, pan, tilt, zoom; in the world: position of world points) in response to something it senses in the image. For instance, it might change pan and tilt to track a single point that starts in the center of the field of view but moves over time. We might well use this simulation code in future projects, so keep it flexible and reusable.
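For instance, a controlling loop might look roughly like this. The function names (lab_to_camera, project, moving_point), the proportional gain, and the signs of the corrections are all assumptions on my part; in particular the signs depend entirely on your axis conventions.

    % A hypothetical tracking loop: nudge pan and tilt so the tracked point
    % drifts back toward the image center (180, 120).
    pan = 0; tilt = 0; zoom = 50;               % camera parameters (zoom = focal length, mm)
    gain = 0.001;                               % radians per pixel of error, tuned by hand
    for step = 1:200
      world = moving_point(step);               % 1-by-3 LAB point supplied by the ``world''
      uv = project(lab_to_camera(0, tilt, pan, 1000, 20), zoom, world);
      if ~isempty(uv)
        pan  = pan  + gain * (uv(1) - 180);     % sign depends on your conventions
        tilt = tilt + gain * (uv(2) - 120);
      end
    end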

Calibration

Write a calibration routine as in Section 9.2.4 of the text. Use it and your simulator to determine a camera matrix for your camera. Check the results in various ways (you can apply the matrix to the scene points, for instance). Theoretically, the matrix you derive should reveal certain fixed camera dimensions and parameters in your simulation, so you should be able to tell whether calibration can discover your camera's geometry. You should also explore the effects of noise (say, badly measured scene points) and the effect of mapping real coordinates onto the discrete pixel grid. You can't get around the latter problem, but you can investigate what happens when you use more points as input (to get a least-squares solution). Also: how are you going to demonstrate that one camera matrix is more accurate than another? I'd guess some form of comparison with the real camera would work...
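One common way to set the least-squares problem up is sketched below; the text's Section 9.2.4 may formulate it a little differently (for example by fixing one matrix entry to 1 instead of using the SVD), and the variable names are mine.

    % DLT-style calibration sketch: XYZ is n-by-3 scene points, uv is the n-by-2
    % corresponding pixels, n >= 6.  The 3-by-4 camera matrix M is recovered,
    % up to scale, as the null vector of the stacked projection equations.
    function M = calibrate(XYZ, uv)
      n = size(XYZ, 1);
      A = zeros(2*n, 12);
      for i = 1:n
        X = XYZ(i,:); u = uv(i,1); v = uv(i,2);
        A(2*i-1,:) = [X 1 0 0 0 0 -u*X -u];
        A(2*i,  :) = [0 0 0 0 X 1 -v*X -v];
      end
      [U, S, V] = svd(A);
      M = reshape(V(:,end), 4, 3)';   % camera matrix, defined up to scale
    end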

If the input to the calibration is points on a plane, the calculation simplifies (there are now only eight unknowns instead of the eleven in the three-dimensional case). This image-plane to scene-plane mapping is called a collineation or, more commonly, a homography. Given the 11-variable case, the 8-variable case is trivial, so you might as well do it! Again, you can make your own data for testing and debugging. Also, here are some real data of this sort, with the xyPoints.txt files containing the real-world points and the uvPoints.txt files containing the corresponding image points.
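The planar version of the same sketch, under the same caveats (my variable names, SVD solution up to scale): xy would hold scene-plane points, for instance from xyPoints.txt, and uv the corresponding pixels from uvPoints.txt.

    % Homography sketch: xy is n-by-2 scene-plane points, uv is n-by-2 pixels,
    % n >= 4.  H maps [x y 1]' on the scene plane to [u v 1]', up to scale.
    function H = homography(xy, uv)
      n = size(xy, 1);
      A = zeros(2*n, 9);
      for i = 1:n
        x = xy(i,:); u = uv(i,1); v = uv(i,2);
        A(2*i-1,:) = [x 1 0 0 0 -u*x -u];
        A(2*i,  :) = [0 0 0 x 1 -v*x -v];
      end
      [U, S, V] = svd(A);
      H = reshape(V(:,end), 3, 3)';
    end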

The nice thing about the plane-to-plane case is that each image pixel maps onto exactly one point on the scene plane instead of onto a 3-D line in scene space. This means you can get an ``inverse camera'' matrix that tells where on the 2-D plane any image point is. This is useful for figuring out the location of objects out in front of you if you assume their bases are on the ground plane supporting the robot.
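For example, with H from the sketch above (again, my own names), going from a pixel back to the scene plane is just a matrix inverse and a divide:

    Hinv = inv(H);                       % ``inverse camera'' for the plane-to-plane case
    p = Hinv * [u; v; 1];                % u, v: some pixel of interest
    xy_ground = p(1:2) / p(3);           % its location on the scene (ground) plane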

Here's some MATLAB code for the (inverse) homography problem. It is just quick and dirty but shows how easy it is: MatLab Code.

Turn In

Simulator: A mathematical description of your model, i.e., the equations you used to implement the simulator given the system parameters. Remark on anything cute, smart, or unusual you did. Include descriptions of real-world ``scenes'' and the resulting images to demonstrate that your simulator works.

Calibration: As above, with justification that your calibration outputs are useful and correct. The inverse camera matrix for the homography case can be used to check your algorithm as well as to provide useful applications.

Also turn in a README document that outlines the structure of your code: who calls what, names of modules and what they do, information on how to run the programs (arguments, data file formats, etc.).

Hints

Simulator: Keep all the static parameters (dimensions) of your system symbolic so you can easily change the geometry. This assignment is easy in MATLAB; otherwise there are issues of matrix manipulation and solving linear equations that you have to address somehow. Test on simple cases, like imaging a cube. Especially useful is lining up one of the cube's edges with the camera's central axis so you are viewing right down the edge. Make sure everything behaves consistently with your understanding of the geometrical and optical setup. In your writeup it is up to you to demonstrate to us that your simulator works. We don't want to read code; we want to be shown examples.
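For what it is worth, here is one way to generate cube test data with the earlier sketches; the cube size and placement, and the pan/tilt/zoom used to bring it into view, are arbitrary choices of mine and depend on the axis conventions assumed above. Lining an edge up with the optical axis is then a matter of adjusting the angles.

    % Eight corners of a 200 mm cube sitting on the ground plane, about 2 m out
    % along the LAB x axis, imaged and plotted in image coordinates.
    s = 200;
    corners = [0 0 0; s 0 0; 0 s 0; s s 0; 0 0 s; s 0 s; 0 s s; s s s];
    cube = corners + repmat([2000 -s/2 0], 8, 1);
    uv = project(lab_to_camera(0, pi/2, 0, 1000, 20), 10, cube);
    plot(uv(:,1), uv(:,2), 'o'); axis([1 360 1 240]); axis ij;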

---

This page is maintained by CB.

Last update: 15.8.02