
This workspace augmented reality (AR) system serves as a test-bed for new AR techniques, new applications, and human-perception research. It is a stereo, video-see-through, head-mounted display (HMD) based system. The system runs in real time with video and graphics kept in sync, and the end-to-end delay is less than 100 ms.

This system is a joint project of the Department of Computer Science at the University of Rochester, the University of Rochester Medical Center, and Siemens Corporate Research.


System Components

System front view: The system consists of two SGI Visual Workstation 540s. Each SGI machine takes one analog video input and one digital video (SDI) input. Since we have three video streams, we need two computers and an external A/D converter (a Miranda ASD101i).
The system has three cameras. Two Panasonic KPS-1000 cameras capture the scene; these are lipstick-sized, lightweight color cameras. With a 15 mm telephoto lens, each one has a FOV of 30°, which matches the FOV of the HMD (a Kaiser Proview 35). A modified Sony black-and-white camera, which operates in the infrared, serves as the tracking camera; the small PCB attached to it is the IR illuminator. An adjustable aluminum mount holds the three cameras together. The convergence angle between the two scene cameras can be adjusted according to the working distance, and the angle between the tracking camera and the plane of the two scene cameras can also be changed, so that the tracking pattern can be placed at a convenient location.
Cameras mounted on the HMD

Calibration, Tracking and Augmenting

A metal plate with retro-reflective markers is used for both calibration and tracking. Tsai's method is used to calibrate all three cameras. Once each camera has been calibrated individually, the relative poses between the tracking camera and the scene cameras are computed. At run time we again use Tsai's method to determine the pose of the tracking camera (which is effectively the pose of the HMD). The software runs in a server/client fashion: the server computes the pose of the tracking camera and broadcasts it to two clients, which handle the augmentation for the left and right eyes, respectively.
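
To make the pose bookkeeping concrete, here is a minimal sketch of how the fixed transform between the tracking camera and one scene camera can be obtained from the individual calibrations and then reused at run time. The Mat4 type and helper functions are hypothetical stand-ins; our actual data structures may differ.

    // Minimal sketch of the pose bookkeeping (hypothetical Mat4 type and helpers).
    // Poses are 4x4 rigid transforms mapping world coordinates to camera coordinates.

    struct Mat4 { double m[4][4]; };

    // Product of two 4x4 matrices.
    Mat4 mul(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += a.m[i][k] * b.m[k][j];
        return r;
    }

    // Inverse of a rigid transform [R | t], i.e. [R^T | -R^T t].
    Mat4 rigidInverse(const Mat4& p) {
        Mat4 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r.m[i][j] = p.m[j][i];                  // transpose the rotation
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r.m[i][3] -= r.m[i][j] * p.m[j][3];     // -R^T t
        r.m[3][3] = 1.0;
        return r;
    }

    // Offline: both cameras are calibrated against the same marker plate (the same
    // world frame), so the scene camera's pose relative to the tracking camera is
    // fixed as long as the camera mount stays rigid.
    Mat4 computeSceneFromTrack(const Mat4& trackPose, const Mat4& scenePose) {
        return mul(scenePose, rigidInverse(trackPose));
    }

    // Run time: Tsai's method gives a fresh tracking-camera pose every frame;
    // composing it with the stored relative pose yields the scene-camera pose.
    Mat4 computeScenePose(const Mat4& sceneFromTrack, const Mat4& trackPoseNow) {
        return mul(sceneFromTrack, trackPoseNow);
    }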

Each client deduces the pose of its scene camera from the pose of the tracking camera. It then translates and rotates the virtual OpenGL camera so that it coincides with the scene camera. Graphical objects are rendered through this 'calibrated' OpenGL camera, and the live video is texture-mapped onto the background according to the view of the scene camera.
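
A sketch of what one client's per-frame rendering could look like in legacy OpenGL is given below. The texture handle, the projection parameters, and drawVirtualObjects are illustrative assumptions rather than our exact code, and a computer-vision-to-OpenGL axis flip of the pose may also be needed in practice.

    #include <GL/gl.h>
    #include <GL/glu.h>

    void drawVirtualObjects();   // application-specific geometry (assumed)

    // Illustrative per-frame rendering for one eye. videoTex holds the current
    // camera image; worldToCam is the row-major scene-camera pose (see the sketch
    // in the previous section).
    void renderAugmentedFrame(GLuint videoTex, const double worldToCam[4][4]) {
        // 1. Background: the live video fills the viewport, drawn without depth.
        //    (The texture may need a vertical flip depending on the video origin.)
        glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(0, 1, 0, 1);
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, videoTex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(1, 0);
        glTexCoord2f(1, 1); glVertex2f(1, 1);
        glTexCoord2f(0, 1); glVertex2f(0, 1);
        glEnd();
        glDisable(GL_TEXTURE_2D);

        // 2. Virtual camera: projection from the calibrated intrinsics (placeholder
        //    values here), modelview from the tracked scene-camera pose.
        glMatrixMode(GL_PROJECTION); glLoadIdentity();
        gluPerspective(30.0, 4.0 / 3.0, 0.1, 10.0);
        double colMajor[16];
        for (int c = 0; c < 4; ++c)
            for (int r = 0; r < 4; ++r)
                colMajor[c * 4 + r] = worldToCam[r][c];  // OpenGL is column-major
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixd(colMajor);

        // 3. Virtual objects, depth-tested against each other.
        glClear(GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        drawVirtualObjects();
    }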


Timing and Performance

Timing diagram: The timing of the system is shown above; the left side is the server and the right side is a client. Starting from the moment the tracking camera delivers its frame, the markers are found about 2 ms later, and Tsai's method takes another 8 ms to compute the pose. Meanwhile, a sync handshake between the server and the clients makes sure the right frame is rendered. In other words, we make sure that the virtual objects that augment a real scene are rendered from the camera pose that was valid when that particular image of the real scene was taken. This synchronization, together with the stable pose estimation, results in a very natural, believable impression of the augmented world: the virtual objects do not swim, jump, or jitter; they are firmly anchored in the scene.

We measured the time from having a video frame stored in memory to swapping the augmented frame into the front framebuffer to be 24 to 30 ms. The camera takes about one frame time (33 ms) to capture a frame, and it may take about another frame time to read the data into memory. Once the rendered image is in the front framebuffer, it is displayed at the next vertical refresh of the monitor; at a 60 Hz refresh rate, that can take up to 16 ms. Overall, we estimate the latency of the system, i.e. the time difference between a live action and its display in the augmented view, to be about 0.1 s. This latency is small enough to become apparent only during fast movements.
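
As a back-of-the-envelope check, the latency estimate can be assembled from the figures quoted above; the split between best and worst case (i.e. whether readout and refresh overlap with other stages) is an assumption for illustration.

    #include <cstdio>

    int main() {
        // Figures quoted in the text.
        const double capture_ms     = 33.0;  // camera takes ~1 frame time to capture
        const double readout_ms     = 33.0;  // up to another frame time into memory
        const double render_min_ms  = 24.0;  // frame in memory -> augmented frame swapped
        const double render_max_ms  = 30.0;
        const double refresh_max_ms = 16.0;  // wait for the next 60 Hz vertical refresh

        // Best case assumes readout and refresh overlap with other stages (assumption).
        double best  = capture_ms + render_min_ms;
        double worst = capture_ms + readout_ms + render_max_ms + refresh_max_ms;
        std::printf("estimated latency: %.0f-%.0f ms (about 0.1 s)\n", best, worst);
        return 0;
    }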

Augmented Scene

Augmented scene: The video on the left shows a user lighting a virtual candle. Although it is hard to convey the sense of 3D, you can see that the candle (and the flame) stays firmly anchored in the scene as the user moves around.

3D Perception

People perceive depth through physiological cues, such as binocular disparity and motion parallax (the kinetic cue), and through psychological cues, such as perspective, interposition (occlusion), and size constancy. For a given user, one cue may be stronger than another. Our system is binocular, so the user gets a very good 3D sense from the disparity, and the 3D perception is further enhanced by the kinetic cue when the user moves his or her head around. However, psychological cues, especially occlusion, interfere with most of our testers' perception. In the candle-lighting video above, the user has a very good idea of where the candle is just by looking through the HMD. But when the user reaches out a hand to light it, the red bar always appears to be behind the candle, no matter whether it is in front of or behind the candle in space, because the red bar belongs to the background image. This incorrect occlusion interferes with the user's 3D perception. We can solve this specific problem by tracking the red bar, but ultimately a dense depth map of the real scene must be known to solve the occlusion problem in general. It is of course very hard to obtain a dense depth map on the fly, let alone in real time (a sketch of how such a depth map would be used, once available, is given below). The way we are trying to solve, or at least alleviate, this problem is to counter misleading psychological cues with other psychological cues; see the next section for details.
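
For illustration, here is a minimal sketch of how a dense depth map of the real scene would be used if it were available. It assumes legacy OpenGL and a standard perspective projection; the helper names and the conversion from eye-space distance to window depth are illustrative, not part of our system.

    #include <GL/gl.h>

    // Convert an eye-space distance ze (> 0, along the viewing direction) to the
    // window-space depth in [0,1] produced by a standard perspective projection
    // with near plane n and far plane f.
    double eyeToWindowDepth(double ze, double n, double f) {
        double ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * ze);
        return 0.5 * ndc + 0.5;
    }

    // After the live video has been drawn into the color buffer, write the real
    // scene's depth map into the depth buffer only; virtual objects rendered
    // afterwards with depth testing enabled are then occluded by real surfaces
    // (such as the red bar or the user's hand) that are closer to the camera.
    void writeRealSceneDepth(const float* windowDepth, int width, int height) {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // leave the video untouched
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_ALWAYS);          // overwrite the cleared depth unconditionally
        glRasterPos2i(0, 0);             // assumes an identity/ortho setup mapping (0,0)
                                         // to the lower-left corner of the viewport
        glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, windowDepth);
        glDepthFunc(GL_LESS);            // restore the default test for virtual objects
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }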

Application: IGS

One of the applications of the system is image-guided surgery (IGS). An AR system gives the surgeon a kind of X-ray vision: the surgeon can see augmented tissue, say a tumor, rendered inside the patient's head.
Triangular mesh of a head phantom: A triangle set of the skin model is superimposed on a head phantom. Notice the accurate overlay of the skin markers.
Tumor and a needle rendered: A tumor is rendered together with a biopsy needle.
Tumor, needle, and skin model rendered: The problem with the picture above is that some users cannot perceive that the tumor is inside the head phantom, although it is. The cause is the occlusion problem mentioned before: the rendered tumor always appears in front of the image of the head phantom, which conflicts with the correct 3D perception from stereo. What we did is to also render the skin model, which has the right occlusion relationship with the tumor; this helps the user perceive the true position of the tumor. (One possible OpenGL realization of this occlusion relationship is sketched at the end of this section.)
Movie showing interaction: The kinetic cue can also help the user build the right spatial perception. The video on the left shows a user moving around the head phantom; notice how the relative positions of the tumor and the ventricles change with the viewpoint.
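
As referenced above, here is one possible way to realize the occlusion relationship between the skin model and the tumor. The sketch assumes legacy OpenGL and that the skin is drawn as a wireframe triangle mesh; drawSkinMesh and drawTumor are stand-ins for the application's geometry, and our renderer may differ in detail.

    #include <GL/gl.h>

    void drawSkinMesh();   // the triangle set of the skin model (assumed)
    void drawTumor();      // tumor (and biopsy needle) geometry (assumed)

    // Sketch: establish a correct occlusion relationship between the skin model
    // and the tumor. Drawing the skin as a wireframe mesh keeps the tumor visible
    // through the gaps, while the depth buffer lets skin triangles that lie in
    // front of the tumor hide the parts of it behind them.
    void drawTumorWithSkinOccluder() {
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);

        // 1. Skin model first, as a wireframe, writing depth.
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        drawSkinMesh();
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

        // 2. Tumor second; the depth test hides whatever falls behind the skin.
        drawTumor();
    }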
