Experiments with a Memory-Based Recognition System

Variation in Performance with Size of Database

One measure of the performance of an object recognition system is how that performance changes as the number of classes increases. To test this, we obtained test and training images for a number of objects, and built 3-D recognition databases using different numbers of them. The objects were chosen to be ``different'' in the sense that they were easy for people to distinguish on the basis of shape. Data was acquired for 24 different objects and 34 hemispheres. The number of hemispheres is less than twice the number of objects because several of the objects either were not realistically sculpted on the bottom or were painted flat black there, which made acquiring training data against a black background difficult. The training objects are shown below.

[Image: the 24 training objects]

Clean image data was obtained automatically using a robot-mounted camera and a computer-controlled turntable covered in black velvet. Training data consisted of 53 images per hemisphere, spread fairly uniformly, with approximately 20 degrees between neighboring views. The test data consisted of 24 images per hemisphere, positioned between the training views and taken under the same good conditions. Note that this is essentially a test of invariance under out-of-plane rotation, the most difficult of the 6 orthographic freedoms. The planar invariances are guaranteed by the representation once above the level of feature extraction, and experiments testing this have shown no degradation under translation, rotation, and scaling of up to 50%. Larger changes in scale have been accommodated using a multi-resolution feature finder, which gives us 4 or 5 octaves at the cost of doubling the size of the database.
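
To get a feel for the sampling density, the following minimal sketch generates 53 roughly uniform directions on a hemisphere and checks their spacing. The Fibonacci-spiral layout is an illustrative assumption; the actual apparatus followed its own robot/turntable schedule.

    import numpy as np

    def hemisphere_views(n=53):
        # Roughly uniform view directions on the upper hemisphere,
        # laid out along a Fibonacci spiral (illustrative stand-in).
        i = np.arange(n)
        z = 1.0 - i / n                      # cos(elevation), pole to equator
        r = np.sqrt(1.0 - z * z)
        golden = (1 + 5 ** 0.5) / 2
        phi = 2 * np.pi * i / golden
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    views = hemisphere_views()
    dots = views @ views.T
    np.fill_diagonal(dots, -1.0)             # ignore self-matches
    spacing = np.degrees(np.arccos(dots.max(axis=1)))
    print(round(spacing.mean(), 1))          # roughly 20 degrees for n = 53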

We ran tests with databases built for 6, 12, 18, and 24 objects, and obtained overall success rates (correct classification on forced choice) of 99.6%, 98.7%, 97.4%, and 97.0% respectively. The worst cases were the horse and the wolf in the 24-object test, with 19/24 and 20/24 correct respectively. On inspection, some of these pictures were difficult for human subjects as well. No other object had more than 2 misses out of its 24 (hemisphere) or 48 (full sphere) test cases. Results are shown below.

[Table: forced-choice success rates for the 6-, 12-, 18-, and 24-object databases]

Overall, the performance is fairly good. In fact, as of the 1999 date of these experiments, these represent the best results presented anywhere for this sort of problem. A naive estimate of the theoretical error trends in this sort of matching system would lead us to expect, at best, a linear increase in the error rate as the size of the database increases. Our results are consistent with this, though we don't have enough data points to provide convincing support for a linear trend.
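
The linear expectation can be seen with a back-of-the-envelope argument (ours, for illustration; not a derivation from the system itself): if each incorrect class independently outscores the true class with some small probability p, then with N classes in the database the forced-choice error rate is approximately

    P(error) = 1 - (1 - p)^(N-1) ≈ (N-1) p    (for small p),

which grows linearly in the number of competing classes.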

The resource requirements are high, but scale more or less linearly with the size of the database. The system is memory intensive, and currently uses about 3 Mbytes per hemisphere. This could be reduced using a number of schemes, since many of the stored patterns have similarities. The time to identify an object depends more or less linearly on the number of key features fed to the system and on the size of the database. At the moment, overall recognition times on a single-processor UltraSPARC are about 20 seconds for the 6-object database, and about 2 minutes for the 24-object database. This could also be improved substantially by pushing on the indexing methods. The process is also efficiently parallelizable, simply by splitting the database among processors.
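
A minimal sketch of that database-splitting parallelism, assuming a toy feature-voting scorer; score(), classify(), and the feature strings are illustrative stand-ins, not the system's actual interfaces:

    from concurrent.futures import ProcessPoolExecutor

    def score(shard, key_features):
        # Toy evidence accumulation over one database shard: count how
        # many of the image's key features each stored object explains.
        return {obj: len(feats & key_features) for obj, feats in shard.items()}

    def classify(database, key_features, n_workers=4):
        items = list(database.items())
        shards = [dict(items[i::n_workers]) for i in range(n_workers)]
        with ProcessPoolExecutor(n_workers) as pool:
            partials = pool.map(score, shards, [key_features] * n_workers)
        votes = {}
        for partial in partials:
            votes.update(partial)        # shards are disjoint, so no clashes
        return max(votes, key=votes.get)

    if __name__ == "__main__":
        db = {"cup":   {"handle_arc", "rim_ellipse", "base_curve"},
              "plane": {"wing_edge", "tail_fin", "nose_curve"}}
        print(classify(db, {"handle_arc", "rim_ellipse"}, n_workers=2))  # cup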

Performance in the Presence of Clutter

The feature-based nature of the algorithm provides some immunity to clutter in the scene; this, in fact, was one of the design goals, and stands in contrast to appearance-based schemes that use the structure of the full object and hence require good prior segmentation. The algorithm seems reasonably robust against modest dark-field clutter in high-quality images, that is, extra objects or parts thereof in the same image as the object of interest. We ran a series of tests in which we acquired new test sets for the six objects of the earlier 6-object experiment, this time in the presence of non-occluding clutter. Some examples are shown below.
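
The intuition can be put in one toy snippet contrasting whole-pattern matching with local-feature voting; the ``features'' here are just strings, purely illustrative:

    # Object model and scenes as sets of hypothetical local features.
    model = {"handle_arc", "rim_ellipse", "base_curve"}
    clean_scene = {"handle_arc", "rim_ellipse", "base_curve"}
    cluttered_scene = clean_scene | {"pencil_edge", "stapler_blob"}

    # Template-style matching: demand the scene be exactly the object.
    print(cluttered_scene == model)                    # False: match fails

    # Feature voting: clutter adds features but removes no evidence.
    print(len(model & cluttered_scene) / len(model))   # 1.0: full support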

[Images: test objects in the presence of non-occluding dark-field clutter]

Out of 264 test cases, 252 were classified correctly, which gives a recognition rate of about 96%, compared to about 99% for uncluttered test images. The following table shows the results.

[Table: per-object recognition results with dark-field clutter]

In a second experiment, we took pictures of the objects against a light background. Clutter in these images arises from shadows, from wrinkles in the fabric, and from a substantial shading discontinuity between the turntable and the background. Unlike in the dark-field pictures, the object in many of these images is not trivially segmentable. In addition, many of the images produce substantial numbers of clutter curves, as shown below.

[Images: light-background test images and the clutter curves they produce]

Out of 264 test cases, 236 were classified correctly, an overall recognition rate of about 90%, which is not as good as our other results. However, almost half the errors were due to instances of the toy bear: the gray level of the bear's body was so close to the upper background in low-level shots that many of the main boundaries could not be found. If this case is excluded, the rate is about 94%. Overall results are shown in the following table.

[Table: per-object recognition results against a light background]

Background clutter that seriously disrupts the contour extraction system, texture in particular, is a more serious problem. The biggest difficulty arises with ``checkerboard''-like backgrounds, where frequent contrast reversals occur along object boundaries. The underlying model of our contour finder does not handle this situation, with the result that external boundaries are badly fragmented. The solution is to use a contour extraction algorithm that is resistant to this sort of disturbance (e.g. various perceptual grouping methods).
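
A toy illustration of the contrast-reversal failure mode, using made-up intensity values (the real contour finder operates on 2-D gradient structure, not single scanlines):

    # A strip of pixels just inside the object vs. just outside it,
    # where the background is a checkerboard.  The sign of the intensity
    # step across the boundary flips from square to square, so an edge
    # linker that assumes consistent contrast polarity breaks the contour.
    inside = [128] * 8                              # object is mid-gray
    outside = [255, 255, 0, 0, 255, 255, 0, 0]      # checkerboard squares
    steps = [o - i for i, o in zip(inside, outside)]
    print(steps)        # [127, 127, -128, -128, 127, 127, -128, -128]
    print(len({s > 0 for s in steps}))   # 2: both polarities along one edge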

Performance in the Presence of Occlusion

The current system is not designed to deal with arbitrary occlusion; specifically, occlusion that breaks up all or most of the key features will cause the recognition process to fail. That said, for objects that are complex enough to contain recognizable subparts, the system can deal with significant amounts of occlusion. Many of the objects in our database are sufficiently complex that they can be chopped in half, for instance, and still be recognized by the system. Some manageable examples are shown below.
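
In the voting picture, occlusion is feature deletion rather than feature addition; a toy sketch with hypothetical feature names:

    # If enough locally distinctive features survive the occlusion,
    # the forced-choice winner is unchanged.
    models = {"horse": {"head", "mane", "legs", "tail"},
              "wolf":  {"head", "ears", "legs", "tail"}}
    visible = {"head", "mane"}            # rear half of the horse hidden
    votes = {name: len(feats & visible) for name, feats in models.items()}
    print(max(votes, key=votes.get))      # horse (2 votes vs. 1)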

[Images: examples of recognition under partial occlusion]

The combination of robustness to clutter and occlusion gives the system considerable ability to identify objects in ordinary environmental settings. The images below illustrate situations in which the system is able to correctly identify known objects in the scene. We estimate that forced-choice accuracy in scenes of this ``complexity'' is on the order of 90% with a 6-object database. Multiple known objects in a scene pose no difficulty; the system simply reports each of them.

[Images: known objects identified in ordinary cluttered scenes]

Experiments on ``Generic'' Recognition

This set of experiments was suggested when, on a whim, we showed our coffee mugs to an early version of the system that had been trained on the creamer cup in the previous database (among other objects), and noticed that even though the creamer is not a very typical mug, the system was making the ``correct'' generic call a significant percentage of the time. Moreover, the features keying the classification were the ``right'' ones, i.e., boundaries derived from the handle and the circular sections, even though there was no explicit part model of a cup in the system. Though the notion of generic visual classes is scientifically ill-defined, the generalization to different objects in the same ``human class'' was suggestive.

For the experiment, we gathered multiple examples of objects from five generic classes (11 cups, 6 ``normal'' airplanes, 6 fighter jets, 9 sports cars, and 8 snakes). The recognition system was trained on a subset of each class, and tested on the remaining elements. The training sets consisted of 4 cups, 3 airplanes, 3 fighter jets, 4 sports cars, and 4 snakes. The training and test views were taken according to the same protocol as in the previous experiments. The cups, planes, and fighter jets were sampled over the full sphere; the cars and snakes over the top hemisphere only (the bottom sides were not realistically sculpted). The objects used are shown below, with the training objects on the left of each picture and the test objects on the right.

[Images: the five generic classes; training objects on the left, test objects on the right]

Overall performance on forced-choice classification for 792 test images was 737 correct, or 93.0%. If we instead average the per-class rates, so that the best and largest group (the cups) is not weighted more heavily simply because we had more samples, we get 92.0% (91.96%). The error matrix is shown in the table below. Performance is best for the cups, at about 98%; the planes, sports cars, and snakes come in around 92-94%. The fighter jets were the worst by a significant margin, at about 83%. The reason seems to be that there is quite a bit of difference between the exemplars in some views in terms of armament carried, which tends to break up some of the lines in a way the current boundary finder does not handle; two of the test cases also have camouflage patterns painted on them. We expect that a few more training cases would help.
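
The two figures are simply micro- and macro-averaged accuracy. A minimal sketch of the distinction; the per-class counts below are purely illustrative, not values from our error matrix:

    def micro_macro(per_class):
        # per_class maps class name -> (n_correct, n_tested)
        correct = sum(c for c, n in per_class.values())
        tested = sum(n for c, n in per_class.values())
        micro = correct / tested                 # each image counts once
        macro = sum(c / n for c, n in per_class.values()) / len(per_class)
        return micro, macro                      # each class counts once

    # Hypothetical counts: one large easy class, one small hard one.
    print(micro_macro({"cup": (59, 60), "fighter": (25, 30)}))
    # (0.933..., 0.908...): the macro average discounts the big class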

The snakes were a surprise, given their degree of flexibility and the fact that none of the curves are actually rigidly similar. On close examination, the success seems to be effectively an accidental case of ``default'' reasoning. The snake model has high variability, so a random complex object that does not resemble anything in the database is more likely to get a strong match to a snake exemplar than to anything else. Thus snakes get classed as snakes in a forced-choice experiment, despite the fact that ROC curves for the snake class display poor absolute discrimination.
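
A toy contrast between the forced-choice decision and an absolute, thresholded one, with hypothetical match scores:

    # Match scores for a novel object against the stored classes.
    scores = {"cup": 0.12, "plane": 0.15, "snake": 0.22}   # made-up values

    # Forced choice: the argmax wins, however weak it is in absolute terms.
    print(max(scores, key=scores.get))                     # snake

    # Absolute discrimination: threshold the score instead.
    print([c for c, s in scores.items() if s > 0.5])       # []: reject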

These results do not say anything conclusive about the nature of ``generic'' recognition, but they do suggest a route by which generic capability could arise in an appearance-based system that was initially targeted at recognizing specific objects, but needed enough flexibility to deal with inter-pose variability and environmental lighting effects. They also suggest that generic classes can be viewed as clusters in a (relatively) spatially uniform metric space defined by a general, context-free classification process. This is in contrast to distinctions, such as those needed to tell a cow from a bull, an F16 from an F18, or one face from another, that, though they may become fast and automatic in people, involve focusing attention on specific small areas and assigning disproportionate weight to differences in those regions.
