Memory-Based Object Recognition Algorithm

In order to recognize objects, we must first prepare a database against which the matching takes place. To do this, we first take a number of images of each object, covering the region on the viewing sphere over which the object may be encountered. The exact number of images per object may vary depending on the features used and any symmetries present, but for the patch features we use, obtaining training images about every 20 degrees is sufficient. To cover the entire sphere at this sampling requires about 100 images. For every image so obtained, the boundary extraction procedure is run, and the best 20 or so boundaries are selected as keys, from which patches are generated and stored in the database. Currently, the ``best'' features are simply the largest; other distinctiveness measures could be used as well. With each patch is associated the identity of the object that produced it, the viewpoint it was taken from, and three geometric parameters specifying the 2-D size, location, and orientation of the image of the object relative to the key curve. This information permits a hypothesis about the identity, viewpoint, size, location and orientation of an object to be made from any match to the patch feature.
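To make the stored association concrete, the following minimal sketch (in Python, with illustrative rather than actual field and function names) shows the kind of record that might be kept for each context patch, and how the memory could be indexed by a hash of the key-curve characteristics.

    from dataclasses import dataclass

    @dataclass
    class PatchEntry:
        object_id: str          # identity of the object that produced the patch
        viewpoint: tuple        # (azimuth, elevation) on the viewing sphere, in degrees
        rel_size: float         # 2-D size of the object image relative to the key curve
        rel_location: tuple     # 2-D location of the object image relative to the key curve
        rel_orientation: float  # 2-D orientation of the object image relative to the key curve

    # The memory maps a hash of key-curve characteristics to candidate entries,
    # whose local context is then verified against the image patch on retrieval.
    database: dict[int, list[PatchEntry]] = {}

    def store_patch(key_hash: int, entry: PatchEntry) -> None:
        database.setdefault(key_hash, []).append(entry)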

The basic recognition procedure consists of four steps. First, potential key features are extracted from the image using low and intermediate level visual routines. In the second step, these keys are used to access the database memory (via hashing on key feature characteristics, with verification by local context) and to retrieve information about which objects could have produced them, and in what relative configuration. The third step uses this information, in conjunction with the geometric parameters of position, orientation, and scale factored out of the key features, to produce hypotheses about the identity and configuration of potential objects. These ``pose'' hypotheses serve as the loose global contexts into which information is integrated. This integration is the fourth step, and it is performed by using the pose hypotheses themselves as keys into a second associative memory, where evidence for the various hypotheses is accumulated. Specifically, every global hypothesis in the secondary memory that is consistent (in our loose sense) with a new hypothesis has its associated evidence updated. After all features have been processed in this way, the global hypothesis with the highest evidence score is selected. Secondary hypotheses can also be reported.
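The four steps can be summarized in the following hedged sketch, again in Python, where extract_keys, lookup_matches, pose_from_match, poses_consistent, and evidence_increment are placeholders standing in for the actual key extraction, memory access, hypothesis generation, consistency test, and evidence weighting routines.

    def recognize(image, database):
        evidence = {}                                     # secondary memory: pose hypothesis -> score
        for key in extract_keys(image):                   # step 1: key feature extraction
            for match in lookup_matches(database, key):   # step 2: memory access + local verification
                pose = pose_from_match(key, match)        # step 3: pose hypothesis (assumed hashable, e.g. a tuple)
                inc = evidence_increment(key, match)
                found = False
                for known_pose in evidence:               # step 4: update all loosely consistent hypotheses
                    if poses_consistent(known_pose, pose):
                        evidence[known_pose] += inc
                        found = True
                if not found:
                    evidence[pose] = inc                  # start a new global hypothesis
        # report the hypothesis with the highest accumulated evidence
        return max(evidence.items(), key=lambda kv: kv[1], default=None)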

In the final step described above, an important issue is the method of combining evidence within a loose global context. The simplest technique is an elementary voting scheme: each feature (local context patch) consistent with a pose contributes equally to the total evidence for that pose. This is clearly not well founded, as a feature that occurs in many different situations is not as good an indicator of the presence of an object as one that is unique to it. For example, with 24 3-D objects stored in the database, comprising over 30,000 context patches, we find that some image features match 1000 or more database features, even after local context verification, while others match only one or two. An evidence combination scheme should take this into account. An obvious approach in our case is to use statistics computed over the information contained in the associative memory to evaluate the quality of a piece of information. The optimal quality measure, which would rely on the full joint probability distribution over keys, objects, and configurations, is infeasible to compute, and thus we must use some approximation.

One idea would be to use the first-order feature frequency distribution over the entire database in a Bayesian framework. This, with minor modifications, is what we do. In the following discussion, the term ``feature'' should be taken to mean the entire key curve plus local context, since this is what is being matched. Also recall that the pose hypotheses serve as the global contexts within which evidence is accumulated. The resulting algorithm accumulates, for each match supporting a pose, evidence proportional to F*log(k/m), where m is the number of matches to the image feature in the whole database, and k is a proportionality constant chosen so that m/k approximates the actual geometric probability that some image feature matches a particular patch in the pose model by accident. F is an additional empirical factor proportional to the square root of the size of the feature in the image and to the fourth root of the number of key features in the model. These modifications capture certain aspects that seem important to the recognition process, but are difficult to model using formal probability.
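In code, the evidence increment might look like the following sketch, which spells out the quantities the placeholder in the earlier recognition loop would need. The default value of k and the exact normalization of F are placeholders; in practice k is chosen so that m/k approximates the geometric probability of an accidental match.

    import math

    def evidence_increment(feature_size, n_model_keys, m_matches, k=10000.0):
        # F grows with the square root of the image size of the feature and
        # with the fourth root of the number of key features in the model.
        F = math.sqrt(feature_size) * n_model_keys ** 0.25
        # Evidence proportional to F * log(k / m): rarer features (small m)
        # contribute more than features matching many database patches.
        return F * math.log(k / m_matches)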

A formal derivation of the above term can be found in our papers, but it is worth noting that a simple way of understanding the source of the logarithmic term is to interpret the total evidence score as the log of the reciprocal of the probability that the particular assemblage of features (local context patches) is due to chance. If the features were independent (which they are not, but we have no better information to use), we would simply multiply the probabilities; equivalently, to keep the actual values small, we can add the logarithms. Because the independence assumption is unwarranted in the real world, the evidence values actually obtained are serious underestimates if interpreted as actual probabilities. However, the rank ordering of the values, which is all that matters for classification, is fairly robust to distortion due to this independence assumption.
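The following toy computation (Python, with hypothetical match counts, and omitting the empirical factor F for clarity) illustrates the equivalence: under the independence assumption, summing the log terms over the matched features gives exactly the log of the reciprocal of the joint accident probability.

    import math

    k = 10000.0
    m = [3, 17, 250]                                # hypothetical match counts for three features
    p_chance = math.prod(mi / k for mi in m)        # joint probability the assemblage is accidental
    total_evidence = sum(math.log(k / mi) for mi in m)
    assert abs(total_evidence - math.log(1.0 / p_chance)) < 1e-9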

In the above discussion we have assumed that the associative memory already exists in the requisite form. However, one of the primary attractions of a memory-based recognition system is that it can be trained efficiently from image data. The basic process of model acquisition is simply a matter of providing images of the object to the system, running the key detection procedures on these images, and storing the resulting (key, association) pairs. The number of images needed may vary from one, for simple 2-D applications, to several tens for rigid object recognition, and possibly more for complicated non-rigid objects. The process is efficient, and essentially runs in time proportional to the number of pairs stored in memory. This is in contrast to many learning algorithms that scale poorly with the number of stored items. (Strictly speaking, indexed memory-building processes are apt to scale as N log(N) for very large numbers of items. However, since the processing for all databases run so far is dominated by the key-feature extraction image processing, the complexity has essentially been linear.)
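A sketch of the acquisition loop, reusing the placeholder names from the earlier sketches (make_patch_entry and hash_of_key are likewise illustrative): the cost is dominated by key extraction on each training image, and the number of stored pairs grows essentially linearly with the number of views.

    def acquire_model(object_id, training_images):
        # training_images: iterable of (viewpoint, image) pairs, sampled roughly
        # every 20 degrees on the viewing sphere for a rigid 3-D object
        for viewpoint, image in training_images:
            for key in extract_keys(image):                       # key detection on each view
                entry = make_patch_entry(object_id, viewpoint, key)
                store_patch(hash_of_key(key), entry)              # store the (key, association) pair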
