Imagine someone hands you a dictionary in a different language, with different characters, no translations, and no pictures. Could you learn that language and use it to communicate? Clearly not. Yet we expect many natural language understanding systems to do just that. Language must be grounded in our personal experiences to have meaning. How, then, do we give computers experiences, and, more importantly, how do we tie those experiences to language in a meaningful way?

To answer these questions, I study the intersection of natural language understanding, knowledge representation, and computer vision. Each of these fields can benefit from the others: natural language provides labels for computer vision data, computer vision grounds natural language, and the right knowledge representations can lead to improved language and scene understanding. I am interested in exploring new ways of tying these aspects together.

What is the Ground? Continuous Maps for Symbol Grounding

Analysis of the Symbol Grounding Problem has typically focused on the nature of symbols and how they tie to perception, paying little attention to the actual qualities of what the symbols are to be grounded in. We formalize the requirements of the ground and propose a basic model that grounds perceptual primitives in regions of a continuous perceptual space, demonstrating the significance of continuous mapping and how it influences the categorization and conceptualization of perception. We also outline methods for incorporating continuous grounding into computational systems and the benefits of applying such constraints.
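
As a rough, hypothetical illustration of this idea (a minimal sketch, not the model from the paper), the Python fragment below ties each attribute word to a region of a continuous feature space, modeled as a Gaussian estimated from observed examples, and categorizes a new percept by the best-fitting region. The class name, feature choice, and toy data are assumptions made purely for the example.

    import numpy as np

    class PerceptualRegion:
        """Hypothetical region for one word, estimated from observed feature vectors."""

        def __init__(self, examples):
            X = np.asarray(examples, dtype=float)
            self.mean = X.mean(axis=0)
            # Regularize the covariance so a handful of examples still yields a usable region.
            self.cov = np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1])

        def membership(self, x):
            """Unnormalized Gaussian density: how well x fits this word's region."""
            d = np.asarray(x, dtype=float) - self.mean
            return float(np.exp(-0.5 * d @ np.linalg.solve(self.cov, d)))

    def categorize(x, regions):
        """Label a percept with the word whose region fits it best."""
        return max(regions, key=lambda word: regions[word].membership(x))

    # Toy usage with HSV-style color features (hue, saturation, value).
    regions = {
        "red":  PerceptualRegion([[0.01, 0.90, 0.80], [0.03, 0.80, 0.70]]),
        "blue": PerceptualRegion([[0.60, 0.90, 0.80], [0.62, 0.70, 0.60]]),
    }
    print(categorize([0.61, 0.80, 0.70], regions))  # prints "blue"

Because membership in such a region is graded rather than all-or-nothing, nearby percepts receive similar scores, which is one way a continuous ground can shape categorization.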


Ian Perera and James F. Allen. What is the Ground? Continuous Maps for Symbol Grounding. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (to appear). Quebec City, Canada: Cognitive Science Society. PDF with funding addendum

SALL-E: Situated Agent for Language Learning

We describe ongoing research towards building a cognitively plausible system for near one-shot learning of the meanings of attribute words and object names by grounding them in a sensory model. The system learns incrementally from human demonstrations recorded with the Microsoft Kinect, in which the demonstrator can use unrestricted natural language descriptions. We achieve near one-shot learning of simple objects and attributes by focusing solely on examples where the learning agent is confident, ignoring the rest of the data. We evaluate the system's learning ability by having it generate descriptions of presented objects, including objects it has never seen before, and comparing its responses against collected human descriptions of the same objects. We propose that our method of retrieving object examples with a k-nearest neighbor classifier using Mahalanobis distance corresponds to a cognitively plausible representation of objects. Our initial results show promise for achieving rapid, near one-shot, incremental learning of word meanings.
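
As an illustration of the retrieval step mentioned above (a minimal sketch, not the SALL-E implementation), the Python fragment below labels a query object by majority vote among its k nearest stored examples under Mahalanobis distance; the feature vectors, labels, and function names are invented for the example.

    import numpy as np
    from collections import Counter

    def mahalanobis(x, y, cov_inv):
        """Distance between feature vectors x and y, scaled by the inverse covariance."""
        d = x - y
        return float(np.sqrt(d @ cov_inv @ d))

    def knn_label(query, examples, labels, k=3):
        """Label `query` by majority vote among its k nearest stored examples."""
        X = np.asarray(examples, dtype=float)
        # One shared covariance over all stored examples, regularized for stability.
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1]))
        q = np.asarray(query, dtype=float)
        distances = [mahalanobis(q, x, cov_inv) for x in X]
        nearest = np.argsort(distances)[:k]
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]

    # Toy usage: each vector stands in for an object's extracted color/shape features.
    examples = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
    labels = ["ball", "ball", "block", "block"]
    print(knn_label([0.85, 0.15], examples, labels, k=3))  # prints "ball"

Mahalanobis distance scales each feature dimension by its observed variance, so dimensions the learner has seen vary widely count for less than tightly clustered ones.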


Ian Perera and James F. Allen. SALL-E: A Situated Agent for Language Learning. In Proceedings of the Twenty-Seventh AAAI Conference (AAAI-2013). PDF