Relational Learning and Reasoning about Human Activities: An Overview

Adam Sadilek
Henry Kautz

Department of Computer Science
University of Rochester

March 2010

Introduction

Below is a high-level overview of our research on automated reasoning about human behavior with applications in assisted cognition, human-computer interaction, (online) social networks, computer supported cooperative work, and the military.

We are also developing a web application that runs on top of Google Maps; it not only visualizes the raw positional data and the results of the inference, but is also becoming a full-fledged front end through which users can naturally interact with our reasoning system at the back end. You can try a demo of it below, or in fullscreen mode here. The demo lets you animate and visualize one round of capture the flag that was recorded by people carrying GPS loggers while playing the game (more information below). Use the toolbar in the top-right corner, together with the standard Google Maps controls, to interact with the application.

Motivation

Our society is founded on the interplay of human interactions and relationships. Patterns in human activity are exhibited in many different facets of life, including people's plans, goals, important locations and events, daily routines, etc. Since every person is tightly embedded in our social structure, the vast majority of actions and events can only be fully understood in the context of the actions of other—related—people.

Our main test domain is the game of capture the flag—an outdoor game that involves many distinct cooperative and competitive joint activities. Imagine two teams—seven players each—playing capture the flag (CTF) on a university campus, where each player carries a consumer-grade global positioning system (GPS) unit that logs its position every second (see Figure 1). The accuracy of the GPS data varies from about 1 meter to more than 10 meters. In open areas, readings are typically off by about 3 meters, but the discrepancy is much higher near buildings. Errors between devices are poorly correlated, because subtle differences between players, such as the angle at which the device sits in the player's pocket, can dramatically affect accuracy. Errors in measuring the distance between players can therefore compound and exceed 20 meters.
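To make this compounding concrete, the following small Python sketch (purely illustrative and not part of our pipeline; the coordinates, noise magnitude, and function names are assumptions) perturbs the true positions of two nearby players with independent Gaussian noise and measures the resulting error in their apparent distance.

    # Illustrative only: independent per-device GPS noise compounds when we
    # estimate the distance *between* two players.
    import math
    import random

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) points."""
        r = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def noisy_reading(lat, lon, sigma_m=10.0):
        """Perturb a true position by zero-mean Gaussian noise of roughly sigma_m meters per axis."""
        dlat = random.gauss(0, sigma_m) / 111320.0  # meters -> degrees of latitude
        dlon = random.gauss(0, sigma_m) / (111320.0 * math.cos(math.radians(lat)))
        return lat + dlat, lon + dlon

    # Two players standing a few meters apart near a building (assume ~10 m per-device error).
    true_a, true_b = (43.12890, -77.62900), (43.12894, -77.62900)
    true_d = haversine_m(*true_a, *true_b)
    errors = [abs(haversine_m(*noisy_reading(*true_a), *noisy_reading(*true_b)) - true_d)
              for _ in range(10000)]
    print("95th-percentile error in apparent distance: %.1f m" % sorted(errors)[9500])

With a per-device error on the order of 10 meters, the apparent distance between two players who are actually a few meters apart can easily be off by more than 20 meters, which is why raw proximity alone is an unreliable cue for events such as capturing or freeing.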

Consider the task of inferring the individual and joint activities and intentions of the players from their GPS traces. For example, suppose the GPS data shows Player A running toward a stationary teammate Player B, then moving away. What occurred? Possibly Player A has just "freed" Player B, but GPS error has hidden the fact that Player A actually reached B. (In CTF, a player may be captured by being tagged by an opponent while on the opponent's territory. A captured player must remain in place until freed by being tagged by a teammate.) Another possibility is that Player A had the intention of freeing Player B, but was scared off by an opponent at the last second. Yet another possibility is that no freeing occurred or was even intended, because Player B had not been previously captured.

Understanding a game thus consists of inferring a complex set of interactions among the various players as well as the players' intentions. The conclusions drawn about what occurs at one point in time affect and are affected by inferences about past and future events. In the example just given, recognizing that Player B moves in the future reinforces the conclusion that Player A is freeing Player B, while failing to recognize a past event of Player B being captured decreases confidence in that conclusion. The game of CTF also illustrates that understanding a situation is as much or more about recognizing goals and intentions as about recognizing successfully executed actions. For example, in the course of a 15-minute game, only a handful of capture or freeing events occur. However, there are dozens of cases where one player unsuccessfully tries to capture an opponent or to free a teammate. A description of a game restricted to what actually occurred would be only a pale reflection of the original.

Reasoning about human intentions is an important problem since if we can recognize what a person (or a group of people) wants to do, we can proactively try to help them (or—in adversarial situations—hinder them). Intent is notoriously problematic to quantify, but in [3] we show that the notion is precisely and naturally captured in the process of learning the structure of failed activities.

Though recent research has shown that surprisingly rich models of human behavior can be learned solely from GPS (positional) data (e.g., [1]), most effort to date has concentrated on modeling single individuals or statistical properties of groups of individuals. In contrast, we consider the problem of modeling and recognizing activities and intended actions that involve multiple related individuals playing distinct roles and having a variety of desired goals.

Thus, in our work, we take on the task of understanding the capture the flag game from GPS data as an exemplar of the general problem of inferring human interactions and intentions from sensor data. Although CTF doesn't capture all the complexities of life, most of the problems that we are addressing here clearly have direct analogs in more real-life tasks that artificial intelligence needs to address—such as improving smart environments, human-computer interaction, surveillance, assisted cognition, and battlefield control. Namely, we address the following questions.

Given raw and noisy data, how can we automatically and reliably detect and recognize interesting events that happen in the game? Can we learn the rules of capture the flag by observing several rounds? Can we generalize beyond the observations and envision a "typical game" that would capture the underlying commonalities and allow us to recognize anomalous events and failed attempts at activities?

How can we use such mined knowledge to predict what is going to happen next in a given game? Capitalizing on this higher-level knowledge, can we learn better and more robust strategies that an artificial multi-agent system can then use to compete with people? What about electronic assistants that help human teams be more successful? How can we efficiently summarize a long game for a busy person without leaving out important events? Can we make sensible decisions based on our inferred results? ...

Figure 1. A video of two teams (7 players each) playing one round of capture the flag (CTF). One team has each player represented by a pin with a green star, the other team has pins with no stars. In our version of CTF, the two "flags" are stationary and are shown as icons of houses near the top and the bottom of the figure. The horizontal road in the middle of the image is the territory boundary. One way to achieve victory is to enter the opponent's circle. The data is shown sped-up and prior to any denoising or corrections for map errors. The same video, except in much better quality, is available here (in Ogg Video format).

Our Approaches

In order to immediately test our methodologies across domains (and not just in the capture the flag world), we also experiment with data collected by people wearing various multi-modal sensory loggers over extended periods of time (including our own data—see Figure 2 and Figure 3—and the Reality Mining dataset). In those domains, instead of reasoning about game events, typical games, strategies, etc., we reason about people's everyday activities and routines, goals, plans, schedules, significant locations, and the like.

We experiment with several approaches to representing our data and to the inherently relational and multi-agent reasoning that these domains require. More specifically, we focus on the application of Markov logic networks, probabilistic inductive logic programming, eigenanalysis lifted to a relational setting, relational conditional random fields, and reinforcement learning to solve the above-mentioned problems.

Figure 2. Examples of three trips (red, bright blue, and violet) logged by a GPS device carried by a person living in Rochester.
Figure 3. An originally noisy GPS trace "snapped" to a street map (thick gray lines represent streets) and displayed in our system that combines denoising, reasoning, and visualization.

Results

Capture The Flag Domain

We model the CTF domain using Markov logic, a statistical relational language, and learn a theory that can simultaneously and jointly denoise the data and infer high-level activities, such as capturing or freeing a player. The denoising and inference are coupled in a probabilistically and logically sound fashion that combines constraints imposed by the geometry of the game area, the motion model of the players, and the rules and dynamics of the game. In [2] we show that while it may be impossible to directly detect a joint activity due to sensor noise, the occurrence of the activity can still be deduced from its impact on the past and future behaviors of the individuals involved. We compare our unified approach to three alternatives (both probabilistic and nonprobabilistic) in which the denoising of the GPS data and the detection of the high-level activities are strictly separated, or the states of the players are not considered, or both. We show that the unified approach with a time window spanning the entire game, although more computationally costly, is significantly more accurate.
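To give a concrete flavor of what such a theory looks like, here is a minimal Markov logic sketch. The predicates, formulas, and weights are illustrative assumptions of this overview and do not reproduce the actual theory learned and hand-coded for [2].

    % Illustrative weighted formulas in the style of Markov logic; the predicate
    % names and weights are hypothetical.
    \begin{align*}
    w_1 :\;& \mathit{enemies}(a,b) \wedge \mathit{close}(a,b,t) \wedge \mathit{onEnemySide}(b,t)
             \Rightarrow \mathit{capturing}(a,b,t) \\
    w_2 :\;& \mathit{capturing}(a,b,t) \Rightarrow \mathit{isCaptured}(b,t{+}1) \\
    w_3 :\;& \mathit{isCaptured}(b,t) \wedge \neg\, \exists c \; \mathit{freeing}(c,b,t)
             \Rightarrow \mathit{isCaptured}(b,t{+}1) \\
    w_4 :\;& \mathit{isCaptured}(b,t) \Rightarrow \mathit{samePlace}(b,t,t{+}1)
    \end{align*}

At inference time the weights let these soft constraints compete with one another and with the raw GPS evidence, which is what couples the denoising (where each player really is) with the activity recognition (whether a capture actually happened).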

If we are to reliably recognize interesting events that happen in these games of capture the flag, we need to consider not only each player individually but also the relationships among them over extended periods of time (possibly the whole length of the game). For example, GPS noise may make it impossible to determine whether or not a player has been captured at the moment of the capture, but as the player thereafter remains in place for a long time, the hypothesis that he was captured becomes a near certainty.

Consider a real game situation illustrated in the first ten seconds of the video in Figure 4, which shows a game segment before any modification of the GPS data. Players D, F, and G are allies and are currently on their home territory near their flag, whereas players L and M are their enemies. Initially, players L and M head for the opponent's flag, but around the fifth second they are intercepted by G. At this point it is unclear what happens because of the substantial error in the GPS data: the three players appear to be very close to each other, but in actuality they could have been 20 or more meters apart. However, after the eighth second we realize, in retrospect, that player G probably captured only player M and not player L, since G is still chasing L. The fact that player M remains stationary, coupled with the fact that neither D nor F attempts to capture him, suggests that M has indeed been captured. Our unified model gives the correct labeling even for complex situations like these, whereas the more limited approaches largely fail.

In [3] we further extend our model of CTF and demonstrate that given raw GPS data, we can automatically and reliably detect and distinguish both successful and failed attempts at activities within this complex multi-agent domain. Additionally, we show that success, failure, goal, and intent of an activity are intimately tied together and having a model for successful events allows us to naturally learn models of the other three important aspects of life. We compare our approach with two alternatives and show that the augmented model, which takes into account not only relationships among individual players, but also relationships among activities over the entire length of a game, performs significantly better.
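One way to picture the relationship between success and failure, continuing the hypothetical notation sketched above (again, an assumption of this overview, not the learned theory of [3]), is that a failed attempt shares the preconditions of the successful activity while its intended effect fails to materialize:

    % Hypothetical sketch of how a failed capture relates to a successful one.
    \begin{align*}
    w_5 :\;& \mathit{enemies}(a,b) \wedge \mathit{close}(a,b,t) \wedge \mathit{onEnemySide}(b,t)
             \wedge \neg\, \mathit{isCaptured}(b,t{+}1) \\
           & \quad \Rightarrow \mathit{failedCapture}(a,b,t)
    \end{align*}

A model of the successful activity thus effectively supplies the body of the failed-attempt rule, which is one way to see why success, failure, goal, and intent are so closely tied together.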

To visualize the input data, the reasoning process, and the results of our research, we are developing a web application that runs on top of Google Maps. Since we are also adding an active learning component to our system, we expect the cloud application to become a front end through which users can naturally interact with the reasoning system.



Figure 4. A video showing several game situations that illustrate the need for an approach that exploits both the relational and the far reaching temporal structure of our domain (see text for details). The same video, except in much better quality, is available here (in Ogg Video format).

References

  1. Liao, L., Fox, D., and Kautz, H. Learning and Inferring Transportation Routines. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI). 2004. Best Paper Award.
  2. Sadilek, A. and Kautz, H. Recognizing Multi-Agent Activities from GPS Data. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence. 2010.
  3. Sadilek, A. and Kautz, H. Modeling and Reasoning about Success, Failure, and Intent of Multi-Agent Activities. UbiComp 2010. (In submission; available upon request.)