Interpreter

The interpreter is the second major part of the program. If you're looking for the same solid basis in algorithms, references and linguistic theory you found in the parser, start running. Now. The interpreter could theoretically get more kludgy and theoretically unsound without actually being (more) broken, but trust me-- theory isn't the same as practice.

Structurally, the interpreter isn't as complex as the parser. There's one main class, Interpreter, which doesn't do much; a wrapper for Prolog, PrologEngine, which does nothing too complex; and a salience manager, SalienceManager, which doesn't exist at all.

Input to the interpreter is in the form of Constituents, which is good, because that's exactly what you get out of the parser. Output is in the form of DiscourseEvents, the common currency of the discourse controller. In between, most of the work is done in Prolog by a group of helper functions in the file domain/semantic-utils. By the way, the interpreter maintains a different Prolog engine from the parser. Don't confuse them.

A good reference for Prolog is JIPrologRefManual.pdf, or look online. The utility functions used here are moderately advanced-- in other words, they're my first real Prolog program (the stuff in the parser and CB's homework assignment don't count) and they're as pretty as anyone's first real program tends to be. Make sure to save copies of everything when editing, and test thoroughly.

Semantics

I haven't taken a semantics course, or found a really useful semantics book, so most of the semantic theories here come from LIN 110, where we learned why they weren't good ideas. They're probably in my code to stay, however, so learn to like them.

The first bad, but obvious, concept involved is reference theory, which states that the meaning (semantics) associated with an NP, or an AuxP like what vision programs do, is a referent or set of referents, which are simply domain objects that exist in the world. In our case, the world is the database, and specifically, the file domain/sched-domain, though you can write another domain or spread the same domain across many files if you want.

For instance, the referent of Mabel is mabel, and the referent of you (from Mabel's point of view, anyway) is also mabel. The referents of robot are, say, mabel and grace, or whatever robots we happen to know about. The referent of the robot from UR is mabel again.
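To make this concrete, here's a hedged sketch of what the relevant domain facts could look like-- the predicate names (robot/1, from/2) are illustrative assumptions, not necessarily what domain/sched-domain actually defines:

    % illustrative only; check domain/sched-domain for the real predicates
    robot(mabel).
    robot(grace).
    from(mabel, ur).
    from(grace, elsewhere).    % made up, just so "from UR" narrows things down

Under reference theory, the referent of robot is any X satisfying robot(X), and the referent of the robot from UR is any X satisfying robot(X), from(X, ur)-- in this little database, just mabel.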

Reference theory isn't very well-respected in modern semantics, because it's wrong. A few phrases with unclear references are faith, talking to Jon and when the program finally works. A few phrases without referents are unicorns, Santa Claus and the current King of France. Luckily, our domain isn't very complex, so we can afford to ignore statements about the King of France and acts like 'talking'. The hole in reference theory also coincides perfectly with a hole in Prolog, the closed world assumption, under which anything Prolog can't prove from its database is treated as false-- so it can't say anything useful about objects that aren't in its domain. (There's an exception for the integers, which we can ignore, because we aren't doing math.) But non-referential NPs might show up, in which case you should know we're too dumb to handle them.

Our second stupid idea is compositional semantics; at least, I think that's what it's called. This states that a phrase made up of a string of modifiers means the logical AND of all the modifiers. For instance, a big, smiling demon with a pitchfork (or trident) is big AND smiling AND a demon AND has a pitchfork (or trident), and all objects that satisfy this formula are, in fact, big, smiling demons with pitchforks (or at least many-to-one reductions).
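In Prolog terms, a minimal sketch of that reading (the predicate names are made up, not taken from the domain files) is just a conjunction:

    % everything that satisfies this conjunction counts as a
    % big, smiling demon with a pitchfork
    big_smiling_demon_with_pitchfork(X) :-
        big(X),
        smiling(X),
        demon(X),
        with(X, P),
        pitchfork(P).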

There are two problems with this. The first has to do with certain nouns acting as modifiers. Consider phrases like linguistics speakers and English teachers. These are not, in fact, speakers who are also linguistic theories and teachers from the UK-- at least, not always. These interpretations do exist (compare hi-fi speakers and Texan teachers) but they aren't the most common ones, and in the case of linguistics speaker, the compositional interpretation is almost never going to be the right one.

As explained in the further work section in the parser docs, Carnie's solution is to make these phrases complements of the nouns, though this introduces ambiguities (which will slow down the parser). Note that any noun can have such a complement, as in dog food box top.

The other problem is that some adjectives don't act compositionally. A fake gun isn't an object that is a gun, AND is fake, since if an object is a gun, we expect to be able to shoot it at things. A small dog is built on an entirely different scale from a small mountain, as you'll discover the next time you try to climb a dog. Unlike the noun modifier problem, this problem is unlikely to come up. (As far as I know, by the way, the EPILOG team has at least partial solutions to things like this.)

Simple Interpretation

With these assumptions in mind, the interpretation system isn't very hard to understand. Each primitive word defines a predicate which relates it (and its environment) to a set of domain objects. The interpreter selects all the matching domain objects and returns them. This set of matches is the interpretation we'll use.

To make this easier, the interpreter collapses the tree structure of the parsed sentence into a more compact format by removing all the X' and X nodes.

The tree transformations are shown above on a parse of the NP the smiling demon with a pitchfork. The parse tree should seem familiar, except for the triangle labeled NP, which is linguistic shorthand for "the structure shown here is not important, and I'm too lazy to draw it"; mentally fill in a standard NP tree. (Actually, I've been lazy on the bottom diagram too, so mentally add in a specifier with an empty predicate.) In the bottom diagram, rectangles are phrases and ovals are predicates. The dashed lines show which tree level asserts each predicate, while the black lines with solid arrowheads show the connections between phrases. Note that the intermediate phrases are gone, so the only solid-arrow lines are between heads and their adjuncts, specifiers and complements. The open-arrow lines show which phrase each predicate is attached to-- note that this is not the same as which phrase asserts them. For instance, a pitchfork asserts that it refers to a pitchfork, but that predicate is encapsulated within with, which is in turn attached to the whole NP. By the way, the predicate drawn in for the isn't really implemented-- the, like a, is ignored by the program.
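As a hedged sketch of where this leaves us (again, the predicate names are stand-ins, and the, like a, contributes nothing): the collapsed NP amounts to a small set of goals, and the interpretation is just the set of domain objects that satisfy them:

    % goals collected from the collapsed NP "the smiling demon with a pitchfork":
    %   demon(X), smiling(X), with(X, Y), pitchfork(Y)
    % the referent set is every X the database will give us:
    ?- findall(X, (demon(X), smiling(X), with(X, Y), pitchfork(Y)), Referents).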

Implementation Details

The actual interpretation system, like the selectional system, is based on Prolog. When it is given a Constituent to interpret, the Interpreter walks the tree concatenating all the interpretation code attached to all the tree nodes. It transforms this code using various markup directives (as below), then loads it into Prolog. Finally, it makes some queries against Prolog to find out what kind of sentence it is dealing with.

The interpretation code is quite similar to the selectional code in the parser. Both rules and words in the lexicon have interpretation code associated with them, which in both cases is the last section of the line they are defined on. When dealing with rules, the code is bound against the binding list, and in both cases, & is bound to the name of the word or phrase.
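For a purely hypothetical example of how the binding works (the real lexicon and grammar files define their own format): a noun's interpretation code might be as simple as demon(&), and when the interpreter reaches a phrase headed by that word, & is replaced by the phrase's name.

    % hypothetical interpretation code on a lexicon line for "demon":
    %   demon(&)
    % after binding, for a phrase named (say) np3, the collected code is:
    %   demon(np3)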

So far, so good-- but what are all these funny angle-bracketed directives doing? They're markup, converted into Prolog by the Interpreter class. Most of the markup tags have the same general format: <KEYWORD(space+)ARG1(space+)ARG2>. Note that space between the < and the KEYWORD, or between the second argument and the >, is an error and will prevent the regular expression used from detecting the tag. Also, since binding takes place before the tags are interpreted, both items from the binding list and the & symbol behave properly as arguments to all the tags.

After all the substitution, the generated Prolog code is asserted into the engine. Then some queries are made to classify the sentence. Among these, so far, are tests for greet, bye, ownsQuery and imper, all of which should be relatively self-evident (or see the domain/semantic-hierarchy file).
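As a hedged illustration of what that classification step looks like (the real query shapes live in Interpreter.java and domain/semantic-hierarchy-- these literals are placeholders): if interpreting Hello, Mabel asserted something like greet(phrase1), the classifier only has to try each category in turn:

    ?- greet(X).        % succeeds, so the sentence is a greeting
    ?- bye(X).          % fails
    ?- ownsQuery(X).    % fails
    ?- imper(X).        % fails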

More annoying are the answer predicate and its cousin finalAnswer. Definitions for them, and their helpers, are found in domain/semantic-utils. A good way to think of the answer to a phrase is as the integration of all the predicates attached to that phrase. Though answer can actually perform the predicate integration, it is usually invoked by predicate (see the various definitions in Interpreter.java) and the resulting code is packed into a list for later execution by finalAnswer. answer doesn't actually reify any variables by binding them to objects; it just binds them to each other. finalAnswer differs from answer in that it actually calls all the generated code, fixing the variables into satisfactory assignments for the logic of the phrase.
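The real definitions are in domain/semantic-utils; the following is only a sketch of the division of labor, with assumed names and shapes. answer leaves you with a list of goals whose variables are linked to each other but not yet grounded; finalAnswer is the step that actually runs them:

    % sketch: running the packed-up goal list is what finally pins the
    % variables to actual domain objects
    finalAnswer([]).
    finalAnswer([Goal|Rest]) :-
        call(Goal),
        finalAnswer(Rest).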

Believing a sentence is a similar process to evaluating it. The interpreter provides the belief faculty as a way of adding new knowledge to the database, in order to make it consistent with the statements asserted by our users. A believable statement is one that is not currently consistent with the database, and is an assertion rather than a command or question.

believe handles belief the way finalAnswer handles evaluation. The first stage, calling answer, is common to both. Both functions then call the structural parts of the answer, and end up with a list of predicates. The critical part of belief is deciding which predicate to assert. For instance, in the sentence Henry is fun, we have three predicates: henry(X), fun(Y), is(X, Y). The correct assertion is fun(henry), which satisfies the sentence, since we already have henry(henry), is(henry, henry). However, is(henry, mabel) satisfies the sentence as well, if we happen to believe fun(mabel), and so does henry(mabel). To fix this, each predicate has an assertability value set in semantic-hierarchy. believe succeeds if there is some single predicate in the list with a maximal assertability, and that assertability is greater than zero. In this case, it asserts that predicate. Generally, nouns and the verb be should have zero assertability (never asserted) within this domain. Currently, adjectives and prepositions share the next rank, followed by verbs, but I have no theoretical basis for this decision.
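A sketch of the selection rule only-- assertability/2 stands in for whatever semantic-hierarchy really defines, and the helper name is made up:

    % hedged sketch, not the actual semantic-utils code
    % assumed facts, mirroring the hierarchy described above:
    %   assertability(henry, 0).  assertability(is, 0).  assertability(fun, 1).
    best_assertion(Goals, Best) :-
        member(Best, Goals),
        functor(Best, F, _),
        assertability(F, A),
        A > 0,
        \+ ( member(Other, Goals),
             Other \== Best,
             functor(Other, F2, _),
             assertability(F2, A2),
             A2 >= A ).

On the Henry is fun example, the only goal this can return is fun(Y), and asserting it with the bindings answer produced gives fun(henry).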

Programming Semantic Utilities

I don't have the time to explain every line of the semantic utilities, which are commented anyway, but here are some tips:

Further Work

The interpreter is quite rough, and requires lots of things to be done. This list should not be considered complete, but then, if you get through it, you'll have your own ideas about what to fix. (Alternately, you may come after me with an axe. I recommend against it.)

Propositional arguments don't work. These are the objects of sentences like I want to know where Marie is. What we want to do is find an answer for where Marie is and attach that as the complement of want. This will require verbs to attach some sort of marker to their new list the way nouns do. In fact, our semantic need for sentences like this isn't very great-- they tend to be indirect questions where the first part is ignorable. It would probably be reasonable, if not especially elegant, to treat them as a special case, or even eliminate them in preprocessing.

Imperatives don't currently work. Using MAKE tags in the grammar to assign fake specifiers to them, and tagging them using imper(&), is probably a good start. (Propositions begin the same way as imperatives, without subjects, but are distinguished by their non-finite auxiliaries.)

Salience is not implemented. As you may have noticed, we have hooks to a SalienceManager but if you look at the code, you'll see the class definition is empty. There is a good reason for this (I don't have the time to write it), but it will probably need to be solved at some point. The salience manager is intended to assign pronoun referents, guess the referents of phrases specified with the, track the discourse context (including our own speech as well as the user's) and interpret words like first or other with respect to our current topic of conversation. Doing this right will require a lot of work (viz. more time than you have). See my paper from last semester for a fairly brain-dead attempt at doing something like this-- about the only worthwhile point in it is that the correct algorithm uses a stack (Allen suggests the same thing, actually). How the salience manager will interact with discourse control is an open question-- all I can suggest is that there should be some link in there.

The system can't currently give true answers to questions. We can fake answers by printing out the acceptable bindings of a question, but we can't actually find out which object is bound to the question phrase. This is certainly doable, but will probably require a great deal of hacking. It's also not especially important, but being short and to the point in responses will probably make our robot a lot less annoying.

Speed is probably not a big issue, especially if the utilities are compiled, but if you have time or inclination, they can probably be written more efficiently. For instance, try to cut down on uses of append.
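One standard way to do that, as a generic sketch (not code from the utilities): when a predicate builds its result by calling append once per element, thread an accumulator through instead and reverse it once at the end.

    % e.g., collecting the functor names of a goal list without append:
    functors_of(Goals, Names) :-
        functors_of(Goals, [], Names).
    functors_of([], Acc, Names) :-
        reverse(Acc, Names).
    functors_of([G|Gs], Acc, Names) :-
        functor(G, F, _),
        functors_of(Gs, [F|Acc], Names).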