Performance Report

The set of sentences used to test the system is essentially the one used to evaluate the tagger last semester. However, some of the intentionally inane sentences used to check the tagger's ability to reject input outside its domain ("will you marry me?" is typical) were removed, since the current system is not statistical and runs no risk of classifying such sentences as useful.

Sphinx

No current tests of Sphinx are included, since the project focused on building a text-to-text system with voice capability rather than a voice-to-text system. However, a similar set of tests midway through development did use Sphinx, initialized with a training corpus equivalent to the test set plus a few more sentences. Performance would be expected to degrade with a larger dictionary.

Results of Sphinx Testing at Mid-Development
    Sentences in Corpus             45
    Words in Dictionary             68
    Pronunciations in Dictionary    95
    Total Sentences Tested          19
    Sentences Correct               14
    Percent Correct                 73%
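
For scale, a dictionary of this size can be wired into Sphinx in only a few lines. The sketch below is illustrative only: it assumes the classic pocketsphinx Python bindings rather than the project's actual configuration, and all model and file paths are hypothetical.

    # Illustrative sketch, not the project's actual setup: decoding with a
    # small domain dictionary via the classic pocketsphinx bindings.
    # All file paths are hypothetical.
    from pocketsphinx import Decoder

    config = Decoder.default_config()
    config.set_string('-hmm', 'model/en-us')         # acoustic model
    config.set_string('-lm', 'model/domain.lm')      # LM from the 45-sentence corpus
    config.set_string('-dict', 'model/domain.dict')  # 68 words, 95 pronunciations
    decoder = Decoder(config)

    decoder.start_utt()
    with open('utterance.raw', 'rb') as audio:
        decoder.process_raw(audio.read(), False, True)  # one full utterance
    decoder.end_utt()
    print(decoder.hyp().hypstr if decoder.hyp() else "(no hypothesis)")

The dictionary file in this style lists one pronunciation per line, with numbered variants for words that have more than one (a second line such as "SCHEDULE(2) ..."), which is presumably how 68 words carry 95 pronunciations.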

Overall Current Performance

The current tests used a similar but larger data set, which is included as Appendix D. A good measure of net system ability is the proportion of sentences correctly handled by the system at every processing stage. The test set contains 38 sentences.

Each bar of the graph represents the percentage of input sentences correctly processed by all stages up to and including the one indicated.
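
The cumulative measure behind the graph is simple to compute, as the sketch below shows; the per-stage booleans are hypothetical placeholders, not the Appendix D data.

    # Cumulative "correct through all stages so far" measure.
    # The boolean stage results are hypothetical, not the real test data.
    results = {
        "parse":     [True, True, True,  False, True],
        "select":    [True, True, False, False, True],
        "interpret": [True, False, False, False, True],
    }

    n = len(results["parse"])
    surviving = [True] * n
    for stage in ("parse", "select", "interpret"):
        surviving = [ok and r for ok, r in zip(surviving, results[stage])]
        print(f"{stage:>9}: {100 * sum(surviving) / n:.0f}% correct through this stage")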

The parser performs fairly well, and could easily gain a few percentage points by stripping vocatives ("hi, Mabel") from the input. The interpreter, however, does not perform well. In some cases this was because it simply could not handle the sentence structure, but in many cases the semantic database did not contain adequate (or any) definitions for the words involved. Adding these definitions is not a trivial task, but it should be fairly straightforward.

As it stands, the system contains a reasonable amount of syntactic knowledge, but not much semantic knowledge.
System Knowledge Base
    Words in Dictionary            148
    Morphological Variants Known   257
    Words Defined                   32

Words without definitions are semantically useless. Several of these occur in the test set, such as "for", "with", "over", and "end". Other words, like locations or times, are correctly defined for events but not defined when applied to speakers.
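
The role-dependence just described can be pictured as a lookup keyed on both word and role; the structure and entries below are hypothetical illustrations, and the real semantic database may be organized quite differently.

    # Hypothetical illustration of role-dependent definitions: a word can
    # be defined in one role and missing in another.
    semantic_db = {
        ("tuesday", "event"): {"type": "time"},   # defined for events...
        # ...but no ("tuesday", "speaker") entry: undefined for speakers.
    }

    def define(word, role):
        entry = semantic_db.get((word.lower(), role))
        if entry is None:
            raise KeyError(f"no definition for {word!r} in role {role!r}")
        return entry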

Parser

As the initial stage of the system, the parser can be evaluated independently. Since the parser tends to be slow, the first several tests focused on time. In all of these tests, the timer was built into the test program and read times from the system clock. The tests were run on a 1 GHz single-processor desktop under Windows.
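
A timer of the kind described might look like the following sketch, where parse and the test loop are stand-ins for the real test program and data.

    # Sketch of an in-program timer reading the system clock.
    # `parse` and `test_set` stand in for the real system and data.
    import time

    def timed_parse(parse, sentence):
        start = time.perf_counter()
        parses = parse(sentence)
        return parses, time.perf_counter() - start

    # for sentence in test_set:
    #     parses, secs = timed_parse(parse, sentence)
    #     print(f"{secs:8.2f}s  {len(parses):3d} parses  {sentence}")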

The first timing tests measured performance on the training set.

The times range from nearly 0 seconds (usually on failures due to unknown words) to just over 200 seconds, and correlate with both the length of the sentence and its degree of ambiguity (the number of full parses found). However, except for a peak at four parses, which is almost certainly an outlier, time is much better correlated with ambiguity than with length. Ambiguity is a plausible cause of longer parse times, since the parsing algorithm produces and checks more constituents when the sentence is ambiguous.
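
The length-versus-ambiguity comparison can be quantified with a simple correlation, as in the sketch below; the arrays are hypothetical placeholders for the measured data, not the actual test results.

    # Pearson correlation of parse time against sentence length and
    # against parse count. The numbers are placeholders, not the data.
    import numpy as np

    times   = np.array([0.1, 12.0, 35.0, 80.0, 210.0])  # seconds
    lengths = np.array([3,   6,    7,    9,    10])     # words
    parses  = np.array([1,   2,    4,    8,    16])     # full parses found

    print("time vs length:", np.corrcoef(times, lengths)[0, 1])
    print("time vs parses:", np.corrcoef(times, parses)[0, 1])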

To demonstrate this effect more clearly, the tests also include three parallel sets of phrases designed specifically to have varying degrees of ambiguity. The first set was structurally and lexically unambiguous. The second used a single multiply-defined word as close to the beginning of the phrase as possible, where it would affect all the rest of the derived structures. The third used a wh-determiner which would be analyzed as a possible raised construction and checked at all sentence positions. It was also placed at the beginning of the phrase, for the same reason.

As predicted, all the times increase non-linearly as more useless constituents become derivable. A single lexical ambiguity approximately doubles parse times, since every constituent containing the ambiguous word is derived in two versions. The wh-raised construction takes the longest of all, and most clearly approximates exponential growth in time (though the wh-determiner cannot land in several positions toward the end of the sentence, so the increase slows again there).
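
The doubling effect is a general property of chart parsing rather than anything specific to this system. The toy CKY recognizer below makes it visible by recording derivations in its chart items, so a second sense of one word duplicates every item built over it; the grammar and lexicon are invented for the example and have nothing to do with the system's actual grammar.

    # Toy CKY recognizer whose chart items record their derivation, so two
    # senses of one word yield two copies of every covering constituent.
    from itertools import product

    def chart_items(words, lexicon, rules):
        n = len(words)
        chart = [[set() for _ in range(n)] for _ in range(n)]
        for i, w in enumerate(words):
            for cat, sense in lexicon[w]:
                chart[i][i].add((cat, sense))
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for m in range(i, j):
                    for (b, db), (c, dc) in product(chart[i][m], chart[m + 1][j]):
                        for lhs, rhs in rules:
                            if rhs == (b, c):
                                chart[i][j].add((lhs, (db, dc)))
        return sum(len(cell) for row in chart for cell in row)

    rules = [("S", ("NP", "VP")), ("VP", ("V", "NP")), ("NP", ("D", "N"))]
    sentence = ["the", "dog", "saw", "a", "cat"]

    one_sense  = {"the": [("D", "the")], "dog": [("N", "dog")],
                  "saw": [("V", "see")],
                  "a":   [("D", "a")],   "cat": [("N", "cat")]}
    two_senses = dict(one_sense, saw=[("V", "see"), ("V", "cut")])

    print(chart_items(sentence, one_sense, rules))   # 9 items in the chart
    print(chart_items(sentence, two_senses, rules))  # 12: every item over "saw" doubles

With one sense of "saw" the chart holds 9 items; with two senses, the three constituents spanning "saw" (the word itself, the VP, and the S) each appear twice, giving 12.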

Interpretation

The selection and interpretation algorithms are harder to test directly, but there are a few relevant statistics. First, the accuracy of the selector is mostly dependent on the number of parses to choose from, since it has no real disambiguation heuristics. Therefore, it is important to see which sentences are most likely to be ambiguous.

However, beyond two or three words, longer sentences do not seem to be more ambiguous than shorter ones. Word choice and the presence of wh-words or auxiliary verbs have much more influence on how ambiguous a sentence is.

Next, if the selector really performs at chance accuracy, performance should degrade as the number of parses goes up: choosing at random among k parses is correct only 1/k of the time.

This does not seem to have been the case, but there is a likely explanation. The test set contains several sentences with nearly parallel structure, and these sentences generally produce the same number of parses, in parallel order. Especially toward the high end of the scale, only one or two sentences have any given number of parses, and often those sentences all belong to such a parallel set. The samples at higher numbers of parses are therefore not statistically significant: if the selector is lucky with its choice on one such sentence, it is correct for all of them.
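
A concrete way to run this check is to group the selector's results by parse count and compare against the 1/k chance prediction; the observations below are hypothetical stand-ins for the real results.

    # Observed selector accuracy per parse count versus the 1/k chance
    # prediction. The (parses, correct) pairs are hypothetical.
    from collections import defaultdict

    observations = [(1, True), (2, True), (2, False), (4, True), (4, True)]

    by_k = defaultdict(list)
    for k, correct in observations:
        by_k[k].append(correct)

    for k in sorted(by_k):
        acc = sum(by_k[k]) / len(by_k[k])
        print(f"{k} parses: observed {acc:.0%}, chance predicts {1/k:.0%}")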

Interpretation accuracy likewise seems to be independent of length, which supports the view that many of its failures stem from the use of undefined words.

Summary

The system as a whole achieves about 20% accuracy, though this figure should be easy to improve. Parsing takes an average of 34.24 seconds per sentence, while interpretation takes negligible time. The system is about equally likely to process sentences of any length accurately, though longer sentences take longer to parse. This is not especially impressive in light of modern natural language systems, but on the other hand, chance performance in this domain is essentially zero, so the system must be doing something right.