Lecture notes for CSC 173, Tues. Oct. 2 -- Thurs. Oct. 11, 2001

-------------------------------------------------
READING: Aho & Ullman chapter 10
-------------------------------------------------

Finite Automata and Regular Expressions

Pattern recognition and generation is ubiquitous in computing:

  * patterns of symbols in a program
  * patterns of words in a document
  * patterns of characters in a file
  * patterns of bits in an image
  * patterns of operations on a hardware bus
  * patterns of messages in a telecommunications system

Finite automata are formal (or abstract) machines to recognize
patterns.  These machines are used extensively in compilers and text
editors, which must recognize patterns in the input, and in
communication hardware and software, which must recognize patterns of
messages.

Regular expressions are a formal notation to describe patterns.  (We
also say they can be used to *generate* patterns, since given the
description you can crank out all possible strings matching the
pattern in a straightforward way.)  This notation is used extensively
in programming language manuals (used to describe legal patterns of
input) and in command languages (such as the Unix shell, where it is
used to describe patterns for file names, etc.).

As it turns out, finite automata and regular expressions are equally
powerful.  Moreover, there are nice automated procedures for
converting one into the other.  This technology underlies scanners in
compilers and command interpreters, and searching in editors and
grep-like tools.  It's one of the most elegant applications of theory
to practice in all of computer science.

========================================================================

Finite Automata

Suppose you want to write a program that will read a file and tell you
whether it contains the word "main" anywhere inside.  Logically, your
program will look something like this:

  1  repeat                              // look for an 'm'
         read c; if eof, fail
     while c != 'm'

  2  repeat                              // found an 'm'; look for an 'a'
         read c; if eof, fail
     while c == 'm'
     if c != 'a' return to step 1

  3  read c; if eof, fail                // found "ma"; look for an 'i'
     if c == 'm' return to step 2
     if c != 'i' return to step 1

  4  read c; if eof, fail                // found "mai"; look for an 'n'
     if c == 'm' return to step 2
     if c != 'n' return to step 1

  5  got what we wanted; skip remainder of file and succeed

Each step in the program corresponds to a different place in the
recognition process.  We can capture this behavior in a graph:

  * each node in the graph represents a step in the process
  * arcs in the graph represent movement from one step to another
  * labels on the arcs correspond to the input required to make a
    transition

------------------------------------------------------------------------

Definition of Finite Automata

A finite automaton (FA) is a simple idealized machine used to
recognize patterns within input taken from some set of symbols
(alphabet) C.  (The formal definition of "alphabet" is general enough
to cover not just text, but things like message types in a
communication system as well.)  The job of an FA is to accept or
reject an input depending on whether the pattern defined by the FA
occurs in the input.
A finite automaton consists of:

  * a finite set S of N states
  * a special start state
  * a set C of input symbols (the book uses Lambda; some use Gamma)
  * a set of final (or accepting) states F
  * a set of transitions T (many books use delta) from one state to
    another, labeled with symbols in C

This last item (the set of transitions) is actually a transition
*function* that maps (state, input symbol) pairs to states.

    delta : S x C --> S

As noted above, we can represent an FA graphically, with nodes for
states and arcs for transitions.

We execute our FA on an input sequence as follows:

  * Begin in the start state
  * If the next input symbol matches the label on a transition from
    the current state to a new state, go to that new state
  * Continue making transitions on each input symbol
      o If not at EOF and no move is possible, then reject
      o If at EOF, then accept iff in an accepting state

Wrinkles:

The book is unclear about whether an automaton has to read its entire
input before accepting.  (In some places the authors seem to require
this; in others they talk about accepting as soon as you reach an
accepting state, even if there's input left.)  The formulation above
(read the whole input, then decide whether to accept; reject if you
can't read the whole input) makes it easier to talk about strings that
end with a given substring.  It also makes it easier to prove the
equivalence of deterministic and non-deterministic automata (more on
this later).

If someone gives us an automaton designed to accept as soon as it
reaches an accepting state, whether it has read its whole input or
not, we can easily convert it to the formulation above: just add a
self-loop at every accepting state, labeled with all symbols that
don't appear on any other transition out of that state.

Some even stricter formulations require that the automaton never get
"stuck" -- that there always be a possible transition on every symbol.
We can convert any automaton to this formulation by creating a new
non-accepting "dead state" with a self-loop on every symbol, and
creating a transition to it whenever some other state lacks an
outgoing transition on some symbol.  As noted in the book, you need
this dead state in order to find the smallest DFA that accepts a given
language (more on this later, too).

Some formulations allow the machine to produce output or perform other
actions at each state (Moore) or on each transition (Mealy).  The
bounce filter example in the book is a Moore machine.  Most
communication protocols are based on Moore or Mealy machines.  (NB: I
find the book's attempt to use final and non-final states to control
the bounce filter output confusing.)

------------------------------------------------------------------------

Examples

  * 4-state FA to recognize words with at least 3 x's

  * 3-state FA to recognize Pascal variable names (a letter followed by
    zero or more letters or digits)

  * 4-state FA to recognize binary strings that end with 111

  * 8-state FA to recognize real numbers in Pascal (one or more digits
    followed by (a) a dot followed by one or more digits, and/or (b) an
    E followed by either one or more digits or a plus or minus followed
    by one or more digits)

  * 7-state FA for a soda machine that accepts nickels, dimes, and
    quarters, and requires that you deposit 30 cents or more

------------------------------------------------------------------------

Programs from FA

It is fairly straightforward to translate an FA into a program.
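One general strategy, assuming the alphabet is small enough to index an
array, is to store delta as a two-dimensional table and drive it with a
single loop.  Here is a minimal sketch along those lines (not from the
book; the machine chosen is the 4-state FA mentioned above for binary
strings that end with 111):

    #include <stdio.h>

    int main(void) {
        /* delta[state][symbol]: symbol 0 is '0', symbol 1 is '1' */
        static const int delta[4][2] = {
            {0, 1},                   /* state 0: no trailing 1s seen       */
            {0, 2},                   /* state 1: one trailing 1            */
            {0, 3},                   /* state 2: two trailing 1s           */
            {0, 3},                   /* state 3: three trailing 1s (final) */
        };
        int state = 0, c;

        while ((c = getchar()) != EOF && c != '\n') {
            if (c != '0' && c != '1') { state = -1; break; }  /* bad symbol */
            state = delta[state][c - '0'];
        }
        printf(state == 3 ? "accept\n" : "reject\n");
        return 0;
    }

The rest of this section takes the other common approach: encoding
delta in control flow rather than in data.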
Consider a 5-state FA to recognize "main" in a program.

  * Let FA = {S, C, T, s0, F}
  * S = {ss, sm, sa, si, sn}
  * C = {a, b, ..., z, A, B, ..., Z, 0, 1, ..., 9, +, -, *, /, etc.}
  * F = {sn}
  * T = {(ss,m,sm), (ss,C-m,ss),
         (sm,a,sa), (sm,m,sm), (sm,C-a-m,ss),
         (sa,i,si), (sa,m,sm), (sa,C-i-m,ss),
         (si,n,sn), (si,m,sm), (si,C-n-m,ss),
         (sn,C,sn)}

We can easily create a program from this description of the FA, using
two-level nested switch statements.  (We can also use gotos, as the
book does, but this is less structured, and arguably not good style.)
Note that the following always consumes its whole input, and that the
breaks matter: C switch statements fall through without them.

    #include <stdio.h>
    #include <stdbool.h>

    int main(void) {
        enum {ss, sm, sa, si, sn} state = ss;
        int c;
        bool accept = false;

        while ((c = getchar()) != EOF) {
            switch (state) {
            case ss:
                if (c == 'm') state = sm;
                break;
            case sm:
                switch (c) {
                case 'm': break;                  /* stay in sm */
                case 'a': state = sa; break;
                default : state = ss; break;
                }
                break;
            case sa:
                switch (c) {
                case 'm': state = sm; break;
                case 'i': state = si; break;
                default : state = ss; break;
                }
                break;
            case si:
                switch (c) {
                case 'm': state = sm; break;
                case 'n': state = sn; break;
                default : state = ss; break;
                }
                break;
            case sn:                  /* stay in sn; consume the rest */
                accept = true;
                break;
            }
        }
        printf(accept ? "yes\n" : "no\n");
        return 0;
    }

------------------------------------------------------------------------

A note re: programming assignment 3.

A scanner differs from a pure FA in that it doesn't have to consume
all its input: it finds the longest acceptable *prefix* of its input
and returns that to the main program, with the expectation that the
main program will call the scanner again when it needs another token.
So in your code, keep reading as long as the next character might be
part of some valid token.  Return only when stuck or at EOF.

NB: this strategy works only because the Java designers made sure that
the scanner never has to look more than one character ahead to decide
where the current token ends.  Some languages are not so well
designed.  In Pascal, for example, the characters "3." might be the
start of a real number ("3.14") or a number followed by the start of a
range specifier ("3..10", meaning the numbers 3 through 10).  In
Pascal, therefore, you can't use a pure DFA to scan: you have to add a
hack to deal with the "dot dot problem".

------------------------------------------------------------------------
(lecture 10-2-2001 finished here)
------------------------------------------------------------------------

Nondeterministic Automata

If, for each state and each possible input symbol, there is a unique
next state (as specified by the transitions), then the FA is
deterministic (DFA).  Otherwise, the FA is nondeterministic (NDFA).
An FA may be non-deterministic because it has two transitions out of
the same state labeled with the same symbol, or because it has one or
more transitions labeled with epsilon, meaning the machine is allowed
to "spontaneously" move from one state to another.

In effect, for an NDFA we do not require delta to be a function: we
allow it to be (merely) a relation.

    delta \subset S x (C U {epsilon}) x S

Alternatively, we can think of it as a function from (state, input
symbol) pairs to *sets* of possible new states.

    delta : S x (C U {epsilon}) --> 2^S

What does it mean for an FA to have epsilon transitions, or more than
one transition from a given state on the same input symbol?  How do we
translate such an FA into a program?  How can we "goto" more than one
place at a time?

Mathematically, we say that the FA accepts its input if there exists a
series of valid transitions that reaches an accepting state.
Intuitively, we can either think of an NDFA as following all paths
simultaneously (a sort of unlimited free parallelism), or as always
correctly "guessing" which path to take.

If we imagine simulating a DFA by putting a penny on the state we're
currently in, we can see a possible way to translate an NDFA into a
program: put a penny on *every* state we *might* be in.  On each input
symbol, remove all current pennies and put one on every possible new
state.

There's another option, however.  We can prove that every NDFA has a
corresponding DFA, and there is a straightforward process for
translating an NDFA into a DFA.  So, when given an NDFA, we can
translate it into a DFA, and then write a program based on the DFA.

------------------------------------------------------------------------

Example of an NDFA

An NDFA to accept strings containing the word "main":

    -> s0 -m-> s1 -a-> s2 -i-> s3 -n-> (s4) -any symbol-> (s4)
    -> s0 -any symbol-> s0

This is an NDFA because, when in state s0 and seeing an "m", we can
choose to remain in s0 or go to s1.  (In effect, we guess whether this
"m" is the start of "main" or not.)  We'll see another example later,
with epsilon transitions.

If we simulate this NDFA on input "mmainm", we see that the NDFA can
end up in s0 or s1 after seeing the first "m".  These two states
correspond to two different guesses about the input: (1) the "m"
represents the start of "main", or (2) the "m" doesn't represent the
start of "main".

    -> s0 -m-> s0
    -> s0 -m-> s1

On seeing the next input symbol ("m"), one of these guesses is proven
wrong, since there is no transition out of s1 on an "m".  That path
halts and rejects the input.  The other path continues, making a
transition from s0 to either s0 or s1, in effect guessing that the
second "m" in the input either is or is not the start of the word
"main".

    -> s0 -m-> s0 -m-> s0
    -> s0 -m-> s0 -m-> s1

Continuing the simulation, we discover that at the end of the input
the machine can be in state s0 (still looking for the start of
"main"), s1 (having seen an "m" and looking for "ain"), or s4 (having
seen "main" in the input).  Since at least one of these states is an
accepting state (s4), the machine accepts the input.

    s0 -m-> s0 -m-> s0 -a-> s0 -i-> s0 -n-> s0 -m-> s0
    s0 -m-> s0 -m-> s1 -a-> s2 -i-> s3 -n-> s4 -m-> s4
    s0 -m-> s0 -m-> s0 -a-> s0 -i-> s0 -n-> s0 -m-> s1

------------------------------------------------------------------------

Equivalence of Automata

Two automata A and B are said to be equivalent if both accept exactly
the same set of input strings.  Formally:

  * if there is a path from the start state of A to a final state of A
    labeled a1a2..ak, then there is a path from the start state of B
    to a final state of B labeled a1a2..ak, and

  * if there is a path from the start state of B to a final state of B
    labeled b1b2..bj, then there is a path from the start state of A
    to a final state of A labeled b1b2..bj.

------------------------------------------------------------------------

Equivalence of Deterministic and Nondeterministic Automata

To show that there is a corresponding DFA for every NDFA, we will show
how to remove nondeterminism from an NDFA, and thereby produce a DFA
that accepts the same strings as the NDFA.  (And of course every DFA
*is* an NDFA, by definition.)

The basic technique is referred to as subset construction, because
each state in the DFA corresponds to some subset of states of the
NDFA.  The idea is this: as we trace the set of possible paths through
an NDFA, we must record all possible states that we could be in as a
result of the input seen so far.
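Here is a minimal sketch of that bookkeeping in code -- the "penny"
simulation described above -- for the "main" NDFA, using one bit per
state to record where the pennies are.  (The test string is made up;
this is just an illustration, not the way a real tool would do it.)

    #include <stdio.h>

    int main(void) {
        const char *input = "xmmainm";      /* hypothetical test input          */
        const char *word  = "main";
        unsigned pennies  = 1u << 0;        /* bit i set => a penny on state si */

        for (const char *p = input; *p; p++) {
            unsigned next = 0;
            if (pennies & (1u << 0)) next |= 1u << 0;     /* s0 -any-> s0       */
            for (int s = 0; s < 4; s++)                   /* s_i -word[i]-> s_i+1 */
                if ((pennies & (1u << s)) && *p == word[s])
                    next |= 1u << (s + 1);
            if (pennies & (1u << 4)) next |= 1u << 4;     /* s4 -any-> s4       */
            pennies = next;                 /* replace all pennies at once      */
        }
        printf(pennies & (1u << 4) ? "accept\n" : "reject\n");
        return 0;
    }

Each step discards the old set of pennies and computes a new one,
exactly as in the trace of "mmainm" above.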
We create a DFA that encodes, in a single DFA state, the set of states
the NDFA could be in.  Put another way, each state of the DFA
represents the *set* of states that would have pennies on them in the
direct NDFA simulation.

------------------------------------------------------------------------

Subset Construction for NDFA

To create a DFA that accepts the same strings as a given NDFA, we
create a state to represent each combination of states that the NDFA
can enter.  From the previous example (the 5-state NDFA that
recognizes input strings containing the word "main"), we can create a
corresponding DFA with up to 2^5 = 32 states, whose states correspond
to all possible combinations of states in the NDFA:

    {}, {s0}, {s1}, {s2}, {s3}, {s4},
    {s0,s1}, {s0,s2}, {s0,s3}, {s0,s4}, {s1,s2}, {s1,s3}, {s1,s4},
    {s2,s3}, {s2,s4}, {s3,s4},
    {s0,s1,s2}, {s0,s1,s3}, {s0,s1,s4}, {s0,s2,s3}, {s0,s2,s4},
    {s0,s3,s4}, {s1,s2,s3}, {s1,s2,s4}, {s1,s3,s4}, {s2,s3,s4},
    {s0,s1,s2,s3}, {s0,s1,s2,s4}, {s0,s1,s3,s4}, {s0,s2,s3,s4},
    {s1,s2,s3,s4}, {s0,s1,s2,s3,s4}

Note that many of these states won't be needed in our DFA, because
there is no way to enter that combination of states in the NDFA.  In
some cases, however, we might need all of these states in the DFA to
capture all possible combinations of states in the NDFA.  The "empty"
DFA state handles the case where the NDFA gets completely stuck --
where it has no transition on the input symbol from any of the states
it might be in.

------------------------------------------------------------------------

A DFA accepting the same strings as our example NDFA has the following
transitions:

    {s0}    -m->        {s0,s1}
    {s0}    -not m->    {s0}

    {s0,s1} -m->        {s0,s1}
    {s0,s1} -a->        {s0,s2}
    {s0,s1} -not m,a->  {s0}

    {s0,s2} -m->        {s0,s1}
    {s0,s2} -i->        {s0,s3}
    {s0,s2} -not m,i->  {s0}

    {s0,s3} -m->        {s0,s1}
    {s0,s3} -n->        {s0,s4}
    {s0,s3} -not m,n->  {s0}

There are also a bunch of less interesting transitions after we've
already seen the word "main":

    {s0,s4}    -m->     {s0,s1,s4}
    {s0,s1,s4} -a->     {s0,s2,s4}
    {s0,s2,s4} -i->     {s0,s3,s4}
    {s0,s3,s4} -n->     {s0,s4}

The start state is {s0}, and the final states are {s0,s4}, {s0,s1,s4},
{s0,s2,s4}, and {s0,s3,s4} -- the ones containing a final state of the
NDFA.

This is an 8-state DFA.  We know, of course, that a 5-state DFA exists
(it was our original example), and in this case we can see by
inspection that the four final states really "ought" to be combined.
We'll see later how to do this automatically.

By coincidence, the minimal DFA in this case has the same number of
states (5) as the original NDFA.  THIS IS NOT ALWAYS TRUE.  For
example, if we start with the "obvious" 5-state NDFA to accept strings
that contain at least two 'a's or at least two 'b's or at least two
'c's, the subset construction gives us a DFA with 15 states, and the
minimal DFA has 9 states.  (I RECOMMEND WORKING THIS OUT AS A
NOT-TO-BE-TURNED-IN HOMEWORK EXERCISE.)

A much messier example is given in the book.  Consider an NDFA that
accepts all strings that cannot be created out of the letters in the
word "washington".  It's a relatively simple 19-state machine (fig.
10.14, with a few corrections: (a) the various Lambda-x transitions
can simply be Lambda; (b) the final states need self-loops on Lambda
to consume the whole input).  The minimal DFA has nearly 5000 states.
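For concreteness, here is a sketch of the subset construction itself,
specialized to the "main" NDFA.  State sets are represented as 5-bit
masks (bit i = state si), and the alphabet is collapsed into the five
classes m, a, i, n, and "other" -- an implementation shortcut, not
part of the construction.  Running it discovers exactly the 8
reachable subsets listed above.

    #include <stdio.h>

    #define NSTATES 5

    /* one step of the "main" NDFA from a set of states on one symbol
       class (classes 0..4 stand for m, a, i, n, and any other letter) */
    static unsigned nfa_step(unsigned set, int cls) {
        unsigned next = 0;
        if (set & (1u << 0))               next |= 1u << 0;   /* s0 -any-> s0 */
        if ((set & (1u << 0)) && cls == 0) next |= 1u << 1;   /* s0 -m->   s1 */
        if ((set & (1u << 1)) && cls == 1) next |= 1u << 2;   /* s1 -a->   s2 */
        if ((set & (1u << 2)) && cls == 2) next |= 1u << 3;   /* s2 -i->   s3 */
        if ((set & (1u << 3)) && cls == 3) next |= 1u << 4;   /* s3 -n->   s4 */
        if (set & (1u << 4))               next |= 1u << 4;   /* s4 -any-> s4 */
        return next;
    }

    int main(void) {
        static const char classes[] = "main?";     /* '?' = any other letter */
        unsigned subsets[1u << NSTATES];   /* DFA states = subsets found so far */
        int nsubsets = 0, work = 0;

        subsets[nsubsets++] = 1u << 0;     /* DFA start state is the subset {s0} */
        while (work < nsubsets) {          /* worklist: expand each subset once  */
            unsigned cur = subsets[work++];
            for (int cls = 0; cls < 5; cls++) {
                unsigned next = nfa_step(cur, cls);
                int known = 0;
                for (int i = 0; i < nsubsets; i++)
                    if (subsets[i] == next) known = 1;
                if (!known) subsets[nsubsets++] = next;
                printf("%02x -%c-> %02x\n", cur, classes[cls], next);
            }
        }
        printf("%d reachable DFA states; accepting = subsets containing s4\n",
               nsubsets);
        return 0;
    }

A real implementation would use a hash table on the subsets instead of
a linear search, but the idea is the same.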
------------------------------------------------------------------------

Limitations of Finite Automata

The defining characteristic of FA is that they have only a finite
number of states.  Hence a finite automaton can only "count" (that is,
maintain a counter, where different states correspond to different
values of the counter) a finite number of input scenarios.

There is no finite automaton that recognizes these sets of strings:

  * The set of binary strings consisting of an equal number of 1's and
    0's

  * The set of strings over '(' and ')' that have "balanced"
    parentheses

The "pumping lemma" can be used to prove that no such FA exists for
these examples.  Take CSC 280 and you'll see.

========================================================================

Regular Expressions

Just as finite automata are used to recognize patterns of strings,
regular expressions are used to generate (describe) patterns of
strings.

A regular expression is an algebraic formula whose value is a pattern
describing a set of strings, called the language of the expression.

Operands in a regular expression can be:

  * symbols from the alphabet over which the regular expression is
    defined
  * variables, whose values are any pattern defined by a regular
    expression
  * epsilon, which denotes the empty string containing no symbols
  * NULL, which denotes the empty set of strings

If R is a regular expression, we use L(R) to denote the *language*
(set of strings) described (generated) by R.

Operators used in regular expressions include:

  * Union: If R1 and R2 are regular expressions, then R1 | R2 (also
    written as R1 U R2 or R1 + R2) is also a regular expression.
    L(R1|R2) = L(R1) U L(R2).  Union is also sometimes called
    alternation.

  * Concatenation: If R1 and R2 are regular expressions, then R1R2
    (also written as R1.R2) is also a regular expression.
    L(R1R2) = L(R1) concatenated with L(R2).

  * Kleene closure: If R1 is a regular expression, then R1* (the
    Kleene closure of R1) is also a regular expression.
    L(R1*) = epsilon U L(R1) U L(R1R1) U L(R1R1R1) U ...

By convention, in the absence of parentheses, closure has the highest
precedence, followed by concatenation, followed by union.

------------------------------------------------------------------------

Examples

The set of strings over {0,1} that end in 3 consecutive 1's:

    (0 | 1)* 111

The set of strings over {0,1} that have at least one 1:

    0* 1 (0 | 1)*

The set of strings over {0,1} that have at most one 1:

    0* | 0* 1 0*

The set of strings over {A..Z,a..z} that contain the word "main":

    Let <letter> = A | B | ... | Z | a | b | ... | z
    <letter>* main <letter>*

The set of strings over {A..Z,a..z} that contain at least 3 x's:

    <letter>* x <letter>* x <letter>* x <letter>*

The set of identifiers in Pascal:

    Let <letter> = A | B | ... | Z | a | b | ... | z
    Let <digit>  = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    <letter> ( <letter> | <digit> )*

The set of real numbers in Pascal.  Rules:

  * must have a fractional part, an exponent, or both (otherwise it's
    an integer)
  * if it has a fractional part, there must be at least one digit on
    each side of the decimal point
  * in an exponent the sign is optional, but there must be at least
    one digit

    Let <digit>        = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    Let <digit_string> = <digit> <digit>*
    Let <exp>          = 'E'
    Let <sign>         = '+' | '-' | epsilon
    Let <dot>          = '.'
    <digit_string> ( <dot> <digit_string>
                   | <exp> <sign> <digit_string>
                   | <dot> <digit_string> <exp> <sign> <digit_string> )

Remember: abbreviations like <letter> and <digit> are for convenience
ONLY; they do not change the power of the notation.  To see this, just
expand them all out in-line (they aren't allowed to be recursive).
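As an aside, patterns like these are easy to experiment with using the
POSIX regular-expression library that underlies the Unix tools
discussed in the next section.  A minimal sketch (the test strings are
made up) that checks the "at least one 1" pattern:

    #include <regex.h>
    #include <stdio.h>

    int main(void) {
        regex_t re;
        const char *tests[] = { "000", "0100", "111" };

        /* anchored extended RE for "strings over {0,1} with at least one 1" */
        if (regcomp(&re, "^0*1(0|1)*$", REG_EXTENDED | REG_NOSUB) != 0)
            return 1;
        for (int i = 0; i < 3; i++)
            printf("%s: %s\n", tests[i],
                   regexec(&re, tests[i], 0, NULL, 0) == 0 ? "match" : "no match");
        regfree(&re);
        return 0;
    }

regcomp compiles the pattern once; regexec then runs the compiled
pattern over each string.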
------------------------------------------------------------------------
(lecture 10-4-2001 finished here)
------------------------------------------------------------------------

Unix Operator Extensions

Regular expressions are used frequently in Unix:

  * in shell command lines
  * within text editors
  * in the context of pattern matching programs such as grep and egrep

To facilitate construction of regular expressions, Unix recognizes
additional operators.  These operators can be defined in terms of the
operators given above; they represent a notational convenience only.

  * character classes: '[' ']'
  * start of a line: '^'
  * end of a line: '$'
  * wildcard matching any character except newline: '.'
  * optional instance: R? = epsilon | R
  * one or more instances: R+ = RR*

NB: the notation is NOT the same in all tools.  For example, in most
shells '.' is just a dot, and '?' means "any one (non-newline)
character".

------------------------------------------------------------------------

Equivalence of Regular Expressions and Finite Automata

Regular expressions and finite automata have equivalent expressive
power:

  * For every regular expression R, there is a corresponding FA that
    accepts the set of strings described by R.
  * For every FA A, there is a corresponding regular expression that
    describes the set of strings accepted by A.

The proof is in two parts:

  1. an algorithm that, given a regular expression R, produces an FA A
     such that L(A) == L(R)
  2. an algorithm that, given an FA A, produces a regular expression R
     such that L(R) == L(A)

The first part (construction of an FA from an RE) is what tools like
lex, emacs, and grep do.  The construction relies on epsilon
transitions, but these are just a notational convenience: for every FA
with epsilon transitions there is a corresponding FA without them.  In
practice we can deal with epsilon transitions directly in the NDFA to
DFA construction.

------------------------------------------------------------------------

Constructing an FA from an RE

We begin by showing how to construct an FA for the operands in a
regular expression.

  * If the operand is a symbol c, then our FA has two states, s0 (the
    start state) and sF (the final, accepting state), and a transition
    from s0 to sF with label c.

  * If the operand is epsilon, then our FA has two states, s0 (the
    start state) and sF (the final, accepting state), and an epsilon
    transition from s0 to sF.

  * If the operand is NULL, then our FA has two states, s0 (the start
    state) and sF (the final, accepting state), and no transitions.

Given FA for R1 and R2, we now show how to build an FA for R1R2,
R1|R2, and R1* (a code sketch of these three rules follows the list).
Let A (with start state a0 and final state aF) be the machine
accepting L(R1), and let B (with start state b0 and final state bF) be
the machine accepting L(R2).

  * The machine C accepting L(R1R2) includes A and B, with start state
    a0, final state bF, and an epsilon transition from aF to b0.  If
    we note that there is no transition out of aF and no transition
    into b0, we can eliminate the epsilon transition and simply merge
    aF and b0.

  * The machine C accepting L(R1|R2) includes A and B, with a new
    start state c0, a new final state cF, and epsilon transitions from
    c0 to a0 and b0, and from aF and bF to cF.

  * The machine C accepting L(R1*) includes A, with a new start state
    c0, a new final state cF, and epsilon transitions from c0 to a0
    and cF, from aF to a0, and from aF to cF.
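Here is a minimal sketch of these three rules in code.  It relies on
the fact that in this construction every fragment has exactly one
start and one final state and no state ever needs more than two
outgoing transitions; for simplicity it always uses the epsilon
transition for concatenation rather than merging aF and b0.  As an
illustration, the main program builds a machine for (1*01*0)*1*, the
expression taken up in the next section -- the result has more states
than the hand-drawn NDFA in PLP, but accepts the same language.

    #include <stdio.h>

    #define MAXSTATES 64
    #define EPS '\0'                     /* '\0' stands for an epsilon label */

    /* in this construction every state has at most two outgoing transitions */
    static struct { char label; int to; } edges[MAXSTATES][2];
    static int nedges[MAXSTATES], nstates;

    static int new_state(void) { return nstates++; }
    static void add_edge(int from, char label, int to) {
        edges[from][nedges[from]].label = label;
        edges[from][nedges[from]].to = to;
        nedges[from]++;
    }

    /* an NFA fragment with one start state and one final (accepting) state */
    struct frag { int start, final; };

    static struct frag symbol(char c) {          /* operand: a single symbol */
        struct frag f = { new_state(), new_state() };
        add_edge(f.start, c, f.final);
        return f;
    }
    static struct frag concat(struct frag a, struct frag b) {    /* R1 R2   */
        struct frag f = { a.start, b.final };
        add_edge(a.final, EPS, b.start);
        return f;
    }
    static struct frag alt(struct frag a, struct frag b) {       /* R1 | R2 */
        struct frag f = { new_state(), new_state() };
        add_edge(f.start, EPS, a.start);  add_edge(f.start, EPS, b.start);
        add_edge(a.final, EPS, f.final);  add_edge(b.final, EPS, f.final);
        return f;
    }
    static struct frag star(struct frag a) {                     /* R1*     */
        struct frag f = { new_state(), new_state() };
        add_edge(f.start, EPS, a.start);  add_edge(f.start, EPS, f.final);
        add_edge(a.final, EPS, a.start);  add_edge(a.final, EPS, f.final);
        return f;
    }

    int main(void) {
        /* build the machine for (1*01*0)*1* and print its transitions */
        struct frag inner = concat(concat(concat(star(symbol('1')), symbol('0')),
                                          star(symbol('1'))), symbol('0'));
        struct frag whole = concat(star(inner), star(symbol('1')));
        printf("start = %d, final = %d\n", whole.start, whole.final);
        for (int i = 0; i < nstates; i++)
            for (int j = 0; j < nedges[i]; j++)
                printf("  %d -%c-> %d\n", i,
                       edges[i][j].label ? edges[i][j].label : 'e',
                       edges[i][j].to);
        return 0;
    }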
------------------------------------------------------------------------

Example: (1*01*0)*1* is an RE describing all strings of 1s and 0s in
which the number of zeros is even.  The corresponding NDFA can be
found at the bottom of p. 90 in Programming Language Pragmatics (the
254 text).

------------------------------------------------------------------------
(lecture 10-9-2001 finished here)
------------------------------------------------------------------------

If we apply the subset construction to the NDFA, we get the DFA at the
top of p. 91.  This DFA has 5 states.  Interestingly, it's easy to
show that an equivalent two-state DFA exists.

It turns out that for any regular language there exists a unique
*minimal* DFA, and there's a straightforward algorithm to construct
this minimal DFA from any equivalent DFA.  First we add a dead state,
if necessary, so that every state has an outgoing transition on every
input symbol.  The construction then works inductively.  Initially we
place the states of the (not necessarily minimal) DFA into two
equivalence classes: final states and non-final states.  We then
repeatedly search for an equivalence class C and an input symbol a
such that, when given a as input, the states in C make transitions to
states in k > 1 different current equivalence classes.  We then
partition C into k classes in such a way that all states in a given
new class would move to a member of the same old class on a.  When we
are unable to find a class to partition in this fashion, we are done.

In our example, the original placement puts states A, B, and E in one
class (final states) and C and D in another.  In all cases, a 1 leaves
us in the current class, while a 0 takes us to the other class.
Consequently, no class requires partitioning, and we are left with a
two-state machine.

------------------------------------------------------------------------

Constructing an RE from an FA

To construct a regular expression from a DFA (and thereby complete the
proof that regular expressions and finite automata have the same
expressive power), we replace each state in the DFA, one by one, with
a corresponding regular expression.  Just as we built a small FA for
each operator and operand in a regular expression, we will now build a
small regular expression for each state in the DFA.

The basic idea is to eliminate the states of the FA one by one,
replacing each state with a regular expression that describes the
portion of the input string that labels the transitions into and out
of the state being eliminated.

------------------------------------------------------------------------

Algorithm for Constructing an RE from an FA

Given a DFA M, we construct a regular expression R such that
L(M) == L(R), using a dynamic programming algorithm.

Let Rij[t] be a regular expression describing all the ways to get from
state i to state j (i.e., all the labels on paths that go from i to j)
without going through any intermediate state numbered higher than t.
If states are numbered starting with 1, Rij[0] describes the ways to
go from i to j without going through *any* intermediate states.

Initially, Rii[0] is the alternation of epsilon and all symbols on the
self-loop, if any, from i to i.  Rij[0], for i != j, is the
alternation of all symbols on the arc from i to j, or the empty RE
(NULL) if there is no such arc.

What we want for the whole expression is R1j[n] | R1k[n] | R1l[n] ...,
where 1 is the start state, n is the number of states, and
{j, k, l, ...} is the set of final states.

The inductive step notes that

    Rij[k] = Rij[k-1] | Rik[k-1] Rkk[k-1]* Rkj[k-1]

That is, every path from i to j that uses no intermediate state higher
than k either avoids state k entirely, or breaks into a segment from i
to k, zero or more loops from k back to k, and a segment from k to j,
none of which passes through a state higher than k-1.
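Here is a minimal sketch of this dynamic program in code, specialized
to the two-state "even number of zeros" DFA worked out in the example
that follows.  It builds the Rij[k] as strings, with "e" standing for
epsilon, and makes no attempt to simplify the results (or to free
memory).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define N 2

    /* R_ij[k] = R_ij[k-1] | R_ik[k-1] (R_kk[k-1])* R_kj[k-1], as a string */
    static char *rule(const char *rij, const char *rik,
                      const char *rkk, const char *rkj) {
        size_t n = strlen(rij) + strlen(rik) + strlen(rkk) + strlen(rkj) + 16;
        char *s = malloc(n);
        sprintf(s, "%s | (%s)(%s)*(%s)", rij, rik, rkk, rkj);
        return s;
    }

    int main(void) {
        /* R_ij[0] for the DFA: s1 -1-> s1, s1 -0-> s2, s2 -1-> s2, s2 -0-> s1 */
        const char *R0[N][N] = { { "1|e", "0" }, { "0", "1|e" } };
        const char *R1[N][N], *R2[N][N];

        for (int i = 0; i < N; i++)              /* eliminate state s1 (k = 1) */
            for (int j = 0; j < N; j++)
                R1[i][j] = rule(R0[i][j], R0[i][0], R0[0][0], R0[0][j]);
        for (int i = 0; i < N; i++)              /* eliminate state s2 (k = 2) */
            for (int j = 0; j < N; j++)
                R2[i][j] = rule(R1[i][j], R1[i][1], R1[1][1], R1[1][j]);

        printf("R11[2] = %s\n", R2[0][0]);       /* start state s1, final state s1 */
        return 0;
    }

Its output for R11[2] matches the hand computation below, up to
parenthesization and spacing.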
Example (pp. 91-92 in PLP): Start with a two-state DFA that accepts
all binary strings with an even number of zeros:

    S = {s1, s2}
    start state = s1
    final states = {s1}
    T = {(s1, 1, s1), (s1, 0, s2), (s2, 1, s2), (s2, 0, s1)}

What we want is R11[2].  (In the following, e stands for epsilon.)

    R11[0] = 1|e
    R12[0] = 0
    R21[0] = 0
    R22[0] = 1|e

    R11[1] = (1|e) | (1|e) (1|e)* (1|e)
    R12[1] = 0 | (1|e) (1|e)* 0
    R21[1] = 0 | 0 (1|e)* (1|e)
    R22[1] = (1|e) | 0 (1|e)* 0

    R11[2] = ((1|e) | (1|e) (1|e)* (1|e))
             | (0 | (1|e) (1|e)* 0) ((1|e) | 0 (1|e)* 0)* (0 | 0 (1|e)* (1|e))

------------------------------------------------------------------------

Summary of Results

We have shown that all four of the following formalisms for expressing
languages of strings are equivalent:

  * deterministic finite automata
  * nondeterministic finite automata
  * nondeterministic finite automata with epsilon transitions
  * regular expressions