Notes for CSC 162, 2 Feb. 2010 ff

Chapter 3 and Sections 7.3 and 7.2 of the text, plus some extra material

First project due Thurs Feb. 4, noon.
Second project should be available on-line that afternoon.

========================================
Recursion

"three laws of recursion"
    have to have a base case
    have to change state and move toward base case
    have to call self (directly or indirectly)

Cf. proof by induction

Compare iterative listsum (Listing 3.1):

    def listsum_iter(l):
        sum = 0
        for i in l:
            sum = sum + i
        return sum

with naive recursive version (Listing 3.2):

    def listsum_rec1(l):
        if len(l) == 1:
            return l[0]
        else:
            return l[0] + listsum_rec1(l[1:])

This is inefficient (lots of copying), but Python runs out of stack
space to keep track of the calls before the cost gets out of hand.

Note that the two implementations actually sum the elements in
opposite order.  We could make them do it in the same order like this:

    def listsum_rec2(sum, l):
        if len(l) == 0:
            return sum
        return listsum_rec2(sum + l[0], l[1:])

That has an ugly extra parameter, but we can hide it:

    def helper(sum, l):
        if len(l) == 0:
            return sum
        return helper(sum + l[0], l[1:])

    def listsum_rec3(l):
        return helper(0, l)

We can even hide the helper inside listsum:

    def listsum_rec4(l):
        def helper(sum, l):
            if len(l) == 0:
                return sum
            return helper(sum + l[0], l[1:])
        return helper(0, l)

----------------------------------------
Listing 3.3 (convert number to string in given base) is a
reformulation of Listing 2.6.  Most people would agree it's quite a
bit more elegant.

Here's the original version (it assumes the Stack class from Chapter
3 of the text):

    def toBase_iter(num, base):
        assert base <= 16
        digits = "0123456789ABCDEF"
        remstack = Stack()
        while num > 0:
            rem = num % base
            remstack.push(rem)
            num = num / base
        newString = ""
        while not remstack.isEmpty():
            newString += digits[remstack.pop()]
        return newString

And here's the recursive one (note that, as written, it returns the
empty string when num is 0):

    def toBase_rec(num, base):
        assert base <= 16
        digits = "0123456789ABCDEF"
        if num == 0:
            return ""
        else:
            return toBase_rec(num / base, base) + digits[num % base]

The comparison suggests that recursion has a natural connection to
stacks, and indeed it has.

----------------------------------------
Sometimes one has to be careful with recursion.  In addition to
limits on stack depth and the cost of argument copying, some naive
recursive algorithms are inherently expensive.

Iterative Fibonacci function, O(n):

    def fib_iter(n):
        a = 0
        b = 1
        i = 0
        while i < n:
            t = a + b
            a = b
            b = t
            i += 1
        return b

Naive recursive version, O(2**n):

<< draw tree >>

    def fib_rec1(n):
        if n < 2:
            return 1
        return fib_rec1(n-1) + fib_rec1(n-2)

Good recursive version, O(n):

    def fib_rec2(n):
        def helper(a, b, i):
            if i == n:
                return b
            return helper(b, a + b, i + 1)
        return helper(0, 1, 0)

Notice that the latter basically captures the iterative version.  If
you take CSC 254 you'll learn that recursion doesn't have to be any
more expensive than iteration (though it is in Python).  And it's
definitely more expressive: iteration can't capture recursion in the
general case without an explicit stack.
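To make that last point concrete, here's a small sketch of the
explicit-stack trick (my own illustration, not from the text; the
name nested_sum_stack is made up).  Summing a list whose elements may
themselves be lists is naturally recursive, but a loop can do it by
keeping its own stack of work still to be done:

    def nested_sum_stack(l):
        total = 0
        work = [l]                    # the explicit stack
        while work:
            item = work.pop()
            if isinstance(item, list):
                work.extend(item)     # defer the sublist's elements
            else:
                total += item
        return total

    # nested_sum_stack([1, [2, [3, 4]], 5])  =>  15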
----------------------------------------
Stack frames within Python
    go back to the toBase example
    note that each call introduces a new scope

----------------------------------------
Towers of Hanoi

    def moveDisk(fp, tp):
        print "moving disk from " + str(fp) + " to " + str(tp)

    def Hanoi(height, fromPole, toPole, withPole):
        if height >= 1:
            Hanoi(height-1, fromPole, withPole, toPole)
            moveDisk(fromPole, toPole)
            Hanoi(height-1, withPole, toPole, fromPole)

Interesting secret: there's also an easy iterative solution, but it
isn't anywhere near as intuitive.
(1) On every even-numbered move (starting with zero), move the little
    disk one pole "clockwise".  If the total number of disks is even,
    the first move should be from 'fromPole' to 'withPole'; if the
    total number of disks is odd, the first move should be from
    'fromPole' to 'toPole'.
(2) On every odd-numbered move, make the only legal move not
    involving the smallest disk (there can be only one).

    def Hanoi_iter(height, fromPole, toPole, withPole):
        if height % 2 == 0:
            poles = [fromPole, withPole, toPole]
        else:
            poles = [fromPole, toPole, withPole]
        # height+1 serves as a sentinel "disk" at the bottom of every
        # pole: it is bigger than any real disk, so the comparison
        # below never sees an empty list and never moves a sentinel
        stacks = [[height+1] + range(height, 0, -1),
                  [height+1], [height+1]]
        for i in range(2**height-1):
            if i % 2 == 0:
                # move little disk
                fd = (i/2)%3
                td = (i/2+1)%3
            else:
                # move other disk
                fd = (i/2)%3
                td = (i/2+2)%3
                if stacks[fd][len(stacks[fd])-1] > stacks[td][len(stacks[td])-1]:
                    td = (i/2)%3
                    fd = (i/2+2)%3
            stacks[td].append(stacks[fd].pop())
            moveDisk(poles[fd], poles[td])

Read the Sierpinski triangle example in the book and make sure you
understand it.

========================================
Backtracking Search

[ 4 Feb.:
    how did the project go?  how is the class going?
    who is going to lab?  is it useful?  are the TAs helpful?
    how about workshop?
    for help:
        visit me
        visit RongRong (during office hours)
        post to the Blackboard discussion group
        go to lab
        ask questions in class
        (and of course, read the text)
]

"Die Hard" problem (programming exercise 3.8)

    BIG = 4
    SMALL = 3

    def moves(pair):
        big, small = pair
        big2small = min(big, SMALL-small)    # amount pourable big -> small
        small2big = min(small, BIG-big)      # amount pourable small -> big
        return [(BIG, small),                          # fill big jug
                (big, SMALL),                          # fill small jug
                (big-big2small, small+big2small),      # pour big jug into small jug
                (big+small2big, small-small2big),      # pour small jug into big jug
                (0, small),                            # empty big jug
                (big, 0)]                              # empty small jug

    class GotIt(Exception):
        def __init__(self, p):
            self.path = p

    def solveHelper(pathSoFar, goal):
        currentState = pathSoFar[len(pathSoFar)-1]
        for newState in moves(currentState):
            if newState[0] == goal:
                raise GotIt(pathSoFar + [newState])
            elif newState in pathSoFar:
                pass    # already been here
            else:
                solveHelper(pathSoFar + [newState], goal)

    def solve(n):
        try:
            solveHelper([(0, 0)], n)
        except GotIt as e:
            print e.path

Note use of a nontrivial exception, with fields we can set and get.

Discussion:
    - generalize to arbitrary b & s?  (one sketch follows below)
    - what errors should we be checking for?
    - do we have to use an exception?
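On that first discussion point, here is one possible generalization
(a sketch of mine, not from the text or the exercise; the name
make_moves is made up): parameterize the jug capacities and build the
moves function as a closure over them.

    def make_moves(bigCap, smallCap):
        def moves(pair):
            big, small = pair
            big2small = min(big, smallCap-small)   # pourable big -> small
            small2big = min(small, bigCap-big)     # pourable small -> big
            return [(bigCap, small),                       # fill big jug
                    (big, smallCap),                       # fill small jug
                    (big-big2small, small+big2small),      # pour big into small
                    (big+small2big, small-small2big),      # pour small into big
                    (0, small),                            # empty big jug
                    (big, 0)]                              # empty small jug
        return moves

    # e.g.:  moves = make_moves(4, 3)   recovers the original problem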
========================================
Introduce the next programming assignment; look at the code.

Levels of abstraction:
    rows of pixels
        graphics package takes care of this for you
    polygons, delimited by points
    tiles, with neighbors
        I provide those last two

Note: calling out to an external program in Java
    have to have Java available on your machine
        can download from java.sun.com (or from Apple if you have a Mac)
    may take arbitrarily long
        helps to start with tiles that have lots of neighbors

========================================
Memoization

When we were considering the "find _a_ solution" version of the jugs
of water problem, someone noticed that we sometimes pursue an
unfruitful path multiple times, because it occurs in different
branches of the tree.  We can solve this problem by remembering not
only the states we've seen on the current path (which we need in
order to print out a solution), but also the full set of states we've
seen in the entire tree traversal so far (since any of these that are
_not_ on the current path are known to be unfruitful).  In effect, we
create a "memo to ourselves" that makes note of what we've already
computed, so we don't have to do it again.  The technique is then
called "memoization".

    def solveHelper(pathSoFar, seenOnAnyPath, goal):
        currentState = pathSoFar[len(pathSoFar)-1]
        for newState in moves(currentState):
            if newState[0] == goal:
                raise GotIt(pathSoFar + [newState])
            elif newState in seenOnAnyPath:
                pass    # already been here
            else:
                seenOnAnyPath.append(newState)    # modifies list in place
                solveHelper(pathSoFar + [newState], seenOnAnyPath, goal)

    def solve(n):
        try:
            memo = [(0, 0)]
            solveHelper([(0, 0)], memo, n)
        except GotIt as e:
            print e.path
            # print memo

Note the ability to use /import/, which distinguishes between
jugs.solve and jugs2.solve.

Also note: In this particular problem, all states turn out to be
reachable on the leftmost spine, so memoization doesn't actually help
us.  In other problems, however, it definitely does.  The
making-change example in the book is one.
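For a taste of that example, here is a quick sketch in the same
spirit -- my own reconstruction from memory, not the book's code (the
book builds it up much more carefully).  It assumes the coin system
includes a 1-unit coin, so every amount is reachable:

    def change_memo(coins, amount, memo):
        # fewest coins to make 'amount'; memo maps amount -> answer
        if amount == 0:
            return 0
        if amount in memo:
            return memo[amount]
        best = amount                  # all pennies is always possible
        for c in coins:
            if c <= amount:
                best = min(best, 1 + change_memo(coins, amount - c, memo))
        memo[amount] = best
        return best

    # e.g.:  change_memo([1, 5, 10, 25], 63, {})  =>  6

Without the memo, the same amounts get recomputed over and over in
different branches, just as in the jugs problem.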
----------------------------------------
Now suppose we want ALL solutions.  Here memoization is NOT a good
idea, because it will cause us to abandon alternative paths that
happen to have a state in common.

    def solveHelper(pathSoFar, goal, solutions):
        currentState = pathSoFar[len(pathSoFar)-1]
        for newState in moves(currentState):
            if newState[0] == goal:
                solutions.append(pathSoFar + [newState])    # modify in place
            elif newState in pathSoFar:
                pass    # already been here
            else:
                solveHelper(pathSoFar + [newState], goal, solutions)

    def solve(n):
        solutions = []
        solveHelper([(0, 0)], n, solutions)
        print solutions

Turns out there are quite a few solutions, many of which have
nontrivial common suffixes (output reformatted, one solution per
line):

    [(0, 0), (4, 0), (4, 3), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (4, 0), (1, 3), (4, 3), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (4, 0), (1, 3), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (4, 3), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
    [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (0, 3), (4, 3), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
    [(0, 0), (0, 3), (3, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
    [(0, 0), (0, 3), (3, 0), (3, 3), (4, 3), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
    [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (4, 3), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
    [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
    [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
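If all we wanted from that pile was a shortest solution, one
throwaway line would do it (my addition, not in the original notes),
though as the next section shows we can do better than generating
every solution first:

    shortest = min(solutions, key=len)    # min() with key= needs Python 2.5+
    print shortest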
========================================
Dynamic Programming

Now suppose we want to know how to solve the problem in the _fewest_
steps.  For this we can't just raise an exception when a solution is
found; we have to explore until we know there are no shorter
solutions.  As it turns out, we don't have to explore the whole tree,
if we explore it "breadth first", from the top down.  Turns out this
solution doesn't even employ recursion.

The key is _Dynamic Programming_.  Like backtracking search, it's a
general-purpose technique that's useful for a wide variety of
problems.  The key idea is to build a table of solutions, starting
with simple ones and building up to more complicated ones.  It's
ideal for problems where solutions to complicated versions are built
from solutions to simple versions, and the simple ones are repeatedly
useful.  The key difference from naive memoization is that instead of
simply remembering any place we happen to have been, we deliberately
and systematically explore ALL possible places, simplest first.

    class GotIt(Exception):
        pass

    def solve(goal):
        knownStates = [(0, 0)]      # list of states I know how to reach
        howToGetThere = [None]      # for corresponding entries in knownStates,
                                    # the index I came from to get there
        lastRow = 0     # index of first state requiring the maximum number of
                        # moves considered so far -- left end of row of tree
        try:
            while True:
                # each time around this loop we are considering paths that
                # require one more move than before
                nextRow = len(knownStates)
                for i in range(lastRow, nextRow):
                    # all states requiring the previous number of moves
                    # to get here
                    for newState in moves(knownStates[i]):
                        # all states I can reach from current state
                        if newState in knownStates:
                            pass    # already been here
                        else:
                            knownStates.append(newState)
                            howToGetThere.append(i)
                            if newState[0] == goal:
                                raise GotIt    # ~2-level break
                lastRow = nextRow
        except GotIt:
            i = len(knownStates)-1
            rtn = []
            while True:
                rtn.append(knownStates[i])
                i = howToGetThere[i]
                if i is None:
                    break
            rtn.reverse()   # in place
            # print knownStates
            # print howToGetThere
            return rtn

The book describes a similar problem that involves making change in a
monetary system where the "greedy" algorithm doesn't necessarily
work.

========================================
Linked Lists

<< see SLlist.py, in this folder >>

Advantage: constant-time insert/remove at head.
If we maintain a tail pointer, constant-time insert there, too (but
not remove) (code not shown in book or accompanying file).

Can be used for much more efficient recursive algorithms than the
built-in Python lists.  Need first() and rest() routines:

    class UnorderedList:
        ...
        def first(self):
            return self.head.getData()
        def rest(self):
            rtn = UnorderedList()
            rtn.head = self.head.getNext()
            return rtn

IMPORTANT: this relies on a convention where lists are allowed to
share suffixes.

Aside: remember the difference between methods that modify an
existing object and operators/functions that create a new object:

    >>> l1 = [1, 2, 3]
    >>> l2 = l1 + [4, 5]
    >>> l1
    [1, 2, 3]
    >>> l2
    [1, 2, 3, 4, 5]
    >>> l1.extend([4, 5])
    >>> l1
    [1, 2, 3, 4, 5]

With linked lists:

    >>> L1 = UnorderedList()
    >>> L1.add(4)
    >>> L1.add(3)
    >>> L1.add(2)
    >>> L1.add(1)
    >>> print L1
    1 2 3 4
    >>> L2 = L1.rest()
    >>> print L2
    2 3 4
    >>> L1.head.next.next.data = 7
    >>> print L1
    1 2 7 4
    >>> print L2
    2 7 4

Revisit Listing 3.2 (listsum):

    def listsum_rec1(l):
        if len(l) == 1:
            return l[0]
        else:
            return l[0] + listsum_rec1(l[1:])

With linked lists, this becomes

    def SLlistsum(l):
        if l.isEmpty():
            return 0
        return l.first() + SLlistsum(l.rest())

which is efficient (linear time instead of quadratic)!

NB: both versions add the list from tail to head.

----------------------------------------
Keeping lists sorted makes no-duplicate union, intersection,
difference, etc. O(n) instead of O(n**2).

<< look at code for union; discuss how to do intersection >>

----------------------------------------
Double linking allows fast traversal in both directions, and
insertion/removal in the middle given only a pointer to one node.

Great for equivalence sets:
    characters in a room in a game
    products at stations in a factory
    players on teams in a sports league

Also great if you want to build a deque.  Has the constant-time
insert and remove of the circular-buffer array-based deque, but
without the need to set a maximum size up front.  On the other hand,
it requires space for all the links, and time for memory allocation
and garbage collection.  (Does everybody know what garbage collection
is?)

<< see DLlist.py, in this folder >>
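To make the "removal given only a pointer to one node" claim
concrete, here's a minimal doubly-linked node sketch of my own -- the
actual class in DLlist.py may well differ in names and details:

    class DLNode:
        def __init__(self, data):
            self.data = data
            self.prev = None
            self.next = None

    def unlink(node):
        # constant-time removal given only the node itself -- no
        # traversal needed, because the node knows both its neighbors
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        node.prev = node.next = None

Contrast a singly-linked list, where removing a node requires a walk
from the head to find its predecessor.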