---

242 HOMEWORK KORNER

EX1: RN1, RN2, RN3

What's Where

Exercise No. Chapter Section or Page
(1.9-1.13) all of RN1
2.1 2.4
2.2 pp 35 and 51
2.3 p. 33
2.10, 2.11 2.4
3.1, 3.1 3.1-3.3
3.6 3.5
3.7 3.1, 3.2
3.12 3.5
3.13 3.4

CB's Thoughts

3.12 says ``Prove''

We should know what constitutes a proof. For example, a possibly-related sort of problem is: prove that BFS finds a solution of minimal depth in a tree with branching factor at most b and minimal solution depth d.

Several ways to go. The obvious one is:
contradiction:
Assume the solution BFS returns is at some depth e > d. But BFS examines every node at depth d before any node at depth e, so it would have found the depth-d solution first -- contradiction. Not quite so natural (or satisfying, maybe) is:
induction:
True if d=0; assume true for d, prove true for d+1. By its definition, BFS will have examined all nodes of depth d (and shallower) and found no solution there. Thus a solution found at d+1 is of minimal depth.

If you've had this concept, you could do a program proof of correctness:
Write out the BFS algorithm with a queue and establish the loop invariant that all nodes at depth d are expanded before any node at depth d+1.
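
A minimal sketch of that queue-based BFS (Python; is_goal and children are hypothetical helpers, not from the book):

    from collections import deque

    def bfs(root, is_goal, children):
        """Breadth-first search. Invariant: the queue holds nodes in
        nondecreasing depth order, so every node at depth d is dequeued
        before any node at depth d+1."""
        queue = deque([(root, 0)])          # (node, depth) pairs
        while queue:
            node, depth = queue.popleft()   # shallowest unexpanded node
            if is_goal(node):
                return node, depth          # first goal found is at minimal depth
            for child in children(node):
                queue.append((child, depth + 1))
        return None, None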

3.13 is VERY EASY

EX2: RN4, RN5, RN6

What's Where

Exercise No. Chapter Section or Page
4.2 pp. 97-99 (admissibility, optimality ideas)
4.7 pp. 97-99 (admissibility and consistency; 173 CB phenom!)
4.3 4.1
4.9 p. 108, thinking
4.11 4.3
4.14 3.6, 4.5
5.5a 5.1
6.1 6.2, 6.3
6.7, 6.3 6.5

CB's Thoughts

4.3 Prove! This involves specifying values or generally simplifying steps in one algorithm that reduce it to another algorithm. E.g. summing elements in a column is a special case of summing an array when you hold the col. variable const, OR a++ is a special case of a+b where b=1. OR to show algorithm A is special case of B, show how to implement A by specializing B (simplifying, freezing in parameters,....).
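
A tiny sketch of the "specialize by freezing parameters" idea in code (Python; the function names are just for illustration):

    def sum_region(grid, rows, cols):
        """General algorithm: sum grid entries over given row and column index sets."""
        return sum(grid[r][c] for r in rows for c in cols)

    def sum_column(grid, col):
        """Special case: freeze the column index set to a single column."""
        return sum_region(grid, rows=range(len(grid)), cols=[col])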

4.11, sec.4.3, again exercise of showing how to specialize certain algorithms to produce other algorithms.

4.14 (Note: percepts tell you the legal unblocked states.)

Problem is to translate the little online search problem to offline: how much harder is it to anticipate whatever you might run into rather than just blunder along, making it up as you go?

So what we should notice is the number of states in the offline version: all the beliefs you MIGHT have.

Similar problem: say a 4x4 grid with pits in unknown locations inside and all around the outside, start at 0,0, goal 3,3. With 14 unknown squares, you get 2^14 possible configurations. Belief space is what you know: all possible things that might be true. So there are 2^14 candidates for the true initial configuration, and in general you might entertain any subset of them (even contradictory ones, in your ignorance, since you don't know which is true), so there are 2^(2^14) of those. That's why we do "on-line planning".

But actually it's not that bad. Each of the 14 squares is one of (clear, pit, unknown), and they're independent, so beliefs are decomposable. Really there are just 3^14 reachable belief states.
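
A quick sanity check on those counts (plain Python arithmetic):

    # Number of possible pit configurations over the 14 unknown squares.
    configs = 2 ** 14                      # 16384
    # Unstructured belief states: any subset of configurations.
    unstructured_beliefs = 2 ** configs    # 2^16384, astronomically large
    # Decomposable beliefs: each square independently clear / pit / unknown.
    reachable_beliefs = 3 ** 14            # 4782969
    print(configs, reachable_beliefs, len(str(unstructured_beliefs)))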

In each state (location of the agent and contents of four-neighbors) there are only 2^4 = 16 different percepts.

See my accompanying picture (in Lecture, 4.14) for the belief states that follow from belief-propagating ``actions''.

RN5: 5.5a... simple, same as the SFS on class scheduling. just understanding, formalizing the CSP: no solution required.

Ex. 6.1 (sections 6.2 and 6.3)
See www.btinternet.com/~se16/hgb/tictactoe.htm
May want to use this, but then you have to add value, like explaining where the combinatorial formulae come from, or extending the results to a 4x4 board (cf. http://cera.us/proj/games/tictactoe/).

6.1 pretty thorough run-thru of all the concepts:
a. good question: the upper bound is easy, but some games finish early (shortest is how many moves?). So justify your answer. Most room for creativity here.
b. 'taking symmetry into account' means there are only 3 initial moves, for instance (center, corner, edge).
c, d. easy: apply the evaluation function and do minimax.
e. you just need to understand alpha-beta pruning, that's all...

6.2, 6.7: Proofs. Note the appearance of 173's DFS CB Phenomenon, or the heuristic-search version of it!

6.2: Simple common reasoning, clear English. 6.7: use tree induction to reduce the n-level tree to a single ply by showing that the results of chance, min, and max nodes on the transformed values are preserved.
For example: min(a*x1 + b, ..., a*xn + b) = a*min(x1, ..., xn) + b (for a > 0).
Note also that x > y implies a*x + b > a*y + b if a > 0...
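
A small sketch of that claim on a toy tree (Python; minimax here is just a two-player backup over nested lists, my own illustration):

    def minimax(node, maximizing=True):
        """Back up values over a tree given as nested lists (leaves are numbers)."""
        if not isinstance(node, list):
            return node
        children = [minimax(c, not maximizing) for c in node]
        return max(children) if maximizing else min(children)

    tree = [[3, 12], [2, [4, 6]], [14, 5, 2]]
    a, b = 2.0, 7.0     # positive linear transformation of the leaves

    def transform(node):
        return [transform(c) for c in node] if isinstance(node, list) else a * node + b

    # Backed-up value transforms the same way, so the preferred move is unchanged.
    assert minimax(transform(tree)) == a * minimax(tree) + b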

6.3: Looks worse than it is. The tree is really small. Note A has a forced win in 3 moves! The ? nodes are loops: the state is exactly one seen before. So this is a game with loops, kind of like any state space with loops. The question is how to value the idea of going back to an earlier state. Would you choose to win or to loop, for instance? So you can write down max(1, ?) and min(-1, ?). If all successors are ?, what is the backed-up value of the node?

c) This is actually pretty deep, and a really good answer requires some thought and digging. MINIMAX fails since it's depth-first and will loop, so you need to use the ? instead to return immediate values. [Not for publication? How do you compare ? with draws? Also, what if there are differences of degree of winning (like gin rummy)? With chance, what is the average of a number and a ?? Chapter 17 has some of this (ex. 17.8).]

d) Certainly seems right: can be done by induction on the size of the game, carving it down by 2 per induction step; note N=3 is a loss for A. Seems pretty easy.

EX3: RN7, RN8, RN9, RN10

What's Where

Exercise No. Chapter Section or Page
7.4 7.3
7.6 7.4
7.8 pp 210--211
7.9 7.4
8.2-8.4 8.2
8.6 8.2
8.7-8.9 8.3, 8.3
8.12 8.4
8.13-8.15 8.2-8.4
9.3 9.1
9.4 9.2
9.9, 9.11 9.2 -- 9.4
9.19 9.5 plus Ch. 8

CB's Thoughts

A long one on important material: get started early.

7.4: based on a |= b iff in every model in which a is true, b is also true. 7.6: trivial, especially for 173ers. 7.8: basic stuff: truth tables plus logical equivalences. 7.9: use Prolog!? Or a bunch of modus ponens, forward chaining, resolution... whatever!
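
For the 7.4-style questions, a brute-force model check makes a handy sanity check; a minimal sketch (Python, propositional symbols only, names my own):

    from itertools import product

    def entails(premise, conclusion, symbols):
        """premise |= conclusion iff every model making premise true makes conclusion true."""
        for values in product([False, True], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if premise(model) and not conclusion(model):
                return False
        return True

    # Example: (A and B) |= A, but A does not entail (A and B).
    print(entails(lambda m: m['A'] and m['B'], lambda m: m['A'], ['A', 'B']))   # True
    print(entails(lambda m: m['A'], lambda m: m['A'] and m['B'], ['A', 'B']))   # False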

8.2-8.4: exploring definitions of models. Assume each model has at least one domain element. 8.3, .7, .8, .9: translation from English, basic translation and representation issues. 8.12: need to reason about how Peano arithmetic represents integers, and could use induction to prove commutativity. 8.13: The vertical bar | means ``conjoining into one set'' here. 8.14: Functions and predicates at issue are: List?, Cons, First, Rest, Append, Nil, Find. Also assume that the lists we are reasoning about are "proper" lists: that is, a cons structure with Nil as the last atom. 8.15: Need to think like a computer, pattern matching; also want to be able to prove things false, like (2,2) not adjacent to (2,4). Some annoying details of the situation are also missing.
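
As an illustration of the flavor of 8.14 (my own sketch, not the book's axioms), a recursive FOPC definition of Append might look like:

    \forall y\;\; \mathit{Append}(\mathit{Nil}, y, y)
    \forall x, l, y, z\;\; \mathit{Append}(l, y, z) \Rightarrow \mathit{Append}(\mathit{Cons}(x, l), y, \mathit{Cons}(x, z))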

9.3, 9.4: understanding basic definitions from the text. 9.9: English to FOPC plus using a rule base: familiar domain to 173ers. 9.11: English to FOPC, elementary deduction. 9.19: checks on models, implication, and resolution.

EX4: NLU: RN22, RN23

What to Read:

22: All (22.7 and 22.8 are optional). 23: Optional (but modern and high-leverage; Dan Gildea does this sort of research, and some of the underlying ideas are making their way into image understanding as well).

Five points per question (except .9 and .12 get 10):
RN22: [22.1, 22.7, 22.14] -- cute! look at for fun.
22.5, 22.6, 22.8, 22.9.
22.12 Extra credit (5 pts.)

RN23: 23.6, 23.8, (23.9, 23.10). Not done 2009.

What's Where

Exercise No. Chapter Section or Page
22.5 section 22.5
22.6 section 22.5
22.8 section 22.1,.2
22.9 section 22.3
22.12 section 22.3

CB's Thoughts

This is all straight-ahead phrase structure grammars, not even augmented, no chart parsing, nothing. So pretty easy!

22.1, .7, .14: for your amusement and amazement only.

22.5: To clarify relation between quasi-logical form and some final FOPC representation. There's a little emphasis on the difficulties of scoping in these questions, so pay attention to that in the reading.

22.6: The problem with just writing "There exists x such that Wumpus(x)" for "it is a wumpus" is that we'd also like to formalize "It was a wumpus". Thus it seems a good idea to introduce events, which occur in time and relate times and predicates.

So, you might introduce symbols Is and It and then do some translation into FOPC using some predicate like During(t,e), which locates a time t (e.g. the time 'Now' if the event includes the present) as happening during some extended event e. Is can be defined as "is the same as" for two variables using simple logical relations (basically saying that "=" is true if and only if "<=>" is).

22.8: Familiar for 173'ers; others could look at any book on CS foundations, formal languages, automata theory, scanning and parsing, etc. Lots of coverage "out there" for this sort of question. Part c is not so easy, actually: feel free to start out with a restricted alphabet of a and b. One way to go is to use non-terminal "markers" to mark the front and middle of the string, and to generate two things for each member of our string -- a terminal and an associated non-terminal -- the latter of which is moved over to the other half of the emerging sentential form and then converted to the proper terminal later.

22.9: The first part -- (attempted) parsing, really, though they speak of 'generation' with the three grammars -- seems easy. The second part (more language generation) could be interpreted as needing one new lexicon, three new lexicons, or 18 new lexicons (is this all or are there even more readings?). I'd say use as many lexicons as you need to illustrate the properties of the grammars, and as few as you can get away with. Ideally, then, one lexicon.

EX5: RN11, (RN12)

What to Read:

11: All, but can skip sections 11.5, 11.6.

12: There are no exercises from Ch. 12, but for your short paper you probably want to look into Chapter 12 at least -- 12.7 looks promising, and of course there is plenty of on-line material. Don't forget full-source databases like those in the Web of Science (follow databases off the library's main page). There may well be books, and I'm sure there are conference proceedings, with useful material. And there's exploring the applicability of available planners to quagents, say: actually matching up functionality with your needs.

What's Where

Exercise No. Chapter Section or Page
11.1 Chaps 3,4, section 11.1
11.3 section 10.3, 11.1
11.5, 11.6 section 11.1, 11.2
11.7 sec. 11.2, Ch 3 (pp 79-81), (plus thinking!)
11.12 section 11.3 (p. 388, fig. 11.6), 11.4

CB's Thoughts

11.1: Basic definitions and implications for the sets of problems addressable.

11.3: Understanding situation calculus and STRIPS rules; this is a brief dousing under the cold shower of having to formalize the frame axioms by creating a Precondition predicate for every action and, for every fluent (like At), an axiom saying it keeps its old value except according to the effect of a relevant action whose preconditions are satisfied; otherwise there's no effect.
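
The generic shape of such an axiom (my own schematic sketch in successor-state style, not the book's exact notation):

    F(\vec{x}, \mathit{Result}(a, s)) \;\Leftrightarrow\; \gamma^{+}_{F}(\vec{x}, a, s) \;\lor\; \bigl( F(\vec{x}, s) \land \lnot\gamma^{-}_{F}(\vec{x}, a, s) \bigr)

where \gamma^{+}_{F} and \gamma^{-}_{F} collect the actions (with their preconditions) that make F true or false, respectively.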

11.5, 11.6: Very basic consequences from the definitions of STRIPS representations: one-liner answers appropriate.

11.7: Needs awareness of general search principles, applied especially to bidirectional search and partial order planning. Not trivial questions, require some thought.

11.12: Basic graph planning techniques and ideas, nothing tricky or hard. Your diagram of the plan will help in computing the linearizations. I don't understand the last sentence of this problem, so ignore it or if you can explain it and answer it, then thanks!

EX6: RN13, RN17, RN21

What to Read:

13: All pretty vital except for 13.7, which is an extended example that made my eyes cross a bit. Try it, you might like it.

17: 17.1 - 17.3 Everything after that is optional, but it's all important. What if you can't observe everything reliably? 17.4! Wondering how to apply Bayes nets to all this? 17.5's for you! If you've wondered about game theory, 17.6 is a good start.

21. All.

What's Where

Let's see if we can dispense with this, eh?

CB's Thoughts

13.1: First princs are def. of conditional prob and defs of logical connectives (and facts like conjunction is commutative and associative).

13.2: Main thing to use is axiom 3, def. of "or".

13.3: Maybe figure out the probabilities of the atomic events, i.e. the combinations of truth values for A and B. The joint probability table for A and B has four numbers that have to sum to one. The axioms of probability come into the calculation too.
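
A toy illustration of the four-entry joint table and what you can read off it (Python, made-up numbers):

    # Joint distribution over (A, B); the four entries must sum to 1.
    joint = {(True, True): 0.3, (True, False): 0.2,
             (False, True): 0.1, (False, False): 0.4}

    p_a = sum(p for (a, b), p in joint.items() if a)   # marginal P(A)
    p_b = sum(p for (a, b), p in joint.items() if b)   # marginal P(B)
    p_a_given_b = joint[(True, True)] / p_b            # P(A | B) by definition
    print(p_a, p_b, p_a_given_b)   # P(A)=0.5, P(B)=0.4, P(A|B)=0.75 (up to float rounding)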

13.5: Basic combinatorics; Foundations of Computer Science by Aho and Ullman, or any combinatorics text (or website, probably) is all you need.

13.6a,c,d: Point is understanding the difference between bold P and non-bold P. That plus addition is all you need to know.

13.9: Need defs. of conditional probability, basic prob. manipulation rules. Part b can use part a!

13.10: Uses the def. of cond. prob. a lot! Keep substituting!
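
For example, repeatedly substituting the definition P(X \mid Y) = P(X, Y) / P(Y) gives the chain rule:

    P(A, B, C) = P(A \mid B, C)\, P(B \mid C)\, P(C)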

13.11: More simple combinatorics and you need to count the atomic events that constitute the events you're interested in. Kinda fun.

13.15: Two random variables: B for 'the taxi was blue', LB for 'the taxi looked blue'. The reliability information gives you conditional probabilities of one RV given the other. You'll turn out to need the prior probabilities of taxi color, which you're not initially given. You could presume a diffuse prior (aka the principle of indifference) and give a probability of .5 to P(B), say. The last part of the question gives you prior info, though.
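
The calculation is then just Bayes' rule; schematically (standard formula, not the exercise's numbers):

    P(B \mid LB) = \frac{P(LB \mid B)\, P(B)}{P(LB \mid B)\, P(B) + P(LB \mid \lnot B)\, P(\lnot B)}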

13.16: This is the classic "Let's Make a Deal" problem restated. There is actually information lurking in an unexpected place that makes the true answer not the same as the "naive" one. Enjoy.

13.19: Based on the extended example in section 13.7. You wind up counting assignments of pits to squares, as in Fig. 13.7 on p. 485. A leeettle time-consuming, I'd bet...

RN17:

17.1: Good way to go is to make a tree showing what can happen -- the states reached after each step, with the corresponding probabilities. How does Markov property allow you to calculate the prob. of each final state? What if same state appears in more than one leaf of the tree?
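
A tiny sketch of the "multiply along each path, then sum leaves that land in the same state" bookkeeping (Python, made-up transition model):

    from collections import defaultdict
    from itertools import product

    # Made-up Markov transition model: P(next_state | state).
    T = {'A': {'A': 0.5, 'B': 0.5}, 'B': {'A': 0.2, 'B': 0.8}}

    def final_state_probs(start, steps):
        """Markov property: a path's probability is the product of its step probabilities;
        leaves of the tree that land in the same state get their probabilities summed."""
        probs = defaultdict(float)
        for path in product(*[T.keys()] * steps):
            p, state = 1.0, start
            for nxt in path:
                p *= T[state][nxt]
                state = nxt
            probs[state] += p
        return dict(probs)

    print(final_state_probs('A', 3))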

17.2: This is easier if you go remind yourself of what it means to have stationarity... then some counterexamples and paradoxes of this "maximum reward attained" utility definition for a state sequence.

17.4: This one requires some technical details, writing out the mathematical operations used for value determination and policy update for the states. Also part c. is not trivial, requiring a little simple mathematical analysis. A good problem to ensure basic understanding, but allow yourself some time.

17.5: Part a. is another representative of our recurring theme of being able to express some concept formally in mathematics. Both expressions start out with a max function over a, or actions. Parts b. and c. call for a little programming-like ingenuity: you can create a new MDP with extra states interpolated before or after the ones you're given to remember and use the relevant information needed to simulate one reward system in terms of the other.
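
For reference, the standard MDP expressions of this kind (textbook forms, not the answer to the exercise) are:

    \pi^{*}(s) = \operatorname*{argmax}_{a} \sum_{s'} P(s' \mid s, a)\, U(s')
    U(s) = R(s) + \gamma \max_{a} \sum_{s'} P(s' \mid s, a)\, U(s')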

RN21:

21.2: I dunno, I find this problem to be worded rather confusingly. My recommendation is to modify the middle sentence ("Show that...") by putting a period after the word 'improper' about halfway through and deleting the rest of the sentence.

21.4: Basically any and all ingenious or obvious shortcuts, approximations, and hacks are invited here: there's no single clear obvious elegant answer...

21.5: Pretty clear what to do here: they must mean "value iteration", not "value determination", right? Cute, tho... look for the latter in the index and see what you find!

21.8: Straightforward extension (evidently something to do with (Euclidean) distance on a grid) to eqs. 21.9 and 21.10 (and the three eqns below 21.10).

21.9: The issue is how to boil down a very complex state to some simple features that you can measure and then use to make policy (decisions) on. Section 21.4 is the relevant one and the spirit is to find features whose values are integers (better a small range of integers for combinatorial reasons!) that will help you learn to get to the goal states.

21.10: The 3-D plots aren't necessary but are easy and we want them for full credit. The last sentence should read "for each environment where you used an approximation".

a. Pretty easy, exact linear solution exists.
b. Easy to update above for another exact (but nonlinear) solution.
c. The solution depends on the random placement so a helpful approximation is pretty meaningless. Given a placement of obstacles, one could approximate, maybe using features from Ex. 21.9 above.
d. Let's substitute "As in (a)" for "As in (b)" as written to keep things simpler. Wall adds nonlinearities, but one clear optimal policy and two simple utility equations result.
e. The utility equation in terms of x and y is nonlinear, but you can fix that by substituting two obvious features.

EX7: RN20, RN24, RN25

What to Read:

20: 20.5, with 20.7 as a case study of classification.

24: All

25: All but 25.3.

CB's Thoughts:

RN20:

20.11: Step-function activation functions are easy, and you'll thus need a hidden layer (in fact one unit is enough). Think of XOR as an OR with the AND case ruled out (which is what your hidden unit can do).
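
A minimal sketch of that "OR with the AND case ruled out" construction using step units (Python, weights chosen by hand):

    def step(x):
        return 1 if x >= 0 else 0

    def xor(a, b):
        """XOR = OR(a, b) with the AND(a, b) case subtracted off, using one hidden step unit."""
        hidden_and = step(a + b - 1.5)                 # fires only when both inputs are 1
        return step(a + b - 2 * hidden_and - 0.5)      # OR, minus the AND case
    print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]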

20.14: A good combinatorics problem: answer is probably "out there" but as usual I don't care about the answer I care about your process.

RN24:

24.1: Hint: Section 24.2.

24.2: Simple.

24.4: Analogous to Fig. 24.4.

24.3: Not hard: just draw a diagram of the setup, recall the formula for the brightness of a Lambertian surface, and work out the relationship between the normal vectors on the cylinder and x: it's another nice Lambertian cancellation result as in class.

24.5: Teeny bit of freshman calculus needed: just apply simple algebra and definitions of derivatives of sums, products, etc. Easy.

24.6: This is a very typical formalization of a stereo setup. Make a nice diagram of the situation, assume 'lenses' act like pinholes, note this is a completely planar problem (no use for y coordinate), don't be put off by 'epipolar lines', just use your common sense for that 'because' if you like.

Part (a) wants you to solve for the disparity; part (b) asks: at 16 meters, how much farther could an object be and still create the same disparity? So you rearrange the previous formula and see what different values of Z you get if you change the disparity by 1. (c) Objects farther than this range (which gives a disparity of 1) are 'out of range' of our stereo ranger. Once again you use the formula from (a).
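
A hedged sketch of the arithmetic (Python; the focal length, baseline, and pixel size are placeholders, not the exercise's numbers):

    # Standard pinhole stereo relation: disparity (in pixels) = f * b / (Z * pixel_size).
    f = 0.016        # focal length in meters (placeholder)
    b = 0.1          # baseline in meters (placeholder)
    pixel = 1e-5     # pixel width in meters (placeholder)

    def disparity(Z):
        return f * b / (Z * pixel)

    def depth(d):
        return f * b / (d * pixel)

    d16 = disparity(16.0)                 # part (a): disparity at Z = 16 m
    print(depth(d16 - 1) - 16.0)          # part (b): extra depth that changes disparity by one pixel
    print(depth(1.0))                     # part (c): beyond this depth, disparity < 1 pixel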

24.8: A cute problem --- Some of the points are easy to reason about, but those pesky D and E points are more interesting and call for more thought.

24.10: Easy.

RN25: Strictly extra credit:

25.1: A toughie: needs some code. Here's a Hint (or a check).

25.3: Another toughie! Again, your process is important.

25.6: This pulls together lots of what you know about search and puts it into the real world. Key words like A* and BUG should get you started. Again, no simple 'right answer'.

---

This page is maintained by CB.

Last update: 11.16.04.