TA: Barnum section on Weds at 3pm, CSB 703.
----------------------
Game Search Problems

No easy board evaluations, for various reasons...
  Quarto!
  Othello
  Go
  Poker (opponent modelling), lots of combinatorics. UBC I think...

Web page for student projects points to ``Advanced game-playing search'', a paper by Maks Orlovich: negamax, transposition tables, narrow-window searches, scout, negascout, MT-SSS* and MTD(f), tournament applications (chess, others) beyond minimax; opponent modeling, adversarial planning, heuristic pruning, learning evaluation functions. (off the 242.html -> projects page)
-------------
What search technique...

N-queens can be looked at with a start state of an empty board and the op ``add a queen'', which leads to heuristic search; or with a start state of a full board and the op ``move a queen around'', which leads to iterative improvement and constraint techniques like ``min-conflicts''.

Likewise (p. 121), suppose you have some n-dimensional problem like placing 3 airports in Romania to minimize the sum of squared distances to all the cities... or in general f(x1,...,xn), to maximize or minimize (discrete or continuous). Here you usually get started with a ``full board'', because the goodness of x1,...,xn is conveniently provided by evaluating f. OR you could simplify f, maybe, if you felt some xi was unimportant... lots of ad hoc stuff to do. Ops: move in xi, or move in the gradient direction -> hill climbing or iterative improvement.
----------
Question: For optimization problems in which a max or min must be found (like the state space in fig. 4.10, page 111) it seems like stochastic search algorithms are the best algorithms to use in order to overcome the problem of finding a local max/min as opposed to the global max/min. Can you go over stochastic beam search in greater detail? Does stochastic beam search still use the passing of information between threads as local beam search does? It seems like this could lead to problems where the threads could all be pulled into a local max/min. Can you also go over the use of randomness in genetic algorithms?

Beam search: pure hill climbing and SA have a single ``current'' state that gets updated. GAs are different in that they have a population of states that searches by reproducing, dying, and mutating. There is ``random restart'', and if you do that in parallel you get some parallelism. But if the searches communicate you get ``Local Beam Search'', which also uses N states. The vague idea is that if one state has better successors, you abandon other parts of the tree; choose the best N successors over all the searches, say. But the states can get bunched up. So Stochastic Beam Search is like stochastic hill-climbing (which chooses the most uphill direction with the highest probability): instead, choose N successors at random, with the probability of choosing a node being an increasing function of its value. Sort of like ``natural selection''.
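Not from the book, but here is a minimal sketch of that idea in Python, assuming the problem supplies a successors(state) generator and a value(state) function returning positive numbers (the names are placeholders). Keep N states, pool all their successors, and sample the next N with probability proportional to value, so good states are favoured but not guaranteed to survive:

import random

def stochastic_beam_search(start_states, successors, value, beam_width, steps):
    """Keep beam_width states; at each step pool all successors and sample the
    next beam with probability proportional to value (values assumed > 0)."""
    beam = list(start_states)
    for _ in range(steps):
        pool = [s2 for s in beam for s2 in successors(s)]
        if not pool:
            break
        weights = [value(s) for s in pool]
        # Weighted sampling: good states are favoured, but weak ones can still
        # survive, which is what keeps the beam from bunching up too quickly.
        beam = random.choices(pool, weights=weights, k=beam_width)
    return max(beam, key=value)

Notice that if you replaced the weighted sampling with ``take the best N'', you'd be back to plain local beam search and the bunching problem.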
-------
Question: On page 120 the authors talk about how the mechanisms of evolution are actually more powerful than what genetic algorithms allow. I was wondering if people have tried to implement these ways (reversals from mutation, duplication, movement of large chunks of DNA, etc.) in genetic algorithms, or if it is in fact impossible to do. It seems like some sort of random use of these techniques could result in stronger genetic algorithms.

More sophisticated GA operations? Reversals, duplication... the issue may be the design of the representation. I like the idea that you test each mutation first before moving on... then if it sets you back, don't do it.

Movement of large chunks of DNA -- that seems like it could happen in your reproduction function... you can choose larger or smaller chunks of the bit-description to swap. Usually it's about half, though, since there are two partners and the parts have to add up to one whole. BUT there are issues of finding ``schemata'', identifying useful substructure, etc. -- that is clearly a problem with a naive GA.

Developmental Computing? There is some work in this... another biological analogy, to language development, say, or to developing from fetus to grownup. Whatever... I'd say one should look at the literature... start with the Bibliographical and Historical Notes... about 1.5 pp. of GA and GP and related stuff. There are lots of wild ideas out there.

``Analysis of the energy landscape'' is key, since for GAs it influences the ``design of representation'', which is also key. This leads to Hacker Heaven, since these N-dimensional landscapes are so varied, even across different instances of the ``same problem'', that finding the optimum with a GA often calls for specific tricks that are innovative enough to get your paper published.
----------
Question: Alpha-Beta searches and other such decision-making methods make sense to me. However, I have not seen much detail on evaluation mechanisms: yes, the idea is to maximize score, but how is that score computed? For example, what distinguishes a good chess position from a bad one? Chess is perhaps too complex an example, but are there any general methods for evaluating a situation?

How to make a better heuristic function? 2002?? AI journal on games! Must-read for all interested.

Yes, states are classified as good or bad, and it's nice to have a finer gradation so you can distinguish between all your possible next actions.

0. E.g. chess: material, position. Dynamic or static features? AND -- spend time evaluating or time searching? For chess, maybe search is better. (A toy sketch of a weighted evaluation function appears after this answer.)

1. General problem-solving: remember that if the heuristic is perfect, there's no search. If it is uninformative (always says ``0'', say) your strategy boils down to all search, no pruning. Optimistic (admissible) estimates lead to optimal results. Pessimistic estimates of how far you have to go lead to more pruning, hence less search, but not optimality. So the ideal is to approach a ``perfect'' judgement about exactly what to do next. One way would be science and knowledge (endgame positions, say, or some theoretical breakthrough), or a new representation. Learning could do it.

2. Games: again, the goal is to evaluate the position precisely; the same observations apply. Samuel's checkers program learned to improve its evaluation function and to generate new features. But in general this is a very interesting real-life problem: how good is this situation? What should I do when confronted with situations like this? Cope, put head in sand, be pigheaded, be conciliatory, buy, sell, what? How to learn from past experience, and how to put events in some true proportion (avoid superstition, get priorities right, etc.)?

So this leads to different sorts of computer learning and data analysis, which we'll be looking at soon. You can go from tables of data to decision trees (Chapter 16 I think). GAs and SA and NNs can be looked at as learning; one can optimize the parameters of the heuristic function this way. Note that coming UP with the relevant features and how to combine them is another issue. There IS statistics... factor analysis, Principal Components Analysis, and lots of modern variants. Reinforcement learning: the credit-assignment problem. The idea there is to develop a ``policy function'' that maps world state to the action you should take.
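To make ``material, position'' a bit more concrete: a toy sketch of a linear evaluation function in Python. The Position class, the features, and the weights are all invented for illustration (a real evaluator is much richer); the point is just that the weights are exactly the parameters one might tune by the learning methods above.

from dataclasses import dataclass

PIECE_VALUES = {'K': 0, 'Q': 9, 'R': 5, 'B': 3, 'N': 3, 'P': 1}

@dataclass
class Position:                  # invented stand-in for a real board representation
    white_pieces: list           # e.g. ['K', 'Q', 'P', 'P']
    black_pieces: list
    white_move_count: int        # number of legal moves available to each side
    black_move_count: int

def evaluate(pos, weights={'material': 1.0, 'mobility': 0.1}):
    """Linear evaluation from White's point of view: sum of weight * feature.
    The weights are the parameters one could learn from experience."""
    material = (sum(PIECE_VALUES[p] for p in pos.white_pieces)
                - sum(PIECE_VALUES[p] for p in pos.black_pieces))
    mobility = pos.white_move_count - pos.black_move_count
    features = {'material': material, 'mobility': mobility}
    return sum(weights[f] * v for f, v in features.items())

# e.g. evaluate(Position(['K', 'Q', 'P'], ['K', 'R', 'P'], 30, 25)) is positive:
# White has the stronger material and more mobility.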
-----------------
Question: I haven't gotten through all the reading yet, but I would be interested in knowing what's the state of the AI field when it comes to having the computer write its own successor function, cost function, and goal function. The source of the problem specification could be either natural language or a similar human interface, or a computer's own artificial interest in solving the problem in order to optimize itself. Then it would be more intelligent.

Write your own successor and cost functions and goal functions? Sounds like reinforcement-type learning to me. Again, lots of the above applies, except that successor functions are USUALLY not too much of a surprise unless you really think out of the box, like going over the high-jump bar on your back, for instance. So you need to frame the assumptions so that you formalize the aspects of this learning process that interest you. Reinforcement Learning again, for example; Adaptive Optimal Control also. It's not crazy, because there is optimal control, and there is a certain ``exploration'' component you need to stay optimal. So there is a certain amount of energy you put into this exploration, and you get to decide what that is, even in reinforcement learning.

NLU input to such a learning thing could be in the form of reinforcement (``Bad bot!''), or shaping (``you're getting warmer''), or indeed of stating the goal. If the agent is building a model of the world then your input can help in that process... usually the models are simple (Markov models), so this sort of input would work fine. But in general it's either Develop Policy, or Develop Model to Derive Policy. Also, things like ``interest in solving the problem'' are easily stated as a reward for getting to the goal.
----------
Online agents (this answers the short-paper question in EX1!)

Interleave acting and observing.

1. In real life you can't jump around discontinuously between states, so effectively you're left with DFS. Assume the agent knows, for state s:
   Actions(s)
   step cost c(s,a,s')... but the agent needs to know that s' is the outcome!
   GoalTest(s)
So you have to ``suck it and see'' --> dead ends.

Online DFS Agent: stores a table result(a,s) as it explores... tries untried actions from a state, but once it has tried them all it must PHYSICALLY backtrack, so this only works with reversible actions. (A sketch appears at the end of these notes.)

Online Local Search: just take the best next action from the current state. Hill-climbing is an online local search --> gets stuck at a local maximum. So random walk? But random --> exponential. So learn and remember... a current best estimate of the cost to reach the goal. It's like A*, where f = g + h, except you update h by experience. This is Learning Real-Time A* (LRTA*). Untried actions are assumed to lead to the goal, so this implements exploration: optimism under uncertainty. (Also sketched at the end of these notes.)

Learning in Online Search: this is the reinforcement learning problem -- what policy gives you the best action? Learn the policy, or learn a model? The desire for formal representations of knowledge leads to KR and Part III.
----------------
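For the online DFS agent above, a rough sketch, assuming a simple made-up environment interface: actions(s), goal_test(s), step(a) which physically executes an action and returns the observed new state, and reverse(a) which gives the undoing action. It records result(s,a) as it goes, and when a state's actions are exhausted it physically backtracks, which is why it needs reversible actions.

def online_dfs(start, actions, goal_test, step, reverse):
    """Explore by physically acting in the world."""
    result = {}               # (state, action) -> observed outcome: the learned
                              # model (recorded, not consulted, in this sketch)
    untried = {}              # state -> actions not yet tried from that state
    path = []                 # actions taken from the start to the current state
    s = start
    while True:
        if goal_test(s):
            return s
        if s not in untried:
            untried[s] = list(actions(s))
        if untried[s]:
            a = untried[s].pop()
            s2 = step(a)      # ``suck it and see'': act, then observe the outcome
            result[(s, a)] = s2
            path.append(a)
            s = s2
        elif path:
            a = path.pop()    # dead end: physically back up by undoing the last action
            s = step(reverse(a))
        else:
            return None       # nothing reachable is a goal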
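And a rough sketch of the LRTA* flavour of online local search, under the same kind of assumed interface plus a step cost c(s,a,s') and an initial heuristic h. H holds the learned cost-to-goal estimates; untried actions are costed optimistically at h(s), which is the ``optimism under uncertainty'' exploration. (The book's bookkeeping differs slightly; this is just the idea.)

def lrta_star(start, actions, step_cost, step, goal_test, h):
    """Rough LRTA*-style loop: H[s] is the current best estimate of the cost
    from s to the goal, updated by experience; result remembers observed outcomes."""
    H, result = {}, {}
    s = start
    while not goal_test(s):
        H.setdefault(s, h(s))

        def cost(a, state=s):
            # f = c + H for actions we've tried; plain h(state) for untried ones:
            # optimism under uncertainty (assume they lead toward the goal).
            nxt = result.get((state, a))
            if nxt is None:
                return h(state)
            return step_cost(state, a, nxt) + H.get(nxt, h(nxt))

        a = min(actions(s), key=cost)            # apparently best action
        s2 = step(a)                             # physically act; observe the outcome
        result[(s, a)] = s2
        H[s] = min(cost(b) for b in actions(s))  # learn: revise the estimate for s
        s = s2
    return s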