Things on Exam
- Tracing A*
- Homework things
- Intro to AI (+ cognitive computing) - Chapters 1-2
- Search and problem solving - Chapters 3-5
- Knowledge representation & logic-based agents, propositional logic - Chapter 7
- What is AI? What is intelligence in general? - Think & act, humanly & rationally
- Turing test
An agent is a program running on an architecture, operating in an environment. Its task is described by PEAS:
- Performance measure
- Environment
- Actuators
- Sensors
A rational agent does the right thing: it maximizes the (expected) performance measure
- The *environment*
- Fully observable or partially observable
- Deterministic (fully predictable) / stochastic
- static / dynamic
- episodic / sequential
- discrete / continuous
- single-agent / multi-agent
- *Types of Agents* (homework 1)
- Simple reflex agent (with/without randomness) - see the sketch after this list
- model-based agent (reflex agent with states)
- goal-based agent
- utility-based agent
- learning agent
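A minimal sketch of the first agent type above, a simple reflex agent for the two-square vacuum world; the percept format and rules here are illustrative assumptions, not a required interface:

```python
# Illustrative simple reflex agent for the two-square vacuum world.
# Percepts are (location, status) pairs; the agent maps each percept directly
# to an action with condition-action rules and keeps no internal state.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # rule: dirty square -> clean it
    elif location == "A":
        return "Right"                  # rule: clean and at A -> move right
    else:
        return "Left"                   # rule: clean and at B -> move left

# Example percept sequence and the actions chosen:
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", simple_reflex_vacuum_agent(percept))
```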
Know whether a given agent is rational or not, based on the problem and environment
- How to formulate a problem:
- States
- Actions
- Transition model (results)
- Initial State
- Goal Test
- Cost function
As a 6-tuple: <S, A, Tr_Model, Is, Goal_test, g>
- Listing states:
- Complete state formulation (all legal states)
- Incremental formulation (all reachable states) - Initial state => actions => transitions => goal-test => cost function
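A minimal sketch of this 6-tuple formulation for a made-up route-finding problem (the map, costs, and names like ROADS are illustrative assumptions):

```python
# Illustrative problem formulation as <S, A, Transition model, Initial state, Goal test, g>
# for a made-up route map; states are city names, actions are "go to neighbor".

ROADS = {  # assumed toy map: state -> {neighbor: step cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

states = set(ROADS)                              # S
actions = lambda s: list(ROADS[s])               # A(s): legal actions in state s
result = lambda s, a: a                          # transition model Result(s, a)
initial_state = "A"                              # Is
goal_test = lambda s: s == "D"                   # Goal_test(s)
step_cost = lambda s, a, s2: ROADS[s][a]         # g: cost of one step

print(actions(initial_state), goal_test("D"))    # ['B', 'C'] True
```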
- Types of algorithms:
- Blind (uninformed) search algorithms:
- Breadth-first search (complete, optimal if the step cost is constant, exponential space complexity, frontier is a FIFO queue) - see the sketch after this list
- Depth-first search (linear space complexity, not complete (can loop on graphs or infinite trees), not optimal, frontier is a stack)
- Iterative deepening search
- Uniform cost search: expand the frontier node with the least path cost
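A minimal breadth-first graph-search sketch with a FIFO-queue frontier and an explored set (the graph and function names are illustrative assumptions):

```python
from collections import deque

# Breadth-first graph search sketch: the frontier is a FIFO queue, the explored
# set prevents re-expanding states.  The graph below is an illustrative assumption.
GRAPH = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}

def bfs(start, is_goal):
    frontier = deque([(start, [start])])      # queue of (state, path so far)
    explored = set()
    while frontier:
        state, path = frontier.popleft()      # FIFO -> shallowest node first
        if is_goal(state):
            return path
        explored.add(state)
        for nxt in GRAPH[state]:
            if nxt not in explored and all(nxt != s for s, _ in frontier):
                frontier.append((nxt, path + [nxt]))
    return None                               # BFS is complete, so no solution exists

print(bfs("A", lambda s: s == "D"))           # ['A', 'B', 'D']
```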
- Heuristic (informed) search algorithms
- Know the limitations and advantages of each algorithm
- Evaluation of search algorithms:
- Completeness (will it find a solution if one exists?)
- Optimality (will it find the best solution?)
- Time complexity
- Space (memory) complexity
- Properties of search algorithms
- b: branching factor, the number of successors of a node in the search tree
- d: depth of the shallowest goal node
- m: the maximum depth of the search tree (may be infinite)
- The search tree
- Different from state space (states can be repeated)
- Tree search vs. Graph Search
- Graph search does not add a node to the frontier if it is already in the frontier or in the explored set (frontier = leaf nodes, explored set = interior nodes)
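The only change relative to tree search is this membership check when children are generated; a tiny illustrative helper (names are assumptions):

```python
# Illustrative helper: graph search only keeps children that are in neither
# the frontier (leaf nodes) nor the explored set (interior nodes).
def children_to_add(children, frontier_states, explored):
    return [c for c in children
            if c not in frontier_states and c not in explored]

print(children_to_add(["B", "C", "D"], frontier_states={"C"}, explored={"A", "B"}))  # ['D']
```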
- Heuristic search algorithms
- Best-first search: expand the frontier node with the cheapest evaluation f(n)
- Greedy best-first search: expand the node that seems closest to the goal, f(n) = h(n)
- A*: f(n) = g(n) + h(n)
- Uniform cost search is a special case of A*, where h(n) = 0
- A* is optimal for tree search if h is admissible; optimal for graph search if h is consistent
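A minimal A* graph-search sketch using f(n) = g(n) + h(n); the graph, step costs, and heuristic table are illustrative assumptions (this heuristic happens to be consistent, so graph search returns the optimal path):

```python
import heapq

# A* graph search sketch: order the frontier by f(n) = g(n) + h(n).
# GRAPH and H are made-up; H is an assumed heuristic table of estimates to goal G.
GRAPH = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 6}, "B": {"G": 3}, "G": {}}
H = {"S": 4, "A": 3, "B": 2, "G": 0}

def astar(start, goal):
    frontier = [(H[start], 0, start, [start])]       # (f, g, state, path)
    best_g = {start: 0}                              # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)  # lowest f first
        if state == goal:
            return g, path
        for nxt, step in GRAPH[state].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):   # found a cheaper path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(astar("S", "G"))                               # (6, ['S', 'A', 'B', 'G'])
```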
- Heuristics
- Admissible: the estimate is always at most the true cost, h(n) <= h*(n)
- Consistent: h(n) <= c(n, a, n') + h(n')
- Consistency implies admissibility, but not the other way around
- How to obtain heuristic functions for problems
- The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
- The maximum of a set of admissible heuristics is itself admissible (and dominates each one)
- Use learning or a pattern database
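For example, relaxing the 8-puzzle (tiles may jump to, or slide over, occupied squares) yields the misplaced-tiles and Manhattan-distance heuristics; a small illustrative sketch, including taking the max of several admissible heuristics:

```python
# Illustrative heuristics for the 8-puzzle (state = tuple of 9 entries, 0 = blank).
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state):
    # relaxed problem: a tile can jump straight to its goal square
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    # relaxed problem: tiles may slide onto occupied squares
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            goal_i = GOAL.index(t)
            total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total

def combined(state):
    # the max of admissible heuristics is still admissible and dominates both
    return max(misplaced_tiles(state), manhattan(state))

s = (1, 0, 2, 3, 4, 5, 6, 7, 8)   # blank and tile 1 swapped
print(misplaced_tiles(s), manhattan(s), combined(s))   # 1 1 1
```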
- Trace A*
- Local (greedy) search & variants
- Searching for a state x that maximizes an objective function f
- Advantages: quick and cheap, produces reasonable-quality results
- Disadvantages: not optimal (can get stuck at local maxima)
- Variants:
- Stochastic hill climbing (adds some randomness to the choice of uphill move)
- First-choice hill climbing
- Random-restart hill climbing (restart from random starting points, possibly with multiple runs, keeping the best result)
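A minimal random-restart hill-climbing sketch that maximizes a one-dimensional objective; the objective function, step size, and restart count are illustrative assumptions:

```python
import math
import random

# Illustrative objective with many local maxima over [0, 20].
def f(x):
    return math.sin(x) + 0.1 * x

def hill_climb(x, step=0.1, max_steps=1000):
    # Greedy local search: keep moving to the better neighbor until stuck.
    for _ in range(max_steps):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):            # no uphill neighbor -> local maximum
            return x
        x = best
    return x

def random_restart(restarts=20, lo=0.0, hi=20.0):
    # Run hill climbing from several random starting points, keep the best result.
    starts = [random.uniform(lo, hi) for _ in range(restarts)]
    return max((hill_climb(s) for s in starts), key=f)

x = random_restart()
print(round(x, 2), round(f(x), 2))
```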
- Modeling of English statements into propositional logic formulas
- Convert into clausal form
- Perform resolution to prove entailment
- To show KB |= alpha: negate alpha and put ~alpha into clausal form
- KB AND ~alpha is unsatisfiable iff KB |= alpha
- Apply resolution to the clauses of KB AND ~alpha to derive the empty clause
- DPLL algorithm (backtracking check for satisfiability)
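A small sketch of resolution refutation over propositional clauses (clauses as frozensets of string literals; the KB and query below are made-up examples):

```python
from itertools import combinations

# Resolution refutation sketch: to show KB |= alpha, add ~alpha in clausal form
# and try to derive the empty clause.  Literals are strings; "~P" negates "P".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (each clause is a frozenset of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def unsatisfiable(clauses):
    """True iff the clause set is unsatisfiable (the empty clause is derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                 # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:                # nothing new: no refutation exists
            return False
        clauses |= new

# KB: P => Q, P.  Query alpha: Q.  Clausal form of KB AND ~alpha:
kb_and_not_alpha = [frozenset({"~P", "Q"}), frozenset({"P"}), frozenset({"~Q"})]
print(unsatisfiable(kb_and_not_alpha))    # True, so KB |= Q
```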