AIMA - Part II - Problem Solving
After the high-level introduction to what an agent is, which task environments it might live in, and some ways of building agents and representing state, Part II of the book focuses on problems that involve some notion of searching for a solution.
Chapter 3 – Solving Problems By Searching
Chapter 3 introduces problem-solving agents that need to plan (or find) some sequence of actions that takes them to a goal, in task environments where state can be modeled as atomic. Many of the actual algorithms presented in this chapter are classic textbook (graph) search algorithms like depth-first, breadth-first, Dijkstra, or A*, so not much news there. I did learn that the AI community calls Dijkstra's algorithm uniform-cost search (blasphemy! ;-).
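As a reminder of what uniform-cost search (a.k.a. Dijkstra) looks like, here is a minimal sketch; the `neighbors(state)` callback, which yields `(next_state, step_cost)` pairs, is my own assumed interface, not something from the book:

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """Uniform-cost (Dijkstra) sketch: always expand the frontier
    node with the lowest path cost g(n) so far."""
    frontier = [(0, start)]              # priority queue ordered by path cost
    best_cost = {start: 0}               # cheapest known cost to each state
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:
            return cost
        if cost > best_cost.get(state, float("inf")):
            continue                     # stale queue entry, skip it
        for nxt, step in neighbors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None                          # goal unreachable

# Tiny example graph: A -> B (1), A -> C (4), B -> C (1)
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(uniform_cost_search("A", "C", lambda s: graph[s]))  # 2
```

Swapping the priority out for plain insertion order or for path cost plus a heuristic turns the same skeleton into breadth-first search or A*, respectively.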
What I found interesting is thinking of them in a more generic framework. So far, I had mostly encountered them in the context of actual graphs, with a known number of vertices and edges. But the book expands that idea to arbitrary, possibly infinite search spaces by instead thinking about things in terms of a search tree and using characteristics like the branching factor b and a search depth d to estimate complexity.
- The branching factor b estimates how many children a node will produce on average when we expand it, which might map to the number of actions we have available on average in a given atomic state.
- The search depth d is how many steps away from the root we needed to expand our search before finding a solution.
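These two numbers are enough to estimate how big a search can get: a complete tree with branching factor b and depth d has 1 + b + b² + … + b^d nodes, i.e. O(b^d). A quick sketch of how fast that blows up:

```python
def tree_nodes(b, d):
    """Total nodes in a complete search tree with branching factor b
    and depth d: the geometric sum 1 + b + b^2 + ... + b^d."""
    return sum(b ** level for level in range(d + 1))

# Even modest values explode quickly:
print(tree_nodes(10, 2))  # 111
print(tree_nodes(10, 6))  # 1111111
```

This is why worst-case time and space for breadth-first search are usually quoted as O(b^d): the last level alone contains b^d nodes.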
The book distinguishes uninformed and informed search strategies, where the latter have some (maybe estimated) notion of how close to the target we are. For example, Dijkstra would be an uninformed search algorithm, while A*, with its heuristic estimate of the remaining distance (e.g. the straight-line distance to the goal), is an informed one.
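To make the informed case concrete, here is a self-contained A* sketch on a toy 5×5 grid, using Manhattan distance as the heuristic; the grid, the `neighbors` generator, and the heuristic are all my own illustrative assumptions:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* sketch: order the frontier by f(n) = g(n) + h(n), where h is an
    admissible estimate of the remaining cost. With h == 0 this degenerates
    to uniform-cost search (Dijkstra)."""
    frontier = [(h(start), 0, start)]    # (f, g, state)
    best_g = {start: 0}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        for nxt, step in neighbors(state):
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

def grid_neighbors(p):
    """4-connected moves on a 5x5 grid, unit step cost."""
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
print(astar((0, 0), (4, 4), grid_neighbors, manhattan))  # 8
```

The heuristic never overestimates the true remaining cost here, so A* still returns the optimal path cost, just while expanding far fewer nodes than Dijkstra would.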