# Pathfinding Demystified (Part IV): Practical A*


## Introduction

The first three articles of this series start from the very basics of pathfinding algorithms, and end up with a clear exposition of the **A*** algorithm. This is great in theory, but understanding how to apply it in practice is a different topic.

For example, what if your world isn’t a grid?

What if your characters can’t turn 90 degrees instantaneously?

How do you build a graph if your world is infinite?

What if you don’t care about the length of the path, but you depend on solar power and need to be under sunlight as much as possible?

How do you find the shortest path to either of two goal nodes?

## The Cost Function

In the original examples we searched for the shortest path between the start node and the goal node. However, instead of storing the partial path lengths in a variable called `length`, we called it `cost`. Why?

We can make **A*** search not only for the *shortest* path, but the *best* path for some definition of “best” which is good for our purposes. When we want the shortest path, the cost is the length of the path, but if you want to minimize, say, the fuel consumption, that should be used as the cost. If you want to maximize “time under sunlight”, the cost is the time spent without sunlight. And so on.

This generally means that each edge in the graph has an associated cost. In the original examples the cost was implicit and assumed to be always `1`, because we were counting steps in the path, but you can tweak this edge cost according to whatever you’re trying to minimize.
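To make this concrete, here is a minimal sketch of a generic **A*** in Python. It assumes a `neighbors(node)` function that yields `(adjacent_node, edge_cost)` pairs, so the same code minimizes length, fuel, time in shadow, or anything else you encode as an edge cost. The function names and signatures are illustrative, not from the series:

```python
import heapq

def a_star(start, is_goal, neighbors, heuristic):
    """Generic A*: `neighbors(node)` yields (adjacent, edge_cost) pairs,
    so "cost" can mean path length, fuel, time without sunlight, etc."""
    # Priority queue ordered by estimated total cost (cost so far + heuristic).
    open_heap = [(heuristic(start), 0, start)]
    best_cost = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if cost > best_cost.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        if is_goal(node):
            # Walk the parent links back to the start to build the path.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for adjacent, edge_cost in neighbors(node):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(adjacent, float("inf")):
                best_cost[adjacent] = new_cost
                parent[adjacent] = node
                heapq.heappush(open_heap,
                               (new_cost + heuristic(adjacent), new_cost, adjacent))
    return None  # no path exists
```

With a heuristic that always returns `0`, this degenerates into Dijkstra’s algorithm, which is a handy sanity check when experimenting with cost functions.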

## The Goal Function

Suppose you’re a car and need to get to a gas station. Any gas station will do. You want the shortest path to the closest gas station.

A naive approach would be computing the shortest path to each gas station in turn, and then choosing the shortest one. That would work, but it would be quite wasteful.

What you can do instead is to replace the single `goal_node` with a method which, given a node, can tell whether it’s a goal node or not. This way you can search for more than one goal at the same time. You also need to tweak the heuristic to return the minimum estimated cost over all the possible goals.

Depending on the specifics of your situation, you may not be able to reach the goal *exactly*, or it would be costly to do so (if you’re sending a character halfway across a huge map, do you care about a difference of an inch?), so the `is_goal_node` method can return `true` when you’re “close enough”.
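As a sketch of the gas-station example, assuming a 2D grid with Manhattan distances (the station coordinates and the “within one cell” tolerance are made up for illustration):

```python
# Hypothetical set of goal positions: any gas station will do.
stations = [(10, 3), (2, 8)]

def is_goal_node(node):
    # "Close enough": within one cell of any station counts as arrived.
    return any(abs(node[0] - sx) + abs(node[1] - sy) <= 1
               for sx, sy in stations)

def heuristic(node):
    # The estimate must not overestimate the cost to *any* goal,
    # so take the minimum over all of them.
    return min(abs(node[0] - sx) + abs(node[1] - sy)
               for sx, sy in stations)
```

A single search with this predicate and heuristic finds the closest station directly, instead of running one full search per station.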

## No need to be explicit

Representing the world as a discrete grid may not be good enough for many games. Consider a first-person shooter or a car racing game, for example: the world is continuous, and you can’t really represent it as a grid.

But there’s an even bigger problem; what if the world is infinite? In that case, even if you could represent it as a grid, you just won’t be able to build the graph corresponding to the grid, because it would be an infinite graph.

Hope is not lost, however. We definitely need a graph to run a graph search algorithm. But nobody said the graph had to be *explicit*!

If you look at the algorithm carefully, you’ll notice we don’t do anything with the graph as a whole; instead, we explore the graph locally, by getting the nodes we can reach from the node we’re considering. As you could see in the **A*** demo, some of the nodes of the graph aren’t explored at all.

So what if we just build the graph as we search?

We set the starting node to the character’s current position. Whenever `get_adjacent_nodes` is called, it can figure out the possible ways the character can move from the given node, and create the adjacent nodes on the fly.
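On an unbounded grid that might look like the sketch below. The `walkable` check is a placeholder standing in for whatever collision or terrain queries your world actually supports:

```python
def walkable(node):
    # Placeholder rule standing in for real collision/terrain queries.
    return node != (1, 0)

def get_adjacent_nodes(node):
    """Implicit graph on an unbounded grid: adjacent nodes are created
    on demand, so the world never has to be stored as a whole."""
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        adjacent = (x + dx, y + dy)
        if walkable(adjacent):
            yield adjacent, 1  # (node, edge cost)
```

Only the nodes the search actually touches are ever materialized, which is what makes infinite (or merely huge) worlds tractable.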

## Beyond three dimensions

Even if your world *is* a 2D grid, there are some other considerations to make. For example, what if your character can’t turn 90 or 180 degrees instantaneously, as is usually the case?

The *state* represented by each search node doesn’t have to be limited to a *position*; on the contrary, it can include an arbitrarily complex set of values. For example, if turning 90 degrees takes the same time as walking from one square to the next, the state of your character can be `[position, heading]`. Each node now represents not only the position of the character, but also its heading; and the new edges of the graph (implicit or explicit) reflect that.

Going back to the original 5x5 grid, the starting position of the search now may be `[A, East]`. The adjacent nodes are now `[B, East]` and `[A, South]` - if you want to reach **F**, you need to correct your heading first, so the path would be `[A, East]`, `[A, South]`, `[F, South]`.
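A neighbor function for this `(position, heading)` state space might be sketched like this, assuming (as in the example above) that turning 90 degrees costs the same as stepping forward:

```python
# Grid directions; y grows downwards, as is common in 2D games.
HEADINGS = {"North": (0, -1), "South": (0, 1), "East": (1, 0), "West": (-1, 0)}
# The two headings reachable with a single 90-degree turn.
TURNS = {"North": ("West", "East"), "East": ("North", "South"),
         "South": ("East", "West"), "West": ("South", "North")}

def get_adjacent_nodes(node):
    """State = (position, heading): you can either step forward along the
    current heading, or turn 90 degrees in place. Both cost 1 here."""
    (x, y), heading = node
    dx, dy = HEADINGS[heading]
    yield ((x + dx, y + dy), heading), 1   # step forward
    for new_heading in TURNS[heading]:     # turn left or right
        yield ((x, y), new_heading), 1
```

A 180-degree turn then naturally costs 2, since it takes two successive 90-degree turns through intermediate states.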

First-person shooter? At least four dimensions: `[X, Y, Z, Heading]`. Maybe even `[X, Y, Z, Heading, Health, Ammo]`.

Note that the more complex the state, the more complex the heuristic function needs to be. **A*** itself is simple; the art is often in coming up with a good heuristic.

## Conclusion

The goal of these articles is to dispel once and for all the feeling that **A*** is some sort of mystical, indecipherable algorithm. On the contrary, I’ve shown that there’s nothing mysterious about it, and that in fact it can be derived in a very straightforward way starting from scratch.

### Further Reading

Amit Patel has an excellent Introduction to A* (and his articles about other topics are also excellent!).