
Advanced Pathfinding [Part 1]



Pathfinding problems

Pathfinding, even in simple 2D grid environments, can very quickly become a non-trivial problem. This is especially true whenever we have to deal with pathfinding for multiple actors.

In this post, we will start to explore some of the ways to resolve the issues of pathfinding for multiple actors.

For the sake of simplicity, we will be dealing with examples on 2D grids, with movement restricted to the four cardinal directions.

LRA* - Local Repair A*

Let’s start with the most obvious and simple solution - the LRA* variant of the A* algorithm.

The idea behind it is simple - we generate paths as usual for each of our agents and let them move, but whenever an agent encounters a collision on its path, we find a new path from its current position.
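To make the idea concrete, here is a minimal sketch of that loop in Python. The `Agent` structure and the `a_star` and `occupied` helpers are assumptions for illustration, not part of any particular engine:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    pos: tuple                                 # current (x, y) position
    goal: tuple                                # target (x, y) position
    path: list = field(default_factory=list)   # remaining steps, excluding pos

def step_agents(agents, a_star, occupied):
    """Advance every agent by one move, locally repairing paths on collision.

    `a_star(start, goal)` and `occupied(pos)` are assumed helpers: the former
    returns a list of positions excluding the start (empty if no path exists),
    the latter reports whether another agent currently blocks `pos`.
    """
    for agent in agents:
        if agent.pos == agent.goal:
            continue
        # Collision ahead (or no path yet): replan from the current position.
        if not agent.path or occupied(agent.path[0]):
            agent.path = a_star(agent.pos, agent.goal)
        if agent.path:                         # may still be empty: agent waits
            agent.pos = agent.path.pop(0)
```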

Let’s take a look at an example.

Here we have a simple case, where our actor P1 walks along the path shown in red in order to reach goal G.

LRA introduction - example

But actor P1 encounters a problem. During his travel, another actor - P2 - has suddenly appeared right on his path to G.

LRA introduction - problem

Because of that, P1 needs to recalculate his path before he moves, so he can still reach the goal. This process is the local repair.

The regenerated path looks as follows:

LRA introduction - recalculated path

Now, after the game’s state update, our actors make another move and change their positions without any problems.

LRA introduction - problem resolved

What’s worth noticing here is that if agent P1 somehow knew that P2 would move and free the position ahead of him, he wouldn’t need to do anything at all and could simply continue moving, as P2 would free up the slot right away.

Another interesting thing is that, because we lack knowledge of the other agents’ movements, in some cases we may not find a new path during the local repair. Take a look at the example below.

LRA introduction - knowledge issue example

P1 is inside an enclosed area with only one route out. His generated path naturally follows that route to reach G, but we can see P2 moving close by.

Now, let’s imagine that after the game’s state update, our actors are positioned as follows.

LRA introduction - knowledge issue example deadlock

As you can see, during P1’s local repair process we would not be able to find a new path to reach G. Depending on the implementation, this may result in the actor simply stopping.
Of course, we could implement a simple retry mechanism, but even if we retry for N steps, we can’t be sure whether a path to G will ever appear.

These issues also depend on the order in which we process our actors, so choosing the processing order in a round-robin fashion could, on average, help minimize them - but even then, LRA* cannot guarantee to remove them entirely.

I would like to add that LRA* by itself might be perfectly fine for some games, and this approach could be the way to go for yours.
Sometimes proper level design constraints can minimize the occurrence of such issues, and even a simple algorithm like this may work very well - but keep in mind that your agents’ behaviour could sometimes appear odd or even broken.

Cooperative A* - expanding to the third dimension

Typically, in the case of simple 2D grid pathfinding, we can create a space map that stores the position of every obstacle under given coordinates x and y. This works well until we introduce more actors to our game, whose positions change over time.

To deal with this issue, we can introduce another dimension to our space map: time. Now we will be accessing a location by passing not only the coordinates x and y but also a time step t.

In this approach, our units also gain another possible move: the pause, which means moving in time but not in space.

For example, if we would like to move an actor north from location (x, y, t), we would move our unit through the space-time map to (x, y + 1, t + 1).

Similarly, making a pause move changes the actor’s position from (x, y, t) to (x, y, t + 1).

All of these actions have the same cost, and they are legal only if the target location is empty.
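To make this concrete, here is a minimal sketch of generating those actions in Python; `is_free(x, y, t)` is an assumed helper that checks both the static map and the reservations we’ll introduce next:

```python
def successors(x, y, t, is_free):
    """Yield the legal space-time actions from (x, y, t): the four cardinal
    moves plus the pause action (0, 0), each advancing time by one step."""
    for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)):
        nx, ny = x + dx, y + dy
        if is_free(nx, ny, t + 1):  # assumed helper: map + reservation lookup
            yield nx, ny, t + 1
```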

After any action, we need to keep track of the actors’ positions and mark the given locations at the given times as occupied. Depending on the implementation, this may be handled by a reservation table or some other data structure.

Spaces may be occupied for only a single time step if the actor is not committing a pause action, which leaves them empty in the next time step and free for other agents to occupy.

It’s worth noting that, by default, this kind of data representation does not prevent two actors from crossing through each other head to head. If one actor has reserved (x, y, t) and (x, y + 1, t + 1), another agent can still reserve locations (x, y + 1, t) and (x, y, t + 1).

We may resolve this by making two reservations for each action involved in the moving process, or alternatively by identifying and marking head-to-head collisions as illegal.
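As a sketch of the first option, the reservation table can be a plain hash set of (x, y, t) entries where every move makes two reservations; keeping the source cell reserved at t + 1 is what makes head-to-head swaps illegal (the class and method names here are my own):

```python
class ReservationTable:
    """A sparse reservation table backed by a hash set of (x, y, t) entries."""

    def __init__(self):
        self.reserved = set()

    def reserve_move(self, src, dst, t):
        # Two reservations per move: besides the destination at t + 1, the
        # source cell also stays reserved at t + 1, so no other agent can
        # step into it in the opposite direction (a head-to-head swap).
        self.reserved.add((*dst, t + 1))
        self.reserved.add((*src, t + 1))

    def is_free(self, pos, t):
        return (*pos, t) not in self.reserved
```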

Space maps can often be large (N x M, where N and M are the level dimensions), which means that space-time maps will be even larger because of the third dimension. Fortunately, these maps are sparse, which allows us to implement them more efficiently with, for example, hash tables - as the set in the sketch above already does.

Now, our A* algorithm traverses the space-time map, and the goal is to reach the destination via the shortest path, arriving at any time. We will be looking for the shortest paths length-wise.
Also, note that the length of a path can be longer than the distance between start and goal, because of the possible pause actions.
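A minimal sketch of that search, reusing the `successors` helper from above (passed in with `is_free` already bound); Manhattan distance is an admissible heuristic here because every action, move or pause, costs exactly 1:

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def space_time_a_star(start, goal, t0, succ, max_t=200):
    """A* over (x, y, t) states. `succ(x, y, t)` yields legal actions;
    g simply equals the elapsed time, since every action costs 1."""
    open_set = [(manhattan(start, goal), t0, start)]
    came_from = {}
    seen = {(start, t0)}
    while open_set:
        _, t, pos = heapq.heappop(open_set)
        if pos == goal:
            path = [(pos, t)]                 # walk back to rebuild the path
            while (pos, t) in came_from:
                pos, t = came_from[(pos, t)]
                path.append((pos, t))
            return path[::-1]
        if t >= max_t:                        # time horizon: abandon branch
            continue
        for nx, ny, nt in succ(pos[0], pos[1], t):
            state = ((nx, ny), nt)
            if state not in seen:
                seen.add(state)
                came_from[state] = (pos, t)
                g = nt - t0                   # one unit of cost per action
                f = g + manhattan((nx, ny), goal)
                heapq.heappush(open_set, (f, nt, (nx, ny)))
    return None                               # no path within the horizon
```

The `max_t` cutoff is an assumption of this sketch: without it, a blocked agent could keep expanding pause actions forever.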

It’s important to note that any greedy algorithm that calculates an optimal path per agent will be unable to solve certain problems. For example, this can happen when one agent’s greedy solution prevents any solution for some other agent.

In general, these kinds of algorithms are sensitive to the ordering of agents. Even a simple round-robin reordering of agents before each time step may be better than leaving the order fixed.
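For instance, rotating a Python list of agents before each planning pass is a one-liner:

```python
# Simple round-robin: rotate the priority order before each time step,
# so no single agent is always planned last.
agents = agents[1:] + agents[:1]
```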

An example

Let’s take a look at an example to make sure everyone stays on the same page.

Here we have a simple case, where two actors P1 and P2 have decided to visit goals G1 and G2.

CA introduction - example

As you can see, whichever agent moves first, the second one will have to act accordingly, since his desired space will already be reserved.

Let’s say that agent P1 was given the priority here. Since there are no reserved locations yet, he can greedily find a simple path and mark all required spaces along the way as reserved in each consecutive time step.

CA introduction - first agent pathfinding
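In code, the whole prioritized pass - find a path for the highest-priority agent, publish its reservations, then plan the next agent against them - could look like the following sketch, chaining together the helpers from the previous section (all names are illustrative, and for brevity only the reservation table is checked, not the static obstacle map):

```python
def plan_cooperatively(agents, table, t0=0):
    """Plan agents one by one; agents earlier in the list have priority,
    since their reservations constrain everyone planned after them."""
    paths = []
    for agent in agents:
        def succ(x, y, t):
            # Bind the reservation check into the successors helper.
            return successors(x, y, t,
                              lambda nx, ny, nt: table.is_free((nx, ny), nt))
        path = space_time_a_star(agent.pos, agent.goal, t0, succ)
        paths.append(path)
        if path is None:
            continue                           # no path found: agent stays put
        prev, pt = path[0]
        for pos, t in path[1:]:
            table.reserve_move(prev, pos, pt)  # publish the step for later agents
            prev, pt = pos, t
    return paths
```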

Now, we will need to calculate the path for the second agent. In the first step of finding the optimal path, we check our neighbors to see what spaces are free to traverse.

CA introduction - second agent pathfinding

Unfortunately, the first step on the easiest way to reach G2 is reserved.
Now, you may wonder what action agent P2 should take here. If we look at the costs of each possible action - moving right, moving down, or pausing (where the numbers represent the relative cost change from the current position) - we can clearly see that the cheapest action is to wait in this time step, so agent P2 will reserve his current position and sit tight.

CA introduction - second agent pausing

Now, in the next step, P2 would by default see that after his pause he can freely reserve the tile to the left; since P1 has reserved P2’s current position, P2 has to move either way.
As we discussed earlier, this may not be desired, as the agents would pass through each other and the result would look like the following:

CA introduction - agents passing through

If we instead disallow such movement, our agent would simply move down to let P1 step ahead, as P1 had the priority to reserve that position, resulting in the following situation:

CA introduction - second agent freeing space for first agent

In either case, the path search would go on as usual, without any further problems in this example.

As you can see, with the CA* algorithm the agents can now move out of each other’s way and travel efficiently.

Wrapping up

In today’s post I have introduced two approaches to handling the dynamic environment of multiple agents on a static 2D grid.

The first is the simple, brute-force solution - the LRA* algorithm. It is very easy to implement, and probably the first step on the way to pathfinding that resolves conflicts between paths.
It is by no means a bad solution for many games, but it may sometimes result in odd or silly-looking pathfinding for some agents. These issues can be limited by clever level design.

On the other hand, I have introduced a more complex algorithm, CA*, which also bears more heavily on memory usage due to tracking the agents’ reserved positions in time.
It makes our agents behave much more cooperatively, but it also repeats a lot of cost calculations for the same tiles as more agents are introduced to the problem.

These issues are addressed by other, more complex and clever algorithms, which I’ll try to introduce in future posts in this series.

I would like to add that there are a lot of pathfinding algorithms, and this field is still booming after many years (did you know A* was introduced in 1968?), so it’s definitely hard to introduce and cover all of them.

I decided to start with some of the less complex ones and see if the series gets any interest.

I find pathfinding to be a very interesting topic, but one that requires reading a lot of publications, since the more complex algorithms tend to be hard to find in simple tutorials.

And that’s it for today’s post! I hope you have enjoyed it and maybe learned something new along the way.
If you have any feedback, feel free to comment below. See you next time!


Also, if you would like to do some further reading on your own, check out these papers by David Silver, on which I have based this post: