The AI Challenge this year is to control a colony of ants, ideally so that they thrive and overwhelm the other colonies they're competing with. There are starter packs in 24 different languages, so you should be able to build a bot in something you're comfortable with, and have some fun.
The most excellent tutorial shows you how to set up a basic strategy for your ants, but leaves you looking for good ways to send your ant from one (x, y) location to another. This post aims to introduce some path-finding algorithms to help, with examples in Python.
Knowing your neighbourhood
Let's start by defining some helpers, which will make our later code a bit cleaner. Firstly, given an ant's location, we want to know what are the neighbouring locations which are valid moves.
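Here's a minimal sketch of such a helper. It assumes the wrapping grid of the ants game, with `rows` and `cols` giving the map size, and a hypothetical `passable` predicate that is `False` for water squares:

```python
def neighbours(loc, rows, cols, passable):
    """Yield the valid moves adjacent to loc on a wrapping grid.

    `passable` is an assumed predicate: False for water squares.
    """
    row, col = loc
    for d_row, d_col in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        new_loc = ((row + d_row) % rows, (col + d_col) % cols)
        if passable(new_loc):
            yield new_loc  # each valid move is yielded in turn
```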
If you're unfamiliar with Python's yield syntax, think of each yield as appending an option to a list, and of the function as returning that list. Next, we want to be able to expand a path we have, which we do just by adding a neighbour onto the end.
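The expand helper might look like this, where the `neighbours` argument is assumed to be the neighbour-listing helper just described, already bound to the current map so it takes a single location:

```python
def expand(path, neighbours):
    """Yield each way of extending path by one valid move.

    A path is a list of locations; `neighbours` is assumed to be a
    one-argument function yielding the valid moves from a location.
    """
    for next_loc in neighbours(path[-1]):
        yield path + [next_loc]  # one new, longer path per neighbour
```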
Now we have the basic ingredients to support algorithms.
Each algorithm we look at will have a frontier of paths so far, and a queue of things to visit next. When we always choose to take things off the front of the queue, and add things to the end, we get breadth-first search. In the ants game, where the cost of a move is always the same, breadth-first search is the same as the related uniform cost search, and is optimal.
We start with our initial path in the queue, then begin the main loop. Every time we take a path off the queue, we first check whether it has reached our goal. If not, we expand it to neighbouring locations, and put the resulting paths back in the queue for later. Like any search, we can also fail if we exhaust all our options.
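A sketch of this loop, with `expand` assumed to take just a path (the helper from earlier, bound to the map). Note that without the bookkeeping tricks discussed at the end, it only terminates if the goal is reachable:

```python
from collections import deque

def breadth_first(start, goal, expand):
    # Queue of paths: we pop from the front and push to the back, so
    # shorter paths are always examined before longer ones.
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:          # reached the goal: done
            return path
        for new_path in expand(path): # otherwise, expand to neighbours
            queue.append(new_path)
    return None                       # exhausted all our options
```

Because we always pop the oldest (shortest) path, the first path that reaches the goal is a shortest one.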
Note that what we get out is a path p, which starts where the ant is and leads to the goal. This can be turned into an order by taking the first step along p and working out which direction it is from the ant's current location.
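One possible sketch of that conversion. The direction letters and the "north decreases the row index" convention are assumptions about the game's coordinate system, so check them against your starter pack:

```python
def first_direction(path, rows, cols):
    # Compare the ant's location path[0] with the next step path[1],
    # allowing for the map wrapping around at the edges.
    (row, col), (next_row, next_col) = path[0], path[1]
    if (row - 1) % rows == next_row:
        return 'n'
    if (row + 1) % rows == next_row:
        return 's'
    if (col - 1) % cols == next_col:
        return 'w'
    if (col + 1) % cols == next_col:
        return 'e'
```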
If we look at breadth-first search's behaviour, we see that it expands the search frontier equally in every direction. This seems a little silly, given that we know which direction is most likely to hold our target. What if we instead score options by their estimated distance to the destination, and look at the closest ones first? That gives us greedy search.
On the surface this looks much like our earlier algorithm, but instead of a queue we're using a list, and making sure we sort it so that we get the lowest-cost path out first. (Implementation note: ideally you'd use a priority queue, which is cheaper than re-sorting the list every time -- see the heapq module in Python.)
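Following the implementation note, here's a sketch of greedy search built on heapq. `heuristic(loc)` is an assumed function estimating the distance from loc to the goal (for the ants map, Manhattan distance with wraparound is a natural choice), and `expand` is the path-expanding helper from earlier, bound to the map:

```python
import heapq
import itertools

def greedy(start, goal, expand, heuristic):
    # The frontier is a heap of (score, tie-breaker, path) entries; the
    # counter stops heapq from ever trying to compare two paths.
    counter = itertools.count()
    frontier = [(heuristic(start), next(counter), [start])]
    while frontier:
        _, _, path = heapq.heappop(frontier)  # closest-looking path first
        if path[-1] == goal:
            return path
        for new_path in expand(path):
            score = heuristic(new_path[-1])   # estimated distance to goal
            heapq.heappush(frontier, (score, next(counter), new_path))
    return None
```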
Now greedy search looks pretty good, but unlike breadth-first search, it isn't guaranteed to give us the shortest path. The reason is that it doesn't care about the path length so far. Only a tiny adjustment to the cost function is needed to make it optimal -- we just add the path length into the cost.
By making our cost function include the length of the path so far, we get A* search, the holy grail of search algorithms, which is complete and -- provided the heuristic never overestimates the remaining distance -- optimal.
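A sketch of A* under the same assumptions as greedy search; the only real change is that a path's cost is its length so far plus the heuristic estimate:

```python
import heapq
import itertools

def a_star(start, goal, expand, heuristic):
    counter = itertools.count()  # tie-breaker for the heap
    frontier = [(heuristic(start), next(counter), [start])]
    while frontier:
        _, _, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path
        for new_path in expand(path):
            # Cost = moves taken so far + estimated distance remaining.
            cost = len(new_path) - 1 + heuristic(new_path[-1])
            heapq.heappush(frontier, (cost, next(counter), new_path))
    return None
```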
If you're doing the AI challenge, go off and have a try at implementing one of these path-finding algorithms. There are also tricks to making them run efficiently. Here are just a few that Russell and Norvig recommend:
- In your expand function, avoid expanding to where you've just been
- Even better, avoid expanding to anywhere in the path so far -- this will avoid having cycles in your search graph
- Keep a set of places you've expanded to previously, and don't expand there either
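One way to apply these tricks is to thread a shared `visited` set through the expand helper, sketched here (the `neighbours` argument is the same assumed one-argument helper as before):

```python
def expand_unvisited(path, neighbours, visited):
    # `visited` is a set shared across the whole search: once a location
    # has been expanded to, no later path may expand there again. This
    # covers all three tricks at once.
    for loc in neighbours(path[-1]):
        if loc not in visited:
            visited.add(loc)
            yield path + [loc]
```

Seed `visited` with the start location before searching; for breadth-first search the result is still a shortest path.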
For this particular task where you have multiple agents, here's another idea:
- Keep a record of your plans for each ant at each timestep, and avoid expanding to paths which are already taken -- this means you'll avoid planning conflicting paths for your ants
Hopefully this should give you enough ideas to get started and hook up your ants with some pathfinding power. Enjoy!