The U.N. Bot is a game agent based on an artificial intelligence research project at the University of Edinburgh. It uses a healthy blend of robotics techniques, agent architectures, AI routines, and a few other interesting algorithms from Computer Science. All the underlying technology is geared towards getting the most human-like results possible. The result is a bot that is quite different from anything you've seen before. Read on!
The U.N. Bot is an Utterly Naive bot that spends all its time discovering and patrolling terrains. It is intended as a demonstration of our Ubiquitous Navigation scheme, so all the existing targeting and experimental firing code was deliberately removed. Don't expect a good deathmatch game from this bot! Also, as the name may indicate (to the French-speaking among you), there's only one of them... so take care of it!
During the first few months of the project (and during time stolen from lectures and other assessments before that), this autonomous navigation system was the sole focus of our efforts. Any bot coder will admit it: navigation is the toughest problem to "solve" in game AI. We believe our current investigation is getting very close to that goal, and this demo is a first step in the right direction. Don't judge the default behaviours too harshly, since they can be changed drastically by modifying the parameters... the framework is there!
The demo is partly intended as a test, and we would love your feedback! It's also an example of our flexible behaviour system, which allows high-level customisation and learning of motion. We encourage you to dive into the scripts provided and customise the bot to your heart's content.
Select features may sound familiar to some of you; indeed, some bots in the past have partly achieved these goals. However, rarely was the premise a genuinely autonomous game agent; rather, it was about hardcoding a fake behaviour until a satisfactory compromise was reached. This bot requires NO human interference at all, and is capable of learning all of the behaviours itself -- given overall instructions of what it's trying to achieve.
- Requires no background knowledge of a terrain
The bot's most basic movement requires no pre-existing waypoints, paths or other guidelines within the world. It can wander around autonomously without really knowing where it's going or where it's come from.
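As a rough sketch of what waypoint-free wandering can look like (the names and constants here are illustrative, not the bot's actual code), one common approach is to perturb the current heading a little each tick and step forward:

```python
import math
import random

def wander_step(x, y, heading, speed=1.0, jitter=0.3):
    """One tick of waypoint-free wandering: nudge the heading by a
    small random amount, then step forward.  Illustrative only."""
    heading += random.uniform(-jitter, jitter)
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# Walk a short path from the origin with no map or waypoints at all.
x, y, h = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, h = wander_step(x, y, h)
```

Each step covers a fixed distance, so the path stays smooth while still drifting unpredictably around the level.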
- Avoids obstacles dynamically
The bot is continuously on the lookout for obstacles that may impede its path, and it can avoid colliding with them by correcting its trajectory on the fly.
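A minimal sketch of this kind of reactive avoidance, assuming a simple 2-D world and a fixed look-ahead cone (all names and thresholds here are illustrative, not the bot's actual controller):

```python
import math

def avoid(heading, obstacles, pos, look=5.0, turn=0.5):
    """Steer away from the nearest obstacle inside the look-ahead cone.
    obstacles: list of (x, y) points; pos: (x, y); angles in radians."""
    px, py = pos
    threat = None
    for ox, oy in obstacles:
        dist = math.hypot(ox - px, oy - py)
        bearing = math.atan2(oy - py, ox - px) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if dist < look and abs(bearing) < math.pi / 4:  # inside the cone
            if threat is None or dist < threat[0]:
                threat = (dist, bearing)
    if threat is None:
        return heading                      # path is clear, keep going
    # Turn away from whichever side the nearest threat is on.
    return heading - turn if threat[1] > 0 else heading + turn
```

Called every tick, this corrects the trajectory continuously rather than planning around obstacles in advance.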
- Perceptually honest interaction with the environment
The bot does not know where it is in the world; it has no GPS tracking, if you will. All it can base its decisions on is what it can see and feel. As such, it is obliged to build up a map incrementally from its previous observations. This becomes troublesome when it gets teleported, since it must continue exploring until a path back is found.
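Without any absolute positioning, an agent like this has to estimate its pose by integrating its own movements -- dead reckoning. A minimal illustrative sketch (not the bot's actual bookkeeping):

```python
import math

class DeadReckoner:
    """Egocentric pose estimate: the agent only knows how far it has
    turned and stepped, never its absolute position in the world."""
    def __init__(self):
        self.x = self.y = self.heading = 0.0  # pose relative to the start

    def update(self, turn, distance):
        self.heading += turn
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)
```

A teleport silently invalidates such an estimate, which is exactly why the bot has to keep exploring until its observations line up with the map again.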
- Internal representation of the map learnt online
The visual sensors inform the bot of items present in the world only as it sees them. From these, it builds up a representation of the world within the actual game. This is ideally suited to huge randomly generated worlds, or to situations where the bot is expected to make path-finding mistakes during exploration -- much as humans do.
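One simple way to picture online map building (a toy occupancy map, not the bot's actual representation): the map starts empty and only ever contains cells the agent has actually observed:

```python
def sense(world, pos, radius=1):
    """Cells visible around pos -- a hypothetical perfect local sensor."""
    x, y = pos
    return {(i, j): world.get((i, j), "free")
            for i in range(x - radius, x + radius + 1)
            for j in range(y - radius, y + radius + 1)}

# Ground truth, unknown to the agent: a single wall cell.
world = {(2, 0): "wall"}

# The agent's map grows only with what it has seen from each spot.
known = {}
for spot in [(0, 0), (1, 0)]:
    known.update(sense(world, spot))
```

Anything outside the sensed cells simply does not exist in the agent's map yet, so path-finding mistakes in unexplored regions are expected.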
- Human-like appearance of motion
Because the bot learns its basic motion control itself, and because the problem is expressed with human-like constraints, the behaviour -- once learnt -- ends up surprisingly human-like.
- Curiosity driven exploration
The bot has a desire to find all the items in the world, since it is rewarded for discovering new places! This prompts it to move towards new items nearby, and also to plan paths back to items it previously saw but hasn't had time to investigate.
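The idea of rewarding novelty can be sketched in a few lines (a hypothetical bonus scheme, not the bot's exact reward function):

```python
visited = set()

def curiosity_reward(cell):
    """Reward 1.0 the first time a place is seen, 0.0 afterwards.
    A reward-maximising agent is thereby pushed towards new places."""
    if cell in visited:
        return 0.0
    visited.add(cell)
    return 1.0
```

Since revisits pay nothing, an agent maximising this signal naturally spreads out over the level instead of pacing the same corridor.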
For those of you with a bit of technical background, whether in game programming, artificial intelligence, or computer algorithms, the details under the hood may also interest you.
- Level of detail path planning
Unlike the A* or Dijkstra shortest-path algorithms, this algorithm can provide a rough estimate of the best path straight away! As more computation is dedicated to the task, the returned path is refined towards the optimal one.
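One way to get this level-of-detail effect, sketched here under the assumption of a simple grid world (an illustration of the idea, not the bot's actual planner): search a coarsened grid first for a fast rough answer, then re-plan at full resolution when more computation is available:

```python
from collections import deque

def path_estimate(blocked, start, goal, size, cell=1):
    """Breadth-first path length on a grid coarsened by `cell`.
    cell=2 answers quickly but roughly; cell=1 is exact.  A coarse
    cell counts as free if any fine cell inside it is free, so the
    rough answer is optimistic.  Illustrative only."""
    def free(cx, cy):
        return any((cx * cell + i, cy * cell + j) not in blocked
                   for i in range(cell) for j in range(cell)
                   if 0 <= cx * cell + i < size and 0 <= cy * cell + j < size)
    s = (start[0] // cell, start[1] // cell)
    g = (goal[0] // cell, goal[1] // cell)
    frontier, seen = deque([(s, 0)]), {s}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == g:
            return d * cell            # rescale to fine-grid steps
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < size // cell and 0 <= ny < size // cell
                    and (nx, ny) not in seen and free(nx, ny)):
                seen.add((nx, ny))
                frontier.append(((nx, ny), d + 1))
    return None
```

The coarse pass explores far fewer cells, which is what makes the immediate rough estimate cheap; the fine pass then recovers the exact answer.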
- Dynamic path conjunction and combination
Multiple targets can be set in the world, and as long as the agent has been to them before (i.e. there actually is a path to them!), it will pick a path that combines these multiple goals in an efficient manner (a.k.a. the travelling salesman problem -- I'm still trying to determine its optimality ;).
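A cheap way to combine multiple goals is the greedy nearest-neighbour heuristic sketched below; like any travelling-salesman heuristic it is not guaranteed optimal, and the names here are illustrative rather than the bot's actual code:

```python
import math

def visit_order(start, goals):
    """Greedy nearest-neighbour ordering of several goals: always head
    for the closest remaining one.  Fast, but not provably optimal."""
    order, here, left = [], start, list(goals)
    while left:
        left.sort(key=lambda g: math.dist(here, g))
        here = left.pop(0)
        order.append(here)
    return order
```

Straight-line distance stands in for true path length here; a real planner would use the distances from its map instead.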
- Neural network allows user customisation of flexible motion behaviours
All the low-level motion controllers can be fully customised via a fitness function. You specify the desired behaviour, and the bot will learn it! You don't tweak insignificant constants in an equation -- you tweak the behaviour.
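As an illustration of what specifying behaviour through a fitness function means (a hypothetical example, not one of the bot's shipped fitness functions), here is a fitness that rewards smooth motion by scoring a recorded trajectory of headings:

```python
def smoothness_fitness(headings):
    """Hypothetical fitness: penalise large heading changes, so the
    learnt controller moves smoothly.  Higher scores are better;
    zero means the heading never changed at all."""
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    return -sum(turns)
```

The learner's job is then to find controller parameters that score well -- you never touch those parameters directly, only the scoring.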
- Physically accurate motion prediction
This allows a lot of inaccurate or unnecessary information to be discarded, since the bot can base its decisions on precise data. In practice, this means smoother motion and quicker learning.
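Physical motion prediction can be as simple as integrating the equations of motion forward in time. A generic Euler-integration sketch with an assumed gravity constant (not the engine's actual integrator):

```python
GRAVITY = 9.8  # assumed constant; units are arbitrary here

def predict(pos, vel, dt, steps):
    """Step a moving body forward in time with simple Euler
    integration under gravity.  A generic sketch of the idea."""
    x, y, z = pos
    vx, vy, vz = vel
    for _ in range(steps):
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        vz -= GRAVITY * dt
    return (x, y, z)
```

Being able to predict where a jump or fall will land lets the agent discard trajectories that cannot work before ever trying them.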
- Optimal behaviour arbitration policy chosen by genetic algorithm
At the top level, rewards are given for simple concepts (such as exploring or picking up specific items). The bot then learns how to decide what to do, based on its current state, so as to maximise its potential reward.
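The flavour of this top-level learning can be shown with a tiny genetic algorithm that evolves a weight vector to maximise a reward function (purely illustrative -- the bot's GA and its genome are more involved):

```python
import random

def evolve(reward, genome_len=4, pop=20, gens=30, seed=1):
    """Tiny genetic algorithm: each generation, keep the fitter half
    of the population and refill it with mutated copies."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=reward, reverse=True)
        parents = population[:pop // 2]
        population = parents + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
            for _ in range(pop - len(parents))]
    return max(population, key=reward)

# Toy reward: prefer weight vectors close to (1, 1, 1, 1).
best = evolve(lambda w: -sum((x - 1.0) ** 2 for x in w))
```

Only the reward needs to be specified; the arbitration policy itself (here just a weight vector) is discovered by selection and mutation.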
Though the AI code itself is wrapped into a fully independent C++ module, the demo requires a specific game to run, namely Quake 2. There are plans to port the code to other platforms, but not in the immediate future.
You can find the zip containing the game DLL right here:
Last update 03/03/2002.
As requirements, you need a PC that can run Quake 2.
The first thing to do is view the default demonstration. Once you've come to terms with that, you can attempt to evolve your own behaviours.
- Installing the demo
When you extract the ZIP to the Quake 2 directory, make sure you let it create the sub-directory "un-bot". This will usually be done by default if you use WinZIP. The directory now contains a few XML files, a DLL, and some default Neural Network data.
- Starting a game with a bot
Essentially, you start the game with the "un-bot" mod, and set the "bots" variable to 1 in the console. This can be done by typing the following line at a command prompt:

quake2.exe +set game un-bot +set deathmatch 1 +set spectator 1 +set bots 1 +map q2dm1

Or you can just click on the default short-cut in the Quake 2 directory called "U.N. Bot Demo".
This will show you the default behaviour, which the bot has learnt with the default parameters.
- Removing the bot
If you want the bot to reload the configuration file without having to restart the game, or if you just want to kick it 'cos it did something bad while learning, then you can set the "bots" variable to 0. In the console, type:

set bots 0

This will remove the bot from the game. Setting the variable back to 1 will respawn it.
- Modifying Behaviours - XML Settings
You can open up the file Navigation.XML in your favourite editor and take a peek at what's going on. Basically, there are three "layers" in the system. The lowest level is composed of motion controllers, which process the sensory data. On top of them sit motion influences, which can guide (or even force) the controllers in specific directions. Finally, the behaviour arbitration learns the best way to select these behaviours based on the current situation. Each individual attribute is commented once in the file, so you should have no trouble finding your way around.
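To give a feel for the layout, a fragment in the spirit of the three layers might look like this -- the tag and attribute names below are invented for illustration, and the real Navigation.XML documents its own attributes in comments:

```xml
<!-- Illustrative only: the actual names in Navigation.XML may differ. -->
<navigation>
  <controllers>            <!-- lowest layer: sensors to motion -->
    <controller name="forward" />
  </controllers>
  <influences>             <!-- middle layer: guide the controllers -->
    <influence name="avoid-walls" strength="0.8" />
  </influences>
  <arbitration learnt="true" />  <!-- top layer: picks a behaviour -->
</navigation>
```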
- Advanced Tweaking - Python Scripts
If changing the XML files is still not enough for you, then you can modify the Python scripts themselves. Python is used for pre- and post-processing, as well as for fitness evaluation, which allows you to completely re-specify the behaviours. There are more details within the scripts provided, so take a quick peek for more information.
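The general shape of such a script might look like this -- both the function names and the signatures here are hypothetical, so check the provided scripts for the real hooks:

```python
def preprocess(raw_senses):
    """Hypothetical pre-processing hook: turn raw sensor readings
    into controller inputs (here, naive normalisation into [0, 1])."""
    top = max(raw_senses) or 1.0
    return [s / top for s in raw_senses]

def evaluate_fitness(trajectory):
    """Hypothetical fitness hook: reward total distance covered along
    a recorded 1-D trajectory.  Higher is better."""
    return sum(abs(b - a) for a, b in zip(trajectory, trajectory[1:]))
```

Because the hooks are plain Python functions, re-specifying a behaviour means editing a script rather than recompiling the C++ module.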
Remember, if you have any comments or questions about setting up your learning bot, let us know about them!