Working on our game this weekend. Despite the fact that we’ve been doing this on and off for close to a year now, I reckon we must each have only put maybe 6-8 hours of work into this thing - work has just been that hectic! I've also integrated the AngryAnt Behave library into the project for the AI, which is a lot of fun. The plan is to make the game just a lovely thing to interact with before we add any kind of winning conditions.
Finally getting round to it
For the last couple of months I’ve been working for the ex-Blackrock guys over in Brighton at their new studio Gobo games and have been enjoying learning lots of new stuff there (along with a lot of homemade food made in their big open plan kitchen). All good stuff, but the personal projects have fallen by the wayside a little.
Anyway, I’ve finally got round to playing with the Unity API and have been finding it a total joy to use. Everything is so easy! So I’ve decided to make just one game that is pure gameplay to get to know the package. It’s going to be a little multiplayer tower defence game - specifically so a friend of mine and I have something to play at the same time with just one iPad. It will be the last project I do with absolutely no artistic value whatsoever, but it’s still quite fun.
So far I’ve coded a hybrid navigation mesh / spatial partitioning system for the AI to use and to optimise collision detection. After making the first generic AI framework, it’s a real joy to be building this from the ground up without having to keep working out how to achieve the next step. It’s also really nice to have the requirements of the whole thing in mind, so you can just do exactly what needs to be done and no more.
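For anyone curious, the rough idea is that one grid does double duty: each cell acts as part of the navigation data and also keeps a list of whatever is currently standing in it, so collision checks only ever look at nearby objects. The sketch below is just an illustration of that idea with made-up names, not the actual project code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch only (made-up names, not the project code): one grid does
// double duty as a navigation structure and a spatial partition for collisions.
public class SpatialGrid
{
    public class Cell
    {
        public bool Walkable = true;                                      // navigation role
        public readonly List<Collider> Occupants = new List<Collider>();  // partitioning role
    }

    private readonly Cell[,] cells;
    private readonly float cellSize;

    public SpatialGrid(int width, int height, float cellSize)
    {
        this.cellSize = cellSize;
        cells = new Cell[width, height];
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                cells[x, y] = new Cell();
    }

    public Cell CellAt(Vector3 position)
    {
        int x = Mathf.Clamp((int)(position.x / cellSize), 0, cells.GetLength(0) - 1);
        int y = Mathf.Clamp((int)(position.z / cellSize), 0, cells.GetLength(1) - 1);
        return cells[x, y];
    }

    // Objects register themselves into the cell they currently occupy...
    public void Register(Collider obj, Vector3 position)
    {
        CellAt(position).Occupants.Add(obj);
    }

    // ...so collision detection only has to test against objects in the same cell
    // (a fuller version would also look at the neighbouring cells).
    public IEnumerable<Collider> PotentialCollisions(Vector3 position)
    {
        return CellAt(position).Occupants;
    }
}
```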
Generalisation
People are very good at making assumptions about objects they have never encountered before and instantly knowing how to use them.
The illusion of this can be achieved with AI by having the objects themselves specify how they can be used, and what they can be used for. Agents can then search their immediate surroundings for objects that fit a certain criteria and use them accordingly. This is a useful tool if you want to be able to easily add things to the world that the AI will instantly be able to understand.
In a more open-world game, objects could be used to fulfil basic needs, creating a more lifelike, generic NPC. Then, these same objects could also be used as weapons, however useless they may be - each one describing how much damage it does when thrown, or when used as a melee weapon.
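To make that a little more concrete, here's roughly the shape such a system could take - hypothetical names throughout, not code from the demo: each object advertises the needs it can satisfy and how much it hurts when thrown or swung, and an agent just queries whatever happens to be nearby.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of objects describing their own uses - not demo code.
public enum Need { Hunger, Rest, Defence }

public class UsableObject
{
    public string Name;
    public HashSet<Need> Satisfies = new HashSet<Need>();
    public float ThrownDamage;   // how much damage it does when thrown...
    public float MeleeDamage;    // ...or when swung, however feeble.
}

public class Agent
{
    // The agent knows nothing about kettles or chairs - it only asks the
    // objects nearby whether they can satisfy the current need.
    public UsableObject FindObjectFor(Need need, IEnumerable<UsableObject> nearby)
    {
        return nearby.FirstOrDefault(o => o.Satisfies.Contains(need));
    }

    // In a pinch, anything is a weapon: just pick whatever hurts most.
    public UsableObject FindImprovisedWeapon(IEnumerable<UsableObject> nearby)
    {
        return nearby.OrderByDescending(o => Math.Max(o.ThrownDamage, o.MeleeDamage))
                     .FirstOrDefault();
    }
}
```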
The purpose of all this would be to create a game where anything could be picked up and used, or used for defence in a pinch. I believe a system where a few simple actions exist that can be used on anything would help decrease a feeling of limitation in a game, and increase the ability to experiment and explore. This type of system is also a natural match for the polymorphic nature of many object-oriented programming languages.
It’s also a step in the direction of more interesting, generic open-world AI that may or may not fight you depending on the circumstances and would create nice situations where attacking a random passer-by may see them grab a nearby common object for defence.
Additionally, too many open-world games see mindless characters wander about when it would be relatively simple to give them just a few basic desires and sleep patterns. It would be refreshing to have a game city that gets busy at lunchtime, quiet at night and so on. These two things combined could make a world come alive in a way that has not been achieved particularly well to date.
As far as the demo goes, this won’t be especially visible, as everything is a weapon and all the agents are in a permanent combat state. However it will result in a more modular weapon description, in which each weapon describes the appropriate movement when being used; for example charging with a short-range weapon or hanging back with a long-range one. This would also exist in common objects when they get created. Hopefully once the multiplayer demo is completed, work can start on a game with AI that does more than try to shoot you.
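Something along these lines, purely as an illustrative sketch rather than the demo's actual classes: each weapon carries its own effective range and preferred engagement style, and the movement code just reads those values instead of branching on weapon type.

```csharp
// Hypothetical sketch of a self-describing weapon: the agent's movement code
// reads these values rather than special-casing each weapon type.
public enum EngagementStyle { Charge, HoldRange, KeepDistance }

public class WeaponDescription
{
    public string Name;
    public float EffectiveRange;     // distance at which it is worth using
    public EngagementStyle Style;    // how the wielder should move while using it
}

public static class CombatMovement
{
    // Decide whether to close in or hang back purely from the weapon's description.
    public static float DesiredDistanceToTarget(WeaponDescription weapon)
    {
        switch (weapon.Style)
        {
            case EngagementStyle.Charge:       return 0f;                           // shotgun/melee: get in close
            case EngagementStyle.HoldRange:    return weapon.EffectiveRange * 0.75f;
            case EngagementStyle.KeepDistance: return weapon.EffectiveRange;        // rifle: hang back
            default:                           return weapon.EffectiveRange;
        }
    }
}
```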
Imminent and future development
The demo needs a bit of spring cleaning as it stands, but I just wanted to get it out there. Delta time is calculated but needs applying to all update methods, large and small. The Level and Brain classes are also very big and need splitting up. There are also a number of ‘TODO’ statements dotted around that need attending to - small improvements in code elegance.
The demo is now being turned into a game which has been fully designed. The following changes will be made to allow this:
'Brain' will be split into a 'higher brain' class, which will be able to accept one of any number of decision-making modules, and a 'lower brain' class, which will handle pathfinding and also call into sight, hearing and other sense routines, compile all the data and feed it into 'higher brain' (there's a rough sketch of this split after the list below).
'Level' will be overhauled to allow instantiation with a 'level description' function - allowing the generation of different maps.
The path table will also be stripped out and the real-time search will be re-implemented to allow for dynamic maps.
Agents will no longer exist as a lengthy chain of inheritance, but rather contain a number of classes allowing for agents that can include or exclude certain components.
All objects and entities that can be targeted will derive from a shared super-type, ‘TargetableGameObject’, itself derived from GameObject - this just makes a lot of the code much simpler to implement and work with.
Levels will be generated from a hybrid Navigation Mesh / Spatial Partitioning system, and locations tracked automatically by GameObjects, allowing for more complexity in behaviour and much more efficient calculations.
Finally all of it will be wrapped up in a nice, neat little menu system.
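To make the higher/lower brain split mentioned above a little more concrete, here's a rough sketch of the shape it will probably take - the interface and class names are placeholders, not final code. The lower brain gathers the sense data and owns pathfinding, and any decision-making module can be dropped into the higher brain.

```csharp
using System.Collections.Generic;

// Placeholder sketch of the planned split - none of these names are final.
public class SenseData
{
    public List<int> VisibleAgents = new List<int>();
    public List<int> HeardSounds = new List<int>();
}

// Any decision-making scheme (state machine, behaviour tree, and so on)
// only has to implement this to be plugged into the higher brain.
public interface IDecisionModule
{
    void Decide(SenseData senses);
}

public class LowerBrain
{
    // Runs sight, hearing and the other sense routines, and owns pathfinding.
    public SenseData GatherSenses()
    {
        return new SenseData();
    }

    public void FollowPathTowards(int targetNode)
    {
        // Pathfinding would live here.
    }
}

public class HigherBrain
{
    private readonly IDecisionModule decisions;

    public HigherBrain(IDecisionModule decisions)
    {
        this.decisions = decisions;
    }

    // The lower brain compiles the data; the higher brain only has to think about it.
    public void Think(LowerBrain lowerBrain)
    {
        decisions.Decide(lowerBrain.GatherSenses());
    }
}
```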
http://www.youtube.com/watch?v=crietSSXn1M
A better video. It has been mentioned that the colour of the text displaying the main agent’s thoughts can be a little misleading. The colours actually refer to the spinning markers, and not the enemy agents. This has been remedied in the latest build, which is available on the downloads page. (VS2010 and DirectX10 required)
http://www.youtube.com/watch?v=gkNuMZcMIJU
A short capture from the latest version of the demo. The video encoding has made it appear slightly choppy, but the actual demo is not like this.
Demo V3.0
The demo’s now been converted to DirectX10 using C++. Source code and videos will be posted shortly.
Simple Game Framework
I put together this simple framework a few weeks ago as a first step in converting my demo from C#/XNA to C++ using Direct3D. It’s fairly simple but works in much the same way the XNA wrapper does - global Load, Initialise and Unload methods run once, the Draw method runs as often as it can, and the Update method runs sixty times a second, passing through a delta time value that is used to compensate for variations in loop-cycle times. The entire demo should be ported shortly; currently various maths algorithms are having to be converted due to the differences in the handedness of the two frameworks’ respective coordinate systems, and it has also been necessary to write a custom collision class for bounding volumes and rays.
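The loop itself looks roughly like the sketch below - written here in C# for readability rather than being the actual C++/Direct3D code: Update ticks on a fixed sixty-per-second step with the delta passed through, while Draw just runs whenever there's time left over.

```csharp
using System.Diagnostics;

// Simplified C# sketch of the loop structure described above
// (the real framework is C++/Direct3D - this is just the shape of it).
public abstract class Game
{
    const double UpdateStep = 1.0 / 60.0;   // Update targets sixty ticks a second

    public abstract void LoadContent();     // run once
    public abstract void Initialise();      // run once
    public abstract void UnloadContent();   // run once
    public abstract void Update(float deltaTime);
    public abstract void Draw();

    protected bool Running = true;

    public void Run()
    {
        LoadContent();
        Initialise();

        var timer = Stopwatch.StartNew();
        double previous = 0.0, accumulator = 0.0;

        while (Running)
        {
            double now = timer.Elapsed.TotalSeconds;
            accumulator += now - previous;
            previous = now;

            // Update at a fixed rate, passing the step through as a delta
            // so time-based movement stays correct if a tick runs long.
            while (accumulator >= UpdateStep)
            {
                Update((float)UpdateStep);
                accumulator -= UpdateStep;
            }

            // Draw as often as there is time left over.
            Draw();
        }

        UnloadContent();
    }
}
```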
http://www.youtube.com/watch?v=QW5kIqyjbbs
Short capture from the last XNA version. Direct3D / C++ version should be up shortly.
Phew!
So, after starting to fix a minor bug I got sidetracked and recoded a large portion of the AI (although they still look like little cubes…). They now maintain a list of everything they can see, as well as keeping a few choice things in memory. You can then take any information from these lists in order to have the agents make a decision about what to do. At the moment they choose their target based on a simple hierarchy of object type and distance. Adding new behaviours is now incredibly easy; for example, the agents do not currently react to being shot at, but all that would need to be done is to turn their head briefly in the direction of fire, which would cause them to see the enemy and automatically factor it into their decision making. Easy!
Before I get sidetracked making these improvements however I’m going to port the demo to C++ using DirectX10. The C#/XNA version is available on the download page. It may still be in need of some refactoring.
If for any reason you want to mess around with their behaviour, the best way is to modify or rewrite the TargetChooser() method in the Agent class (all this does is set ChosenTarget, which holds an index of the target, and also set TargetType to AGENT or OBJECT_OR_MEMORY, which is necessary to differentiate between agents and objects that share the same index).
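For reference, a stripped-down sketch of the sort of logic TargetChooser() performs is below - the helper types are made up for the example; only ChosenTarget, TargetType and the type-then-distance hierarchy come from the demo itself.

```csharp
using System.Collections.Generic;

// Stripped-down sketch of the kind of logic TargetChooser() performs.
// VisibleThing and its fields are illustrative, not types from the demo.
public enum TargetType { AGENT, OBJECT_OR_MEMORY }

public class VisibleThing
{
    public int Index;        // index into the agent or object lists
    public bool IsAgent;
    public int Priority;     // hierarchy of object type: lower is more important
    public float Distance;
}

public class Agent
{
    public int ChosenTarget;
    public TargetType TargetType;

    public void TargetChooser(List<VisibleThing> candidates)
    {
        VisibleThing best = null;
        foreach (var thing in candidates)
        {
            // Prefer higher-priority types; break ties by distance.
            if (best == null ||
                thing.Priority < best.Priority ||
                (thing.Priority == best.Priority && thing.Distance < best.Distance))
            {
                best = thing;
            }
        }

        if (best == null) return;
        ChosenTarget = best.Index;
        TargetType = best.IsAgent ? TargetType.AGENT : TargetType.OBJECT_OR_MEMORY;
    }
}
```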
Updates Soon...
The AI is now in a finished state. It’s an HFSM-based AI with a little bit of learning thrown in. It was an interesting journey, the biggest conclusion being that unless you have enough animations and general ‘things-for-them-to-do’, they can appear relatively unintelligent even if they are actually doing some quite clever things.
In the case of this AI, they learn to prefer different areas of the map based on their success rate with different weapons. They also have unique personalities, some being braver than others. The problem is that, being little cubes and not living for very long, this behaviour isn’t really very noticeable.
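The learning itself is conceptually very simple - something like the sketch below, which is only illustrative and not the demo code: each agent keeps a score per area and weapon that gets nudged by how fights turn out, and the bravery value just changes how adventurously those scores are read.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only, not the demo code: a score per area/weapon pair that
// gets nudged by how fights turn out, plus a per-agent bravery value.
public class AreaLearning
{
    private readonly Dictionary<string, float> scores = new Dictionary<string, float>();
    private readonly Random rng = new Random();

    public float Bravery = 0.5f;   // personality: 0 = timid, 1 = fearless

    private static string Key(int area, string weapon)
    {
        return area + ":" + weapon;
    }

    // Nudge the score for this area/weapon towards 1 on a win, towards 0 on a loss.
    public void RecordFight(int area, string weapon, bool won)
    {
        float current;
        scores.TryGetValue(Key(area, weapon), out current);
        scores[Key(area, weapon)] = current + 0.2f * ((won ? 1f : 0f) - current);
    }

    // Prefer areas where the current weapon has done well; braver agents add more
    // noise to the choice, so they still wander into riskier spots.
    public int ChooseArea(IList<int> areas, string currentWeapon)
    {
        int bestArea = areas[0];
        float bestScore = float.MinValue;
        foreach (int area in areas)
        {
            float score;
            scores.TryGetValue(Key(area, currentWeapon), out score);
            score += Bravery * 0.25f * (float)rng.NextDouble();
            if (score > bestScore)
            {
                bestScore = score;
                bestArea = area;
            }
        }
        return bestArea;
    }
}
```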
Another thing to take away from all this is that without a game to tailor their behaviour to, you can pretty much keep adding and adding to their intelligence in infinite little ways. In the end, you need a goal to work towards, otherwise, where do you stop?
I’ll post up the final results in the next few days, but in the coming weeks I’m going to strip out their behaviour from the framework and start fresh with a different system.
http://img341.imageshack.us/flvplayer.swf?f=Mkf2
More basic tests, with the path-node, search-path and field-of-vision visibility on.
http://img684.imageshack.us/flvplayer.swf?f=Mg6s
Some basic sight and pathfinding tests on a larger map.
State Based AI
I should have a proper update soon, but in the meantime if anyone’s interested, this is the framework that my AI is roughly based on.
[gallery]
The Agents can now see! At the moment they can see through walls but it should be very simple to stop that. Now I should be able to start programming some behavioural stuff. Finally!
[gallery]
Made little floating cubes which will eventually be replaced by a little rifle and shotgun. All the graphics at this point are just place-holders.
[gallery]
Taught myself a little bit of trig, so now the agents move about properly and face where they’re going rather than hopping from point to point. Also made some of the nodes into pick-up generators that will periodically produce weapons for the AI to use (green and yellow dots).
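For anyone else picking this up, the trig in question boils down to a single Atan2 call on the direction of travel - a tiny hypothetical helper, not the demo code:

```csharp
using System;

// Hypothetical helper showing the bit of trig involved: face the way you're moving.
public static class Facing
{
    // Returns the heading (in radians) an agent should face while moving
    // from its current position towards the next node on its path.
    public static float HeadingTowards(float fromX, float fromY, float toX, float toY)
    {
        return (float)Math.Atan2(toY - fromY, toX - fromX);
    }
}
```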
[gallery]
Created the pathfinding for the agents and a simple placeholder object to represent them. The pathfinding is pretty crude, but my program’s so small there’s no need to make it better at the moment. I might fix it later.
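For anyone following along, even a plain breadth-first search over the node graph is plenty at this scale; the sketch below (made-up types, not the demo code) shows the idea:

```csharp
using System.Collections.Generic;

// Minimal breadth-first search over a node graph - made-up types, not the demo code.
public class PathNode
{
    public readonly List<PathNode> Edges = new List<PathNode>();
}

public static class Pathfinding
{
    // Returns a path from start to goal (inclusive), or null if unreachable.
    public static List<PathNode> FindPath(PathNode start, PathNode goal)
    {
        var cameFrom = new Dictionary<PathNode, PathNode>();
        cameFrom[start] = start;
        var frontier = new Queue<PathNode>();
        frontier.Enqueue(start);

        while (frontier.Count > 0)
        {
            var current = frontier.Dequeue();
            if (current == goal) break;
            foreach (var next in current.Edges)
            {
                if (cameFrom.ContainsKey(next)) continue;
                cameFrom[next] = current;
                frontier.Enqueue(next);
            }
        }

        if (!cameFrom.ContainsKey(goal)) return null;

        // Walk back from the goal to reconstruct the path.
        var path = new List<PathNode>();
        for (var node = goal; node != start; node = cameFrom[node]) path.Add(node);
        path.Add(start);
        path.Reverse();
        return path;
    }
}
```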
[gallery]
Classified some of the nodes as spawn points (little red nodes in the corners) - this is where the Agents will re-appear after they die.
Also classified some of the nodes as ‘room centres’ (blue nodes). The AI head to these when they have nothing else to do. I thought this would be a bit more like how a human player would play the game: you generally search from room to room when you can’t find anyone, and probably wouldn’t just wander into a random corner.
[gallery]
Created the inner walls. The Grid Nodes are coded so that they remove themselves wherever there is a wall, and link themselves up accordingly. The Nodes are linked with little lines called Edges, which the artificially intelligent ‘Agents’ will use to navigate the world.
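Roughly speaking the generation step is: lay nodes out on a grid, delete any node that falls inside a wall, then link each survivor to its surviving neighbours with an Edge. A small sketch of that, illustrative only rather than the actual code:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the grid generation step: drop nodes inside walls,
// then link the survivors to their neighbours with edges.
public class GridNode
{
    public int X, Y;
    public readonly List<GridNode> Edges = new List<GridNode>();
}

public static class GridBuilder
{
    // isWall(x, y) answers whether a wall occupies that grid square.
    public static List<GridNode> Build(int width, int height, Func<int, int, bool> isWall)
    {
        var nodes = new GridNode[width, height];
        var result = new List<GridNode>();

        // Place a node everywhere there isn't a wall.
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                if (!isWall(x, y))
                {
                    nodes[x, y] = new GridNode { X = x, Y = y };
                    result.Add(nodes[x, y]);
                }

        // Link each node to the neighbours that survived (4-way here for brevity).
        foreach (var node in result)
        {
            TryLink(node, nodes, node.X + 1, node.Y, width, height);
            TryLink(node, nodes, node.X - 1, node.Y, width, height);
            TryLink(node, nodes, node.X, node.Y + 1, width, height);
            TryLink(node, nodes, node.X, node.Y - 1, width, height);
        }
        return result;
    }

    private static void TryLink(GridNode node, GridNode[,] nodes, int x, int y, int w, int h)
    {
        if (x < 0 || y < 0 || x >= w || y >= h || nodes[x, y] == null) return;
        node.Edges.Add(nodes[x, y]);
    }
}
```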