I’ve been working some more on the squad AI this week (trying to fix a bug that left the squad firing all the time). I turned off movement for the agents and then constantly spawned enemies at them. Which struck me as mighty peculiar, because I realized that I’d basically created a tower defence game!
Now of course I’m not trying to create a tower defence game, so that wasn’t the intention. But it’s quite interesting that what you get when you have static agents, capable of independent fire, facing enemies that always move towards a goal, is essentially the fundamentals of a tower defence game. Admittedly one where the “towers” can move around in squads and where the thing you’re defending can also move around and defend itself quite well. But still, it bears some thinking about from a design standpoint. Maybe every scenario where you have static combatants versus moving threats is a tower defence game?
My design intention is far more like Bullfrog’s game Syndicate than a tower defence game. So just to assuage that fear inside myself that I’m creating something as derivative as a TD game, I’m posting a vid of good old Syndicate in all its DOS-based glory.
One fun part of having watched all these Syndicate videos recently is that I now have a bunch of design features I can incorporate into the game. Stuff like squad selection, upgrades, a world map, taxes and research all seem doable and useful. But likely the best one is the notion that you can take items off enemies you kill, which adds almost an RPG element to the game (looting the corpses of slain enemies is very much RPG fodder, after all). In Syndicate you can then sell those looted items to pay for upgrades. I like that idea a lot. Don’t know if I’ll steal it wholesale though, because it’s a lot of UI work and I loathe UI work.
Anyway, onwards and upwards: the firing bug is fixed and I can allow the agents movement again. Now work switches to one of the thornier issues, local avoidance and the navigation loop. Right now the agents can get blocked by each other and by other inhabitants of the world (including enemies, if they aren’t “uncovered” as enemies). They have a path through the world, but it’s a static path calculated at the moment they decide where they want to go. Static paths are fine for large-scale movement, but we also need local collision avoidance if the agents are going to feel reasonably alive. Think of it this way: if you were moving around a city and I asked you where you were going, you would say “to the shops” or “to the cinema”. That’s the global path. If I then asked you to describe how you were moving as you walked there, you would have to describe how you became aware of other people in the area, how you adjusted your movement to avoid them, and so on. That’s the local navigation issue.
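To make the global/local split concrete, here’s a minimal sketch of one common approach: the agent steers towards the next waypoint on its static path, then adds a separation force pushing it away from nearby neighbours. This is purely illustrative (the names `steer`, `SEPARATION_RADIUS` etc. are my own inventions, not anything from the game’s code), and real crowd systems do considerably more than this.

```python
import math

# Illustrative constants, not tuned values from the game.
SEPARATION_RADIUS = 2.0   # start avoiding neighbours inside this range
MAX_SPEED = 1.5

def steer(agent_pos, waypoint, neighbours):
    """Return a desired velocity: path-following plus separation."""
    # Global component: head for the next waypoint on the static path.
    to_goal = (waypoint[0] - agent_pos[0], waypoint[1] - agent_pos[1])
    dist = math.hypot(*to_goal) or 1e-6
    desired = [to_goal[0] / dist * MAX_SPEED, to_goal[1] / dist * MAX_SPEED]

    # Local component: push away from each neighbour inside the radius,
    # weighted so that closer neighbours push harder.
    for n in neighbours:
        away = (agent_pos[0] - n[0], agent_pos[1] - n[1])
        d = math.hypot(*away)
        if 0.0 < d < SEPARATION_RADIUS:
            weight = (SEPARATION_RADIUS - d) / SEPARATION_RADIUS
            desired[0] += away[0] / d * weight * MAX_SPEED
            desired[1] += away[1] / d * weight * MAX_SPEED

    # Clamp so avoidance never makes the agent exceed its max speed.
    speed = math.hypot(*desired)
    if speed > MAX_SPEED:
        desired = [c / speed * MAX_SPEED for c in desired]
    return tuple(desired)

# With no neighbours, the agent heads straight for the waypoint.
v_clear = steer((0.0, 0.0), (10.0, 0.0), [])
# A neighbour just ahead and to one side deflects it off the straight line.
v_deflected = steer((0.0, 0.0), (10.0, 0.0), [(1.0, 0.3)])
```

The velocity gets recomputed every frame, so the agent flows around obstacles while still converging on the waypoints of the global path. Proper solutions like the velocity-obstacle methods Mikko has been writing about also reason about where neighbours will be, not just where they are.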
Luckily, Mikko Mononen (the guy who wrote the navigation mesh generation the game uses) has been doing a lot of work in this area over at his blog. I’m hoping to use some of Mikko’s work and fiddle with some of it to increase the number of agents to match the requirements of the game. Will post a video of that soon.
So I thought I’d post real quick about some of the animation work I did last week.
The animation on all of the characters in the game will come from a mix of hand animation and motion capture. Motion capture is the kind of animation data you get when you literally record real humans performing particular motions.
There are a number of indie-affordable motion capture solutions out there now. I’ve got the OptiTrack one (shown in the video above) and it’s pretty good, but it can be a pain to set up and needs a fairly big space. Luckily I can borrow one of the labs where I work, so I can sometimes get space to play with it. But marker-based solutions (you have probably seen them, where your motion capture actor wears the blue cat-suit with the white dots) have some pretty serious issues unless you can spring for a very expensive setup. I’m planning to have a look at IPISoft’s video-based solution, which takes the input of four frame-synchronized video cameras and runs a shape-fitting algorithm on them to create “markerless” motion capture. Still not 100% ideal, and not realtime, but perhaps useful enough to be worth a try.
Of course there are plenty of places that sell motion capture, plus a number of free resources, the largest being Carnegie Mellon University’s motion capture database. Here’s an example of that CMU data being played back on the new box-guy prototype agent.
There is still a bit of footskate on his playback, but that’s more of an artifact than anything “wrong”. It’s easy to fix but kind of pointless right now.
So, at some point I’ll get round to showing off the animation blend-tree and how that works for character locomotion. Plus show off some nicer clips of social motions (waving, hugging, chatting etc). For now the main thing is to work out some more of the squad control interface, which is coming along slowly.