
I see this kind of sentiment all the time from people outside the game development industry. Every single time a new graphics card is launched, a new console, a new engine, whatever.

Do people really think that it's a matter of CPU time goes in and intelligence comes out?

People seem to think that if we were not spending so much time on graphics, that we would have amazing AI by now, but it's simply not the case.

The reason we mostly use simple state-based AIs comes down to controllability.

In order to actually design a fun gameplay experience you need to have ways to tweak the AI to behave in the way that the player will find fun. You want to be able to ask "Why did that character not throw a grenade in this situation?", be able to find the answer, and tweak it.
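Something like this (a rough sketch; every name, threshold, and helper here is invented) is the kind of thing designers actually want to be able to read and tweak:

    # Hypothetical sketch: a hand-authored grenade decision with every
    # threshold exposed, so a designer can answer "why didn't he throw?"

    GRENADE_MIN_RANGE = 5.0     # don't throw at point-blank range
    GRENADE_MAX_RANGE = 25.0    # beyond this it stops looking believable
    GRENADE_COOLDOWN = 12.0     # seconds between throws, for pacing
    MIN_ENEMIES_CLUSTERED = 2   # only worth it against a group

    def should_throw_grenade(npc, player, clustered_enemies, now):
        if now - npc.last_grenade_time < GRENADE_COOLDOWN:
            return False                  # answer: still on cooldown
        dist = npc.distance_to(player)
        if not (GRENADE_MIN_RANGE <= dist <= GRENADE_MAX_RANGE):
            return False                  # answer: outside the tuned range band
        if clustered_enemies < MIN_ENEMIES_CLUSTERED:
            return False                  # answer: target not worth a grenade
        return npc.has_grenades()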

The techniques that come out of AI research don't expose parameters in this form, and so are essentially useless for real game AI.

Probably the biggest CPU cost from AI today is perception checks, like whether entity X can see entity Y from its location. Checks like these tend to cross-cut into engine code and simply feed into the very simple state-based AI.
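A rough sketch of what such a check looks like (all names here are made up; the expensive occlusion raycast lives in engine/physics code and is passed in):

    import math

    VIEW_DISTANCE = 30.0
    VIEW_HALF_ANGLE = math.radians(60)  # 120-degree vision cone

    def can_see(observer, target, raycast_clear):
        dx, dy = target.x - observer.x, target.y - observer.y
        dist = math.hypot(dx, dy)
        if dist > VIEW_DISTANCE:
            return False
        # angular difference between the observer's facing and the target,
        # wrapped into [0, pi]
        angle_to_target = math.atan2(dy, dx)
        delta = abs((angle_to_target - observer.facing + math.pi)
                    % (2 * math.pi) - math.pi)
        if delta > VIEW_HALF_ANGLE:
            return False
        # the expensive part: an occlusion query against level geometry
        return raycast_clear(observer.position, target.position)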



It would be cool to see incredibly rare behaviours. Say you go up against a bunch of mercs, wipe the floor with all of them, then round the corner and shove your rifle into the last one's face, and he goes "fair cop mate, I'm out of here". It would only happen every once in a while, but it would be a fantastic reward for the player to feel like they can, if they do well enough or play in a certain style, make the AI feel simulated emotions - fear, rage, complacency, etc. So if you snuck through the building and carefully knocked guys out, they might not be fearful, just complacent; but if you did the same thing and brutally fucked them all up with knives, they'd be much more alert, but fearful and prone to error.

Personally I don't think this would be hard to simulate. Fear, for instance, could be marked by increased cursing, decreased accuracy, more mistakes, and a chance of full-scale running away. Complacency would mean guards don't check every corner on their patrol routes, have smaller vision cones, and are slower to react when shit hits the fan. Confidence could be marked by seeking cover less, drastically increased accuracy, and so on.
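As a rough sketch (every number and field name here is made up), an "emotion" can just be a bundle of modifiers applied to parameters the guard AI already has:

    # "Emotion" as nothing more than a set of tweaks to existing parameters.

    EMOTION_MODIFIERS = {
        "fearful":    {"accuracy": 0.6, "view_cone_scale": 1.0,
                       "reaction_delay": 0.8, "flee_chance": 0.3,
                       "bark_set": "panicked"},
        "complacent": {"accuracy": 0.9, "view_cone_scale": 0.7,
                       "reaction_delay": 1.5, "flee_chance": 0.0,
                       "bark_set": "bored"},
        "confident":  {"accuracy": 1.2, "view_cone_scale": 1.0,
                       "reaction_delay": 0.5, "flee_chance": 0.0,
                       "bark_set": "taunting"},
    }

    def apply_emotion(guard, emotion):
        mods = EMOTION_MODIFIERS[emotion]
        guard.accuracy = guard.base_accuracy * mods["accuracy"]
        guard.view_cone = guard.base_view_cone * mods["view_cone_scale"]
        guard.reaction_delay = mods["reaction_delay"]
        guard.flee_chance = mods["flee_chance"]
        guard.bark_set = mods["bark_set"]   # which voice lines to pull from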

So yes, whilst in the distant future we could have truly "intelligent" AI leading to unpredictable gameplay, we already have the tools to create enemies that provide a much richer and more varied array of feedback to the player's behaviour.


"It would be cool to see incredibly rare behaviours"

Working in game design with complex AI, I can tell you that the last thing players ever want is rare behaviors. They tend to think of them as bugs, even when they are just that: rare behaviors.

Then again, I've been working in game systems where the rare behaviors are sometimes positive and sometimes negative for the player. Given this, the negative ones stand out as "bugs" even if they are not.


The problem with current AI is that it is programmed, so it only stretches as far as the programmer took it. AI needs to learn; it needs to be allowed to produce emergent behaviors, not just the ones programmed into it.


There was a post on Gamasutra by one of the Civ IV devs talking about this very problem. It turns out that most players don't really want an enemy that makes perfect moves most of the time. The problem with AI is not making it hard enough, but making it feel good to win against. In many games, that's a much harder problem than making the AI strong.


I would argue against this -- you can make very effective AI without machine learning. I think what you're really trying to say is that current AI feels too "hand-scripted" -- if X, then Y. I totally agree about emergent behavior, but emergent behavior can be produced without machine learning. Simple example:

Enemy's set of motivations: kill the player

Knowledge base: 1) An organism can be killed with an excessive amount of directed energy (heat, force, etc.). 2) The player is an organism. 3) Some general rules about the physical world (how gravity works, how fire works, what switches and levers activate which objects, blah blah blah).

Now, the enemy might take some pre-programmed path to killing the player (like firing a gun at them). Or, if circumstances dictate that it is best, it might produce a more "creative," emergent approach. Say, for example, the enemy pulls a lever. That specific behavior was never added by the programmer. The lever activates a trap door above the player's head, and a bunch of boulders crush the player. Switch action -> directed energy -> kill organism -> player death.

In short, it really comes down to the programmer determining an effective knowledge base (abstraction) and letting the AI run wild with reasoning. :) This is not to say machine learning wouldn't be the icing on the cake, but you could just as well give enemies (or NPCs) the knowledge before the game launches, without it having to be learned.
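As a toy sketch of that idea (all facts, names, and the single rule are invented for illustration), a bit of forward chaining over an authored knowledge base can "find" the lever plan without it ever being scripted:

    # Facts are authored by the designer; the chain the AI finds is not.
    facts = {
        ("player", "is_organism"),
        ("lever_3", "activates", "trapdoor_7"),
        ("trapdoor_7", "is_above", "player"),
        ("trapdoor_7", "releases", "falling_boulders"),
        ("falling_boulders", "is_directed_energy"),
    }

    def plan_kill(target):
        # Rule: directed energy applied to an organism kills it.
        if (target, "is_organism") not in facts:
            return None
        # Find something that releases directed energy above the target,
        # then find an action that activates it.
        releases = [f for f in facts if len(f) == 3 and f[1] == "releases"]
        activates = [f for f in facts if len(f) == 3 and f[1] == "activates"]
        for (obj, _, released) in releases:
            if ((released, "is_directed_energy") in facts
                    and (obj, "is_above", target) in facts):
                for (actuator, _, activated) in activates:
                    if activated == obj:
                        return ["pull", actuator]
        return None

    print(plan_kill("player"))  # -> ['pull', 'lever_3']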



