
> I've worked in environments like that. Unfortunately, the open space looks like "not working" to people and is highly visible. Since no one wants to be publicly seen as not working, they just ended up being a very nicely decorated and inviting ghost town.

FYI, this is exactly what happens to luxurious and inviting game rooms at videogame companies.


Exactly right: this was when I worked at EA. :)


This one's particularly great:

http://youtu.be/1-zhdU4GSBA

You can tell there are pretty often some weird syncing errors, but hey, it's worth it for stuff like this.


As per the recent test, I think a lot of latency issues have become less pronounced. Only in 'Dome Deathmatch' did I really see any glaring movement issues.

There are still plenty of funny movement related issues because of latency, but they don't really affect the game too much.

And it's totally worth it. I sat there laughing for a good long while after someone dumped a jet right into the ground, taking at least 5 people with him.


Drone attack.

Just to clarify, I'm not wearing a tinfoil hat; I have just seen the circumstances of some drone attacks: against US citizens, in places where no war was declared, and with collateral damage shrugged off. From the US perspective, a drone attack is an easy means of getting rid of people they don't like, with neither the need for judicial proceedings nor a formal declaration of war. The only thing really stopping them right now is that Assange is currently in a highly populated part of the UK, and not in the middle of nowhere in Yemen.


How many drone attacks has the US launched against white Australian journalists?


The US would certainly not do a drone attack against a first-world nation, as the political fallout would be immense. They do have a great many other levers to use that the public would never see...


If they did that here, we'd need to find a new CEO... but the programmers would be in good shape.


Am I the only one who can't read this graph? It looks like it's rotated 90 degrees and there's no time axis so I don't see what this is representing. I don't understand what the legend for the Y axis means.


It's a market depth chart - the orange line on the left represents the cumulative standing buy orders, the blue line represents the cumulative standing sell orders, and the green line is the historical price per trade, with the latest trade (and thus the current value) at the bottom. The Y axis represents the bitcoin value, and the X axis represents the USD buy/sell prices.
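
If it helps to see the mechanics rather than the picture, here is a minimal Python sketch of how the two depth curves are built from an order book (made-up numbers, not MtGox's actual data or code):

    # Toy order book: (price in USD, BTC amount), sorted best-first.
    bids = [(5.20, 12.0), (5.10, 30.0), (5.00, 50.0)]  # standing buy orders
    asks = [(5.30, 8.0), (5.40, 25.0), (5.50, 40.0)]   # standing sell orders

    def cumulative_depth(orders):
        # Running total of volume as you walk away from the current price.
        total, curve = 0.0, []
        for price, amount in orders:
            total += amount
            curve.append((price, total))
        return curve

    print(cumulative_depth(bids))  # buy-side curve (the orange line)
    print(cumulative_depth(asks))  # sell-side curve (the blue line)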


Here's a traditional market chart showing USD vs BTC since the beginning of Bitcoin. Salient features: enormous bubble a year ago, and many months of stability at $5 before the recent runup. http://bitcoincharts.com/charts/mtgoxUSD#tgSzm1g10zm2g25zv


It's not just you. For me, the "graph" is blank. (Chrome on Windows)


I also think government dilutes the effectiveness of the experts it employs, but at the same time the private sector is amazingly, exceedingly incapable of seeing benefit from anything but immediate profits. Investing in highly risky pilot programs which have high costs and uncertain benefits is something the private sector, except perhaps a few optimistic billionaires, would never do. If we had never supported difficult-to-quantify never-never projects, we wouldn't be finding exoplanets, we would have never discovered that asteroids are clumps of unfathomably abundant resources, we would have never gotten to the Moon, and Europeans would never have explored the New World, because the benefits at the time seemed dubious and the funding came from governments.

I am optimistic just because I have a high degree of trust in the German government. They are a lot more cautious, less prone to bandwagon-hopping, and more capable of pulling something like this off. For most other industrialized governments, the Energiewende would just be a dramatic stage show created by former-execs-turned-bureaucrats to put taxpayer dollars into contractors' pockets without any concern for actual results.


I think you're overlooking the age of exploration, funded mostly by private interests, and things like the first transatlantic cable.


Ah, but maybe he's not overlooking, for one example, the broadband monopoly or (essentially non-competitive) duopoly that exists in contemporary American society.

Compare what we've got with countries where the government owns the physical medium and leases it to a variety of bit-carriers--it's tough to argue against that kind of observable evidence.

There are plenty of cases where government stewardship of the playing field--with private companies competing on that playing field--works out for the great benefit of the citizenry.


This is not really possible if you are working on a mass-market product, like a videogame. Financial success is synonymous with publicity, so if you STFU you will almost certainly not succeed.

Sure, you don't need to brag about it or post sales figures directly on your site (Minecraft), but people are going to notice and often try to clone your game.


"If you destroy publicly funded research, you leave us in a situation where only the big corporations can afford the drastic security precautions needed to continue biotechnology research - and you therefore further promote a situation you say you are trying to avoid."

Considering the scary amount of money and power Monsanto has, and the terrifying things they do with their GM patents, this is probably the best argument if you are convinced GM plants are evil.

Well, that and the fact that it's /research/ and not the industrial growing of GM foods.


Why recharge when you can just replace the batteries in such a situation? It would make for a better show, anyways.


Crytek deserves all the praise they get for their hard work in rendering...

...but...

Why are we conditioned to think that "the future of gaming" is solely a function of a game's audiovisual quality? I would like to see a "future of gaming" video that shows off highly-interactive (rather than just very pretty) gameworlds and characters. NPCs that notice you are trying to put a bucket on their heads, for example.


Visual quality is an "easy" problem. We know exactly how it ought to work (ray tracing has been photorealistic for some time now), it's just a matter of tweaking it to run fast enough on the hardware we have. And we're really close to pulling it off.

AI is, by contrast, a hard problem. We don't have a very good idea at all of how to exactly simulate a personality, but it's likely to be orders of magnitude more complex than the most elaborate physical simulation ever designed. The most intelligent NPC ever developed was really, really stupid. The appearance of realistic characters in games at all is an elaborate stage show. You can spend artist time making the show seem convincing in more circumstances, but evolving it past that at all is one of the hardest problems in computer science. We're nowhere near realism.

Now, with that said, in the demo they showed off some of their pathfinding and destructible environment improvements. There are plenty of gamers who care just as much (or more) about that stuff.


As someone who has worked intensively in both rendering and AI, I can say there are easy and hard problems in both. There is much in the way of AI that we already know; it's just that your average developer actually knows very little about AI (not trying to be insulting, it's just been true in my experience) -- and even more so, your average company or publisher does not care to fund AI beyond enemies that can throw themselves into your gun barrel. Propositional logic, expert systems, backwards chaining, and neural networks are very powerful tools that we know a lot about, but most games implement them rarely (and trivially at most). We still tend to hand-design/script quests, when these fields of AI could provide very powerful emergent gameplay.

What it mostly boils down to (for the sake of interesting gameplay) is proper knowledge representation (i.e. abstraction) and reasoning (a prolog-like reasoning system would be a good start). And of course, path-finding/locomotion is pretty much a given. Interestingly, it is less about implementing these techniques and more about designing a game that can use them properly -- almost without doubt you need to create a game world that functions on its own, where every NPC has its own set of motivations, which is a very different task from creating a game that is limited to what the player does.

Other than that, there are still plenty of hard problems left in rendering, especially with regards to effective procedural generation (which arguably falls more under modeling). Our ray tracing is still far from "realistic" - take a more complex scene than a table with a wine glass, and we can still usually discern the difference, especially with complex materials and subsurface scattering (human skin STILL does not look quite right). As it is, we are only able to calculate light travel by points (we use the term ray tracing, but typically every ray does a number of steps, unless every model is mathematically calculated). Light travel should be determined by complex volumes -- which is pretty much impossible to simulate on today's hardware ( O(n^6) magnitude at least - growing by volume, travel distance, number of objects, volume bounces (dear god), and number of pixels calculated ).


I hope I didn't seem to be implying there weren't hard problems in both fields-- but I do think there's a bigger gap in the current state of the art than you claim.

The path to realism in graphics is pretty clear because the real-world behavior of light is well-understood. There may be equations yet to be perfected, and challenges in efficiently simulating that behavior, but the direction is never in question; you can always tell if you've made it look more realistic or less.

By contrast, AI in games has reached a local maximum; the combination of pathfinding, scripted cues, an elaborate finite state machine, and a few basic heuristics is capable of simulating human behavior well enough, and with few enough glitches, that it presents a realistic universe to the player so long as they don't exceed the scripted bounds. However, if they do exceed those bounds, the whole thing appears paper-thin. You can extend the bounds with additional effort in scripting, but not infinitely, and with diminishing returns -- and doing so gets you no closer to having real AI which would be able to make those same decisions on its own.

Striking out and attempting to make an intelligent NPC-directed world from scratch... I'm not saying it's impossible, but I haven't seen anything, from AAA titles to indie games to tech demos, to imply it's coming soon.

But maybe that's just because I haven't seen the Minecraft of AI-driven gameplay yet.


Good points :)

I am actually working on such an AI system, believe it or not (as I am sure quite a few others are)! I don't doubt my algorithms, because I have already prototyped them and they work (at least well enough to create some interesting, if flawed, gameplay), but I do doubt that I will ever get a game out that uses them effectively (making even a simple game is incredibly hard, and I have a bad habit of creating overly complex stuff).

Your traditional AAA game does, as you mention, basically use finite state machines to determine enemy behavior (and incredibly simple ones, at that).

Why is a FSM bad?

Because someone MUST explicitly declare every combination of situations, and the paths between them.

How does my system work? It uses a combination of AI techniques that have been around a long time (just never used effectively in games).

1) Rules and queries (like prolog), with backwards chaining and propositional logic (basically a big, graph-based system). How might this work? Let's pose a few fake rules and a query: A) apples are red; B) oranges are orange; C) apples and oranges are fruit; D) red fruit is good. QUERY: are apples good? Using backwards chaining, this can easily be deduced (there's a rough sketch of this just after the list below).

2) Motivators and deterrents. Every NPC has their own set of motivations, as well as "bad" things they try to avoid. They attempt to maximize their output towards attaining their goals, prioritized with various points (e.g. get food: 1 point, stay alive: 10,000 points, etc.). The best outcome can be found with any traditional AI technique (like a minimax tree, A*, or whatever is appropriate for the given case). This is not just about food or staying alive; the motivations can be emotional concepts like "seek happiness" or concepts that involve the physical world, like "find shelter"...

3) Basic AI for sensory and locomotion purposes (line of sight, perception, planning, gathering knowledge, etc.)

4) Abstraction of communication -- not at the syntactic level, and not at the grammar level, but in its crudest terms ("higher level language")... I call it caveman-speak but there has to be a better phrase... e.g. "Me Hungry" or "Why You Steal" -- using sentence fragments, you can construct a variety of phrases without giving the computer a hard time interpreting them. On the surface, you can throw in the syntactic sugar to make it sound less crude to the user, but on the backend writing a full-blown English parser is not a wise idea.

5) There is more, but I won't type it out here...
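
To make point 1 concrete, here is a minimal backwards-chaining sketch in Python over the apple/orange rules (toy code only, not my actual implementation -- a real system would use prolog-style variables and unification instead of spelling the rules out per object):

    facts = {"apples are red", "oranges are orange",
             "apples are fruit", "oranges are fruit"}

    rules = [
        # (conclusion, premises that must all hold)
        ("apples are good", ["apples are fruit", "apples are red"]),
        ("oranges are good", ["oranges are fruit", "oranges are red"]),
    ]

    def prove(goal):
        # Backwards chaining: a goal holds if it is a known fact, or if some
        # rule concludes it and all of that rule's premises can be proven.
        if goal in facts:
            return True
        return any(conclusion == goal and all(prove(p) for p in premises)
                   for conclusion, premises in rules)

    print(prove("apples are good"))   # True
    print(prove("oranges are good"))  # False -- oranges are not red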

In the simplest terms, think of it like an empty stage in a play. There is no director (finite state machine), but there are characters and props, and the characters have backstories/motivations/genetic makeup/birthrights that determine what they will do and how they will react to changes in their environment. When you have rules, motivators, and a knowledge base, you can let the AI run wild -- and the cost of adding new rules grows linearly, rather than exponentially (as with a FSM).

Alternatively, if you even played the Sims, imagine something like that, but with more comprehensive AI (NPCs seek to maximize their comfort levels (id) while acting within their restraints (ego/super ego)).

You can track my progress (or lack thereof) at gavanw.com...unfortunately I work a fulltime job in addition, so development is occasionally slow.


Very interesting. With your work on voxels, will modellers soon have to model the inside of objects? E.g. rather than modelling the human 'skin', model each part: clothes, body, tissue, bone?


Well, traditional polygon-based systems will be around a long time, but I do plan to keep everything volumetric in my engine. However, for the most part a lot of the volumes are procedurally generated (i.e. the wood pattern inside a trunk is a function of distance and angle from the trunk's center).
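
As a toy illustration of that kind of procedural volume (simplified, illustrative Python, not the actual engine code):

    import math

    # The material value at any point inside the trunk is a pure function of
    # distance and angle from the trunk's centre, so the interior never has
    # to be modelled or stored explicitly.
    def wood(x, y, ring_count=12, wobble=0.15):
        r = math.hypot(x, y)              # distance from the centre
        theta = math.atan2(y, x)          # angle around the centre
        rings = r * ring_count + wobble * math.sin(5 * theta)
        return rings - math.floor(rings)  # 0..1, one growth ring per unit

    # Sample anywhere in the volume, e.g. along a cut at y = 0:
    print([round(wood(x / 10.0, 0.0), 2) for x in range(10)])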


Sounds like the Minecraft of AI-driven gameplay... I would like to subscribe to your newsletter :)


Sign me up as well. I look forward to small-scale deployments of more 'genuine' AI in games.


That makes 2 subscribers ;)


Make that 3.

I've always wondered why game rules have to be so rigid (using a state machine) and always assumed it was a technology limitation, but what you described makes sense and makes use of older technology in new applications.

I only took one AI course while in college, but I've been hooked ever since.

Question: Can you expand on the motivators and deterrents concept? Specifically, how do you relate something like "getting food" to the world objects? Assigning points to specific world objects (apple, cow, chicken...) seems to get us back to a state-machine-like game. Humans have experience (memory and patterns) to guide them; what would the characters have?


Yes, to answer your question, there are many ways to go about implementing it, but here is how I do it:

In reality, you have one knowledge base, which is simply a set of facts/rules about the world. For example...

a) berries are fruit
b) red fruit is good (+1 point)
c) green fruit is bad (-1 point)
d) enemies are dangerous
e) eating while in danger is bad (-10 points)
f) eating while full is bad (-2 points)

In a finite state machine, you would have to have several combinations of state (are berries present? are enemies present? are you full? etc.). Say, for argument's sake, you have 5 binary state variables. In a FSM, this gives 2^5 = 32 possible combinations of states, all of which must have corresponding actions associated with them. In a motivation-based system, it simply evaluates each factor individually and adds up the points, then chooses to pursue the outcome that gives the most points. So, here, that would be eating red fruit while enemies are not present.

The way prolog-like systems work, the computer does not have to have ANY idea what food or hunger or anything like that is -- it can be completely oblivious to abstract concepts. All it is doing, in essence, is (searching for and) matching strings. Those strings are the identifiers used for classes (or instances) of game world objects, NPCs, etc. So if you gave an NPC a motivator (kill(enemy) = +10 points, Bob = enemy, get_caught_doing(kill) = -100 points), it will find the best way to kill Bob while avoiding negative consequences. This is (obviously) metacode, and the actual implementation would require better organization and more explicit syntax.
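
If it helps, here is a toy Python sketch of the point-summing step (purely illustrative; my real code is organized differently):

    # Motivators/deterrents from the knowledge base above.
    points = {
        "eat red fruit":          +1,
        "eat green fruit":        -1,
        "eating while in danger": -10,
        "eating while full":      -2,
    }

    def score(action, state):
        # Sum every motivator/deterrent that applies; no enumeration of
        # 2^5 FSM states is ever needed.
        total = points.get(action, 0)
        if action.startswith("eat"):
            if state["enemies_present"]:
                total += points["eating while in danger"]
            if state["full"]:
                total += points["eating while full"]
        return total

    state = {"enemies_present": False, "full": False}
    options = ["eat red fruit", "eat green fruit", "do nothing"]
    print(max(options, key=lambda a: score(a, state)))  # -> "eat red fruit"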

Make sense?


Yup, makes sense.

I have written exactly 15 lines of prolog but I'm somewhat aware of how they work. This is great stuff, but as you said the devil is in the details. Please consider writing up your experiences and sharing it on HN.


Definitely, will do :) It is also encouraging to see that people are interested in this.


With AI, there's also the problem of being more accurate than you want to be. A triple-A title like MW3 or Skyrim is essentially "a movie you walk through". NPCs have lines, they follow a script.

You could easily have NPCs that run a little life simulation (like in the early versions of Oblivion), but then you have the problem of NPCs going off the script. If you accurately simulate mental states, and some villager rolls a 1 on the d2000 mental state die, then proceeds to kill all the plot-critical NPCs in their village? Now you've got a more realistic, yet completely broken, game.


If you were to accurately simulate mental states, ending up at 1 would need some kind of cause -- like the player murdering all the NPC's friends in front of them -- rather than a randomly generated value.


It seems to me the problem with AI as applied to gaming is that having "good AI" doesn't mean having the smartest AI that can kick the player's butt; it means having AI that the player can observe and formulate strategies against. If it doesn't behave in a human way and make its decisions clear to the player, then it's not fun.

In short, for gaming the measure of good AI is "fun", not "intelligence".


Also, better rendering is generally better (unless you want a cartoony cel renderer).

Better AI might not always be better. You don't want stupid AI bugs, but you don't necessarily want the AI to be smarter than the player. You may want an AI with a realistic character, but maybe not. Good AI makes some sense in a sandbox world, but it's not necessarily fun.


> your average company or publisher does not care to fund AI beyond enemies that can throw themselves into your gun barrel

Ironically, the game where I've seen the most convincing AI currently is the Forza Motorsport series (improving with each release). It is nothing short of impressive to see opponent drivers trying to pressure you into a mistake, intentionally braking on you well after the apex just when you are about to floor it, showing an excess of confidence that throws them off the track, reacting to pressure you put on them (you can literally see some panicking or getting aggressive in their driving), and learning to clock faster times lap after lap and race after race, even from your own lines. Sure, they're nowhere near a real human, and with experience you can land better times than the AI and outsmart them, but there's plenty of physics and graphics going on already and there are trade-offs to be made.


In a video game anything the player doesn't see is a waste of time. In the case of games with large branching story lines it's a tradeoff between putting effort in to content which might be unseen and adding "depth." AI is the same. A genius AI that does all kinds of magic in the background is useless if the player never gets to see the magic.

Games don't want true AI solutions, they want to entertain the player. Put enough veneer on the world to make the player suspend disbelief and move on. Anything else is a waste of money for the most part.


Static photorealistic scenes are still a pretty hard problem. If it were just "ray tracing and you're done" it'd be a different story. You have complex lighting interactions (best modelled using the radiosity technique) and complex sub-surface light interactions, which are a prominent component of making ordinary human skin appear real. These aren't even completely solved problems in the state of the art today.

Now, all of that is just square zero. A mere preface to the real problems: realistic animation and realistic interactivity. You can make a pretty decent photorealistic world with current technology, but it would have to be an untouchable, unmoving, unrealistic frozen world. Adding in animation is a problem of the same computational intensity as the rendering problem. And making the world capable of being interacted with at a level of fidelity approaching the quality of the rendering and animation is a problem so lacking in proposed solutions that it doesn't even have a sound theoretical underpinning yet.


Classical raytracing by itself is highly unrealistic unless you also implement an add-on physically-based rendering-equation solver such as Path Tracing or Photon Mapping. The first starts out highly noisy and "converges" to photo-realism over a looong time -- the second is only as realistic as the number of photons your hardware can manage.

Now, without such a physically-based rendering-equation solver, realtime raytracing of "simple scenes" (a couple of spheres and boxes) is doable today. The more complex your scenes get (even with acceleration structures), the more you need to lower your viewport resolution. Your high-end desktop GPU can then handle raytracing at a low-end "mobile" resolution such as 320x240 or whatever. But your low-end mobile GPU probably can't. Also, for fully accurate antialiasing you'd need to render at 2x your resolution and downscale.

tl;dr: realtime raytracing of simple scenes is feasible; realtime photorealistic full-screen antialiased raytracing of complex scenes isn't, and if current tech development trends continue, won't be for perhaps another decade.

I don't see "we're really close to pulling it off" because as GPU flops and caps happily increase, so, unfortunately, do screen resolutions. In realtime raytracing, we're always lagging behind and always have to compromise: (1) high resolution, (2) antialiasing, (3) complex scenes, (4) photo-realism -- for real-time speeds, pick any two.
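
For a rough sense of the numbers (back-of-envelope Python; the GPU ray budget below is a made-up figure for illustration, not a benchmark):

    width, height = 1920, 1080
    fps = 60
    samples_per_pixel = 4   # supersampled antialiasing
    secondary_rays = 1      # classical raytracing: one shadow/reflection ray per hit

    rays_needed = width * height * fps * samples_per_pixel * (1 + secondary_rays)
    print(f"{rays_needed / 1e9:.1f} billion rays/sec needed")  # ~1.0 billion

    assumed_gpu_budget = 0.3e9  # hypothetical rays/sec for a complex scene on current hardware
    print("feasible" if rays_needed <= assumed_gpu_budget else "not in real time at this resolution")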


I didn't read the grandparent as saying we are really close to real-time path tracing; I read it more as real-time rasterization is getting rapidly closer to looking like path tracing both in the quality of the scenes and the types of scenes it can represent.

Also FLOPS has been increasing much faster than resolutions. 1920x1200 is about 3x the pixels of 1024x768. Graphics FLOPS has increased way more than 3x since 1024x768 was considered a high-resolution screen.


Good points ... that's a silver lining, too =)


I think the strides that have been made in the general area of making certain actions dynamic (like simultaneously rendering mouth movements with spoken language in games) are steps toward that grand goal of games having AI that not only acts smart, but even has dynamic scripting based on "observations" and other environmental stimuli. Also, hinted/dynamic animation is pretty unbelievable now. Just see how real it looks when a character is tackled in a Madden game.

I'm confident even the "hard" problems will be solved in the very near future.


That's a good point, and maybe I was too binary in my original statement-- there are a lot of parts of NPC design that are more physics simulation than AI. I'd still say there's a categorical distinction between "respond appropriately to being hit by a linebacker" and "respond appropriately to someone putting a bucket on your head", though. Like you say, we're close to realism on the former; much farther from the latter.


>Visual quality is an "easy" problem. We know exactly how it ought to work (ray tracing has been photorealistic for some time now), it's just a matter of tweaking it to run fast enough on the hardware we have. And we're really close to pulling it off.

That's a good point. And I don't even think it's so much about AI being really hard as it is about photorealism being such a clear goal. Because really there is no limit to the other dynamics that can be explored that aren't as unattainable as a human AI. I still think the ghosts in Pac-man are clever. And while I'm by no means tuned in on the gaming community today, just the fact that you still play these super good-looking games with a few buttons and two joysticks says a lot.


I see this kind of sentiment all the time from people outside the game development industry. Every single time a new graphics card is launched, a new console, a new engine, whatever.

Do people really think that it's a matter of CPU time goes in and intelligence comes out?

People seem to think that if we were not spending so much time on graphics, that we would have amazing AI by now, but it's simply not the case.

The reason we use simple state-based AIs for the most part comes down to controllability.

In order to actually design a fun gameplay experience you need to have ways to tweak the AI to behave in the way that the player will find fun. You want to be able to ask "Why did that character not throw a grenade in this situation?", be able to find the answer, and tweak it.

The techniques that come out of AI research don't expose parameters in this form, and so are essentially useless for real game AI.

Probably the biggest CPU use from AI today is perception checks, like whether entity X can see entity Y from its location. Checks like these tend to cross-cut into engine code and just feed into the very simple state-based AI.


It would be cool to see incredibly rare behaviours, such as if you are going up against a bunch of mercs, wipe the floor with all of them, and round the corner and shove your rifle into the last one's face, he'd be like "fair cop mate, I'm out of here". This would only happen every once in a while, but it would be a fantastic reward for the player to feel like they can, if they do well enough or play in a certain style, make the AI feel simulated emotions - fear, rage, complacency, etc. So if you snuck through the building and carefully knocked out guys, they might not be fearful but complacent; whereas if you did the same thing and brutally fucked them all up with knives, they'd be much more alert but fearful and prone to error.

Personally, I don't think this would be hard to simulate. For instance, fear would be marked by increased cursing, decreased accuracy, increased error, and a potential for full-scale running away. Complacency would mean that guards don't check every corner on their patrol routes, have smaller visual cones, and are slower to react when shit hits the fan. Confidence could be marked by seeking cover less, drastically increased accuracy, and so on.

So yes, whilst in the distant future we could have truly "intelligent" AI which would lead to unpredictable gameplay, we already have the tools to create enemies that provide a much richer and broader array of feedback to the player's behaviour.
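
A rough sketch of what I mean in Python, with made-up parameter values (illustrative only):

    # Each emotional state is just a bundle of modifiers applied to a guard's stats.
    EMOTIONS = {
        "fearful":    {"accuracy_mult": 0.6, "vision_cone_deg": 110, "flee_chance": 0.3, "curse_rate": 2.0},
        "complacent": {"accuracy_mult": 0.9, "vision_cone_deg": 70,  "flee_chance": 0.0, "curse_rate": 0.5},
        "confident":  {"accuracy_mult": 1.3, "vision_cone_deg": 110, "flee_chance": 0.0, "curse_rate": 1.0},
    }

    def apply_emotion(base_accuracy, emotion):
        mods = EMOTIONS[emotion]
        return {
            "accuracy": base_accuracy * mods["accuracy_mult"],
            "vision_cone_deg": mods["vision_cone_deg"],
            "flee_chance": mods["flee_chance"],
            "curse_rate": mods["curse_rate"],
        }

    print(apply_emotion(0.75, "fearful"))     # sloppier shots, may break and run
    print(apply_emotion(0.75, "complacent"))  # narrow vision cone, never flees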


"It would be cool to see incredibly rare behaviours"

Working in game design with complex AI, I can tell you that the last thing players ever want is rare behaviors. They tend to think of these things as bugs, even when they are exactly that: rare behaviors.

Then again, I've been working in game systems where the rare behaviors are sometimes positive and sometimes negative for the player. Given this, the negative ones stand out as "bugs" even if they are not.


The problem with current AI is that it is programmed, so it only stretches as far as the programmer programmed it. AI needs to learn; it needs to be allowed to produce emergent behaviors, not just those which were programmed into it.


There was a post on Gamasutra by one of the Civ IV devs talking about this very problem. It turns out that most players don't really want an enemy that makes perfect moves most of the time. The problem with AI is not making it hard enough, but making it feel good to win against. That's a much more difficult problem than making a good AI in many games.


I would argue against this -- you can make very effective AI without machine learning. I think that perhaps what you are trying to say is that current AI feels too "hand-scripted" -- if X, then Y. I agree about emergent behavior totally, but emergent behavior can be produced without machine learning. Simple example:

Enemy's set of motivations: kill the player

Knowledge base:
1) An organism can be killed with an excessive amount of directed energy (heat, force, etc.).
2) The player is an organism.
3) Some general rules about the physical world (how gravity works, how fire works, what switches and levers activate which objects, blah blah blah).

Now, the enemy might take some pre-programmed path to kill the player (like firing a gun at the player). Or, if circumstances dictate that it is best, they might produce a more "creative", emergent approach. Say, for example, the enemy pulls a lever. This specific plan was never added in by the programmer. The lever activates a trap door above the player's head, and a bunch of boulders crush the player. Switch action -> directed energy -> kill organism -> player death.

In short, it really comes down to the programmer determining an effective knowledge base (abstraction), and letting the AI run wild with reasoning. :) This is not to say machine learning wouldn't be the icing on the cake, but you could just as well give enemies (or NPCs) the knowledge before the game is launched without having them learn it.
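
A toy Python sketch of that chain (illustrative only, not real engine code): the enemy just searches the knowledge base for any available action whose known effects chain to the goal.

    # Known cause -> effect relations from the knowledge base.
    effects = {
        "pull lever": "trapdoor opens",
        "trapdoor opens": "boulders fall on player",
        "boulders fall on player": "excessive directed energy at player",
        "fire gun at player": "excessive directed energy at player",
        "excessive directed energy at player": "player is dead",  # organisms die from directed energy
    }

    def chains_to(state, goal, seen=None):
        # Follow known effects until we reach the goal (or run out of facts).
        seen = seen or set()
        if state == goal:
            return True
        nxt = effects.get(state)
        return nxt is not None and nxt not in seen and chains_to(nxt, goal, seen | {state})

    available_actions = ["pull lever", "fire gun at player"]
    plans = [a for a in available_actions if chains_to(a, "player is dead")]
    print(plans)  # both reach the goal; the motivation scores decide which to use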


I think labeling it the "future of gaming" is simply for attention. If anything, we're not talking about much more than the next iteration of consoles.

While XBox and Sony were battling over who could have the prettiest graphics, Nintendo made lots of money with the Wii's unimpressive hardware, and Apple (maybe even unintentionally) and other smartphone manufacturers came in and revolutionized the mobile gaming market with unimpressive gaming hardware.

There is a "future of gaming" for consoles and more serious gamers, and there is a "future of gaming" for more casual gamers.


Crytek isn't going to do your job for you; they make the tools that others use to make highly interactive gameworlds and characters. They also happen to make a game from their engine called Crysis, but I wouldn't call it the be-all-end-all of FPSs.


Agreed - the future of gaming should be in gaming.

One of my favourite games is The Witcher 2 because it explores player choice with branching story lines. A lot of things you do in the game change things later on in ways that few other games do.


Very valid point. In my opinion Crysis 2 sucked ass purely because it was a technical showcase more than an engaging gaming experience, unlike Mass Effect 3 for instance.

