Jevons was talking about coal as an input to commercial processes, for which there were other alternatives that competed on price (e.g. manual/animal labour). Whatever the process, it generated a return, it had utility, and it had scale.
I argue it doesn't apply to generative AI because its outputs are mostly no good, have no utility, or are good but only in limited commercial contexts.
In the first case, a machine that produces garbage faster and cheaper doesn't mean demand for the garbage will increase. And in the second case, there aren't enough buyers for high-quality computer-generated pictures of toilets to meaningfully boost demand for Nvidia's products.
I recently had a discussion with a senior executive, and his take on AI changed my outlook a bit. For him, the value of ChatGPT:tm: wasn't so much the speed-up on any particular task (like presentation generation). It's a replacement for consultants.
Yes, the value of consultants mostly only exists if your internal team is too stubborn to change its opinion. But that seems to be the norm. And the value those consultants add isn't that high in the first place! They don't have the internal knowledge of _why_ things are fucked up in _your particular way_ anyway. That part your team has to contribute regardless. So the value-add shrinks to "throw ideas over the wall and see what sticks". And LLMs are excellent at that.
Yes, that doesn't replace a highly technical consultant who does the actual implementation. Yes, it doesn't give you a good solution. But it probably gives you five starting points for a solution before you even finish googling which consultancy to pick (and then waiting for approval and hoping for a good-ish team). And that's a story I can map to reality (not that I like this new bit of information about reality...)
If we accept that story about LLM value, then I think NVIDIA is fine. That generated value is far greater than any amount of energy you can burn on inferring prompts, and the only effect will be that the compute-for-training to compute-for-inference ratio decreases further.
"Throw ideas and see what sticks" sounds very entry-level. Maybe it saves the time it would take for one of your team to read the first two chapters of a book on the topic.
So that exec was hiring consultants and, in meaningful proportion, no longer is, thanks to LLMs?
The point isn't that the result from an LLM is particularly valuable. The point is that the advice that you get from your typical (management) consultant isn't particularly useful. And that's the only bar you need to clear.
There are basically two reasons for consultants:
Either you need something once-removed (be that selling/buying some part to/from a competitor, accounting, lawyering). Sensible people will not replace that part with LLMs. But here it isn't that you yourself lack the expertise at all; it's that it is absolutely necessary that someone else does the actual work.
Or you have some general "we need to do better" feeling. And here you again have two options: 1) you know _what_ your problem is and you just need the best solution there is. This is (obviously somewhat tongue-in-cheek) essentially corporate espionage. Again, you cannot replace that with an LLM (or maybe you can, I don't know), but you will pay a lot of money for it. Or 2) you don't know what the problem is. Now you are competing, in finding an appropriate starting point, with 20-somethings fresh from university who are not wanted in your organisation and therefore won't get access to the relevant information anyhow. So yeah, I'm willing to believe that the typical success rate of those consultancy projects is low to negative.
If you are only given a three-week crash course in $BUSINESS, you won't be able to produce much more than a generic set of "have you thought about that?" questions. And THAT is something I believe LLMs to be reasonably good at. And they are dirt cheap and instantly available compared to any kind of human consultant.
Now I don't think that will necessarily be a net negative for consultants in general. I do think, similar to ATMs, that consultations are mostly becoming cheaper because of this.
Thing is, most code is written by entry-level/junior programmers, as the whole career path has been stacked to groom you for management afterwards, and anything beyond senior level is basically faux-management (all the responsibilities, none of the prestige). LLMs, dirt-cheap as they are and only getting cheaper, are very much in a position to compete with the bulk of the workforce in the software industry.
I don't know how things are in other white-collar industries (except wrt. creative jobs like copywriting and graphic design, where generative AI is even better at the job than it is at coding), but the incentives are similar, so I expect most of the actual work is done by juniors anyway, and subject to replacement by models less sophisticated than people would like to imagine they need to be.