Hacker News

Output quantity consumed (almost) always increases with falling input costs (whether measured in dollars or GPUs). But for Jevons paradox to hold, the increase in quantity consumed must outpace the fall in costs; in economic terms, the price elasticity of demand must exceed 1. Otherwise, the result is simply that quantity consumed increases while the quantity of inputs consumed decreases.

Applied to AI and NVIDIA: the effect of an increase in AI-per-GPU on the demand for GPUs depends on the demand curve for AI. If the quantity of AI consumed were completely independent of its price, then the result of better efficiency would be cheaper AI, no change in the quantity of AI consumed, and a decrease in the number of GPUs needed. Of course, that's not a realistic scenario.

(I'm using "consumed" as shorthand; we both know that training AIs does not consume GPUs and AIs are also not consumed like apples. I'm using "consumed" rather than the term "demand" because demand has multiple meanings, referring both to a quantity demanded and a bid price, and this would confuse the conversation).

But a scenario that is potentially realistic: as the cost of training/serving AI drops by 90% (a tenfold efficiency gain), the quantity of AI consumed increases by a factor of 5, and the end result is that the economy needs only half as many GPUs as before.

For Jevons paradox to hold, if the efficiency of converting GPUs to AI increases by a factor of X, so that the price falls to 1/X of its former level, the quantity of AI consumed must increase by a factor of more than X in response to that price decrease. That's certainly possible, but it's not guaranteed; we basically have to wait and observe it empirically.
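To make the threshold concrete, here's a toy sketch (all numbers invented for illustration, not drawn from any real market data):

```python
def gpus_needed(quantity, efficiency):
    """GPUs required to serve a given quantity of AI at a given AI-per-GPU efficiency."""
    return quantity / efficiency

baseline = gpus_needed(quantity=100, efficiency=1.0)   # 100 GPUs

# Scenario above: efficiency improves 10x (cost falls 90%), but quantity
# consumed rises only 5x -> GPU demand is cut in half despite more AI use.
moderate = gpus_needed(quantity=100 * 5, efficiency=10.0)
assert moderate == baseline / 2

# Jevons case: quantity rises by MORE than the efficiency factor (15x vs 10x),
# so total GPU demand increases even though each GPU produces more AI.
jevons = gpus_needed(quantity=100 * 15, efficiency=10.0)
assert jevons > baseline
```

The whole question is which branch of that comparison the real demand curve lands on.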

There's also another complication: as the efficiency of producing AI improves, substitutes for datacenter GPUs may become viable. It may be that the total amount of compute hardware required to train and run all this new AI does increase, but big-iron datacenter investments could still be obsoleted by this change because demand shifts to alternative providers that weren't viable when efficiency was low. For example, training or running AIs on smaller clusters or even on mobile devices.

If tech CEOs really believe in the Jevons paradox, it means that last month they decided to invest $500 billion in GPUs, and this month, after learning of DeepSeek, they realize $500 billion is not enough: they'll need to buy even more GPUs, and pay even more for each one. And, well, maybe that's the case. There's no doubt that demand for AI is going to keep growing. But at some point, investment in more GPUs trades off against other investments that are also needed, and the thing the economy most urgently lacks ceases to be AI.



Thanks for that.

There's a Jevons Paradox article up now and I'll put most of my thoughts there: <https://news.ycombinator.com/item?id=42863808>

If you care to respond, though, my first question would be: what are some examples of falling input prices not subject to the Jevons paradox? Several of the more notorious cases involve energy, and that was Jevons's principal topic of study (The Coal Question, most notably).

I've got my own theory of how technological mechanisms function, with an ontology of nine elements. Fuels are one of those, information is another. See prior comments: <https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>

As might be pertinent to AI and LLMs: whilst fuels and power applications seem to scale linearly with input (constant slope, if not a 1:1 relation), information processing delivers far more variable returns, often with critical thresholds. Network effects and Metcalfe's Law are the best known of these (even if highly inaccurate themselves; see Odlyzko and Tilly's refutation), but another is the limited return on predictive and targeting applications.
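The contrast between the two valuation rules can be sketched as follows (a toy comparison of the scaling laws, not taken from the Odlyzko-Tilly paper itself):

```python
import math

# Metcalfe's Law: network value ~ n^2 (proportional to possible pairwise links).
# Odlyzko and Tilly argue ~ n*log(n) is more realistic, since most potential
# links are low-value.
def metcalfe(n):
    return n * (n - 1) / 2

def odlyzko_tilly(n):
    return n * math.log(n)

# Doubling users roughly quadruples the Metcalfe estimate but only slightly
# more than doubles the n*log(n) estimate -- the two diverge rapidly.
assert metcalfe(2000) / metcalfe(1000) > 3.9
assert odlyzko_tilly(2000) / odlyzko_tilly(1000) < 2.3
```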

For the latter, the 18-order-of-magnitude increase in computing power from 1965--2025 (60 years, roughly 20--30 Moore's Law cycles) has roughly doubled the horizon of accurate weather forecasting, from about 5 days to 10. It has made possible fully-reusable first-stage boosters for orbital spaceflight, which is visually impressive, but has yielded only a roughly four-fold reduction in low-Earth-orbit (LEO) launch costs ($5,400/kg for Saturn V vs. $1,400/kg for Falcon Heavy). SpaceX are looking for another factor of 2--4 reduction (to $250--600/kg), but that's still far less improvement than we've seen in raw compute. At some point orbital physics, the rocket equation, and fuel chemistry simply dominate other considerations.
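A quick back-of-the-envelope on those figures (using the rough numbers quoted above, which are ballpark at best):

```python
import math

compute_oom = 18.0                      # claimed orders of magnitude of compute, 1965-2025
forecast_doublings = math.log2(10 / 5)  # forecast horizon ~5 days -> ~10 days = 1 doubling
launch_cost_factor = 5400 / 1400        # Saturn V vs. Falcon Heavy $/kg to LEO, ~3.9x

# Roughly 0.06 doublings of forecast horizon per order of magnitude of compute:
print(forecast_doublings / compute_oom)
print(launch_cost_factor)
```

Either way you slice it, the gains in these physical domains are vanishingly small next to the gains in raw compute.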

Similarly, AdTech makes far more targeted advertising possible, but with heavily diminishing returns. The core results have been: an abandonment of non-targetable media by advertisers, notably print and broadcast; an arms race between browsers (for a very small fraction of the market) and advertisers (the largest of which also has the largest browser market share); and a concentration of advertising revenue in two online entities, Google (a/k/a Alphabet) and Facebook (a/k/a Meta).

Which makes me wonder what applications LLMs might practically be put to. Advertising, manipulation, fraud, and propaganda certainly seem to be benefiting.



