I don't like Ed Zitron's automatic dismissal of everything AI, the constant profanities in his writing are getting old, and his pieces are usually not very well structured. But that said, I like his perspective on the money involved.
OpenAI needs 1 trillion dollars in the next four years just to keep existing. That's more than all currently available capital, combined, from all private equity.
It's just a staggering amount of money that the world needs to bet on OpenAI.
I like reading Zitron’s output, but this remark stood out to me as him weighing in on a domain he’s basically clueless about.
Spread across 4 years, there’s way more than $1tn in available private capital. Berkshire Hathaway alone has 1/3 of that sitting in a cash pile right now. You don’t have to look at many balance sheets to see that the big tech companies are all also sitting on huge piles of cash.
I don’t personally believe OAI can raise that money. But the money exists.
The dude is a living embodiment of "overconfident and wrong". He picks a side, then cherry picks until he has it all lined up so that his side is Obviously Right and anything else is Obviously Wrong.
Then reality happens and he's proven wrong, but he never fucking learns.
He is following a very well worn path of writing histrionically about a bubble and making money off it. Point out the obvious that it's a bubble. Throw out a lot of figures supporting it. Get your emotional hooks in (prey on fears of job loss, hatred of corporate mismanagement/execs etc) and charge a subscription for it. I liked an article or two of his but he's basically meatGPTing The Information with cursing.
You could simply read The Information itself. Numerous headlines at the moment include words like "slop" and "fantasy", e.g. 'Internal Oracle Data Show Financial Challenge of Renting Out Nvidia Chips'. It's not all packaged up like a Lewis Black rant, but it's not rare.
Those numbers still fall way short of $1T and the $4.3b in rev is only the first half of 2025 - they're projecting $12-13b for the year.
I have my own reservations about the company, but there's a pretty real path toward huge revenues and profitability for them that seems pretty obvious to me.
They do not need it. Arguably no one needs it. I am at best luke-warm on LLMs, but how are any sane people rationalizing these requests? What is the opportunity cost of spending a billion or even a hundred billion dollars on compute instead of on climate tech or food security or healthcare or literally anything else?!
Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
I easily see the rich betting a trillion dollars especially if it's not their money and they start employing government funds in the name of a fictitious AI arms race.
They smell blood in the water. Reducing everyone to as close to minimum wage as possible.
Capital is already concentrated, aligned along monopolies and cartels, oligarchical control, and AI is the final key to total control to whatever degree they desire.
> Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
That's the dirty secret in most of today's world. A lot of ultra-large companies would absolutely deserve getting broken up just for being way too powerful, the problem is any such attempts would send the stonk markets and, with them, people's pensions tanking.
Which is precisely why this arrangement exists. Think of it as chaining the galley slaves to the galley - if it sinks, then so do they, so their "interest" is to keep rowing.
But that is a short-term perspective. Long term, if we do nothing, we remain galley slaves.
> A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
Not really clear what you mean by this. But arguably all of the rich's money (and all money in general) is backed by labour/the ability to exchange that money for someone else's labour.
I mean the amount of shares in many a large company that are held by passive or semi-passive investment vehicles.
It's not just high net worth individuals and nation-state entities (wealth funds) that pump money into YC, Meta, Apple, NVidia, Microsoft and god knows what else, the bulk of the ownerships is held indirectly by the wide masses via one sort or another of pension schemes.
Elon Musk doesn't play with his own money on xAI, he plays with the money of his investors, and so do all the other actors in the AI bubble.
I think you're greatly overestimating how much ownership the general public has. The bottom 90% of the population own like 12% of all equities, while bottom 50% only own 1% [0]
Those can be viewed as tools by the elite to control the capital of the plebes. Who runs/controls those funds?
My knowledge of the insides of Tesla's governance highlights this. The requirement of index funds to invest fixed amounts in companies allows CEOs to exert more board control and avoid "investor activism" over things like "Roman salutes".
This will only be true a few decades. Nobody entering the workforce today will ever have any retirement funds. Most people who have been in it for a decade will never have any. Pensions aren't even a thing for most people. There will be a point where the common folk have very little remaining incentive not to burn it all down.
There will be at least one trillionaire within the next 12-24 months, and likely there will be multiple trillionaires within 5 years, the way wealth is consolidating at an accelerated rate. These amounts of investment don't seem to be fictional anymore.
Altman has sort of linked the fate of OpenAI to that of Nvidia, AMD, Oracle, Microsoft etc. with these huge deals/partnerships. We've seen the impact of these deals on stock prices before even a penny has changed hands.
Tracks with his reputation for power play and politics.
That certainly explains why Microsoft is so desperate to force everyone into using their AI whether they want to or not. I'm wondering if the deal will end Microsoft when OpenAI goes belly up.
I want windows to play games, for computing I use Linux but they keep foisting shite I don’t want on me just to play games, AI and sodding OneDrive can piss off.
I’ve kept windows around because it was less painful to game on than Linux but Linux is better than ever and windows is getting worse, at some point those lines are gonna cross and for the first time in 30 odd years I’ll not be running a single device with a Microsoft operating system on.
It's not just pushing it on the users. There's also a heavy-handed push to get teams to use more AI coding inside the company. I'll let you guess what that does to software quality, on average...
Regarding Linux gaming, the biggest problem there right now is all the multiplayer games with kernel anti-cheat. But I suspect that it'll be resolved eventually by Valve pushing for SteamOS support (although I doubt it'll ever work on any random Linux distro).
I'll just throw in support for gaming on Linux – it's pretty nice feeling these days! I still have the occasional (once every 5–8 months?) update cause a short-lived bug, but it's a very justifiable trade-off to avoid Windows these days.
It depends on the extent to which the promise was peddled and whether MSFT can be trusted with the cash balance - investors will reflect that in the stock price in future if there is a bubble bursting event. If that scenario pans out, Apple will be sitting there very pretty given it has not spent any real money on pursuing LLMs.
> OpenAI needs 1 trillion dollars in next four years just to keep existing. That's more than all currently available capital, together, from all private equity.
Recent estimates I've seen put uninvested private equity capital at ~$1.5 trillion and total private equity at $5+ trillion, with several hundred billion in new private equity funds raised annually. So this simply seems incorrect even if you count only current "dry powder", or only new funds raised over the next four years, much less both and/or the rest of private equity.
> OpenAI needs 1 trillion dollars in next four years just to keep existing.
That's a mischaracterization: They need that order of investment to meet the demand they forecast.
It's unclear what other trajectories look like.
Additionally, I don't know who Ed Zitron is but he clearly doesn't follow how infrastructure projects are funded and how OpenAI is doing deals.
See for example the AMD deal last week where they seem to have at least partially used their ability to increase AMD's stock price to pay for future GPUs.
Mining companies do the kinds of "circular deals" that OpenAI is criticized for all the time: they will take equity in their supplier companies. It's easy to see similar arrangements for this $1T investment in the future.
In late 2021, Ed Zitron wrote (on Twitter) that the future of all work was "work from home" and that no one would ever work in an office again. I responded:
"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based because the work that can be done 100% remotely will likely go overseas."
He was wryly communicating, "your argument was so stupid I don't even need to engage with it".
In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually non-sequitur redirect or ad hominem.
In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.
He's right on the AI stuff? How do you figure that? As far as I can tell, OpenAI is still operating. It sounds like you agree with him on the AI stuff, but he could be wrong, just like how he was wrong about remote work.
I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.
Whether or not OpenAI is sustainable is a question that can only be answered in hindsight. If OpenAI is still around in 10 years, in the same sort of capacity, does OP become retroactively wrong?
My point is, you can agree that OpenAI is unsustainable, but it's not clear to me that is a decided fact, rather than an open conjecture. And if someone is making that decision from a place of ego, I have greater reason to believe that they didn't reason themselves into that position.
The fact that they are not currently even close to profitable, with ever-increasing costs and sobering scaling realities, is something you could consider. And if you do believe they are sustainable, then you have to believe they will somehow become sustainable via (in my opinion, unlikely) scenarios, which is also a conjecture.
Seems a little unreasonable to point out "they are still around" as a refutation of the claim they aren't sustainable when, in fact, they collapse, and quickly, the moment the investment money faucet keeping them alive is turned off.
No, it's a question answerable now. If you're losing twice as much money as you're making, the end of your company is an inescapable fact unless you turn that trend around.
What Zitron points out, correctly, is that there currently exists no narrative beyond wishful thinking which explains how that reversal will manifest.
I don't think he's right about everything. He is particularly weak at understanding underlying technology, as others have pointed out. But, perhaps by luck, he is right most of the time.
For example, he was the lone voice saying that despite all the posturing and media manipulation by Altman, that OpenAI's for-profit transformation would not work out, and certainly not by EOY2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable to such, and are likely make-believe. He was right on both.
Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.
I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):
- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.
He is right about this too. They are doing #2 on this list.
Is he right on the AI stuff? Like, on the OpenAI company stuff he could be? I don't know? But on the technology? He really doesn't seem to know what he's talking about.
I generally don't agree with him on much; it's just nobody really talks about how much money those companies burn, and are expected to burn, in bigger perspective.
For me 10 billion, 100 billion and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.
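To make the scale concrete, here's a back-of-envelope sketch using only figures quoted elsewhere in this thread (the ~$1T-over-four-years estimate and the ~$12-13B 2025 revenue projection; both are contested claims, not established facts):

```python
# Rough scale comparison using figures quoted in this thread (both contested)
capital_needed = 1_000_000_000_000    # ~$1T over four years, per Zitron's estimate
years = 4
annual_burn = capital_needed / years  # $250B per year
projected_2025_revenue = 13_000_000_000  # high end of the $12-13B projection

print(f"annual need: ${annual_burn / 1e9:.0f}B")
print(f"multiple of projected revenue: {annual_burn / projected_2025_revenue:.0f}x")
```

That's roughly 19 times the projected annual revenue, every year, which is the gap the thread is arguing about.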
Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.
There's another important bit there: 'we need to be able to tell the AI to make money for us and no-one else can compete with us on that'. I think both halves of that are questionable.
I mean FWIW they could probably make those folks happy by just spitting out a list of everything to short because of ai disruption on each new release lol
Hasn't he been saying that OpenAI is going to shut down every year for the last few years now? And that models are never going to get better than they were in 2022? I think he's pretty clearly a grifter chasing an audience that wants to be reassured that big tech is going away soon.
Ed Zitron may be many things but he is no grifter. He writes what he believes and believes what he says, and I basically agree with all of it. The chattering class in SV has been wildly wrong for years, and they'll really look foolish when the market crashes horribly and these companies collapse.
He's saying more than that the companies are going to collapse; he's making pronouncements about the underlying technology, which are claims that are much harder to defend. I'm not entirely sure he understands the distinction between the companies and the technology, though.
Respectfully... what?? Ed at this point is one of the most well-read people on Earth on this topic. Of course he knows the difference between the companies and the technology. He goes in depth both on why he thinks the companies are financially unviable AND why he's unimpressed by LLMs technologically, all the time.
Even as someone who is generally inclined to agree with his thesis, I find Ed Zitron's discussions as to why AI does not and will never work deeply unconvincing.
I don't think he fundamentally gets what's going on with AI on the tech level and how the Moore's law type improvements in compute have driven this and will keep doing so. He just kind of sees that LLM chatbots are not much good and assumes things will stay like that. If that were so investing $1tn would make no sense. But it's not true.
The original meaning of AI is what some now call AGI. Some choose not to follow meaning shifts forced by large companies for advertising purposes. Same as with Full* Self** Driving***.
How do you want to define grifter? He shows up, makes a lot of big promises, talks a lot of shit, doesn't actually engage with any real criticism, gets paid for it, and then exits, stage left. He could be right, he could be wrong, but he leaves no room for debate. If all you want is someone to yell at you about how right your feelings on something are, I mean, hey, I have a therapist too. I don't ask her for financial advice though.
There's a stronger case for world hunger being bottlenecked than healthcare. World hunger is a logistics problem now, but no amount of money lets you print doctors.
You can't just throw money at the world hunger problem, it will end up in some warlord's coffers. Hunger still exists because it is politically useful to keep people hungry.
With a little bit of lag time (school) we could have a metric fuckton of doctors. We have a metric fuckton of shitty lawyers. Doctors are artificially gated in the US.
What’s the joke? “What do you call the person who graduated last in their class from med school? A doctor.”
The optimal amount of bad doctors is not zero. But there is a point of "ChatGPT does a better job than this man does, and we're talking GPT-4o, not GPT-5 Pro". In which case we have a problem.
I don't know what wealth distribution means in this context, or why it's relevant at all, but food grows fast and doctors take like 20 years to grow no matter how much money you throw at it or where you get the money. And the context above was more specifically "fully pay health care costs" which is a comical fantasy the moment you try to actually define what that means, because the limit is not the price.
Changing the entire paradigm of medical care would be possible with enough money. There's no logical reason it takes 20 years to become a doctor. The fact that it does severely hampers both the quantity and quality of doctors. Becoming a doctor is much less about knowledge and intelligence than it is about attrition resistance. Loads of capable students disregard medical careers each year for more rapidly attainable positions. In many cases these are the MOST capable students because they recognize the problems with pursuing medical degrees.
Certainly the most skilled and advanced in the medical field will need significant schooling but there needs to be a major reform in healthcare training. One that produces more knowledgeable and skilled professionals and not a glut of questionably competent nurse practitioners.
> Doesn't America alone already spend 2 or 3 trillion a year on healthcare?
There's a huge difference between "paying for healthcare" and "paying a healthcare provider" here in the United States. Oftentimes the latter has 2 or 3 additional zeroes attached.
Sure, but Congress invariably pretends to say the former but means the latter. If you're asking for single-payer instead of privatized health insurance, they will sooner bankrupt the country than switch to sanity. Congress and its funding sources are now captured by privatized health insurance:
In 2023-4, Health came #7 in total political donations, #8 is Lawyers & Lobbyists; the combined "Finance/Insur/RealEst" is #1; would be useful to see "Insurance" broken out by health insurers vs non-health (can anyone cite a more granular breakdown?). [https://www.opensecrets.org/elections-overview/sectors]
It's not single payer vs privatized insurance. Why is this myth so persistent in US?? There are many different options for public healthcare, of which single payer is but one, and it's not even the most popular worldwide. Many European countries are not single payer, including e.g. Germany.
That would just be just reopening decades of debate in the US.
For whatever reasons, the consensus in the US after decades of talking comes down to single payer vs privatized insurance. Congress isn't going to implement single-payer, so the menu reduces to either we choose good or bad regulation of privatized insurance.
We don't have time for yet another decade of debate, since health insurance premiums (net, post-tax-credit) in the US are about to jump this November open enrollment by median 18% overall, or 114% for people on ACA due to the expiration of enhanced premium tax credits [0]. Expect that to feature prominently in the news cycle by Thanksgiving.
(Germany's multi-payer system (government + mandatory statutory contributory insurance + optional private insurance) would in theory be fine if US Congress was ever incentivized to implement such a thing. But it very clearly isn't, since the 1950s - look at the lobbying money trails. Let the good not be the enemy of the perfect. The ACA was the closest the US (briefly) came to mandatory statutory contributory insurance, but the federal mandate was abolished back in calendar 2019 by the "Tax Cuts and Jobs Act of 2017").
Have you considered that those decades of debate haven't resulted in a public healthcare system precisely because single payer is what was pushed by the pro side, and many people in US (rightly or wrongly) have a problem with the notion of government telling them that they can't pay money for better healthcare?
I have done a lot of research into this area. Obesity and other self-inflicted health issues are definitely a factor, but far from the whole picture.
Our cost per service is 2-4x or more, and the larger reliance on specialists creates significant complexity and even more costs. So, we do spend 2x, but we get 1/3 to 1/4 of "care" per dollar. In other words, we get less actual care. And the care is biased to fixing things as opposed to preventing things. And it is also biased to those who are wealthier.
Some of the cost drivers:
- Administration is 25% of costs, far less in other countries. Insurance company profits and complex administration with confusing and overlapping methodologies that obfuscate costs and comparisons.
- Capital costs are 25% of costs, far less in other countries. Multiple, private, and overlapping hospitals demand more capital, and private capital comes with expected returns.
- Doctor compensation is 2x to 4x more, nurses 2x. Specialists here get truly rich, which is not true in other countries.
So, quite a lot of the extra spend is not efficient, and goes to insurers, owners of hospitals, and doctors.
I also have personal experience. To get a simple ultrasound, you are talking about $450 for a primary care visit to get a referral for a $650 specialist to get a $1000 ultrasound ($800 scan plus $200 reading), to get a $650 follow-up visit with the specialist to discuss the results. That is almost $3,000 of actual out of pocket costs to me, with a good insurance plan ($2K per month for a couple), the "claimed" costs were significantly higher. MRI and CT are even higher. Similar for a broken ankle, which cost me over $4000 out of pocket.
I am, relatively speaking, well off compared to average, and was able to do this, but that hurt, and significantly disincentivizes me in the future.
Our health system is broken, and pumping more money into it only makes it worse.
It's neither, your outcomes are poorer because access is not uniform. If you can afford it, US healthcare is the best in the world, but if you can't you basically don't get it (or at least, you don't get it until the problems are bad enough it's an emergency and you get saddled with life-crushing debt for the bare minimum to stabilise you from the ER)
If that's true, all you have to do is convince other people that it's true and they can just vote for someone to deploy that money. Don't need to wait for someone else to do it.
The AI is getting better at things like maths. I recently asked it about iterating the Burrows-Wheeler transform, and it appeared to really understand that. It's not super easy to reason about why it's reversible, etc. and I felt that it got it.
This is obviously not AGI, and we're very far from AGI as we can see by trying out these LLMs on things like stories or on analyzing text, or dealing with opposing arguments, but for programming and maths performance is at a level where it's actually useful.
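For reference, the transform the parent asked the model about is short enough to sketch, and the non-obvious part is exactly the reversibility: the last column of the sorted rotation table, plus a sentinel, is enough to rebuild the whole string. A naive Python version (O(n²) with full rotations, nothing like the suffix-array implementations real compressors use):

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler transform: last column of the sorted rotations."""
    s = s + "\0"  # sentinel: lexicographically smallest, marks the original row
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last: str) -> str:
    """Invert by repeatedly prepending the last column and re-sorting:
    after len(last) rounds, the table holds every rotation of the original."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    original = next(row for row in table if row.endswith("\0"))
    return original[:-1]  # drop the sentinel
```

Sorting groups similar contexts together, which is why the output compresses well, and the sentinel is what makes the inverse unambiguous.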
My answer to this is - so what? What are the effects in the real economy?
There's probably a good 2-3 years of runway left for substantial progress before things really fall off a cliff. There has to be some semblance of real GDP growth.
I think it might be possible to automate development of short programs. I also think it might be possible to reduce the confusion, misunderstandings and cliches, so that the models can work for longer stretches.
But people probably expect to get the next version for what they pay in subscriptions now, so I can't imagine much more revenue growth for the model companies.
A lot of this is because there isn't a good definition of AGI. Look at Sama's recent interviews, that's how he deflects, along with the statement about the Turing test having ultimately been inconsequential. They have an internal definition of AGI that is "the model can perform the vast majority of economically viable tasks at the level of the highest humans" which isn't the story the investors are expecting when they hear AGI, so they're trying to stay mum to truthfully roadmap AGI while not blatantly lying to capital.
https://www.wheresyoured.at/the-case-against-generative-ai/