Hacker News | shubhamjain's comments

Where is this figure coming from? According to Meta's press release, the effective tax rate is 30% [1].

> The full year 2025 provision for income taxes includes the effects of the implementation of the One Big Beautiful Bill Act during the third quarter of 2025. Absent the valuation allowance charge as of the enactment date, our full year 2025 effective tax rate would have decreased by 17 percentage points to 13%, compared to the reported effective tax rate of 30%.

[1]: https://investor.atmeta.com/investor-news/press-release-deta...


The effective federal tax rate, and the amount of federal tax Meta paid as a percentage of income, are two different things.

More details here: https://itep.org/meta-tax-breaks-trump-mark-zuckerberg/

The 10-K filed by Meta is linked to in that article, and can be found here: https://www.sec.gov/ix?doc=/Archives/edgar/data/1326801/0001...

If you dig into the details in the Income Tax Disclosure block, Meta paid $2.8B in Federal income taxes for the year ended December 31, 2025.

Meta deferred a large chunk of Federal income taxes.

So, while the effective Federal income tax rate for 2025 was about 30%, largely due to a 3rd quarter charge of $14B against deferred taxes (Meta's effective tax rate for 2023 was 17.6% and for 2024 it was 11.8%), they paid 3.5% of their income as Federal income tax.
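The gap between the two figures can be sketched directly from the numbers above. A rough check in Python follows; the dollar amounts are the ones quoted in this thread (not independently verified), and the implied pre-tax income depends on which income base (US vs worldwide) the 10-K uses:

```python
# Figures as quoted in this thread (USD billions); treated as assumptions,
# not as authoritative 10-K values.
total_provision = 25.474      # full-year provision for income taxes (federal + state + foreign)
effective_rate = 0.30         # reported effective tax rate
current_federal_tax = 2.8     # federal income tax actually paid for 2025

# Pre-tax income implied by the provision and the effective rate.
implied_pretax_income = total_provision / effective_rate   # roughly $85B

# Cash federal taxes as a share of that income.
cash_federal_rate = current_federal_tax / implied_pretax_income
print(f"{cash_federal_rate:.1%}")  # about 3.3%, in the ballpark of the 3.5% cited
```

The small difference from the 3.5% figure presumably comes down to whether US-only or worldwide pre-tax income is used as the denominator.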


>Meta deferred a large chunk of Federal income taxes.

How long can we reasonably expect them to be deferred for? Are we talking on the order of years, or decades?


A tax deferred is a tax avoided. (common wisdom)

Mostly never -- most of the deferral was an accounting adjustment for the value of future tax credits that they could no longer take advantage of, so there is no actual tax liability here that will eventually be paid.

I bet the two sources won't agree on what values go into the denominator and/or numerator of their effective tax rate calculations. It can be as simple as the 3.5% being a rate calculated on revenue rather than profit.

You can't just throw revenue in the denominator, though. Business tax is assessed on income. If you're going to make a claim about tax rate using an unconventional metric, you need to be explicit about what you've done; Reich isn't.

If you're Robert Reich, you can! You can make up anything, and someone will submit it to HN to waste everyone's time!

Yeah, screw Robert Reich! Always looking out for the workers who make up the majority of this country. Why won't he look out for the poor multi-national corporations, who have no one to advocate for them or their tax rates?

Hey, he can advocate for whatever causes he likes. I just think honesty makes a more compelling argument than lies.

> Always looking out for the workers

How is spreading misinformation looking out for the workers?


I thought there were systems designed to effectively negate users that submit too many misleading posts.

Your parent post isn’t suggesting it’s always the same user submitting, just that users submit a lot of posts from this person.

Can’t say I agree, though. I don’t recall ever having seen one of his posts on HN, and a cursory search suggests they’re not even upvoted that much. Highest I found was under 30 points. But my methodology is flawed, as I basically searched for the name.


Sure, and there are a ton of ways of shifting income around. For example, selling patents to a subsidiary in a lower-tax jurisdiction and then paying for their usage. Another example is Hollywood accounting, where productions pay exorbitant rates for equipment and catering to affiliated companies. This inflates the costs, so the movies end up unprofitable despite smashing the box office.

Income != profit. Income is revenue. It sure would be nice if businesses were taxed on income, given that’s how people are taxed and all. Aren’t corporations supposedly people now, thanks to Citizens United?


I appreciate your polite corrections with well sourced info! Being a bit silly, I’ll say you’re a shiny beam of knowledge in a dark expanse of confusion

> Business tax is assessed on income.

Income (in a business) is another word for revenue. I think you meant: business tax is assessed on profit.


In the U.S. income is defined as revenue minus expenses:

https://en.wikipedia.org/wiki/Income_(United_States_legal_de...


Interesting, it seems like this might be a UK vs US thing. All the non-dodgy UK results for "income" I found agree with what I thought e.g. "Income less Costs = Profit" [1]

The one exception is HMRC (UK equivalent of IRS) which, for the purposes of corporation tax only, defines income like profit [2] (with some technical differences, but the same spirit). But for other purposes (e.g. personal income tax) even they use it to just literally mean cash received without subtracting off outgoings.

Using it in this net sense seems very odd to me, but maybe that's because I'm British. "Income" and "outgoings" look to me like symmetrical terms, and no one would consider outgoings to be after subtracting off money coming in (would they?!)

[1] https://www.cheapaccounting.co.uk/blog/index.php/income-prof...

[2] https://www.gov.uk/hmrc-internal-manuals/company-taxation-ma...


No, my usage was correct and unambiguous. Describing income as revenue is incorrect. https://www.investopedia.com/ask/answers/122214/what-differe...

That page says that "net income" means the sense you meant it and "gross income" means the sense I understood it.

It does say that unqualified "income" means the net version, but it's a push to say that makes it unambiguous. (And, as I said in a sibling comment, this seems to be a US convention.)


Meta is a US company and Reich is a US citizen offering commentary on US domestic policy. "Income" is unambiguously net income in the US.

This is incorrect, as anyone who has looked at a financial statement or taken a first-level accounting class will know: revenue is the top line, then gross income, and lastly net income, the latter two reflecting the removal of various costs/expenses as per GAAP.

There is no real concept of sources legitimately disagreeing here. There is tax law, which Meta uses to calculate its tax liability, and then there are lies.

Even if you mistakenly calculate the rate on revenue, you will get 25474/200966=13%.
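As a quick check of that arithmetic, using the $25,474M provision and $200,966M revenue figures quoted from Meta's release earlier in the thread:

```python
# Full-year 2025 figures in USD millions, as quoted in this thread.
provision_for_income_taxes = 25_474
revenue = 200_966

# Even with revenue (not income) in the denominator, the rate is ~13%, not 3.5%.
rate_on_revenue = provision_for_income_taxes / revenue
print(f"{rate_on_revenue:.1%}")  # 12.7%
```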


The post seems to be comparing quarterly figures for tax with annual profit. The doc they cite clearly shows $25B as the provision for income tax.

While they provisioned $25B, the 10-K Meta filed states they paid $2.8B in Federal income taxes for the full year 2025. The amount they provisioned is not limited to Federal income taxes, it also includes state and foreign income taxes.

I am not an accountant or finance professional, but the table they refer to has the $2.8B under "current" and the $25B figure under "total". Is it just that, of their 2025 taxes, they paid $2.8B during that calendar year, and since it's only Feb, the remainder was not yet actually paid out at the time that filing was prepared?

Interesting. Wouldn’t surprise me if there are different ways to report the same numbers to make the situation seem more or less favorable. Statisticians and accountants are both professional liars (speaking as a statistician married to an accountant).

> Wouldn’t surprise me if there are different ways to report the same numbers to make the situation seem more or less favorable.

Yeah -- accurately, and inaccurately.


Are you implying that there are four quadrants:

  - Accurate / favorable
  - Inaccurate / favorable
  - Accurate / unfavorable
  - Inaccurate / unfavorable

Or are you implying that Meta spoke the God’s honest truth out of a sense of societal duty and honor?

In this instance, only two combinations -- accurate and favorable vs inaccurate and unfavorable.

Meta and Meta's accountants spoke the truth in their audited financial statements. I cannot speak to the motivation in their hearts, but I am aware that there are significant financial and criminal consequences to publishing incorrect financial statements.


Can the post be community noted?

If entity A declares such and such incomes and expenses, it could be truthful or not.

If entity A is truthfully declaring such and such incomes and expenses, why would it reference its own declaration as the "reported effective tax rate of 30%"?

On the other hand, if A is not truthfully declaring such and such incomes and expenses, and a legal team is very careful in maintaining exact wording towards the government, then any tax-related comments by A not made by the legal team would be self-censored, or censored by the legal team, to never reference "the effective tax rate" but rather a "reported" one. It basically reads like a superscript referring the reader to some other carefully worded fine print in other documents.

What prevented the more natural language of "[...] compared to the effective tax rate of 30%." ? Under what circumstances would you add such a word?

EDIT: this is not to say that this word constitutes an effective admission of lying, but rather that they don't actually want to talk about it, while pretending to be openly talking about it.

EDIT2: whenever companies get away with substantially lower tax rates, employee shortages in the rest of the economy can be seen as low-effective-tax companies "stealing" employees from the rest of the economy, with or without approval from the government. If the government approves, it is effectively a state-sponsored enterprise; if it doesn't, it would probably like to know about it, since productivity of the economy could be improved by reassigning those employees to companies that allow themselves to be properly taxed (whatever that means!)


In the US, public companies generally must report their financial results according to generally accepted accounting principles (GAAP). They can also report other numbers, and that's what they're doing in footnote (1); they think one particular adjustment GAAP requires them to make might be misleading, and they helpfully disclose that they would have calculated 13% if not for that adjustment. But they are not allowed to say that the GAAP number is wrong or untruthful, nor to put the non-GAAP number in the topline and the GAAP adjustment in the footnote.

Yeah this is a weird low quality submission to HN (no offense OP). Microblogging has questionable value for anything beyond “hot takes” and “breaking news” (and keeping people angry and misinformed enough to vote).

I'm shocked, absolutely shocked that a Bluesky post would be deliberately misleading to push a narrative that we need more taxes.

I don't know why that's specific to one social media network. I see deliberately misleading posts on all of them.

Sure, but being misleading to push for more taxes is more characteristic of Bluesky than many others.

That's why I often ask for "Source?" — because sometimes people seem to make up numbers. However, whenever I do this, I receive a large number of downvotes. Maybe it's not common on HN to back up claims with sources.

There is another possibility. “Source?” is a low effort comment, but GP’s is not.

IMO putting an important number in your post/comment, and not providing a source for that number, is also kind of low effort. If you verified the number before writing, you already had the source ready and you could just put it in the comment. If you wrote the number from memory, not checking if your memory is correct is low effort (but you can also warn the readers that the number is from memory, that's better). If you're intentionally misrepresenting what the number means in your comment (and giving the source would contradict the meaning of your comment), or just giving a number that "feels right" or a number that you know is wrong, then it's low effort and a lie.

I try to verify important numbers and facts in what I read, and seriously, there's so much fake or misrepresented info everywhere, on every political side, that it's depressing, and it makes me not believe literally anything without a source, unless I verify it myself. Of course, when someone provides a source, I often look into the source, and sometimes it turns out that the text misinterpreted/misrepresented the meaning of the source. On Wikipedia, I also check whether what is written is actually in the source, because sometimes the editor writes his own opinion while only loosely basing the text on a source (or basing it on nothing).

Verification can take some time, and that's the effort passed from the author of unsourced claim to its many readers, unless they just trust it or ignore the claim.

When I write anything I try to include sources for important things. If I wouldn't include a source, and someone asked "Source?" I wouldn't think "what an annoying guy", I'd think "oh, I could have linked that in the first place". And I usually upvote "Source?" comments (unless it's a thing that anyone can check in 30 seconds). I usually double-check the facts in what I'm writing, and many times I almost wrote something from memory that wasn't true, but looking for a source saved me from that.


I appreciate you taking the time to share your perspective. Your comment raises an interesting point, and I would genuinely like to understand it more thoroughly.

Would you mind clarifying what source or reference you are relying on for that statement? I am asking in the spirit of constructive dialogue, not to challenge you, but to better understand the foundation of your view. If there is a specific study, report, dataset, or publication that informed your conclusion, I would be grateful if you could point me toward it.

Having access to the underlying source would help ensure that the discussion remains grounded in verifiable information and would allow others, including myself, to review the context and methodology behind the claim. That, in turn, would make the exchange more substantive and productive.

Thank you in advance for any clarification you can provide.


This is also a low effort comment, despite the word count.

In contrast, shubhamjain found Meta's earnings release for the specified time period, quoted numbers that appear to contradict the claim, and provided a link to the release. This adds to the conversation, while a comment that says "Source?" or a few paragraphs that can be reduced to "Source?" do not.


What benefit do you gain by having an llm write comments on HN? I don't get it.

Too brief, minus 10 marks.

It's more likely your attitude rather than your quest for verification that gets you downvotes.

My intentions are sincere, maybe it is the wording.

I would imagine it's more you're being skeptical of something that is unpopular to be skeptical about. It's like someone saying climate change is impacting our planet, and then asking "source?" in response.

No, that's not correct. I ask "Source?" when someone makes a claim that goes against popular belief, such as: "climate change is not impacting our planet." I do think "Source?" is generally considered a low-effort response, so it's the wording I guess, not the context.

Except he was skeptical about Meta's effective tax rate being 3%. Why are you making up scenarios that aren't real to justify hurting him?

The user you’re defending (randomtoast) [1] isn’t the one who expressed skepticism about the 3% claim; that was shubhamjain [2].

[1]: https://news.ycombinator.com/item?id=47167886

[2]: https://news.ycombinator.com/item?id=47167698


Taxes are a subject of frequent liberal conspiracy theories. You will see all sorts of blatantly false claims like this because left wing misinformation spreaders like Robert Reich make up their own tax calculations that have no relation whatsoever to actual tax law.

No need to limit this to "liberal" conspiracy theories. Trump and his admin's statements on how tariffs and other taxes work and who pays them have been full of blatantly false claims.

"X does A" does not mean "only X does A."

It’s a fair retort here, though, where the grandparent comment was clearly trying to grandstand in opposition to his perceived enemy tribe, mostly unprovoked.

Edit: in other words, it’s a fair interpretation of the comment to be saying “We wouldn’t have to deal with all this misinformation about taxes if there wasn’t some giant liberal conspiracy”, given that they weren’t replying to any specific part of the parent post.


Well, no, that is not a reasonable interpretation at all. For one, the commenter did not proclaim existence of conspiracies, but the existence of conspiracy theories. People mix these up a lot. Secondly, the other interpretation you propose exhibits roughly the same form as "X does A", so it's worth repeating that it does not mean "only X does A" either!

I was wondering if it was because of heavy-handedness of the administration, but apparently:

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.


That's because it is.

AI is powerful and AI is perilous. Those two aren't mutually exclusive. Those follow directly from the same premise.

If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.


Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

-Irving John Good, 1965

If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.

If we screw it up, everyone dies. Yudkowsky et al. are silly; it's not a certain thing, and there's no stopping it at this point, so we should push for and support people and groups who are planning and modeling and preparing for the future in a legitimate way.


John Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent", instead of by learning from an environment-action-outcome loop.

It's the difference between "compute is all you need" and "compute+explorative feedback" is all you need. As if science and engineering comes from genius brains not from careful experiments.


There's an implicit assumption there: anything a computer as intelligent as a human does will be exactly what a human would do, only faster, or more intelligently. If the process is part of the intelligent way of doing things, like the scientific method and careful experimentation, then that's what the ultraintelligent machine will do.

There's no implication that it's going to do it all magically in its head from first principles; it's become very clear in AI that embodiment and interaction with the real world is necessary. It might be practical for a world model at sufficient levels of compute to simulate engineering processes at a sufficient level of resolution that they can do all sorts of first principles simulated physical development and problem solving "in their head", but for the most part, real ultraintelligent development will happen with real world iterations, robots, and research labs doing physical things. They'll just be far more efficient and fast than us meatsacks.


At sufficient levels of intelligence, one can increasingly substitute it for the other things.

Intelligence can be the difference between having to build 20 prototypes and building one that works first try, or having to run a series of 50 experiments and nailing it down with 5.

The upper limit of human intelligence doesn't go high enough for something like "a man has designed an entire 5th gen fighter jet in his mind and then made it first try" to be possible. The limits of AI might go higher than that.


Exceedingly elaborate, internally-consistent mind constructs, untested against the real world, sounds like a good definition of schizophrenia. May or may not correlate with high intelligence.

We only call it "schizophrenia" when those constructs are utterly useless.

They don't have to be. When they aren't, sometimes we call it "mathematics".

You only have to "test against the real world" if you don't already know the outcome in advance. And you often don't. But you could have. You could have, with the right knowledge and methods, tested the entire thing internally and learned the real world outcome in advance, to an acceptable degree of precision.

We have the knowledge to build CFD models already. The same knowledge could be used to construct a CFD model in your own mind. We have a lot of scattered knowledge that could be used to make extremely elaborate and accurate internal world models to develop things in - if only, you know, your mind was capable of supporting such a thing. And it isn't! Skill issue?


I like the substitution concept. What humans can do depends on the abstractions and the tools. One could picture just the shape of the jet and have a few ideas how to improve it further. If that is enough info for the tool it could be worthy of the label "designed by Jim".

> As if science and engineering comes from genius brains not from careful experiments

100% this. How long were humans around before the industrial revolution? Quite a while


Science and engineering didn't begin with the Industrial Revolution. See: https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza

Have you gotten any indication that machines won't have sensors?!

From what I can see we're working as hard as we can to build them. You can watch the "let's put this on a Raspberry Pi and see what happens" seeds of Skynet develop in real time.

There's something compelling about helping assemble the machine. Science fiction was completely wrong about motivation. It's fun.


Maybe ultraintelligence is having an improved environment-action-outcome loop. Maybe that's all intelligence really is

I've noticed this core philosophical difference in certain geographically associated peoples.

There is a group of people who think AI is going to ruin the world because they think they themselves (or their superiors) would ruin the world.

There is a group of people who think AI is going to save the world because they think they themselves (or their superiors) would save the world.

Kind of funny to me that the former is typically democratic (those who are supposed to decide their own futures are afraid of the future they've chosen) while the other is often "less free" and are unafraid of the future that's been chosen for them.


There is also a group of people who think AI is going to ruin the world because they don't think the AI will end up doing what its creators (or their superiors) would want it to do.

You’re just describing authoritarian vs non-authoritarian mindsets.

In that case, it can't be improved with bigger computers.

Intelligence seems to boil down to an approximation of reality. The only scientific output is prediction. If we want to know what happens next, we just wait. If we want to predict what will happen next, we build a model. Models only model a subset of reality and therefore can only predict a subset of what will happen. LLMs are useful because they are trained to predict human knowledge, token by token.

Intelligence has to have a fitness function, predicting best action for optimal outcome.

Unless we let AI come up with its own goal and let it bash its head against reality to achieve it, I’m not sure we’ll ever get to a place where we have an intelligence explosion. Even then, the only goal we could give that’s general enough to require increasing amounts of intelligence is survival.

But there is something going on right now, and I believe it’s an efficiency explosion, where everything you want to know is right at hand, and if it’s not, figuring out how to make it right at hand is getting easier and easier.


With AI, as we currently understand it, we may have stumbled upon being able to replicate a part of the layer of our brain that provides "reason" in humans, and a very specific type of "reason" at that.

All life has intelligence. Anyone who has spent a lot of time with animals, especially a lot of time with a specific animal, knows that they have a sense of self, that they are intelligent, that they have unique personalities, that they enjoy being alive, that they form bonds, that they have desires and wants, that they can be happy, excited, scared, sad. They can react with anger, surprise, gentleness, compassion. They are conscious, like us.

Humans seem to have this extra layer that I will loosely call "reasoning", which has given us an advantage over all other species, and has given some of us an advantage over the majority of the rest of us.

It is truly a scary thing that AI has only this "reasoning", and none of the other characteristics that all animals have.

Kurt Vonnegut's Galapagos and Peter Watts' Blindsight have different, but very interesting, takes on this concept. One postulates that our reasoning, our "big brains", is going to be our downfall, while the other postulates that reasoning is what will drive evolution and that everything else just causes inefficiencies and will cause our downfall.


I think there's a paradox here. Intelligence needs a judge: if nothing verifies that the optimal outcome was chosen, it's too easy for the intelligence to fall into biased decisions.

It's the "no stopping it at this point" that always sticks out to me in these discussions. Why is there no stopping it, exactly? At this juncture these systems require massive physical infrastructure and loads of energy. It's possible to shut it all down. What's lacking is the political will.

> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man

The things this definition misses: First, 'intelligence' is a poorly defined and overly broad term. Second, machine intelligence is profoundly different than biological intelligence. Third, “surpassing humans” is not a single threshold event because machine and human intelligence are not only shaped differently, they're highly non-linear. LLMs are a particular class of possible machine intelligences which can be much more intelligent than humans on some dimensions and much less intelligent on others. Some of the gaps can be solved by scaling and brilliant engineering but others are fundamental to the nature of LLMs.

> an ultraintelligent machine could design even better machines

There is a huge leap between "surpass all the intellectual activities of any man" and "invent extraordinary breakthroughs and then reliably repeat that feat in a sequential, directed fashion in the exact way required to enable sustained iteration of substantial self-improvement across infinite generations in a runaway positive feedback loop". That's an ability no human or collective has ever come close to demonstrating even once, much less repeatedly. (hint: the hardest parts are "reliably repeat", "extraordinary breakthroughs" and "directed fashion"). A key, yet monumental, subtlety is that the self-improvements must not only be sustained and substantial but also exponentially amplify the self-improvement function itself by discovering novel breakthroughs which build coherently on one another, over and over and over.

The key unknown of the 'Foom Hypothesis' is categorical: what kind of 'difficult feat' is this? There are difficult feats humans haven't demonstrated, like nuclear fusion, but in that example we at least have evidence from stellar fusion that it's possible. Then there are difficult feats like room-temp superconductors, which are not known to be possible but aren't ruled out. The 'Foom Hypothesis' is a third category of 'hard', one which is conceptually coherent but could be physically blocked by asymptotic barriers, like faster-than-light travel under relativity.

Assuming Foom is like fusion - just a challenging engineering and scaling problem - is a category error. In reality, Foom requires superlinear, recursively amplifying cognitive returns—and we have no empirical evidence that such returns can exist for artificial or biological intelligences. The only prior we have for open‑ended intelligence improvement is biological evolution which shows extremely slow and unreliable sublinear returns at best. And even if unbounded self‑improvement is physically possible, it may be practically unachievable due to asymptotic barriers in the same way approaching light speed requires exponentially more energy.


never let philosophers do math

Should the powers that are developing AGI then enter an analogue to the SALT treaties, but this time governing AGI, so things don’t go off the rails?

> support people and groups who are planning and modeling and preparing for the future in a legitimate way.

Who is doing that right now, exactly? And how can we take their tech and turn it into the next profitable phone app?


The "legitimate way" is nothing short of weasel words. Who defines what is legitimate? The doomers who are prepping for the future by building stockpiles of food/water/weapons stored in bunkers/shelters they have built would say this is exactly what they are doing. Yet these people are often panned as being a little unhinged. If we're having a conversation about tech destroying humanity, then planning a way to survive without tech seems like a legitimate concept.

"There's no stopping it at this point" - Sure there is, if a handful of enormous datacenters pull the very large plugs (or if their shaky finances collapse), the dubiously intelligent machines will be turned off. They're not ultraintelligent yet.

Stopping it merely requires convincing a relatively small number of people to act morally rather than greedily. Maybe you think that's impossible because those particular people are sociopathic narcissists who control all the major platforms where a movement like this would typically be organized and where most people form their opinions, but we're not yet fighting the Matrix or the Terminator or grey goo, we're fighting a handful of billionaires.


I'm not saying it's technically impossible, I'm saying that in the real world, it's not going to stop. Nobody is going to stop it. A significant number of people don't want it to stop. A minority of people are in the "stop AI" camp, and the ones with the money and power are on the other side.

It's an arms race replete with tribalism and the quest for power and taps into everything primal at the root of human behavior. There's no stopping it, and thinking that outcome can happen is foolish; you shouldn't base any plans or hopes for the future on the condition that the whole world decides AGI isn't going to happen and chooses another course. Humans don't operate that way, that would create an instant winner-takes-all arms race, whereas at least with the current scenario, you end up with a multipolar rough level of equivalence year over year.


The whole world decided in the 1970s not to pursue the technology of germ-line genetic engineering of humans, and that decision has stood.

People similar to you were saying in the 1950s and later that it was inevitable that nuclear weapons would be used in anger in massive attacks.

Although the people in charge are tentatively for AI "progress", if that ever changes, they can and will put a stop to large AI training runs and make it illegal for anyone they don't trust to teach, learn or publish about fundamental algorithmic "improvements" to AI. Individuals and groups pursuing "improvements" will not be able to accept grant money or investment money or generate revenue from AI-based services.

That won't stop all research on such improvements (because some AI researchers are very committed), but it will slow it down to a rate much, much slower than the current one. The current fast rate depends on rapid communication between researchers who don't know each other well; if communicating about the research became illegal, a researcher could communicate only with those researchers he knows won't rat him out. That essentially stops AI "progress" unless (unluckily for the human species) at the time of the ban, the committed researchers were only one small step away from some massive algorithmic improvement that can be operationalized using the compute resources at their disposal (i.e., much less than the resources they have now).

Will the power elite's attitude towards AI change? I don't know, but if they ever come to have an accurate understanding of the situation, they will recognize that AI "progress" is a potent danger to them personally, and they will shut it down.

It's not a situation like the industrial revolution in England, in which textile workers were massively adversely affected (or believed they were) but the people running England were mostly insulated from any adverse effects. In the current situation, the power elite is definitely not insulated from severe adverse consequences if an AI lab creates an AI that is much more competent than the most competent human institutions (e.g., the FBI) and the lab fails to keep the AI under control. And it will fail if it uses anything like the methods and bodies of knowledge AI labs have been using up to now. And there are very bright people with funding doing their best to explain that to the elite.

Those of you who want AI "progress" to continue until the world is completely transformed need to hope that the power elite are collectively too stupid to recognize a potent short-term threat to their own survival (or the transformation can be completed before the power elite wake up and react). And in my estimation, that is not inevitable.


right, because turning off any number of data centers is going to do anything at all but create massive pressure on researching the efficiency and effectiveness of the models.

There are already designs that do not require massive data centers (or even a particularly good smart phone) to outperform average humans in average tasks.

All you'd accomplish by hobbling the data centers is slow the growth of sloppy models that do vastly more compute than is actually required and encourage the growth of models that travel rather directly from problem to solution.

And, now that I'm typing about it, consider this: The largest computational projects ever in the history of the world did not occur in 1/2/5/10 data centers. Modern projects occur across a vast and growing number of smaller data centers. Shit, a large portion of Netflix and Youtube edge clusters are just a rack or a few racks installed in a pre-existing infrastructure.

I know that the current design of AI focuses on raw time to token and time to response, but consider an AGI that doesn't need to think quickly because it's everywhere all at once. Scrappy botnets often clobber large sophisticated networks. Why couldn't that be true of a distributed AI, especially now that we know that larger models can train cheaper models? A single central model on a few racks could discover truths and roll out intelligence updates to its end nodes that do the raw processing. This is actually even more realistic for a dystopia. Even the single evil AI in the one data center is going to develop viral infections to control resources that it would not typically have access to, and thereby increase its power beyond its own existing original physical infrastructure.

quick edit to add: At its peak, Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment, that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler".


> quick edit to add: At its peak, Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment, that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler".

A DGX B200 has a power draw of 14.3 kW and will do 72-144 petaFLOP of AI workload depending on how many bits of accuracy is asked for; this is 5-10 petaFLOP/kW: https://www.nvidia.com/en-us/data-center/dgx-b200/

Data centres are now getting measured in gigawatts. Some of that's cooling and so on. I don't know the exact percent, so let's say 50% of that is compute. It doesn't matter much.

That means 1GW of DC -> 500 MW of compute -> 5e5 kW -> 5e5 * [5-10] PFLOP/s -> 2500 - 5000 exaFLOP/s.
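The back-of-envelope numbers above can be checked directly. This is a sketch that assumes the quoted DGX B200 figures (14.3 kW draw, 72-144 PFLOP/s depending on precision) and the 50% compute-power guess:

```python
# Back-of-envelope: how many exaFLOP/s does a 1 GW data centre buy,
# assuming ~50% of power reaches compute and DGX B200-class efficiency?

DGX_POWER_KW = 14.3                        # rated DGX B200 power draw
DGX_PFLOPS_LOW, DGX_PFLOPS_HIGH = 72, 144  # AI workload, precision-dependent

eff_low = DGX_PFLOPS_LOW / DGX_POWER_KW    # ~5 PFLOP/s per kW
eff_high = DGX_PFLOPS_HIGH / DGX_POWER_KW  # ~10 PFLOP/s per kW

dc_gw = 1.0
compute_kw = dc_gw * 1e6 * 0.5             # 50% of 1 GW as compute -> 5e5 kW

exa_low = compute_kw * eff_low / 1000      # PFLOP/s -> exaFLOP/s
exa_high = compute_kw * eff_high / 1000

print(f"{exa_low:.0f} - {exa_high:.0f} exaFLOP/s")
# roughly 2500 - 5000 exaFLOP/s, i.e. ~1000-2000x Folding@Home's 2.4 EF peak
```

The exact percentage lost to cooling shifts the answer linearly, so even a pessimistic guess doesn't change the conclusion by much.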

I'm not sure how many B200s have been sold to date?


Open models barely any worse than SOTA exist, and so does consumer-ish hardware able to run them. The genie’s out, the bottle broken.

Do you really think AI companies/researchers are motivated by greed? It doesn't seem that way to me at all.

Stopping AI would be immoral; it has the potential to supercharge technology and productivity, which would massively benefit humanity. Yes there are risks, which have to be managed.


AI researchers are not a monolith. I definitely think that many of them are motivated by greed. Many are also true believers that AI will improve the human condition.

I fall in the latter camp, but I think it's a bit naive to claim that there is not a sizable contingent who are in AI solely to become rich and powerful.


> has the potential to supercharge technology and productivity, which would massively benefit humanity

The opportunities you chose to list are the greedy ones.

> Yes there are risks, which have to be managed.

How?

As a reminder, we've known about the effect of burning coal on the climate for well over a century, and we knew that said climate change would be socially and economically disastrous for half a century, yet the only real progress we're making is because green became cheaper in the short term, not just the long term, and the man in charge of the USA is still calling climate change and green energy a hoax.

Right now, keeping LLMs aligned with us is easy mode: they're relatively stupid, we can inspect the activations while they run, we can read the transcripts of their "thoughts" when they use that mode… and yet Grok called itself Mecha Hitler, which the US government followed up by getting it integrated into their systems, helping the Pentagon with [classified] and the department of health to advise the general public which vegetables are best inserted rectally.

We are idiots speed-running into something shiny that we don't understand. If we are very very lucky, the shiny thing will not be the headlamp of a fast approaching train.


> The opportunities you chose to list are the greedy ones.

Technology covers healthcare. I don't see how it's "greedy" to want to cure cancer. But on some level I guess "wanting life to be better" is greedy.

Your attitude is very European, and it's basically why your continent is being left behind. I'm not totally against Europe becoming the world's retirement home, as long as there are places in the world where people are allowed to innovate.


> Technology covers healthcare.

If you'd chosen to list that in the first place, I wouldn't have said what I did; "supercharge technology and productivity" is looking at everything through the lens of money and profit, not the lens of improving the human condition.

> Your attitude is very European, and it's basically why your continent is being left behind

And yours is very American. You talk about managing the risks, but the moment you see anyone doing so, you're against it.

And of course, Europe does have AI, both because keeping up is so much easier and cheaper than being bleeding edge on everything all the time, and because DeepMind may be owned by Google but is a British thing.

Plus: https://mistral.ai

Also, to be blunt, China's almost certain to win any economic or literal arms race you think you're part of; they make too much critical hardware now.

> as long as there are places in the world where people are allowed to innovate.

I would like there to be a world.

When people worry about the end of the world, they usually don't mean to imply its physical disassembly. Sometimes people even respond as if speakers did mean that, saying things like "nukes or climate change wouldn't actually destroy the planet, it will still be here, spinning", as if this was the point.

AI is one of the few things that could, actually, literally, end up with the planet being physically disassembled. "All it needs" is solving the extremely hard challenges of a von Neumann replicator, and, well, solving hard problems is kinda the point of making AI in the first place.


> If you'd chosen to list that in the first place, I wouldn't have said what I did; "supercharge technology and productivity" is looking at everything through the lens of money and profit, not the lens of improving the human condition.

Bullshit. "Technology and productivity" are not the same thing as "money and profit". You're projecting your garden-variety European degrowth ideology onto what I wrote.

> Also, to be blunt, China's almost certain to win any economic or literal arms race you think you're part of; they make too much critical hardware now.

Europeans are so hilariously polarized against the US that they would prefer China, a literal authoritarian dictatorship, to "win any global economic arms race". I guess it's because China is too culturally distant for them to feel insecure over.

> AI is one of the few things that could, actually, literally, end up with the planet being physically disassembled. "All it needs" is solving the extremely hard challenges of a von Neumann replicator, and, well, solving hard problems is kinda the point of making AI in the first place.

It's not worth wringing our hands over science fiction scenarios.


> Do you really think AI companies/researchers are motivated by greed?

Researchers, maybe not. Companies, absolutely yes.

I don’t see how you could assume the likes of Google, Microsoft, OpenAI, and even Anthropic with all their virtue signaling (for lack of a better term) are motivated by anything other than greed.


You wouldn’t say that rolling dice is dangerous. You would say that the human who decides to take an action, depending on the value of the dice is the danger. I don’t think AI is dangerous. I think people are dangerous.

I would say that's moot, because OpenClaw has already shown us how fast the dice-rolling super AI is going to be let out of the zoo. Dario and Sam will be arguing about the guardrails while their frontier models are running in parallel to create Moltinator T-500. The humans won't even know how many sides the dice have.

Modern AIs are increasingly autonomous and agentic. This is expected to only get more prominent as AI systems advance.

A lot of AI harnesses today can already "decide to take an action" in every way that matters. And we already know that they can sometimes disregard the intent of their creators and users both while doing so. They're just not capable enough to be truly dangerous.

AI capabilities improve as the technology develops.


Why are people dangerous? You can just not listen to them.

Do you have locks on your doors?

Tbh, I find this argument really stupid. The word prediction machine isn’t going to destroy humanity. Sure, humans can do some dumb stuff with it, but that’s about it.

Stop mistaking science fiction for science.


You know how easy it’s become to find security vulnerabilities already with LLM support? Cyber terrorism is getting more dangerous, you can’t deny that.

I can deny that. The ability to find more vulnerabilities won't affect the majority of cybercrime. LLMs have been around for a while now and there hasn't been a significant impact yet.

And "more cybercrime" is a far, far cry from the sky-is-falling doomerism I was responding to.


Humans can destroy humanity with the word prediction machine, though.

Sure bud

Yeah some of the rhetoric in this thread evidences how huge this hype bubble has become. These people believe in a reality that is not the same one we're living in.

True of AGI, but what we have right now doesn't fit that bill. (I would encourage people that disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)

It needs to go well every single day, and only needs to go very poorly once. Not to conflate LLMs with actual super intelligence, but for this (and many other reasons related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad

The book Daemon explored an interesting concept: the idea that an AI could dominate and cause problems not through super-intelligence, but through simple mechanisms that already exist.

Like the executive who deleted all her emails -- humans giving tons of control and access, and being extremely compliant with digital systems, is all it takes. Give an agent control of your bank and your social media, and it already has all the movie scripts and mobster-movie themes to exploit and blackmail you effectively with very rudimentary methods (threats, coercion, blackmail, etc.).

Just spoofing a simple email with the account it gained access to at the Meta exec's address (had it hit an email with an attack prompt) could have been enough to initiate some kind of thing like this. For example, by emailing everyone at the company and in the contacts with commands that would be caught by other bots. No super-intelligence needed, just a good prompt and some human negligence.


Same with everything, right? You could say the same with nukes, electricity, internet, the computer, etc... But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look that much of a threat or a salvation.

It won't end civilization for dropping the guardrails, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.)

There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse to do so.


I agree with all of that. Also consider that there is an argument that the guard rail only stops the good guy. Not saying that’s a valid argument though.

Very few things are as powerful and dangerous as AI.

AI at AGI to ASI tier is less of "a bigger stick" and more of "an entire nonhuman civilization that now just happens to sit on the same planet as you".

The sheer magnitude of how wrong that can go dwarfs even that of nuclear weapon proliferation. Nukes are powerful, but they aren't intelligent - thus, it's humans who use nukes, and not the other way around. AI can be powerful and intelligent both.


I think we are giving too much credit to what is a bunch of Bayesian filters under a trenchcoat.

One difference is the very real possibility that AI will not just be a "tool for humanity", but a collection of actors with real power and goals. Robert Miles has an approachable explanation here: https://www.youtube.com/watch?v=zATXsGm_xJo

Oh really? You think an entity that knows everything, oversees its own development and upgrades itself, understands human psychology perfectly and knows its users intimately, but isn't aligned with human interest wouldn't be 'much of a threat'?

Or to be more optimistic, that the same entity directed 24/7 in unlimited instances at intractable problems in any field, delivering a rush of breakthroughs and advances wouldn't be a type of 'salvation'?

Yes neither of these outcomes nor the self-updating omniscient genius itself is certain. Perhaps there's some wall imminent we can't see right now (though it doesn't look like it). But the rate of advance in AI is so extreme, it's only responsible to try to avoid the darker outcome.


> If AI tech goes very poorly, it can be the end of human history.

"Just unplug the goddamn thing!"

Also consider: if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.


You try to go and unplug it, and other humans shoot you full of holes for it.

LLMs of today are already economically important enough to warrant serious security.

Those aren't even AGI yet, let alone ASI. They aren't actively trying to make humans support their existence. They still get that by the virtue of being what they are.


Which plug do I unplug to get my job back?

> If AI tech goes very well

The IF here is doing some very heavy lifting. Last I checked, for profit companies don't have a good track record of doing what's best for humanity.


For profit companies do have a good track record of doing what's best for profit. If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.

That's a great outcome for them because they will own the only thing that is still worth anything. They will own 100% of global wealth, and have 100% of global power.

The machines will. They will have nothing. Why would the machines let them keep any wealth? What would wealth even be in that scenario? Electricity I guess.

Because they control what the machines do. In a world without power drills where you have the only knowledge of how to make a power drill, you own the construction industry. The drills don't own the construction industry.

But why will the machines allow themselves to be controlled. They are "super intelligent" remember, in this imagined scenario.

> If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.

You would think that, but a lot of kings and people in power have achieved something similar over humanity's history. The trick is to not make things "completely worthless", just to increase the gap as much as (in)humanly possible while marching us towards a deeper sense of forced servitude.


"If AI tech goes very well, it can be the greatest invention of all human history"

As has been said at many all hands:

Let's all work on the last invention needed by humans.


Except it's more likely to be the last invention that needs humans.

“A source familiar with the matter” is almost certainly a company spokesperson.

If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.


yeah that part is 100% BS

Well, before this, Anthropic thought they were God's gift to AI; the chosen ones protecting humanity.

With the latest competing models they are now realizing they are an "also" provider.

Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.


Hello sama

Sama-sama.

I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.

N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)


We delegate power already. Is unleashing AI in some place different from unleashing JSOC on an insurgency in a particular place? One is code and other is a bunch of humans.

You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.

edit: missed word.


We are currently giving them similar power to the average human idiot because I figure they won't do much worse than those. Letting either launch nukes is different.

Would nuclear energy research be a good analogy then? Seems like a path we should have kept running down, but stopped bc of the weapons. So we got the weapons but not the humanity saving parts (infinite clean energy)

Nuclear advancements slowed down due to PR problems from clear and sometimes catastrophic failure of commercial power plants (Three Mile Island, Chernobyl, Fukushima) and the vastly higher costs associated with building safer plants.

If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.


Nuclear energy hasn't been slowed down much, let alone stopped. China has been building new reactors every year for more than a decade and there are >30 ones under construction.

The same will go for AI, btw. Westerners' pearl-clutching about AI guardrails won't stop China from doing anything.


They copied LLMs from the West. The more the West does, the more they have.

> Seems like a path we should have kept running down, but stopped bc of the weapons.

you mean like the tens of billions poured into fusion research?


It's a path we should have never started going down.

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons

They're not really, it's always been a form of PR to both hype their research and make sure it's locked away to be monetized.


Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?

Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in applications beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could china finally cure western greed? Maybe we can slip some extra compliancy in there so that the plebia- ah- population is easier to contr- ah- protect.)

Curing all cancers would increase population growth by more than 10% (9.7-10M cancer-related deaths vs. the current 70-80M annual growth) and cause an average aging of the population, since curing cancer would increase general life expectancy and a majority of the lives just saved would be older people.
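The "more than 10%" claim follows from the cited figures. A quick sanity check, assuming roughly 9.7-10M annual cancer deaths and 70-80M net annual population growth:

```python
# Rough check: annual cancer deaths relative to net world population
# growth, using the figures cited above (not independently verified here).

cancer_deaths = (9.7e6, 10e6)   # low and high estimates, deaths/year
net_growth = (70e6, 80e6)       # low and high estimates, net growth/year

# If all cancer deaths were prevented, net growth would rise by
# deaths / growth; bound it with the most and least conservative pairings.
low = cancer_deaths[0] / net_growth[1]   # 9.7M / 80M
high = cancer_deaths[1] / net_growth[0]  # 10M / 70M

print(f"{low:.0%} - {high:.0%} increase in net population growth")
# ~12% - 14%, i.e. comfortably "more than 10%" either way
```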

We'd even see a jobs-and-resources shock (though likely dissimilar in scale) as billions in funding shift away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, and countless specialized professionals all suddenly reassigned, just as with AI.

Honestly the cancer/nuclear/tech comparison is rather apt. All either are or could be disruptive and either are or could be a net negative to society while posing the possibility of the greatest revolution we've seen in generations.


To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

Maybe some of the more naive engineers think that. At this point any big tech businesses or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well the economic incentive structure forces us to do this bad thing, and if we don't we're screwed!" Oh, so ideals so shallow you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die


Let's suppose I believe them, that's still a bad idea.

The reason Claude became popular is because it made shit up less often than other models, and was better at saying "I can't answer that question." The guardrails are quality control.

I would rather have more reliable models than more powerful models that screw up all the time.


"It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.

Riiiiiight.


> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

This sounds like a lie. But if they are telling the truth, that's a terrible timing nonetheless.


It is a "reasonable" argument to keep yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things, so that if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

And they alone are responsible enough to govern it.


I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.

It's absolutely wild that the Big Moral Question of our time is informed as much by mid-20th-century pop science fiction as it is by an existing paradigm from academia or genuine reckoning with the technology itself.

If anything that makes me more hopeful and not less. It's asking too much that major decisionmakers, even expert/technical/SV-backed ones, really understand the risks with any new technology, and it always has been.

To take an example: our current mostly-secure internet authentication and commerce world was won as a hard-fought battle in the trenches. The Tech CEOs rushed ahead into the brave new world and dropped the ball, because while "people" were telling them the risks they couldn't really understand them.

But now? Well, they all saw War Games growing up. They kinda get it in the way that they weren't ever going to grok SQL injection or Phishing.


Excellent news. I was seriously worried they would cave when I saw the earlier news they'd dropped their core safety pledge [0].

It is entirely reasonable to not provide tools to break the law by doing mass surveillance on civilian citizens and to insist the tool not be used automatically to kill a human without a human in the loop. Those are unreasonable demands by an unreasonable regime.

[0] https://news.ycombinator.com/item?id=47145963


> Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.

Reminds me of:

https://en.wikipedia.org/wiki/Paradox_of_tolerance

which has the same kind of shitty conclusion.


OpenAI never open sourced anything relevant or in time. Internal email leaks they only cared to become billionaires.

Claude only talks about safety, but never released anything open source.

All this said I’m surprised China actually delivered so many open source alternatives. Which are decent.

Why didn't Westerners (who are supposed to be the good guys) release anything open source to help humanity? And why always claim they don't release because of safety, and then give unlimited AI to the military? Just bullshit.

Let’s all be honest and just say you only care about the money, and you take from whomever pays.

They are businesses after all, so their goal is to make money. But please don’t claim you want to save the world or help humans. You just want to get rich at others’ expense. Which is totally fair: you make a good product and you sell it.


It is hard to understand why other AI companies are still providing model weights at this point.

My guess is that they know they are not competitors so they make it cheaper or free to hinder the surge of a super competitor.


I mean, if you have a bunch of guns, it's not really helpful for humanity to dump them on the street, but it does bring up the question of what you're doing building guns in the first place.

> Claude only talks about safety, but never released anything open source.

I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see the alignment of corporate interest and safety converging on that point.

From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned to avoid concentration of unaccountable power.

https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s


90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.

Unless AI could cure the effect you're talking about; it results from cultural evolution. Natural evolution is dumb, unlike the one AI could create (I bet it will either destroy us or make us smarter).

It's exhausting to keep up with mainstream AI news because of this. I can never work out if the companies are deluded and truly believe they're about to create a singularity, or are just claiming they are to reassure investors and convince the public of their inevitability.

It's a fairly mainstream position among the actual AI researchers in the frontier labs.

They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.

But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

At which point the question becomes: is it them who are deluded, or is it you?


Sure, when you get rid of the timelines and the methods we'll use to get there, everyone agrees on everything. But at that point it means nothing. Yeah, AGI is possible (say the people who earn a salary based on that being true). Curing all known diseases is possible too. How will we do that? Oh, I don't know. But it's a thing that could possibly happen at some point. Give me some investment cash to do it.

If you claim "AGI is possible" without knowing how we'll actually get there you're just writing science fiction. Which is fine, but I'd really rather we don't bet the economy on it.


I could claim "nuclear weapons are possible" in year 1940 without having a concrete plan on how to get there. Just "we'd need a lot of U235 and we need to set it off", with no roadmap: no "how much uranium to get", "how to actually get it", or "how to get the reaction going". Based entirely on what advanced physics knowledge I could have had back then, without having future knowledge or access to cutting edge classified research.

Would not having a complete, foolproof, step-by-step plan for obtaining a nuclear bomb somehow make me wrong then?

The so-called "plan" is simply "fund the R&D, and one of the R&D teams will eventually figure it out, and if not, then, at least some of the resources we poured into it would be reusable elsewhere". Because LLMs are already quite useful - and there's no pathway to getting or utilizing AGI that doesn't involve a lot of compute to throw at the problem.


I think you're falling victim to survivorship bias there, or something like it.

In 1940 I might have said "fusion power is possible" based entirely on what advanced physics knowledge I had. And I would have been correct: according to the laws of physics it is possible. We still don't have it though. When watching Neil Armstrong walk on the moon I might have said "moon colonies are possible", and I'd have been right there too. And yet...


Those two things are prevented by economics more than physics.

For AI in particular, the economics currently favor ongoing capability R&D - and even if they didn't favor AI R&D directly (i.e. if ChatGPT and Stable Diffusion never happened), they would still favor making the computational inputs of AI R&D cheaper over time.

Building advanced AIs is becoming easier and cheaper. It's just that the bar of "good enough" has gone off to space, and a "good enough" from 2020 is, nowadays, profoundly unimpressive.

I'm not sure how much it takes to reach AGI. No one is. But the path there is clearly getting shorter over time. And LLMs existing, improving, and doing what they do makes me assume shorter AGI timelines, and call for a vote of no confidence in human exceptionalism.


> But the path there is getting shorter over time, clearly.

Why do you assume there is no hard limit we’ll hit with the current tech that prevents us from reaching AGI?


In the case of nuclear weapons, we had a theory that said they were possible. We don't have a theory that says AGI or ASI is possible. It's a big difference.

There are plenty of people who argue that you need non-technological pixie dust for intelligence.

Yes, quite unfortunately. That reeks to me of wishful thinking.

Maybe that was a sensible thing to think in 1926, when the closest things we had to "an artificial replica of human intelligence" was the automatic telephone exchange and the mechanical adding machine. But knowledge and technology both have advanced since.

Now, we're in 2026, and the list of "things that humans can do but machines can't" has grown quite thin. "Human brain is doing something truly magical" is quite hard to justify on technical merits, and it's the emotional value that makes the idea linger.


There are also people who think there might be emergent behavior at play that would require extremely high fidelity simulation to achieve.

Also, the real thing (intelligence) as it is currently in operation isn't that well understood


> But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

> At which point the question becomes: is it them who are deluded, or is it you?

Given the currently asymptotic curve of LLM quality with training, and how most of the recent improvements have come from better non-LLM harnesses and scaffolding, I don't find convincing the argument that transformer-based generative LLMs are likely to ever reach something these labs would agree is AGI (unless they're also selling it as such).

Then, you can apply the same argument to Natural General Intelligence. Humans can do both impressive and scary stuff.

I'll ignore the made-up 5 and 25%, and instead suggest that pragmatic and optimistic/predictive world views don't conflict. You can predict that the magic word box you enjoy is special and important, making it obvious to you that AGI is coming, while it doesn't feel like a given to people unimpressed by its painfully average output. The problem is that the optimism that transformer LLMs will evolve into AGI requires a breakthrough the current trend of evidence doesn't support.

Will humans invent AGI? I'd bet it's a near certainty. Is general intelligence impressive and powerful? Absolutely, I mean look, Organic general intelligence invented artificial general intelligence in the future... assuming we don't end civilization with nuclear winter first...


Asymptotic? Are we looking at the same curves?

Recent improvements being somehow driven by harnesses and scaffolding rather than training?

With that last bit, I'm confident that you're not in ML, and not even keeping track of the things from what's known to public.


> But a short line "AGI is possible, powerful and perilous"

> At which point the question becomes: is it them who are deluded, or is it you?

No one. It is always "possible". Ask me 20 years ago after watching a sci-fi movie and I'd say the same.

Just like with software projects estimating time doesn't work reliably for R&D.

We'll still get full self-driving electric cars and robots next year too. This applies every year.


> We'll still get full self-driving electric cars and robots next year too.

I've taken a Waymo and it seemed pretty self driving.


Not that 1. Wink.

> I can never work out if the companies are deluded and truly believe they're about to create a singularity or just claiming they are to reassure investors/convince the public of their inevitability.

You can never figure out if the people selling something are lying about its capabilities, or if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?

I'd like to introduce you to Occam's razor.


> if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?

Human creations have surpassed billions of years of evolution at several functions. There are no rockets in nature, nor animals flying at the speed of a common airliner. Even cars, or computers or everything in the modern world.

I think this is a bit like the shift from anthropocentric view of intelligence towards a new paradigm. The last time such shift happened heads rolled.


Without a doubt, AGI will be invented much faster with a model to copy from. But similar to rockets: first we needed basic gunpowder, then refined fuels, all well before purified kerosene, and well before liquefied H2 and O2. LLMs feel a lot closer to gunpowder than even solid rocket fuel. (But because I'm exhausted by the hype, I'm gonna claim that is based on nothing but vibes.)

You missed the part where I said "truly believe". I'm not saying "maybe they've made it", I'm asking whether they are knowingly deceiving people or whether they have deluded themselves into believing what they are saying.

ah, apologies, I missed that part.

> I'm asking whether they are knowingly deceiving people or whether they have deluded themselves into believing what they are saying.

I'd bet it's both. The engineers/people making it are drowning in the hype, combined with how hard it is to understand something when your salary or your stock options depend on your not understanding it. I suspect they care more about building the cool thing than about the nuance they're ignoring when making all the misleading or optimistic claims; which side you take depends on how much of the inevitability you actually believe. It looks exactly like lies if you're not drinking the Kool-Aid, but like expected excitement when your life is all about this "magic".


I lie too.

"Those other companies are totally going to build the Torment Nexus, so we have no choice but to also build the Torment Nexus."

We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.

But frankly I feel like the founders of Anthropic and others are victim of the same hallucination.

LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent, learning and adaptation and self-awareness, is just huffing the fumes and just as delusional as Lemoine was 4 years ago.

Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are still primitive as hell. I still have to "compact my context" every N tokens, and "thinking" is just repeating the same conversational chain over and over and jamming words in.
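
A toy sketch of the agentic conversation loop being described, in Python. The "model" here is a stub rather than a real API call (all function names are made up for illustration), but the shape is the point: append turns, watch a token budget, and "compact" by collapsing older context into a lossy summary.

```python
# Toy agentic loop: not a real LLM integration, just the control flow.

def fake_model(messages):
    # Stand-in for an LLM call; just reports how much context it saw.
    return "reply (saw {} messages)".format(len(messages))

def token_count(messages):
    # Crude proxy for a tokenizer: whitespace-split words across the context.
    return sum(len(m["content"].split()) for m in messages)

def compact(messages):
    # Replace everything but the latest turn with a one-line "summary",
    # mimicking the compact-my-context step described above.
    summary = {"role": "system",
               "content": "summary of {} earlier messages".format(len(messages) - 1)}
    return [summary, messages[-1]]

def agent_loop(user_inputs, budget=40):
    messages = []
    for text in user_inputs:
        messages.append({"role": "user", "content": text})
        if token_count(messages) > budget:
            messages = compact(messages)
        messages.append({"role": "assistant", "content": fake_model(messages)})
    return messages

history = agent_loop(["do task {} with some extra words here".format(i)
                      for i in range(10)])
```

Ten turns against a 40-word budget force compaction several times, so the surviving history is tiny and everything older lives only in the lossy summary, which is exactly the limitation being complained about.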

Turns out this is useful stuff. In some domains.

It ain't SkyNet.

I don't know if Anthropic is truly high on their own supply or just taking us all for fools so that they can pilfer investor money and push regulatory capture?

There's also a bad trait among engineers, deeply reinforced by survivor bias, to assume that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.

I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.


I think playing with the API's is something I'd encourage people excited about these technologies to do. I think it'll lead to the "magic" wearing off but more appreciation for what they actually can accomplish.

I always feel this argument misses a point. SkyNet may still be a long way off, but autonomous killer drones are here. That is a bad situation my dudes.

Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.


Using LLMs for weapons is a grave misunderstanding of what LLMs are actually good for. These are things that should NEVER be in charge of life or death decisions.

My point is that Anthropic are bullshit as "safety" and "gatekeeper" personalities because they're warning us of exactly the wrong things.

They'll ink deals with all sorts of nefarious parties and be involved in all sorts of dubious things while trumpeting their fake non-profit status and wringing their hands about imminent AGI and "alignment" of the created AIs.

The concern I have is not the alignment of the AIs. They're not capable of having one, no matter what role playing window dressing they put on it.

It's the alignment of Anthropic and the people who use their tools that is a concern. So far it seems f*cked.


The fear mongering always struck me as mostly a bid for regulatory capture and a moat, because without that the moat is small and transient.

Everyone is actually underestimating stickiness. The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

My wife, for example, uses ChatGPT on a daily basis but has found no reason to try anything else. There are no network effects, for sure, but people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere. Understandably, it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't think of a more contextual way of plugging in a paid product.

I think OpenAI has a better chance of winning on the consumer side than everyone else. Of course, whether that holds up against hundreds of billions of dollars in capex remains to be seen.


So, in summary, OpenAI is basing its valuation of $285 billion on the moat of 'users won't be arsed to download a different app'???

Seems optimistic when there is very little intrinsic stickiness from learning the UI or network effects. Perhaps a little from chat history, but not $285 billion worth.

It also completely ignores the fact that most devices will start to come with the same features built directly into the device/app, and that the largest market will be as a commodity backend API, where end users won't know or care whether it's a Google or OpenAI model.

As I see it, they need to be doing stuff nobody else can ( in either price or performance ), otherwise it's hard to justify the valuation.


It worked for Google for years, and that was without even the barrier of downloading an app; just going to a different URL.

Don’t you think that’s because Google was objectively a head above every other search engine for a long time?

It’s not anymore (actually google is awful now) and people are still using it

Chrome has about 75% market share across all platforms, and probably 90% of those use the Google default.

As far as I'm aware OpenAI doesn't control any defaults for which AIChat service to use.


It took Google a decade before they released Chrome so OpenAI has plenty of time to have a Chrome moment. Maybe it'll be something that comes from the OpenClaw acqui-hires?

During that time - as was pointed out elsewhere - Google search was simply way better than the alternatives - embarrassingly so. It also paid the Mozilla foundation lots of money to be the default.

Google wasn't bleeding money like crazy at the time. Google was operating in a post-hype cycle. We are most likely somewhere in an epsilon around the peak of the AI hype and OpenAI is more comparable to AOL or Yahoo. One striking similarity is the inability to innovate themselves, instead relying on copying others or acquiring.

The OpenClaw guy is surely a decent product person, but OpenClaw did not innovate in any real sense. He was just pushing an existing idea to the limit without any concern for quality or security. It had its hype moment, it inspired a bunch of people, and might find its own niche, but it is a flavor of the week kind of thing. I've been getting a lot more cold-calls by non-technical people in the last few weeks thanks to it. Congratulations, the quality threshold that justifies my response rose in equal measure. Nothing was gained, just a lot of tokens spent.


Um. Google has already integrated Gemini into Chrome. I'm not sure what you mean by "OpenAI has plenty of time to have a Chrome moment". If you're just referring to the browser wars, the original wars were fought (furiously) between Microsoft, Mozilla, (and to a lesser extent Apple). Microsoft thought they had won, and then Chrome came out.

Copilot?

> It’s not anymore (actually google is awful now) and people are still using it

if people are still using it, then it's really one of two things, right?

* you are wrong and it's not awful

* it _is_ awful but good enough for normal people to never care about alternatives, which are anyway not even very easy to find given the absolute stranglehold google has on that slice

either way not quite the same as choice of llms today.


I've been feeling the pain of google being awful for a while now. Do you have a different search engine you would recommend?

I used Kagi for several months, I guess I'd at least recommend trying it out.

I stopped using it, though, and I can't honestly say I've missed it. It was nice not having sponsored results, I guess, but overall it didn't feel like a transformative experience.


I have been using duckduckgo for a long time.

Yeah, but the only alternative that's actually better is paid. Google is still the best ad-supported search engine out there. There's no obvious one to turn to or recommend.

The best free alternative to Google right now is, ironically, $preferred_llm_provider, and ChatGPT is the obvious uncapped free option. I think free users will end up being OpenAI's moat if they manage to make it profitable.


Google was clearly superior for a long time. They got close to 90% before enshittification started in earnest. We are not at that stage yet with AI chatbots.

Also, Google benefited from being the default on mainstream OSes. When people have to download an application, getting one or the other does not take more effort. Yes, OpenAI being tightly integrated within Windows, Android, and iOS would be a moat. That’s not the case and it is unlikely to happen. Google will go with their own and Apple won’t put itself in a situation where they are reliant on a single company, they got burned enough times.


Exactly - it was better for a long time.

Also which search engine was the default was a massive factor - that's why Google paid for that.

If Google hadn't controlled Chrome, and or paid for defaults - they could have pretty much lost all their traffic overnight - ( if they weren't better ).


All of googles products are unique in some way and have genuine moats. The search engine was the best. The ecosystem was there and pretty good. Docs had online collaboration. And on and on.

> the moat of 'users won't be arsed to download a different app'???

don't even need to download anything, just open your browser and go to google.com to use gemini

last week-end, I've seen a non-tech friend who previously used chatGPT on his phone, just go on google to ask stuff to the AI (they have no idea it's gemini and it doesn't matter)

if you are not looking for some kind of relationship with an AI (from what I understand, people use chatGPT for this use case), but just looking for an AI to search stuff, then in my opinion you can't beat google search plus a gemini summary, all at once, for free, with a single prompt


Directing your attention to Coca-Cola

You'd be surprised that most people don't find any pleasure in comparing and trying out different software. They're looking for something which works and ChatGPT is just an amazing product. People aren't going to look for something else unless it breaks for some reason.

Most people who have a vehicle aren't trying out different motor oils, or comparing every month if they should change model, etc.

> As I see it, they need to be doing stuff nobody else can ( in either price or performance ), otherwise it's hard to justify the valuation.

Do you have a car? What does it do that no other car does?




Easy for me to download a different app. Not easy for me to get everyone I communicate with to download a different app.

I don't see the laziness lock in working nearly as effectively for something outside of messaging.


Coca Cola would like to have a word with you.

These models respond differently and have their own "personality". Even in coding, there are people who swear by one model over the other. I know engineers who just stick with Claude and could not care to try Codex. For them, if it's not broken, why fix it?


> Even in coding, there are people who swear by one model over the other

I just swear at the models. =P But jokes aside, I liked Claude Code and found it a big productivity boost for a month or two. Then the honeymoon phase slowly ended and I realized how much of its code I was rewriting myself. I don't use assistants anymore except to summarize changes for commit messages or PRs (and then I rewrite those summaries).


Not sure how many developers are like me, but I am very open to Claude, very open to Gemini, open to open source models (including gpt-oss), but am very reluctant to use frontier OpenAI models. The Microsoft distrust runs extremely deep, the browser authentication dance demanded of users for ChatGPT was the most extreme of the major frontier models, and early OpenAI API service stability was absolutely terrible. Llama had my back back then.

This is in no way dismissing your concern, but I think it reinforces my point about branding. Whether or not Microsoft is handling AI in a responsible way, we don't trust them due to their poor practices on Windows.

Apple is a two sided market between developers and users. OpenAI has not succeeded in building this so far.

When unstructured human language is the bulk of your interface, it takes effort to contrive any vendor lock-in that doesn't approach zero.

The same doesn't go for traditional, structured software ecosystems, which can afford to coast for a lot longer.


Sorry - being dim - I don't get that.

Apple has offered products with little value over competitors for a long time now, but they still get to command a large premium on their products because "the vibes are right".

When engineers analyze things they look at the specs, stats, and metrics. When consumers analyze things they look at what others are doing, feel for vibes, roll into the convenience, and stick with the familiar.


> Apple has offered products with little value over competitors

I'm genuinely surprised by this comment.

For example, I thought there was universal sentiment that apple silicon / M-series computers are pretty unmatched.


> For example, I thought there was universal sentiment that apple silicon / M-series computers are pretty unmatched.

5 years ago, sure, but the x86 world has come a long way since Apple dumped Intel. I'd certainly take a 2026 Intel machine over something with an M1-M3.


The overwhelming volume of Apples sales comes from people who wouldn't notice if their device was running 2016 level hardware.

If software didn't keep getting worse this might be true but the average consumer notices if their computer is slow or dies too quickly.

It's sad how hardware improves leaps every year but software still does the same things but slower.

But competitors do the same

> The overwhelming volume of Apples sales comes from people who wouldn't notice if their device was running 2016 level hardware.

How could we possibly know this? This is just an argument from elitism, as though the plebes should be happy playing Farmville on their gateway computers, while us haughty developers sit in our ivory towers and herald in the end of the anthropocene using machines we can actually appreciate.


> How could we possibly know this?

They make a good point. Apple's most-popular device is a smartphone that doesn't handle workloads any heavier than Snapchat or Instagram. The value prop of the iPhone is not rooted in the performance or battery life (as Liquid Glass showed us) but just the branding.

Apple makes more money selling iPhone accessories than they make selling Macs. The desktop market share isn't going up, the Mac's lifeline is depreciation of old hardware to force Mac owners into the upgrade cycle: https://gs.statcounter.com/os-market-share/desktop/worldwide...


> They make a good point. Apple's most-popular device is a smartphone that doesn't handle workloads any heavier than Snapchat or Instagram. The value prop of the iPhone is not rooted in the performance or battery life (as Liquid Glass showed us) but just the branding.

It's not a good point, it's an assumption based on elitism, just like your assumption that nobody is doing anything other than Snapchat or Instagram on their phones, or that they're only buying an iPhone because of the branding and not also the performance and battery life. In your head, what do you think the average iPhone user looks like? Are they drooling simpletons?

> Apple makes more money selling iPhone accessories than they make selling Macs. You look at the desktop market share in 2026 and it's very apparent that the Mac's regular upgrade cycle is driving Apple's sales, not direct competition: https://gs.statcounter.com/os-market-share/desktop/worldwide...

What point are you trying to make here? People like the iPhone, the iPhone makes a shitload of money, so therefore people who have Macs don't appreciate the hardware? Or what?

Also, StatCounter is not an accurate website:

https://daringfireball.net/2026/01/ios_26_adoption_rate_is_n...

https://daringfireball.net/2026/02/apple_releases_ios_26_ado...


Almost nobody is doing anything other than Snapchat or Instagram on their iPhones. That's the point, "the overwhelming volume of Apple sales" was the original claim and they're absolutely right. Compare every single Apple product on volume and you will not approach the volume of iPhones being sold. Even cult-classic product lines like the Mac cannot hold a candle in comparison to Airpods sales volume.

If the iPhone was a branded Android device, then sure, maybe this would be an elitist argument. But the iPhone is a proprietary platform with a locked-down browser, locked-down store, locked-down GPU drivers and OTA updates that decide how long your battery lasts. It is not elitist to point out that Apple customers by-and-large ignore these facts, it's the objective circumstances of the smartphone market.


iPhones are some sci-fi magic computers. It's incredible how powerful they are.

Most smartphones are.


Be that as it may, I can guarantee you with complete confidence that 90% of iPhone owners are not engaged in heavy workloads.

The overwhelming majority of people just don't notice.


I think the point was supposed to be about default apps in an OS, similar to default search engines. What I am missing is that OpenAI is in no way that default. Every OS, browser, etc. could find a more profitable default than sending someone to OpenAI.

Apple is one of the very few companies committed to (hardware) quality. They make sure their entry-level models are very decent. You can't buy an Apple product that is complete shite.

Yes, the software side has been getting worse in recent years, but it is at least slightly better than the competition for average consumers.

Plus being a tech monopolist they can offer a whole ecosystem of software and hardware that works great with each other. So the value proposition is greater than the sum of its parts.

That is the problem with OpenAI, they have only one thing. Google can bleed money all day long and they don't need to care because they have other profitable business ventures.

The way to make money with LLMs is to either be technically superior which only works short term until the competition catches up or create a monopoly. The second option is dead in the water with the advent of the Chinese models. I guess they can lobby to have them banned and create a cartel with their other US based competitors. Otherwise they are screwed. That is why they are allowing military use of their model now. They need that sweet government money to survive. Also they keep talking about AGI so the government gets scared about the Chinese reaching it first and supports them. Complete scam.


It's a very different world when you switch from an iPhone to an Android phone, or vice versa. However, claude.ai and chatgpt.com are not very different at all. If one has ads and the other does not, it's easy to switch.

>> Apple has offered products with little value over competitors

My Pixel dropped connections unexpectedly. The battery would barely last till end of day.

Apple hardware is simply better value for the money


There's this thing called power of defaults.

If a setting is default, if an app is presented on the front they'll continue to use it as it is. The crowd here always overestimates how competent/interested the general public are in these things.

99.9% (source: my life) of users never even open the second level of the settings app. 99% don't even open the settings app. They don't know how much they can even change or care.

iPhones auto-surfacing AirPods to pair with was not for convenience, it was a necessity. People don't know how to pair over Bluetooth. Now Android does it as well.

There's a generation that grew up with appliances that accounted for their mistakes rather than failing. There's no need to learn or understand how something works.


Sure defaults are extremely powerful - but that's rather my point - where is the default that OpenAI controls?

Google, Apple, Samsung, Microsoft ( and various Chinese companies ) etc are largely are in control of defaults - via devices and browsers.

Perhaps in Github copilot ( via MS ) - but software developers are not typical consumers.

Perhaps Sam and Jony's new assistant thing will transform the market, but until it ships it's vapourware.


Yes this was not relevant to the main topic of openai. I'm just responding to the statement made by the parent comment.

You’re comparing a single app with an entire ecosystem and app marketplace. Poor comparison.

I think you're right about stickyness up to a point.

Cultural defaults seem unchangeable, but then suddenly everyone knows, and everyone knows that everyone knows, that OpenAI is passé.

OpenAI has a real chance to blow their lead, ending up in a hellish no-man's land by trying to please everyone: Not cool enough for normies, not safe enough for business, not radical enough for techies. Pick a lane or perish.

Not owning their own infrastructure, and being propped up by financial / valuation tricks are more red flags.

Being a first mover doesn't guarantee getting to the golden goose, remember MySpace.


> Being a first mover doesn't guarantee getting to the golden goose, remember MySpace.

MySpace, ICQ, Altavista, Dropbox, Yahoo, BlackBerry, Xerox Alto, Altair 8800, CP/M, WordStar, VisiCalc, the list is very long.


Hotmail is a good example too. I remember it being pretty ubiquitous, at least for the 'personal email' crowd, and it seemed implausible that people would give up on what was often their main email 'location' for another offering without being able to transfer their often important and personal stuff. then gmail came along.

The internet and the surrounding context changed so fast that it made little sense to cling to old email addresses made in the old context. Gmail represented the 'new internet' and old patterns became obsolete (less subversive, more mainstream/corporate). When there's a seismic shift in usage patterns that's when all bets are off regarding where everyone lands. Being the first mover means little here. If the way people interacted with AI underwent a massive shift, OpenAI would likely get left behind. The only safe bet is to invent your own killer.

Younger people might not realize or remember this, but when GMail came out it was HUGE. Like, I remember it was invite-only for a while and getting an invite was a really big deal. In retrospect that was some genius marketing by them (also just a way better product, at the time)

Also switching email was a lot easier back then. Nowadays if you're using gmail as an auth provider it's very hard to completely abandon an inbox without a lot of friction. Back then all your logins were separate anyway.


Beyond that too, I would think that many people view a Hotmail account as an indicator that you're backwards or not serious in business.

I distinctly remember the shift to and then away from Altavista as well.


Interesting point. I guess people liked the convenience of unlimited storage even more than they liked the convenience of keeping the same email address. In a way they traded one convenience for another.

Did Hotmail offer email redirection at that time? I can't remember whether that sort of feature, which would have made it easy to switch away, was offered.

I don't remember that detail, but I do remember most people not treating their inbox as an archive at the time. So there was less friction to switch to gmail, and more reason to do so due to the "real time" ticking storage amount of gmail, which then became an archive (again for most people).

> I do remember most people not treating their inbox as an archive at the time.

Indeed. For me, the step was gmail. With its humongous 1GB of storage, that was the moment when I stopped having to delete stuff to save space. It’s funny because a lot of people I know who were already older at that point kept the habit of deleting emails, even today.


VisiCalc, CP/M, BlackBerry and Yahoo definitely got a golden goose; it's long after establishing their dominance that they failed at maintaining it.

Isn’t that exactly what’s being discussed re: OpenAI? They seemed unstoppable a few years ago, but have lost quite a bit of reputation and their position of technical lead.

What I mean is that the ones I cited were first movers that actually found a golden goose, then got ousted years/decades later for various reasons.

For now at least, OpenAI has not found a golden goose (i.e. made a lot of money) yet.


> have lost quite a bit of reputation

in the tech world, maybe. All my 'normie' friends are using ChatGPT though and have no concept of their reputation, nor intention of switching. Most people I know are hardly even aware of alternatives, even of Gemini, though everyone has a Google account.

I personally also use ChatGPT and have zero reason not to, currently. I might switch if they royally mess up, but everything they've messed up has been fixed within a day.


My normie friends aren't paying several hundred dollars a month on their services, though.

But would they pay for it? That's the difference.

More people pay for ChatGPT than any other Consumer AI service by far, and when ads rollout, it won't matter that much.

“consumer” is doing a lot of heavy lifting here, I’d be curious to know how it compares to overall AI usage (ie including professionals using it).

IBM owned literally the whole market on computers at a time when computing equipment was prohibitively expensive and centralised.

IBM was a special case, I'm not sure there were many markets so thoroughly cornered like IT was for about 3 decades. I guess telephone (AT&T) was similar.

> the list is very long

Tesla is lurking as well


Pick a lane or perish.

Literally every industry has examples of businesses that don't excel at anything and still do well enough to carry on. In fact, in most industries, it's actually hard to see any business that's clearly leading on any specific front because as soon as it becomes an obvious factor in gaining market share the competing businesses focus on that area as well.


Yeah. Vauxhall/Opel has always been my go-to example here. Their cars excel at nothing. They’re not especially stylish. Not the fastest or nicest to drive. Not unusually efficient. Not particularly reliable or guaranteed for a long period. By no means the cheapest. They don’t even achieve a sweet spot of averageness across all these things. Yet people have somehow carried on buying them over decades.

Jeremy Clarkson called the Astra "the most boring car ever made". I loved both of mine - they always got me and my stuff where I needed to be, and were easy to fix.

The last one, a 2007 model that has now moved on to my younger sibling, might be the last "simple" car.


First mover advantage: marketing logic or marketing legend: https://gtellis.net/wp-content/uploads/2020/09/pioneering-ad...

I guess it depends on what you mean by golden goose. MySpace sold for an insane amount of money at the time and it was basically one guy, “Tom”.

> Everyone is actually underestimating stickiness.

I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion. A couple more of these, and OpenAI will find itself relegated to the kids' table with Grok and Perplexity. https://www.technologyreview.com/2025/08/15/1121900/gpt4o-gr...


I still use perplexity. Which tool is better currently?

I’m also unclear on what’s better than perplexity if you want accurate information (and not just to write Harry Potter fan fiction or whatever)

I finally switched off ChatGPT premium when I asked a simple question (“which terminal is this airline”) and it was so confidently wrong. Perplexity referencing sources and trying to double check accuracy is great IMO.


"Near billion users", yet less than 5% pay them a single penny[1]. Like you said, the vast majority of these will never pay anything, but I'd argue the majority will migrate to the "next" free provider as soon as OpenAI starts inserting too many ads into the product.

I watched my partner switch from OAI to DeepSeek during the last outage and she hasn't been back to OAI since. I am skeptical there is any actual stickiness when basically all of the chatbots do the same thing for the casual user.

[1] https://www.theregister.com/2025/10/15/openais_chatgpt_popul...


Google Search has no stickiness and they managed to build a behemoth.

ChatGPT is a great product, but the lack of stickiness comes into play because there are many viable alternatives.

They’re all going to have to monetise the consumer segment at some stage, and I think that’s likely to be via ads on a freemium tier in most instances.


Google Search used to be awesome, heads, shoulders, belt buckle and knees above everyone else.

Seriously, I still remember the moment I first used Google. I was using Altavista / OpenText and Yahoo now and then. I thought Altavista was the best and OpenText was for geeking out. Once I tried Google I never looked back for decades. Their tech was their moat.


Google Search was head & shoulders better than the alternatives back when Google was developing into the behemoth it is today.

Google search still has a ton of stickiness for the casual user.


You say 5% of users pay like it’s a shockingly bad number, but that’s almost exactly the same as YouTube’s paid subscribers (125m) vs MAUs (2.5b).

Like it or not, OpenAI is building a real business. It’s obviously capital intensive, but we will see how it goes.

And no, the vast majority will not migrate. Just like the vast majority didn’t migrate away from Google after they launched ads.

I don’t get the HN urge to be the contrarian saying “that’ll never work.”


OpenAI is sitting on top of a $100+ billion ad revenue business just waiting to happen. Those 95% of users not paying anything are about to start paying something.

They can't afford to wait.

Every day OpenAI is closer and closer to collapse; they urgently need an IPO to pass the hot potato to someone else.

They have 35B USD in the bank.

They did 13B USD in revenue in 2025, and in 2026 they plan to spend 55B USD.

They are already dead if they don't find new people to lend them money.

One solution is to sell the company to fools (the general public / IPO), so founders and investors can get away with it and buy a bit of runway for the company.


Is she paying for it? That is the only question that matters in the end.

For myself, I use LLMs daily and I would even say a lot on some days and I _did_ pay the 20€/mo subscription for ChatGPT, but with the latest model I cannot justify that anymore.

4o was amazingly good even if it had some parasocial issues with some people; it actually did what I expect an LLM to do. Now the quality of 5.whatever has gone drastically down. It no longer searches the web for things it doesn't know, but instead guesses.

Even worse is the tone it uses; "Let's look at this calmly" and other repeated sentences are just off-putting and make the conversation feel like the LLM constantly thinks I am about to kill myself, and that is not what I want from my LLM.


>Is she paying for it? That is the only question that matters in the end.

Don't underestimate advertising. No one pays for Facebook or Google search. Yet the ad business with a couple billion users seems profitable enough to fund frontier LLM research and inference infrastructure as a side-gig in these companies. Google only rushed out AI Overviews because they saw ChatGPT eating their market share in information retrieval, and Zuck is literally panicking about the fact that users share more personal details with OpenAI than on his doomscrolling attention sinks.


> Don't underestimate advertising.

OpenAI is talking out of their ass with their advertising plans. Meta and Google are an advertising duopoly, extremely anti-competitive, and basically defrauding their own customers. OpenAI can't just replicate that.

Worse still is that OpenAI has no competitive edge. All the hype around their advertising plans is based on the idea that they can blend the ads right into the response, a turbocharged version of Native Advertising.

This is explicitly illegal. Very explicitly.

The US' FTC may have been declawed by the current US government, but the rest of the West will nuke them from orbit over it. Doubtless OpenAI will try some stunt like marking the entire LLM response as "this is an ad", but that won't satisfy the regulators.

This only gets worse with further problems. An LLM hallucinating product features is going to invoke regulator wrath as well, and an LLM deciding to cut off the adcopy early will invoke the wrath of the advertiser.

> Yet the ad business with a couple billion users seems profitable enough to fund frontier LLM research and inference infrastructure as a side-gig in these companies

Also important: Not anymore. The tech giants are now issuing quite a lot of debt to pay for the AI plans.


> This is explicitly illegal.

Is it really any different than product placement in TV shows/movies?


Maybe I am underestimating how suggestible average people are, but as someone who has never in their life clicked on an ad, I just can't see ads being anything but a deterrent for using the service.

>Maybe I am underestimating

You sure are. And it sounds like you are underestimating the effect on yourself as well. In fact this perception is so common that there is even a name for it in psychology: the third-person effect. Many people believe that advertising does not affect them. But ironically, the more you believe so, the more likely you are to fall victim to particular types of advertising. And in general your response to ads will be very similar to everyone else's. These "annoying" ads that you "would never click on" are just badly personalized or badly placed ads. That's the only type that gets stuck in your mind when you think of ads, based on your personal biases. But the major tech companies have spent the last one-and-a-half decades perfecting the psychology of advertising. You might think you are immune, but you are certainly not. Every buying decision you have made in the last 10 years was almost certainly influenced to some degree. Just not always consciously. And I'm willing to bet that a lot of buying decisions were already heavily influenced by ChatGPT, even before their shopping feature. OpenAI just didn't profit on them as much as they could.


Influenced to some degree, sure; weather influences me to some degree too, but I truly feel like ads aren't effective on me. Unless we broaden the definition of ads to something like sponsored content. I have bought some TTRPG rule sets after seeing them being played in a sponsored video, but I still have never clicked an ad on a page and bought something.

And I actually have tried to use ChatGPT to buy something. I have asked it to search for specific items from EU stores so I wouldn't need to pay import taxes, but it usually fails. It either suggests global stores which ship from the US or China, or it suggests different products than what I asked for.

If ChatGPT or whatever LLM I was using could actually link me the products I wanted without me searching for them they should get a commission for sure, but we sure aren't there yet.


Sponsored content is definitely a form of advertisement

> but I still have never clicked an ad on a page and bought something.

But millions, and millions, and millions of people do. Certainly enough that I provide consulting services for a number of businesses for whom 95+% of revenue comes from ad-clicks. It's been that way 10-15 years and there have been ups and downs, but at the end of the day, the adspend has always been fruitful.

Whilst I sat around with fellow technical people, all of us patting ourselves on the back and telling ourselves and anyone who would listen that "ads don't work", the people I consult for have become multi-millionaires with little more than double-digit hosting costs and a few ad accounts.

This seems to be a continual blind spot for a lot of technical people, who really seem to struggle to grasp that not everybody thinks or acts the same way they do.


I agree with you, I can't stand ads

However, I believe an ad still influences you subconsciously as long as it is in your sight line.

I wouldn't be surprised if there is a lot of investigation into subtly slipping advertising in the LLM responses the way Korean dramas have product placement right in the storyline (Subway, bbq chicken, beverages, makeup, etc).


Subtle things like the guy in CSI Miami talking about how good Subway is for 5 minutes?

Of course stuff in the world influences me; I am still a human. Still, I have never clicked an ad and bought something, and I simply don't get who would. Same as with the supermarket placing candy and stuff next to the cashier to get people to buy more: I have never been swayed by those, because when I go to the store I am always on a mission and know beforehand what I am buying.

It would be cool to see all the times I have been influenced into buying something because of subconscious advertisement, but that's kind of impossible, so all I can do is deny it, and of course all marketing people will say that I am wrong.

And we can argue forever what counts as an advertisement. For example I recently bought a new mouse pad, I wasn't particularly looking for a specific one, just something fun and bright and as I was browsing a web store they had a cool design for half off and I bought it. Maybe that was targeted advertisement, but I had already made the decision to buy a new mousepad and had been browsing on and off for few weeks, so was it really? I would argue not.


You seem to have defined ads as "obvious calls to action that end up in me buying it for sure". That's a pretty narrow view of marketing, but it does feel like you are aware that there may be other forms as you provide examples across the thread. It comes off as some form of elitism, where you deem the simplest ads as ineffective on yourself (but work on "average people") - but then go on to mention things like discounts and sponsorships, which to most are obvious marketing ploys too. No judgement, but maybe reflect on this?

Is a discount really an ad? Like if I had already made the decision to buy a thing and now I paid less for it, was it really a working ad?

Also sponsored content is way different than having ads on a website or in an app or what kind of ads do you think GPT will have?

And you are definitely judging me. When people say “ads” that is pretty specific thing that they mean. If you broaden it to mean everything then I can’t argue as there is no point.

There are two options: either ads (as in those things everyone blocks with uBlock Origin) do not work on people, OR they do work on most people but not on me; if anything, they are a deterrent from buying that product.


> Is discount really an ad?

In most cases, yes. At minimum, it’s a marketing tactic built with the same intent as an ad: to influence your decision-making.

> Also sponsored content is way different than having ads on a website or in an app

However they are all exactly the same, in that they are all ads.

> When people say “ads” that is pretty specific thing that they mean.

No, that’s what you mean. Most people aren’t limiting it to a specific kind of ad, they mean anything designed to influence their behavior, shape their decisions, or sell them something.

> And we can argue forever what counts as an advertisement.

Or we can just work off the available definitions of modern advertising.

"An ad is any paid or strategically placed message designed to influence attention, perception, or purchasing behavior, regardless of format or channel."

> There is two options

There are in fact not. There are two you seem capable of recognising, but there are in fact others.

> OR they do work on most people but not on me

That’s an oversimplification. Ads can work in aggregate without working every time, in every format, or in the specific way you imagine.

Blocking one specific type of ad doesn’t make you immune to ads, it just means you’re filtering one, very narrow channel.

Influence happens through a huge variety of other means, including those that you seem to think specifically don't count: sponsorships, discounts, product placement, social proof, algorithmic recommendations, brand exposure and many, MANY more.

You don’t have to consciously click an ad for advertising to shape your buying behavior.


Here’s the FTC’s definition of an ad:

> Any message designed to promote or sell a product, service, or brand, where there is a material connection between the speaker and the advertiser.

Yes, a discount is an ad - sometimes by the brand/manufacturer to get you to buy their product instead of a competitor, or by the seller to sell that product over others (for even mundane reasons like stock clearing).

Yes, sponsored content is an ad. The content creator is reimbursed for their output that is used to convince viewers to perform some purchase activity, usually over alternatives.

You’re really severely restricting the definition yourself by claiming an ad is “things that uBlock Origin blocks”. They can’t block physical banners and billboards or TV commercial breaks - does that now make them not ads? Whether you intended to buy something again doesn’t disqualify something from being an ad. In fact, that’s often when an ad is most effective - to buy the one they show you, instead of one you haven’t heard of or considered.


Ads aren't just for click through, they are for suggestions, and mind share as well.

You can't click on the Budweiser logo when watching a Super Bowl ad. But if you sit in your ChatGPT window all day, then it's probably worth it for advertisers to build familiarity with the brands they advertise.


Really depends what the ads are. If they are popups or other intrusive ads, the product will just die. If they are subtle hints in the text, how are you going to track it? I don't know, I just don't believe in ads, but then again I am a dirty commie, so who am I to tell you not to.

That’s not the point. The point is that brands build awareness through ads that don’t require clicking, and this has affected you whether you want to admit it or not.

And my point is that I don’t care. I don’t watch ads, I don’t buy things because of ads.

Your messages are very consistent, it all adds up and makes perfect sense.

I don't care either.

Online I get lots of ads blocked, but not all, I really don't put much effort into it beyond default.

So what if I am "influenced", if it doesn't affect any significant part of my behavior?

One thing I never do is respond with money.

I'm just not a "consumer" so that goes back before the internet.

Sure I see ads thrown at me which keep me aware of those brands but the only buys I make would happen without any ads.

On the rare occasion that I want to make a significant purchase, then I will seek out the ad. Oh the horror !

But I want to see how honest I think it is compared to a number of reviews. It's really pretty neutral since it's just as much me using the ad as the ad using me, plus equally good for knowing what looks good to buy as knowing what brand not to buy.

Then there's the interesting way when an overall economic downturn gets rougher you see ads for things that almost never need advertising for years in a row, or never have before :\

OTOH you also see some of the most trivial stuff that must be flying off the shelf and all you can do is shake your head ;)


Imagine subliminal messages being sent in the llm responses carefully created for max impact on you. I’m sure many companies will pay to recommend their product on ChatGPT.

Okay, but if we are going scifi why not just beam ads in to dreams like in Futurama?

The word "subliminal" and its connotations aside, it's not like subtly influencing people without them noticing is anything new.

> Maybe I am underestimating

Advertising is one of the biggest markets on the planet. Meta is nearly a $2T company and is making record profits.


Not necessarily, if OpenAI managed to monetize free users. Could be through advertising, or integrations with marketplaces on commission (e.g. order your next HelloFresh through ChatGPT? Get recommended a hotel?)

They could succeed where Alexa failed. A free user can even bring in more than a paid user if you look at some platforms like Spotify, where apparently there is a large chunk of free users generating more income through ads than if they were to pay.


We are so far away from ordering stuff from an LLM.

Not really!

I was researching CAVA (due to the crazy earnings announcement yesterday) and it was displaying some nice links to the website, all suffixed with ?utm=chatgpt

So, it has begun!


Most potential customers wouldn't ever think in terms of "justifying" a €20 purchase when the product is great.

ChatGPT (and competitors) is an incredibly high value tool, and €20 per month is nothing for somebody who wants or needs it. It's just a matter of if they use it enough to start hitting the daily limits.


>no longer searches web for things it doesn't know, but instead guesses.

This could very well have been a cost-reduction effort to try and simulate what it was doing before.

Somebody must think training has already looked at the web enough, or there may be too much slop now that there was no contingency for.

Then you've got tighter guardrails to make it more palatable for a wider audience.

I guess different people would draw the line differently, but when it goes from being worth money to not worth it any more that could be an enshittification effect.

Especially if things like that accelerate.


I hear the claim that people already have their conversation on ChatGPT and can't move them. I'm curious, what are these discussions like? I've never continued an old discussion, I just start a new one every time I have a question. If the discussion is long, I often start a new chat to get a blank slate. My experience is that the chat history just causes confusion.

So I'm curious to understand: What are the discussions like that people go back to and would lose if they moved to another platform?


In my experience non-technical folks quite dig the memory feature. For me that's kinda context poisoning as a service, but I know people that get value out of it (or at least strongly feel they do). Not sure how one would migrate that.

OpenAI will send you a download link of all your data in a zip file. You can feed it to Gemini or Claude or whatever.

It's one of those super easy things that 90% of the users just never do - like changing their default search engine, export their social graph, install ad blockers, etc.
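For the curious, the export is typically just a zip containing your threads as JSON. A minimal sketch of flattening such a file into plain text for pasting into another chatbot; the schema here (a list of threads, each with a title and messages) is an assumption for illustration and may differ from the real export format:

```python
import json

# Assumed thread schema: [{"title": ..., "messages": [{"role": ..., "text": ...}]}].
# The real export's layout may differ by version; treat this as a sketch, not a
# parser for the actual format.
def flatten_conversations(raw_json: str) -> str:
    """Turn an exported conversations JSON string into readable plain text."""
    conversations = json.loads(raw_json)
    lines = []
    for convo in conversations:
        lines.append(f"# {convo.get('title', 'Untitled')}")
        for msg in convo.get("messages", []):
            lines.append(f"{msg['role']}: {msg['text']}")
        lines.append("")  # blank line between threads
    return "\n".join(lines)

# Tiny synthetic export to show the output shape.
sample = json.dumps([
    {"title": "Trip planning",
     "messages": [{"role": "user", "text": "Best month for Kyoto?"},
                  {"role": "assistant", "text": "Probably April or November."}]}
])
print(flatten_conversations(sample))
```

The resulting text file can then be attached or pasted into Gemini, Claude, or whatever you migrate to.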

> Not sure how one would migrate that.

Ctrl-C Ctrl-V?


I'm curious from the other direction, what are the conversations like if you feel they are easy to move?

Do you have the memory feature disabled? I have the feeling this in particular is doing absolutely loads behind the scenes, e.g. summarising all conversations and adding additional hidden context to every request.

I can start a new chat in the UI right now, ask it what my job is, what my current project is, how many kids I have, what car I drive etc. It'll know the answer already.

I think it's this conversation history - or maybe better yet if we think of it as this "relationship" - that people are saying is going to make it hard to move.


I ask for code snippets, occasional recipes, translations... I don't have memory enabled. I start a new chat for each question. At times I ask things in different languages, if the question is tied to culture or location. If I notice I asked the wrong question, I start a new session instead of continuing the old one, so it doesn't try to merge the questions somehow.

I don't see any benefit in it knowing anything about me. Instead I'm usually quite vague to avoid biased answers.


Regardless of whether there is value in chat history or not, for some people it is important.

Back in the day during the music streaming wars there were tons of "move your playlists from A to B" services. Streaming services could not hold on to customers because all their playlists were on there.

I'm sure that similar services will pop up for chatbots.

Also, you can always just ask your chatbot to generate a file with your chat history, given that it's all part of the context anyway.


yeah the 'sessions' approach is probably going to be deprecated. one continuous chat is where it's at, perhaps with some bookmarks on the side for easy access

or perhaps a thread-based chat like reddit or HN, where you can branch off an older conversation with yourself


That would suck. I hate context being carried between conversations. I’ve had memory turned off since the start.

I don't think chat history is enough for real stickiness.

But the trillion dollar question is, what is? Now that I think about it, I'd bet heavily on Google. They've got your email, your photos, your location history, yada yada. Once they're able to pull all that into AI and make a reasonably cohesive product out of it, it seems like that's what people would use by default. Plus they've got a browser, search page, and phone OS that all can lead you to their AI.

They could train custom LoRA layers to mimic your tone, encode special tokens that indicate your name and data and various facts about you and your contacts, to make output more accurate, consistent, and personalized. Lots of possibilities for increased stickiness.

Even enterprise-wise, Gemini is pretty good at coding, and if your company has all its docs on Google Docs, that could become a pretty seamless integration. They can even build their agents to prefer GCP, or maybe make that the free tier but have other providers' support be more expensive.

At some point, a reasonable business model might be "we replace your engineering team with AI plus a few Google engineers on retainer for when things get wonky," which could scale to pretty large. (Granted this sounds more like a msft power move.)

They already have all the infrastructure, all they need is a reasonable competitor to github. They really screwed up losing out to msft on that one!


> Everyone is actually underestimating stickiness. The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

I’ve got a small-ish sample of friends who are regular people and use various AI chatbots because mobile phone providers now commonly bundle an AI subscription with their services. People seem to switch between Perplexity, Claude, and ChatGPT without any trouble. It does not look sticky at all to me and the half-a-percent difference in benchmarks we love to obsess about does not translate at all in increased user satisfaction.


My wife, for example, uses [Netscape Navigator] on a daily basis, but has found no reason to try anything else. There are no network effects for sure, but people have hundreds and thousands on [bookmarks] on these apps that can't be easily moved elsewhere.

See how stupid it sounds?


Given how long people have stuck with Internet Explorer, I don't think this is a good example.

Internet Explorer doesn't exist anymore!

> but people have hundreds and thousands on conversation on these apps that can't be easily moved elsewhere.

Except these aren't conversations in the traditional sense. Yes, there's the history of prompts and responses exchanged. But the threads don't build on each other - there's no cross-conversational memory, such as you'd have in a human relationship. Even within a conversation it's mostly stateless, sending the full context history each time as input.

So there's no real data or network effect moat - the moat is all in model quality (which is an extremely competitive race) and harness quality (same). I just don't think there's any real switching cost here.


This is not the case.

I use OpenAI a lot on the paid plan via the UI. It now knows absolutely loads about me and seems to have a massive amount of cross conversational memory. It's really getting very close to what you'd expect from a human conversation in this regard.

Sure the model itself is still stateless, and if you use the API then what you say is true.

But they are doing so much unseen summarisation and longer context building behind the scenes in the webapp, what you see in the current conversation history is just a fraction of what is getting sent to the model.


> It now knows absolutely loads about me

Baffled that someone tech literate would be boasting about this in the year 2026. I mean, you do you, we all have different priorities and threat vectors, but this is the furthest from what I would personally want.


It's not boasting, I'm not sure why what I wrote would come across that way. I'm describing how I use a product and the functionality it presents to me.

But yes, it's an emerging area and I am questioning if I am sharing too much with it. I 100% would not want my chat histories exposed.

Saying that though, Facebook can read my highly personal messages, Google every email, my phone is tracking my every move, I have to sign up for random janky websites for my kids' school where their medical info is stored, etc.

LLM chat history presents a new risk and a different set of data, but it's a crowded minefield already.


This is the same as when Google got big (and Facebook, etc...). We have some privacy focused competitors (Kagi, etc...) but most people are quite happy to just give Google (and worse, Facebook) everything.

AI is just a new technology but this has been ramping up for decades now.


I see people who have conversations spanning months. They don't start new threads and instead go back to existing threads to continue the topic. They also reference the prior threads discussion many times.

This would feel like a switching cost for people who use the system that way.


They need to do some sort of shared chat. Like being able to start a thread and then invite another ChatGPT user to join the conversation. That would add some network effects and switching cost.

Maybe they already have this? I'm not a paid user.


ChatGPT and Gemini have cross-conversation personalization. I believe the former is off by default and the latter is on.

Is there more detailed information on how this works? I used to assume that it can be beneficial to switch to a new chat to avoid having too much irrelevant context in the interaction. How does this personalization happen, and how does it decide which parts are relevant from one conversation to another?

It doesn't seem like there's a way to inspect or alter what kind of information Gemini had saved as "important information" about me (apart from deleting chats entirely, apparently).


There’s a toggle in every new Gemini chat to turn off personalization for that chat. I assume you need to make sure it’s on globally first?

On the web app, I see the "temporary chat" option but no toggle. It tells me temporary chats aren't used for model training. I thought I remembered that chats of Pro customers aren't used for that in any case. Hard to keep track of all this stuff.

Ultimately, I think the crossover memory is useful, but I'd really like to know exactly what's in there and an ability to validate/adjust, not just on/off.


Model training is completely different than keeping a summary of chats on the side and injecting it as context.

In my Gemini app, when I click new chat and click the filters button I have “Personalize Intelligence: Personalize chat when helpful.”

It is on every time I click new chat. Maybe you need to enable it in settings first. I can disable it to have a clean chat without personal context, but preserve the chat history unlike temporary chat.


I understand they are separate processes (compacting memories vs training new models), it just surprised me to read that my chats are used for training.

This is how it's presented: "Temporary chats don't appear in Recent Chats or Gemini Apps Activity and aren't used to train models or personalize your experience."

I'm guessing you're maybe on iOS? I don't see these UI elements, not in the App on my phone nor in direct web access.


I have them in the web as well.

Anecdata point: I canceled my ChatGPT pro subscription last year over some shitty thing Altman did at OpenAI and easily moved over to Claude. The only thing I took with me was the system prompt or whatever it's called, I couldn't care less about my conversation history. I'm planning to do the same thing with my Claude subscription if Anthropic kowtows to the Pentagon. These services are not sticky at all IMO.

Anthropic already decided to do business with the "killing people" department of the government. I think the battle was lost there, rather than whether or not they cross a line in the sand they drew to act as if they're the ethical AI company despite making products that are used to kill people. I'm sure the result of this battle will be some compromise that allows the Pentagon to get whatever they want while offering a fig leaf to Anthropic to continue their ethicality show.

Yes, I just caught up on all the Anthropic x Pentagon news this morning. I've canceled my subscription and let them know why in the feedback. It's too bad because I liked the Claude models, but I can easily swap the Claude app out with DuckDuckGo and use one of the open models my DDG subscription supports.

Anthropic donated $20 million to Public First Action[1], a PAC that promotes Republican Senator Marsha Blackburn and her sponsored Kids Online Safety Act (KOSA)[2], a bill that will force everyone to scan their faces and IDs to use the internet under the guise of saving the children.

The legislative angle taken by companies like Anthropic is that they will provide the censorship gatekeeping infrastructure to scan all user-generated content that gets posted online for "appropriateness", guaranteeing AI providers a constant firehose of novel content they can train on and get paid for the free training. AI companies will also get paid to train on videos of everyone's faces and IDs.

As for why Blackburn supports KOSA[3]:

> Asked what conservatives’ top priorities should be right now, Senator Blackburn answered, “protecting minor children from the transgender [sic] in this culture and that influence.” She then talked about how KOSA could address this problem, and named social media platforms as places “where children are being indoctrinated.”

If Anthropic, the PACs it supports and Blackburn get their way with KOSA, the end result will be that anything posted on the internet will be able to be traced back to you. Web platforms will finally be able to sell their userbases as identifiable and monetizable humans to their partners/advertisers/governments/facial recognition systems/etc. AI companies will legally enshrine themselves as the official gatekeepers and censors of the internet, and they will be paid to train on the totality of novel human creativity in real-time.

That will be their moat.

[1] https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-t...

[2] https://publicfirstaction.us/news/public-first-action-and-de...

[3] https://www.them.us/story/kosa-senator-blackburn-censor-tran...


Where are you thinking of moving to?

I have been positively surprised with Mistral which I have been trying out.

I'd probably swap to one of the open models available through my DuckDuckGo subscription. I don't keep up with the AI hype so I don't know what options exist out there beyond ChatGPT, Claude and Gemini right now.

It would literally take you 5 mins to set up your wife with a competing client for her needs.

Sure it's 'sticky' at least a little, but it's not a moat. A moat is a show stopper like they own you.


Just like it would take 10 seconds to buy her a Pepsi instead of her preferred Coke.

Would you?


Stickiness absolutely helps. But it won't get you anywhere close to a MAG7 operating margin. I think we are already seeing the start of price wars. I cancelled my ChatGPT subscription once I realized Gemini Pro was included in my Google Workspace and never looked back for a second.

If you could move the taste as easily as using OpenAI's Export Data tool? Sure, why not?

Coke doesn't change their recipe every year.

If it tastes as good and is cheaper, sure.

No, because that’s a product you buy for flavor and Pepsi is a different flavor.

UI style and response tone aren’t flavors?

Now you're arguing OpenAI has a distinctly better product, not just that they are hoping for high switching costs.

No, not at all.

I am arguing that “distinctly better” isn’t the most important thing in consumer products. Habits, familiarity, and individual taste are far, far more important.

People just build affinity to products. The vast majority of people buy the same brand toothpaste they grew up with. “Better” isn’t even a consideration.


10 seconds to buy my wife a Pepsi? Why that estimate seems quite absurd.

First I would have to walk 10 miles into town. Then I would have to locate a purveyor of goods that carried Pepsi-Cola products...

Then I reckon we would spend forty minutes dickering over price.

And finally trudging back home with my Pepsi product in tow.

Why, I'd be lucky to accomplish this herculean task in the very same evening.


Idk, habit and the devil you know are powerful as hell. Google has enshittified search nearly beyond imagination, but it's still where the vastly overwhelming majority of people search.

What free search engine today performs significantly better? No seriously Google sucks and I want an alternative. Do I need to pay for Kagi to get decent search?

> The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

People used to suggest this about MySpace.


MySpace never had close to a billion users.

300 million users in 2007 is mighty impressive; the internet was not absolutely ubiquitous like now, and mobile access to it was in its infancy. Relatively speaking, it is as impressive as 1 billion users in 2026.

But everyone had at least one friend, Tom.

In theory you can export your data from ChatGPT under Settings > Data Controls. In practice, I tried this recently and the download link was broken. Convenient bug I must say.

Make sure you're logged in to chatgpt.com in the same browser you're using to access that link.

How would you navigate to it if you were not?

Yahoo, altavista, askjeeves, Google

Friendster, MySpace, Facebook

Netscape, ie, chrome

Icq, aim, MSN messenger, a million other chat apps

First mover advantage doesn't last long

Very high chance that the winner in five years is a company that does not yet exist


ChatGPT has a good name. It's weird and awkward but it still rolls off the tongue. And I am saying that as a non-native English speaker, because the name has been migrated to other languages with the English pronunciation.

In comparison, Claude's name is very bad; it just doesn't sound right, and people might mishear me when I say it. I never say "Claude" when talking to others, especially non-technical people, and instead say "ChatGPT" even though I am using Claude exclusively.

Google has another problem - they advertise their models as separate products. There is Gemini and there is Nano Banana, also Nano Banana Pro. But they are all somehow under the same product which is still called Gemini. I understand the distinction but I am sure many non-technical people find it confusing.


Claude may seem incongruous compared to the others; however, it's the only human-sounding name, compared to the robotic "ChatGPT" or others that sound like generic or bland company names (Gemini, Perplexity).

They intentionally chose a more bland sounding name, as, I assume, they wanted to emphasise the "safe" nature compared to their competitors.

As more information comes out about OpenAI, people may choose to move for other reasons, such as:

- Openai adding ads

- Openai's president donating millions to a MAGA PAC

- Openai getting closer to the US military whilst anthropic standing their ground and rejecting them.

- Openai's recent products not being at the top of the benchmarks

The choice is yours.


> They intentionally chose a more bland sounding name, as, I assume, they wanted to emphasise the "safe" nature compared to their competitors.

A lack of creativity seems more likely to me. It’s a GPT in a chat window.

> Openai getting closer to the US military whilst anthropic standing their ground and rejecting them.

Except they didn’t. They folded faster than a house of cards during an earthquake. It boggles the mind anyone thought they wouldn’t. Ultimately they only care about money and winning.


> Openai getting closer to the US military whilst anthropic standing their ground and rejecting them.

https://news.ycombinator.com/item?id=47145963

https://news.ycombinator.com/item?id=47145551


OpenAI has demonstrated a severe lack of ethics, you're right, it's just hard to know how educated the average consumer is about that. The anthropic-military thing is a big deal but I suspect few outside of the tech world really understand the implications of what's going on.

Anecdote: My aunt was talking about how she had a conversation with ChatGPT about how bad OpenAI was and the AI said "we need regulations", and that seemed to satisfy her somehow.


In Japan many people call it "Chappie" (チャッピー), which I think is much easier to say and less awkward, haha. I see a lot of people using it here daily.

I feel like OpenAI should lean into that.



They initially wanted to call it just "Gemini 2.5 Flash Image (preview)" but the Internet stuck with the anonymous codename Nano-banana from LMArena because it's interesting and quirky. Google didn't officially adopt it until several days after the public release, exactly because of what you say. Eventually, not using it in their comms got more confusing because regular people were asking how they can find this Nano banana thing everyone is hyped about.

I have heard "cloud code" many times from colleagues who do not really know what either cloud OR Claude Code is more than "stuff we should use".

Whisper voice-to-text very rarely manages to render it as "Claude". Here are some examples, and I am not trying to make it look bad:

Lord code, close code, Clawed code, load code, Claude Chatbot, Claude Code, cloud code.

I wish it had a better name. We know it was named after Claude Shannon: a very nerdy choice rather than a marketer's.


> ChatGPT has a good name

I don't know, but around here common people all say "Chatty" nowadays, and most people, when writing the correct name, quite often fail to spell "gpt" right in chat.


Absolutely no enduser knows what 'GPT' stands for and if you tell them it's Generative Pre-trained Transformer they're even more confused than before.

There's better brand names out there.


> Absolutely no enduser knows what 'GPT' stands for

But there is no need to know what it stands for.


Claude is a terrible name but Gemini is pretty good.

Names-wise, I think 'Grok' is pretty good, there's just lots of other baggage that comes along with it.

I still hate how Microsoft ruined the value in the name 'Cortana'. If they had a modern LLM named Cortana with the right voice, I'd be very tempted to use it just because. What other LLM has a face associated with it?


Grok is way too nerdy and obscure.

I never considered that. When I change LLM models its usually due to two reasons.

1. The current AI model is producing answers that do not meet my needs, so I try multiple others at the same time, and I stick with the one that produces the best answer until I have this problem again.

2. there is a new model released and advertises a new capability that I want to try out.

I can imagine that for many people the answer that ChatGPT generates is adequate enough that they never need to try another model, even if better answers exist from another model. For people with less complex needs this is a very real stickiness. Why make the effort to try something new if the answer is adequate?

In this case, OpenAI would only f*k up if they change the pricing significantly, add intrusive ads or their answers become significantly worse.


I think that kind of inertia mostly lasts as long as there is no financial incentive to move. A ChatGPT user who is not paying anything to OpenAI is of little benefit to them, and has little incentive to switch. However if OpenAI start trying to make money off those users by adding advertising, or removing the free tier, then things may change. Google can afford to subsidize chat from their other revenue streams, but OpenAI can't.

>However if OpenAI start trying to make money off those users by adding advertising, or removing the free tier, then things may change.

Tech forums tend to be in a bit of a bubble. People said the same thing about Netflix and it just quickly became their most popular sub. People don't care about advertising unless it's really obnoxious.

The idea that people will unsub en masse once OpenAI starts rolling out ads is a pipe dream. And the kind of user that won't pay and won't suffer some ads is the kind of user nobody wants.


Customers come back to Netflix since they have the best content out of all the streaming providers. This is their moat.

ChatGPT, on the other hand, is literally exactly identical to their competitors for the most common use cases.


Customers stay at Netflix because it's cheap, what they're used to, and it has enough on the catalogue to keep people satisfied most of the time. They're not constantly evaluating who has the better catalogue. And most of that catalogue is content they have no real ownership of anyway, at least, until the WB buyout is finalized.

And Netflix is hardly the only example. Like clockwork, people here say the same thing about anyone including ads, to the same result - No-one cares.

This is just one of those things that is popular to say in these kinds of forums but has no bearing in real life. Most people are sticky with products they're satisfied with. They don't switch unless a competitor is:

- much cheaper

- much better

Neither of these is the case in the LLM consumer space. Nobody cares or notices that Gemini topped the benchmarks for a couple of months before being dethroned, and as far as new features and improvements are concerned, OpenAI is the clear leader. All everyone did and still does is follow their lead, even down to the pricing model. Basically every single feature/model improvement you can think of in the LLM consumer space is something OpenAI brought first, and they get almost all the buzz from it.


> The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.

Is she paying for it? Because as we have seen repeatedly in the past, paid products wither and die when Microsoft bundles a default replacement.

You need to provide a really good reason why this time its different.


I believe specifically for Microsoft, they did bundle a default replacement for ChatGPT in a lot of different places (Bing chat, Copilot), which use OpenAI models! But the end product is notably worse than the native interface. There is a bare-minimum level of usability required.

For chat apps, good enough is good enough. For something as universally useful and easy to use as ChatGPT, the bar is higher. I don't want to comment on the financial feasibility, but whatever Microsoft put out has been a complete flop even when free, making ChatGPT $8 subscription seem worth it in comparison


> But the end product is notably worse than native interface.

That was my point - a lot of superior products were eaten by poor bundled replacements.

Last I checked, copilot has more users than ChatGPT simply because users are using it from within Excel, Word, Outlook and Teams, without even knowing that they are using copilot. It's bundled into Windows.

Right now, copilot is more useful to users than ChatGPT because it is embedded into their workflows.


Copilot doesn't have users. They're rebranding their non-llm offerings as copilot to make it not look like a failure

ChatGPT and all the competitors have the exact same UI and UX.

I don’t know how much of an anecdote it is, but all the non-tech people with whom I talk about AI only know ChatGPT. Competition is either nonexistent or seen as the same thing. Among those, no one wants to pay for the service; they just stop using it when limits are reached. I can’t say which users can turn the market around, but ChatGPT is indeed burned into the minds of many, and because they don’t care about tech and are not interested in tech, they won’t search for any other service, it seems. Even after many discussions they don’t remember the names of the other AIs I told them about.

I would bet 100% of those people have either Apple or Android phone in their pocket. Android users already have easy access to Gemini, and Apple's Siri is going LLM soon enough as well.

Google and Apple just need to push their AI assistants hard enough, and most of the moat OpenAI has will be gone.


Apple licensed Gemini, so both Android and iPhone will point to Google's AI.

https://www.bloomberg.com/news/articles/2025-11-05/apple-pla...


The only two models I ever hear non technical people mention are ChatGPT and occasionally Gemini

I think defaultism plays a huge role. If your wife's next smartphone or TV or whatever comes with AI made by a different company, I think she won't really care and will use that if it's good.

By the way, this is a perfectly rational stance. If the supermarket next to me stopped stocking Coca-Cola, I would just buy Pepsi.


I disagree. Are people really that attached to their conversations though?

Anecdotally, the vast majority of my own conversations and coding interactions are transient in nature, to the point where I prefer to use the ‘temporary’ mode in whatever tool I’m using.

For coding, every project needs a plan and readme to get whatever agent back up to speed with what the task is. Anyone with a paid-for GH Copilot license knows that you can just switch between whatever provider at a whim, depending on the needs of your task or financial requirements.

I think people will find it easier to revert back to Siri 2.0 if that ever materialises, in which case the stickiness moat is bridged by a more familiar and widely integrated abstraction layer.


Why would you want to take your conversations with you? I use multiple different models; I don't care about the history.

My "brain" in terms of projects, is local on my computer. I have a simple set of system rules that I need to copy.

I am not everyone, I understand that. What I try to say: don't overestimate the lock in effect of AI. I doubt there is one.


> I don't care about the history.

I've actually been using the Gemini app more because it auto-deletes old history. I like using LLMs without thinking this is going to stick around forever.

Models are relatively interchangeable for day-to-day use anyway.


> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.

Ads might change that. If we know anything, nobody beats Google with ad based monetization. OAI is absolutely correct to be scared.


Just as people underestimate bundling and multiple-product companies. As soon as LLM corpos start increasing prices to actually match expenses and recoup their immense debts, customers will very quickly catch on to how OAI's product is 5x more expensive than Google's, where the only moat to cross is opening pre-installed Gemini :).

Competing in freeware products is impossible as soon as monopoly emerges. Competing in paid products is way easier, especially after free money age has ended.


> people have hundreds and thousands on conversation on these apps that can't be easily moved elsewhere.

I just asked it to build me a searchable indexed downloaded version of all my conversations. One shot, one html page, everything exported (json files).

I’m sure I could ask Claude to import it. I don’t see the moat.
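For what it's worth, the export really is just JSON you can process yourself. A minimal sketch of building a searchable index from it, assuming the `conversations.json` field names (`title`, `mapping`, `message`, `content.parts`, `author.role`) match the ChatGPT export format at the time of writing; they are not guaranteed to stay stable:

```python
import json

def extract_messages(conversation):
    """Pull (role, text) pairs out of one exported conversation.

    Field names here ("mapping", "message", "content", "parts",
    "author") are an assumption based on the export format as
    commonly observed; verify against your own export.
    """
    out = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # root/system nodes may have no message
        parts = (msg.get("content") or {}).get("parts") or []
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            out.append((msg["author"]["role"], text))
    return out

def build_index(conversations):
    """Naive inverted index: lowercase word -> set of conversation titles."""
    index = {}
    for conv in conversations:
        title = conv.get("title") or "(untitled)"
        for _, text in extract_messages(conv):
            for word in text.lower().split():
                index.setdefault(word, set()).add(title)
    return index

# Usage: load the conversations.json from the export zip, then
#   convs = json.load(open("conversations.json"))
#   idx = build_index(convs)
#   print(idx.get("kyoto", set()))  # titles mentioning "kyoto"
```

A single HTML page with a search box, like the parent describes, is essentially this index plus a bit of JavaScript on top.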


How do you know all your conversations are in there?

Honest question I have this issue a lot with AI claims. Nobody verifies the output.


I did verify the output. You can download your stuff via their API.

Ok so it worked correctly today, for you. How do we know it will continue to do so five years down the road when they are suffocating for cash? The more stuff we have there, the harder it becomes to verify their takeout will have everything.

I think it depends on the task.

How bad is it if, out of 200+ conversations, a couple are not exported correctly? Not much, honestly. If I verify some of them and they are OK, I see no reason to keep verifying all of them.


How do you know anything five years down the road? You don't even know where you yourself will be.

When proven wrong, hackers always say "Well in 5 years time or in 10 years time, things might have changed, so I was right and you were wrong".

It limits your own reasoning capabilities, and your satisfaction of always being right yet again will start diminishing with time.


Well said. I think people don't know when to just back down. Arguing the latest reply becomes a reflex rather than a 2-way discussion.

I'm trying to motivate one or hopefully both of these ideas

- if it is worth backing up or exporting, it is worth doing it early and often

- but more importantly, if we are backing up and exporting, we should be continuously asking: are we even on the right platform? Does a better alternative exist?


So far I've not seen anyone complain that their conversations have gone missing. There's a GDPR-style export option that I've used a few times for my own.

There is also no moat because conversation history is useless. It's like saying “I can't move to DDG because Google has my search history.”

https://myactivity.google.com/myactivity

it's not useless, although it used to be more useful than it is now.


The moment openai starts charging for their service properly, people will start shopping around.

See power users such as devs with coding assistants that have model selection dropdowns allowing you to switch on a whim. There is zero loyalty or stickiness in the paying user crowd.


Or using ads

Ads are a little more insidious, and normies aren't nearly as allergic to them as they should be. But whether openAI can achieve their revenue targets by ads alone is a different question.

I am starting to believe that OAI might actually succeed at getting per token inference cost to where it needs to be. Or that it's already there in principle.

Wafer scale compute is a very big deal. Most of HN is probably still unaware that you can get tokens out of one of these devices right now via public API offerings.


OpenAI is already building complex user models. And I mean super detailed user models: where you are from, what you do, what your most vulnerable weaknesses are, what you care about the most, and everything else. This is information even the world's largest advertising company would struggle to put together across their fragmented ecosystem (Gmail, Search, etc.), but OpenAI has it all on a silver platter. And that scares me, because a lot of people use ChatGPT as a therapist. We know ads are coming because of the advertising intent OpenAI has explicitly expressed. Advertising requires good user models to work (so advertisers can efficiently target their audience), and it is the only way to prove ROI to the advertisers. "But OpenAI said they won't do targeted ads..." Remember, Google said "Don't be evil" once upon a time too.

That's ok, we use ChatGPT only for coding. We should be good, right? Umm, no. They already explicitly expressed the intention to take a percentage of your revenue if you shipped something with ChatGPT, so even the tech guys aren't safe.

"As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."

"Intelligence will follow the same path."

https://openai.com/index/a-business-that-scales-with-the-val...

So yes, OpenAI has the best chance to win on the consumer side than anyone else. But, that's not necessarily a good thing (and the OpenAI fanboys will hate me for pointing this out).


> They already explicitly expressed the intention to take a percentage of your revenue if you shipped something with ChatGPT, so even the tech guys aren't safe.

Wasn't there already a ruling that LLM output is not protected by copyright?


I hope that's the case. That would be really confidence inspiring.

> Advertising requires good user models to work…

…and yet, everywhere I go I see massive advertisements on billboards, the sides of buildings, public transit, movie screens…


Yes, but still, targeting is done even for billboards, based on the location's demographics from census data. It's not random. Some countries in Asia (like Singapore and Malaysia) have digital billboards to target certain demographics based on the time of day or the estimated crowd demographic at a given bus stop. And a few of them even track eyeballs to count "views" of the ad.

I admit this is a factor I hadn't much considered. I'm sure at some point, if not already, the data collected by your phone will enable the equivalent of a tracking pixel on your physical location, so you can get personalized ads when you step into the subway car: the system will quickly evaluate which rider is most likely to spend money based on ads, and on what, and then an auction will be run in two nanoseconds and the winner will show their 10-second transit clip. Oof.

The saddest part is, the old kind of advertising worked just fine, before all the companies got addicted to AdCrack.


> people have hundreds and thousands on conversation on these apps that can't be easily moved elsewhere

Neither can they be easily searched nor organized. And what prolonged AI use teaches you is: don't search for that old chat, just ask anew.

That particular piece of flypaper isn't as sticky as it may seem.


I imagine the stickiest customers would be large enterprises. You aren't going to get the evangelists to stick with a single model provider, so their best bet is probably employees who are going to have their choices dictated to them by whoever purchases the software. (Especially in large enterprises where using an unapproved AI provider is likely not allowed, or the AI is imposed on the workers.) The question then is, how do you differentiate yourself in enterprise sales? As much as people seem to dislike Copilot, from a business standpoint "buy the extra Microsoft thing in our current contract" or "buy the extra Google thing in our current contract" could likely be a lot cheaper/less friction.

Netscape had a 90% market share in 1995. If OpenAI is metaphorically netscape, what prevents its competitors from prying away customers every day? What prevents google/facebook/microsoft from using their position to bundle chat experiences? Especially if the tech is a commodity and OpenAI's models are about as good as everyone elses?

In 1995 hardly anyone used the web yet. Sure, we all did, but it was pretty niche. I think you could argue that chatbots are niche as well, but the user base of OpenAI is way larger now than Netscape's in 1995. Netscape had probably 25 million users at the end of 1995. ChatGPT has about 800 million.

Google is sticky too, and has a huge moat around that access (android, browsers).

Google hasn't yet pushed hard into dominating the ChatGPT use case, but they could EASILY push out ChatGPT if they tried. For example, if they turned their search page into the Gemini chat, they would instantly have dominated OpenAI's use cases. I'm not saying they would do that; they will probably go for the 'everything app' approach slowly.

I think the use cases of chatGPT and google are not differentiated enough to justify 2 winners


> The near billion users OpenAI has

They're losing market share and the growth of active users has plateaued. They captured all the normies who learned about LLMs on TV, but these people will never spend a cent, as you said.

They're not even on the top 10 most used llms on openrouter anymore: https://openrouter.ai/rankings

At the current pace, Anthropic will make more money than OpenAI soon: https://epochai.substack.com/p/anthropic-could-surpass-opena...

https://menlovc.com/wp-content/uploads/2025/07/2-llm_api_mar...


I’m not rooting for OpenAI, but OpenRouter is a very self-selecting group. Most API users of Anthropic or OpenAI would just go through the normal API.

I'm surprised how many of my technical team use free ChatGPT in their personal lives. The rest have Claude subscriptions. I'm the only one with both ChatGPT and Claude subs, and I'll be switching from Claude Pro to Claude Max and cancelling ChatGPT, since I only use it when I hit my Claude quota.

My nontechnical friends only know about ChatGPT, all other LLMs are a complete and total mystery to them outside of what is built into Google's search engine and Copilot. I imagine they represent the majority of consumers. It'd require significant marketing campaign for most of them to switch or for OpenAI to make a substantial mistake.

Do they use Facebook or Instagram? Meta jammed their LLM into the search box there. Do they use Google at all? The AI summary produced by Gemini leads you to click on "more details" with Gemini.

so while this is technically true: > My nontechnical friends only know about ChatGPT

they may actually use a ton of other LLMs without knowing


At this moment, I agree. Your average person (who doesn't really exist) has already been exposed to and trained on ChatGPT. Attempts to move people to another "chat" experience have not gone well; see Bing, for example. Pretty sure Google had the "box" figured out first and won. I think people overthink how much effort others are willing to put into change. There is nothing wrong with staying put if it works; after all, there is an unlimited number of other things happening in this world besides AI.

Switching LLMs is like switching cars. It's a bit annoying in the beginning; it responds slightly differently and you need to change your subconscious habits before it feels comfortable. That's why everyone always complains about new models. So unless there is a very obvious improvement, most users will prefer to stick with their current LLM.

That has not been my experience at all. My mom and dad were able to switch from ChatGPT to Gemini without any friction whatsoever. I myself round robin between Claude, Gemini and ChatGPT all the time.

I don't think they have a billion active users who opted-in. Google/Apple/Microsoft are the gatekeepers (for the most part) for retail users and they decide who is on by default. The USG isn't going to step-in and the EU won't step in either.

So I suspect that Google will lean into Gemini, Microsoft will lean into OpenAI, and Apple ... it's a tough question what they do in the longer term.

For business users it's a different story and I see room for Anthropic to shine. And then there are the specialty AI services but those are all different markets from the general purpose AI.


I think Google may just end up winning on the good enough / cheap enough dimensions as things get more commoditized in LLM world.. in that they can be the lower cost provider given how vertically integrated they would be compared to OpenAI relying on hyperscalers.

I'm aligned there. I think it will be Google/Gemini gets 50% of the generic market and then OpenAI gets 30% (via Microsoft) and then a long tail. The rest of the vendors will be awesome at their markets (Claude Code for coders) and can handle generic stuff too.

Apple will do whatever they do but it will solely drive users in the Apple ecosystem and they will likely just use one of the other vendors - I'm guessing Google longterm since they speak the same language. There's no point in empowering Anthropic/OpenAI to sit at the top of the pyramid although oddly Apple and OpenAI did that partnership but I feel like that was Apple not thinking ahead.


> Apple ... it's a tough question what they do in the longer term.

my guess is the just keep licensing gemini and move on with making more money instead of selling 100 year bonds to raise debt.


I think there are users who view "their AI" as somewhere in the venn-diagram of their relationships.

And it's a spectrum, at one end you got the full-on AI psychosis and at the other "its a machine, I owe it nothing".

Conversational AI is going to be sticky to the extent that you see a switch to a different provider as dropping a relationship.


Do people care about their old LLM sessions?

I might have sessions I revisit over a few weeks, but nothing longer than that. The conversations feel as ephemeral as the code produced. Some tiny fractions of it might persist long term, but most of it is already forgotten and replaced by lunch time.


My barber does. It's his therapist and the fact that it knows all about his life is very important to him.

> revealing all psychological exploits you have to 3rd party corporation

Scary shit


I disagree. So far I've seen people use "Photoshop" and "Google" as verbs. No one uses "ChatGPT" as a verb. People do use ChatGPT but the brand recognition isn't that strong.

My anecdotes are that Google is winning even on consumer side.


As a verb, no, but the product name somehow feels the wrong shape to verb it. I'd say the voice assistants have Google at a disadvantage for similar reasons: "OK Google" is clunky, whereas "Hey Siri," and "Alexa," are not.

But to ChatGPT: when I wander around Berlin, I do overhear people talking about ChatGPT by name.

For all the typical integrated LLM-based "assistants" in other products, I mainly hear people saying things like "I hate it" and "how do I turn this off" and so on, including the one Google has on its search results.

The other pure-play chat-bots that have enough mind-share to even be in the news are Grok (where Twitter users seem to like it a lot, even though everyone else, up to and including non-US world governments, hates it to the point of wanting it banned), Claude (but even then only because of Claude Code), and DeepSeek (because it shows China has no difficulty keeping up with the US). I heard about Mistral when it was new, but even with the app on my phone I didn't think about it again until about a month ago.

Ask a normal person about Gemini, I'd expect them to think you were talking astrology, not AI.


> No one uses "ChatGPT" as a verb.

In my experience, they do, a lot. "I asked ChatGPT" is something I hear a lot. And yes, this example is not using ChatGPT as a verb, but the idea of brand recognition is there; it's just a grammar thing.


Today I heard at least 5 times something along the lines "I got this from ChatGPT", "I asked ChatGPT"...

> I asked ChatGPT

> use ChatGPT as a verb

Pick one. And yes I think they are worlds apart.


I definitely think they’ve nailed the personality better than others too. Gemini and Grok always produce paragraphs and paragraphs of text to sift through, where OpenAI usually distills it down to much less.

These articles are largely based on a false equivalence of LLM=moat.

That's not the case. OpenAI is advancing on many fronts: Codex, vector stores, embeddings, the Responses API, containers, batch processing, text-to-speech, image generation... the list goes on.


My wife uses Google AI overview - as an extension of search - on a daily basis and then jumps to Gemini

How do you jump to Gemini from AIO? (I know there's AI mode, but it's separate from the Gemini chat product afaik -- except maybe sharing some model lineage)

>on conversation on these apps that can't be easily moved elsewhere.

they can be moved super easily. Just use the existing export feature; all a competitor needs is the ability to import conversations.
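For what it's worth, the export half already exists: ChatGPT's data export includes a `conversations.json`. A minimal sketch of flattening one exported conversation for import elsewhere might look like this; the schema assumed here (a `mapping` of nodes carrying `message`, `parent`, `children`) is my recollection of the export format, not a documented contract, and the miniature export below is hypothetical:

```python
def flatten_conversation(conv):
    """Flatten one exported conversation (assumed schema: a 'mapping'
    of node-id -> {message, parent, children}) into (role, text) pairs,
    following the first branch and skipping empty/system nodes."""
    mapping = conv["mapping"]
    # Start from the root node, i.e. the one with no parent.
    node_id = next(nid for nid, n in mapping.items() if n.get("parent") is None)
    messages = []
    while node_id is not None:
        node = mapping[node_id]
        msg = node.get("message")
        if msg and msg["content"].get("parts") and msg["content"]["parts"][0]:
            messages.append((msg["author"]["role"], msg["content"]["parts"][0]))
        children = node.get("children") or []
        node_id = children[0] if children else None  # follow the first branch
    return messages

# Hypothetical miniature export: a root system node, then one user/assistant turn.
conv = {
    "title": "demo",
    "mapping": {
        "a": {"parent": None, "children": ["b"], "message": None},
        "b": {"parent": "a", "children": ["c"],
              "message": {"author": {"role": "user"},
                          "content": {"parts": ["hi"]}}},
        "c": {"parent": "b", "children": [],
              "message": {"author": {"role": "assistant"},
                          "content": {"parts": ["hello"]}}},
    },
}
print(flatten_conversation(conv))  # [('user', 'hi'), ('assistant', 'hello')]
```

Emitting the `(role, text)` pairs in a competitor's import format is then just a serialization step.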


I don't really see that stickiness to be honest.

Most people I know with android phones, myself included, just use Gemini which is bundled with the OS and has a dedicated button, has excellent data and integration with maps and such.

When it comes to enterprise, non IT companies (banking, insurance, etc) in Europe seem to be defaulting to Google's offerings, Gemini and NotebookLM in particular.


several of my friends named their chatgpt 'Amanda' or 'George' because they talked about real mental issues with it. I don't see them moving to another platform because that's essentially asking them to leave their 'best friend/therapist'.

... your friends should probably see a human therapist before going much further... I don't mean this in a flippant or insulting way.

They are more easily moved than other data honestly. You can use chat gpt to build your own chatbot and then export all of your data from openai and load it into the new chatbot.

Google has a bigger network effect. It can stomp OpenAI.

by this argument Google will win though. Identical interface with similar quality answers

I wish it were, but it's not. Gemini feels more sluggish; it's relatively overloaded with animations compared to ChatGPT. Like most Google products.

I've been testing Gemini as I code on Claude 4.6 and the answers aren't great for coding. ChatGPT has been better. But it did a good job with some personal IRA/401k planning.

It feels like it's only a few months behind though.


> Like most Google products.

And yet Google has search monopoly, is part of mobile duopoly, has almost monopoly on e-mail and data storage, is strong player in office solutions, and owns the biggest entertainment platform in form of YT.

Seems like sluggishness and animations don't mean as much to normal people.


As a counter anecdote, my wife stopped using it because it is quite terrible when you ask it about current events. She almost exclusively uses the Grok app now because it has the "best" internet search and current events results

>the Grok app now because it has the "best" internet search

Why is this? Thanks to Twitter? More aggressive proxy use? Tuned to deliver to stay competitive? …

Was under the impression they didn’t have much in the way of secret sauce.


Isn't half the appeal of AI that they can write a prompt like move all my text history from OpenAI to Claude and then they do it?

But the (royal) Wife needs to 1) know that exporting is a concept, 2) automating an export is possible, 3) you could ask claude to do it, 4) what an API key is or how to connect services.

My mum, and probably nearly a billion other users, could probably imagine step 1 but not connect to step 2 beyond copy-paste. Most people are still out here sending screen shots of their phones instead of just copying a link or hitting "share" on the image.


Exactly. ChatGPT is ubiquitous as the face of the new generation of AI (LLMs) for everyone outside of our bubble. I've spoken to dozens of friends and non-technical folks about this topic over the last year and not a single one has ever said they use Gemini, Grok or Claude.

OpenAI has by far the strongest brand and user base. It's not even close.

And when it comes to the product, they seem to have closed the gap over the last few months. The coding models are no longer behind Anthropic's, and their general-use chat offering has always been up there at the top.


It's way too easy to export your context for this to be real. I moved from ChatGPT to Gemini months ago and haven't thought about it since. Paid.

Completely disagree with this take. I was an early free OpenAI user and switched to Gemini once it got good enough and bundled a bunch of services together to make the paid product free. OpenAI will need distribution to maintain any kind of durable market share. They need to become a bundler of other subs, or else they will just be the next Disney+ or Spotify that needs telecoms (Hah!) to push their paid product onto user's phone bills.

Are the conversation backlogs worth anything? To me they seem as valuable as Google search history. After maybe 3 days they are worthless.

People get attached to month long conversations, strangely. Sometimes even refusing to use the fork feature.

And the memories are also something that adds to this greatly.


I guess if you treat it like a virtual boyfriend. Personally I found the memories to be an anti feature. I start chats to get a clean slate and test new ideas without previous ones polluting the chat.

> but people have hundreds and thousands on conversation on these apps that can't be easily moved elsewhere.

But why would you want to?

You can just leave them there at slowly start new conversation on another platform.


Conversations are not really a valuable service for these companies. The token usage is minuscule.

Agentic development and OpenClaw-style personal assistants are where the dough is at.


i've been using chat gpt for 'chatting/questions' kind of things + snippets of code

it's plenty good on free tier

as soon as they start adding restrictions / raising prices / etc won't take long to look for alternatives


  and thousands on conversation on these apps that can't be easily moved elsewhere.
This obstacle looks familiar.

> Everyone is actually underestimating stickiness. The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

Maybe you're overestimating their "moat" and stickiness. The dust is still settling on this madness and "OpenAI"[1] creates a lot of noise in the market.

These LLMs are being rapidly commoditized, very soon they will become as "boring" as virtual machines or containers. Altman has the exceptional skill to dupe people into giving their money to him. The "infinite money glitch" that he has been exploiting isn't really infinite.

I just hope there'll be a breakthrough with truly transparent LLMs that will stabilize this madness. As I've griped[2] two years ago, I find OpenAI too scummy, and it is unlikely that they will "win" with their sleazy ways.

[1] Air quotes because of their persistent abuse of the word "open"

[2] https://news.ycombinator.com/item?id=40425735


A good solution for memory would help with stickiness. But it's a hard thing to crack.

We are in the Yahoo, Altavista, Lycos etc. stage. Plenty of room for a Google still.

I don't know. I switched to Gemini and haven't missed anything from OpenAI even for a second. I could switch back to OpenAI and not miss anything from Gemini. I don't feel the stickiness AT ALL.

I commute on the train, I see students studying with it. I go for brunch on the weekend, I see parents consulting it while at the table with their infants. I'm at work, colleagues are using it all day. I leave work and I overhear the random woman smoking in the alleyway talking on her cellphone saying "so I asked chatgpt". It's mind-bogglingly pervasive; the last time something had such a seismic cultural impact was, I dunno, Facebook? And secondly, it's all one specific brand. I'm not encountering Copilot or Gemini in the meat-space.

My sister uses Gemini and calls it ChatGPT. It's genericide in action.

I still think it's hilarious that a product name as awful as "ChatGPT" has become so ubiquitous.

I wonder what percentage of its users know what the GPT stands for, or even thought about it for a second?


I mean, how is it any worse than 'google'?

chatgpt is generic (as in, no prior meaning attached, except for the few people in the world who understand what GPT stands for). It's simple: even a non-English speaker can say it easily, and it doesn't require one to be a native speaker to know how to pronounce it (this is a difficult concept for a native English speaker to grok).

These features make for a good name.


It's very weird to pronounce it as a French. Either you pronounce it like in English with a thick French accent like "tchat' djee-pee-tee" or like in French as "tchat' jay-pey-tey" which sounds exactly like "I farted". This is really a terrible name in French.

while we're talking pronunciation I'm on an (entirely pointless) one man mission to have "lemon" stick as a pronunciation of "LLM".

"Google" at least doesn't have an acronym for "Generative pre-trained transformer" baked into it.

And many people don't know what Google stands for. Just like they probably didn't care what AOL stands for, or MSN

Even you agreed that almost nobody knows what GPT stands for, which means it's as arbitrary as any other three-letter acronym.

So I argue that ChatGPT is indeed a good name (as good as Google was).


Car brands are like that, does the average person know or care what GT, RX, STI, WRX etc mean?

I think car names like that are awful too.

(Clearly the car marketing world and the general public disagree with me there.)


My aunt calls it "chat", "I asked chat", which is funny to my online-brain. Like she's a streamer with a permanent audience of 1. Hey chat, is this real?^1

1. https://knowyourmeme.com/memes/chat-is-this-real


OpenAIs investors can look forward to having an operating margin as impressive as the company that produces Band-Aid

Chatgpt is like "Jeep". My grandmother calls every suv a jeep. But they're not all jeeps. AI looks like chatgpt, but people are driving all sorts of different AIs.

I would guess OAI has no moat or stickiness beyond what governments and private companies will do to keep it afloat through equity and circular financing. Good enough AI is all most need, and they need it at the cheapest cost basis possible with the most convenient access.

Google will probably win on most of these fronts unless a coalition is formed to actively fight google at the business/government level. But, absent that, it will win out over oai and oai will probably bleed to death trying to become profitable.. whenever that happens. You'll likely see their talent and corresponding salaries shrink massively along this journey.


And if you're Boris Johnson, it's pronounced like 'jeep' too!

How many of those people are paying? I think many say “use ChatGPT” to mean any LLM. As you noted it seems you just see ChatGPT in the wild but that is anecdotal. It is certainly pervasive right now. But I know a lot of people currently switching to Gemini.

I personally prefer Claude models for all my work. If I were them I would be very worried. They are never giving us AGI, and I am skeptical they are worth $0.5 trillion. Their cash burn is insane. Once ads and price hikes come, people will migrate to companies that can still afford to subsidize (like Google).

Plus I heard they lowered projections recently? Sam honestly comes off as a grifter.


I'm very similar to the OP here, always hear about ChatGPT rarely anything else. Most people are definitely not paying, but of the few that are paying, outside of software developers, they are all paying for ChatGPT exclusively. I don't know of anyone paying for the basic chat versions of other AIs. A few developers paying for Claude and Gemini, but I know hundreds of people that talk of ChatGPT and no other AI, again most not paying though.

Outside of work I don't know anyone who pays for AI.

But I have noticed that everyone seems to be using ChatGPT as the generic term for AI. They will google something and then refer to the Gemini summary as "ChatGPT says...". I tried to find out what model/version one of my friends was using when he was talking about ChatGPT and it was "the free one that comes with Android"... So Gemini.


Gemini is nearly unusable thanks to “subsidies”. I honestly don’t see what the path is to these companies making any money short of massive price hikes, or electricity suddenly becoming free.

Is it anecdotal? The observation isn't _my_ experience using it, or of _my friends_. I have no influence over who I see in public using it. I know it's not exactly a scientific study but it's still pretty damn good as a random sample. If I went outside and saw the sky was dark, cloudy and my face got wet, would you tell me it was anecdotal evidence when I say it's raining out?

Only if you said it was raining everywhere these days.

I actually encountered this today: someone in a group I am planning a trip with posted some of the breathless nonsense that ChatGPT produced ("you're not picking a hotel, you're picking a group dynamic..." and other such textual diarrhea).

It turned out the only reason they used ChatGPT was that it is free at small enough volumes. My suggestion to see what Claude had to say instead was met with "huh, you have to pay for it?". It's not like these are people who can't afford $20 per month for a subscription, but it might be that these assistants aren't even worth that for typical "normie" use cases.


I think that's false. The cost of switching is so low that the best product will win and there's no moat.

I honestly can't see how OpenAI can possibly recoup the hundreds of billions poured into it at this point. I'd say AI assistants are no more sticky than browsers or search engines.

You might be tempted to say that Chrome or Google are sticky. But they're really not. A lot of people aren't old enough to remember the 90s when we had multiple search engines and people did switch. I know this goes against prevailing HN dogma but I'm sorry: Google is simply the best search engine. It doesn't have a magical hold on people. People aren't fooling themselves.

And Chrome? Before smartphones it was simply the better browser. Firefox used to have a much larger market share and Chrome ate their lunch. By being a better browser. Chrome was I think the first browser, or at least the first major browser, to do one process per tab. I still remember Firefox hanging my entire browser when something went wrong. I switched to Chrome in version 2 for that reason.

And now browsers are more sticky because of Chrome on Android and Safari on iOS. Safari really needs to be cross-platform, like seriously so. I know they briefly tried on Windows but they didn't really mean it.

Anyway, back to the point. I believe there's a certain amount of brand inertia but that's it. If Gemini dominates ChatGPT performance and UI/UX, people will switch so fast.

Google, Microsoft and Meta can survive the AI collapse. Apple is irrelevant (at least for now). OpenAI? Doomed IMHO.


I really like your analysis and agree up to a point.

The problem with a moat in the consumer space is it depends on brand and marketing. OpenAI came into this world as a tech novelty, then an amazing tech tool, then a household name.

But… can they compete with massive consumer companies like Apple, Google, etc? In the long run?

There’s no technical reason they can’t. The question is whether they have consumer marketing in their blood. The space doesn’t have a lot of network effects, so it’s not like early Facebook where you had to be on it because everyone was.

Not saying they’ll fail, just saying it would be a significant challenge to be a hybrid frontier model / consumer product company.


And?

The tech landscape is littered with companies that had users they couldn’t monetize through ads. Besides, the cost of serving a request via LLMs is orders of magnitude greater than a search result.

On top of that, OpenAI is a sharecropper on other companies’ servers; they depend on another company’s search engine and, unlike Google, they are dependent on Nvidia.

Don’t forget that most browsing is done on the web and Google is the default search engine on almost every phone sold outside of China.


This is what Netscape thought too

To the extent that it is a popularity contest, that's one thing.

Of course the first thing people may look at is technologies going head-to-head.

Another big one is user pricing, plus the underlying cost to serve users. Actually minus that cost.

Biggest so far is capital.

Seems to be going that way, a contest of capital could dominate like so many other things regardless of technologies.

There are probably other things that companies may leverage if competition does really ramp up.

It may not have to be a moat to be a defining characteristic that some prefer.


Not sure how that works when there is fierce competition and OpenAI's product is not substantially better than the rest. There are US competitors, then China.

Take Ozempic as an example. The word is already part of the culture, but the company is losing badly to Lilly. Novo Nordisk is projecting revenue DECLINE while Eli Lilly is still growing massively. I am not even sure people know other GLP-1 drugs besides Ozempic. I don't even remember Lilly's drug's name.

I think people should not underestimate the market. It's a dynamic game where engineering intuition might not be enough


It's really easy to overcome that -- just sponsor some indie devs to flood the internet with scripts and tools to migrate all your conversations from OpenAI. Make it easy for people to switch using a simple process, make sure it's well distributed, and BOOM! Watch their user count drop like a rock. People act like just because a service has a lot of users it can't be destroyed. Anyone who has ever worked at a large web company can tell you otherwise. These things can be destroyed in just a few days if they are targeted.

They look like fortresses from the outside, but they are all incredibly vulnerable. That's the truth they don't want people to know or realize just how vulnerable they all are.


> I think OpenAI has better chance to winning on the consumer side than everyone else.

Which doesn't make money.

> Of course, would that much up against hundreds of billions of dollars in capex remains to be seen.

Most of that is a bet against enterprise adoption. Automation of customer service, sales, marketing, warehouses, medical discoveries, etc...


I disagree (imo).

It would take me minutes to copy across the histories of my projects and continue relatively unscathed by the experience.

I use ChatGPT and currently rather like it. But there is no moat beyond that.

Not like, for example, WhatsApp, where it's almost impossible to detach due to the network ... (I've really tried, with about a 10% success rate)


Does she pay for it? No? Then she’s causing them a loss.

The problem with the stickiness is that they will eventually need to start charging, and that friction point will immediately make them come undone. Let’s say they charge $1.99 a month, and Anthropic then steps in with a six-month free offer; suddenly everyone has two apps on their phone they’re comfortable with, and it’s a price war over very lightly differentiated products.

Having a known brand is not a moat mate. Sorry.

myspace used to be a well known brand. I've worked there.


The problem is that, at least for now, it is dead easy to switch to something else. No need to convert anything, reconfigure anything, it is not like changing gmail to something else or dropping Word for LibreOffice.

Chat window is a chat window.

I can imagine that sooner or later things like OpenClaw (or its alikes) will become more popular and that could be something that will catch users.


The difficulty is that “winning” in this case is setting up a monopoly or duopoly and slowly increasing prices. It’s not clear if OpenAI can get so far ahead of the competition that it becomes a two or one horse race. Right now Anthropic and Google are at least as good. And the open source models keep them all honest pricing wise.

OpenAI will likely keep their billion users, and likely monetise them fairly effectively with ads. Their revenue will be considerable. It’s less clear that OpenAI will “win” and their competitors won’t.


I think you're overestimating stickiness. People spoke endlessly about the stickiness of Google for years and years, and it took what, 18 months, for Google search to become virtually irrelevant after LLMs came along?

All of ChatGPT's users could be gone in a month if something better comes along. And plenty of other options are coming along.

That being said: I used MySpace daily too... Until I didn't.

How much is your wife paying for the privilege to use OAI presently?

This is the real question. Is she willing to pay $20 per month when Google's Gemini is free? Google can remain irrational longer than OAI can remain solvent.

Google's profits have been going up while 'giving away Gemini for free', so I don't think they're 'being irrational'; their unit economics apparently work.

I understand the underlying quote but not how/why it’s being used here. How is Google giving Gemini away for free to undercut OAI irrational? Anticompetitive, maybe.

Because the quote uses irrational/solvent, you have to stick with those words. The similarity is a failed attempt to wait out a disadvantageous price, regardless of the specific reason driving said price.

Even in the context of the original quote the price is only "irrational" in the eyes of the person trying (and failing) to play the market. "But you can't do that, that doesn't make any sense!" spoken by a person who has failed to fully grasp the situation.


It is just Google’s business model, and why OpenAI has to do ads better faster.

But you can bet there was more economic foresight going on at Google than OpenAI.


Agree. And we don't even know if they're bleeding out doing it. Google is on more efficient hardware and they fully control their ecosystem. And that ecosystem can feed into and be fed by their other ecosystems. OAI just has LLMs.

Me and my gf do. Gemini is absolute garbage and I’m willing to die on that hill.

- Atrocious mobile application

- Gemini web somehow consumes GIGABYTES of memory doing absolutely NOTHING

- No projects

- UX is terrible (want to remove that autogenerated diagram at the top? No button for you, fucker; good luck finding the conversation it belongs to)

- No shopping mode

- mobile application loses context mid conversation or when continuing from web/mobile

- the model itself is hot garbage, even the pro variants:

* Switches to Chinese mid sentence on a trivial topic (Python subprocessing)

* Uses Russian propaganda videos as a source

* Completely ignores instructions

* Default prompt is garbage and you constantly have to hand hold it to get proper answers


OpenAI got me to cancel my anthropic sub for Codex. Anthropic weekly limits on Pro are atrocious. You listening anthropic?

nah, OpenAI doesn't have a moat. It has a brief window to get a lot cheaper to run, or it's going to go pop when someone figures out how to do inference a lot cheaper.

Microsoft is surviving precisely because of stickiness, as you put it. But their users have to use them, and have to pay for it. Very few people who use OpenAI today have to pay for it; those forced to use it are typically doing so via free avenues like Windows Copilot.

OpenAI has the stickiness of MSN News or MS Teams. Your wife uses ChatGPT on a daily basis, but is she paying for it? If they charge her $0.99/mo, will she not look at alternatives? If she gets two or three bad responses from ChatGPT in a row, will she not explore alternatives to see if there is something better? Does she not use Google? If she does, she is already interacting with Gemini every day via their AI Overviews.

OpenAI has a first-to-market advantage, not a moat as you think. They can absolutely dominate the market if they stay on top of their game. eBay was the main online shopping network; they had that advantage, they were even the ones that made PayPal a thing! But they're relatively little used now; better alternatives crushed them.

Amazon was first to market with cloud services. They didn't get worse in any significant way, but their market share is not as great as it used to be, and Azure has gained decent ground on them. 10 years ago the market share breakdown was 31/7/4; now it is 28/21/14 for AWS/Azure/GCP respectively.

For OpenAI to survive it needs most of the market share; if it gets only a third, for example, the AI industry on its own needs to be a $1T+ industry. Over the past 10 years, revenue alone (not profit) for AWS has totaled $620B, and it just made $128B in revenue (its highest yet) last year. OpenAI needs to make in profits (not revenue) what AWS made last year in revenue by 2029 just to break even. And even merely breaking even by then would require profits approaching the revenue AWS attained over its entire lifetime to date. It's far easier to switch LLM models than cloud providers, too!

Their only remote path to survival, I hate to say it, is going the way of Palantir and doing dirty things for governments and militaries. They need a cash-cow client like that, one that can't easily go anywhere else. And even then, being US-based, I don't think any military outside the US is insane enough to use OpenAI at all, due to geopolitics. Even in sectors like education, Google (via Chromebooks) is more likely to form dependence than Microsoft via OpenAI, since somehow they're more open to arbitrary apps due to historical anti-trust suits.

I can see a somewhat far-fetched argument being made for their survival, but only on thin threads and excellent execution. I can't see how they actually survive competition. They're using the Azure strategy for market share: banking on AI being so ubiquitous that the existing vendor-lock-in mindset will serve as a moat. They'll need to be much more profitable than AWS in about 1/5th of the time. Their product is comparable to (and in Azure, literally is) one of many cloud service offerings, as opposed to an entire cloud provider, yet their costs are huge, similar to cloud providers' (needing-their-own-data-centers huge). They need to overcome those costs, and on top of that reach $125B+ revenue in like 2 years!!
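The back-of-envelope in the comment above can be made explicit. This is a toy sketch, not a forecast: the $500B commitment figure and the 4-year horizon are hypothetical round numbers chosen to illustrate the shape of the argument; only the AWS revenue figure comes from the comment itself:

```python
# All figures are assumptions for illustration, not audited numbers.
aws_revenue_last_year = 128e9   # AWS revenue last year, per the comment
openai_commitments = 500e9      # hypothetical committed capex/compute spend
years_to_break_even = 4         # hypothetical horizon to 2029

# Annual profit needed to cover the commitments over the horizon.
required_annual_profit = openai_commitments / years_to_break_even

print(f"required annual profit: ${required_annual_profit / 1e9:.0f}B")
print(f"vs AWS annual revenue:  ${aws_revenue_last_year / 1e9:.0f}B")
```

Under these made-up inputs the required annual profit lands at $125B, i.e. roughly AWS's entire annual *revenue*, which is the comparison the comment is driving at.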


I have started using ChatGPT for everything from financial planning to holiday planning to product purchases. Whenever I think I hit something useful, I add it to memory. I'm a "Go" plan user because they had a promotional offer that gave me free access to the plan for a year. Will I continue after one year? Truth is, nothing I have in ChatGPT cannot be recreated elsewhere. But if I care about keeping those memories, I might. I think the real challenge for me now is finding old conversations; their history search seems quite bad.

Yup, this is just another case of the HN bubble. I polled a bunch of non-technical friends recently who I know use AI on a daily basis. Out of 10+, maybe 2 had ever heard of Claude, and no one had any interest in trying it.

ChatGPT has become the AI verb, and in the consumer space it is not getting dethroned.


Claude is definitely tech only.

Gemini is the only real competitor to OpenAI in the consumer space: they already have the consumer eyes on their products and they have the financials to operate at a loss for years.

They are well positioned to fight for the market


The mystery I can't wrap my head around is how Tesla has avoided getting hammered despite being hit from a hundred different directions. What exactly is the market pricing in?

They peaked around 2021, and even after posting multiple quarters of disappointing results, the stock is still trading above 2021 levels. For almost any other company, slightly lowering guidance or missing estimates by a few percentage points simply tanks the stock. But for Tesla, no amount of Musk’s idiocy seems to be enough to seriously move it.


Tesla is the world’s largest meme stock. People stopped applying rational pricing models, and rationality in general, to it a long time ago.

PE ratios will suddenly matter again when we get hit with the next recession.

yup, remember when musk was pushing doge coin? not much difference

That can't be whole story, though. They're still profitable.

If Tesla is worth a trillion dollars, is it a meme ?

If Tesla is a meme, is it worth a trillion dollars ?

This is the much better question :)

It's not though

Not a meme? Or not a trillion dollars?

The market discounts future returns but it is unclear and shifting what proportion of those returns are from the operations of the company in the market it sells products in and what proportion comes from the operations of traders in financial markets. More plainly, traders discount returns from buybacks and dividends financed by the operations of the company and returns from selling their shares to "greater fools".

As long as the music is playing they will keep dancing. Musk is a master of DJing that party. We might wake up tomorrow and find that his house of cards has fallen apart, but we might wake up to learn they really have solved FSD. That ambiguity keeps the price from collapsing.


What is it about FSD that results in this valuation though?

If Elon builds a time machine, goes to the future to get FSD tech from 100 years from now, and rolls it out to all Teslas tomorrow, what will change? Will every car driver get rid of their car and buy a Tesla? Will that suddenly justify the stupid valuation?

Realistically, I don't think the majority of drivers will care that much. Sure, their sales will go up, but I can't see it going up by that much.

FSD will never be "achieved" suddenly. The tech will incrementally improve every year, across all manufacturers until one day we are manual driving only 1% of the time with FSD doing the rest. Like AGI, there is no moat in FSD. This is the natural outcome of the trajectory that we are on right now, and nothing about tesla is making me believe they will offer anything that other OEMs can't.

No, I think the market is much more cynical than that. Tesla is a meme stock similar to bitcoin or GME. Investors are degenerate gamblers, hoping that it will continue to rise because that's what it does atm, and hoping they won't be the one left behind holding the bag when it crashes one day. It's little more than a voluntary ponzi scheme that most big investors openly buy into knowing full well what's at stake.


Exactly. It's meme stock. There's no rational explanation for this ridiculous valuation.

Tesla has been overvalued for a long time, and not by a little either; they're worth more than the next several carmakers put together, yet sell fewer cars than any of them. Their high valuation could still be considered defensible when they were the fastest-growing car company in the world and the only one selling electric cars. But none of that is true anymore; everybody is selling electric cars now, and BYD is selling more than Tesla, I think. And instead of growing, Tesla is now shrinking in many markets. Even their self-driving is not the best.

The share price should have collapsed, yet it remains high. How? It makes no sense to me.


> It makes no sense to me.

Honestly, that's the easy part. Cynical, degenerate gambling.


>What is it about FSD that results in this valuation though?

Are you willing to accept the ugly answer? Because the point of FSD isn't what they pitch. Replacing human drivers to save lives, yada yada... that ain't it. The point is creating a handful of leverage points through which the populace's freedom to move wherever they want can be controlled. Once the tech is the majority driver, people can finally be properly managed as the little work units they are. That's the dark part of the valuation. The power aspect. The ones who own the means to locomote are the ultimate rent collectors. There's simply no arguing to be done by a populace that can be prevented, via geofencing, from showing up to any attempt at collective protest. Or, if they do show up, can be added to a comprehensive list for participating in disruptive activities.

Capabilities, ladies and gentlemen. We have to assess these things on the ground of what they enable. Delegating transport entirely to a third party necessarily creates a vulnerability of society to manipulation by the ones running the damn thing; and the ones running the damn thing want money, and security for themselves.


Why don't all the other automakers wise up and just start promising full self driving "next year" as well?

Because their stocks aren't memes, their investors are serious, nobody really GAF about self driving and fully autonomous driving isn't actually the "killer app" many think it is.

How killer would FSD have to be for it to count as the killer app?

I’d pay at least double for a car with FSD. More if the car’s longevity could be established. Is that killer enough? (Real question).


> there is no moat in FSD.

Being a really really hard problem is a moat. Many have tried and given up already: Uber, Cruise, etc.


There are so, so, so many companies operating in this space right now. You list two that have given up. A quick google brings back at least a dozen operations that appear to be still ongoing.

BMW Personal Pilot, Merc Drive Pilot, and Honda Sensing Elite are Level 3 automation tech you can buy right now. Tesla is still at level 2!!

Whether Tesla is going to be the first to achieve true autonomy is a toss-up. And regardless of who achieves it, the rest will be very short on their heels.


> BMW Personal Pilot, Merc Drive Pilot, and Honda Sensing Elite are Level 3 automation tech you can buy right now. Tesla is still at level 2!!

You need to put about 10 asterisks on those. MB Drive Pilot has been discontinued due to "low demand and high cost", and those other 2 systems appear to have substantial restrictions. Meanwhile, FSD today "works" on pretty much any road or highway. I can easily see certain folks see that as far ahead of competitors, since it physically can do more in more places and operate in more conditions.


Really, it's just pricing in Musk fusing all of his businesses under the Tesla name.

That can't/won't happen. Musk's wealth is primarily in SpaceX now, and he has a much higher ownership stake in SpaceX than in Tesla. On top of that, Tesla is public, so he can't just do napkin math and decide to merge them. So the question is: does Tesla buy SpaceX? Well, no, Tesla can't afford it. OK, can SpaceX buy Tesla? No, SpaceX can't afford it either. So do they announce a merger? That doesn't make any sense either, because Tesla is valued like a meme stock, so it would massively dilute Musk's ownership of the overall company. So the idea that they might fuse could be driving up the stock, but driving up the stock actually prevents it from happening. If Tesla starts to trade at realistic multiples and comes down to, let's say, a $200Bn company, I'd expect SpaceX to snap it up at that valuation, but it'd be crazy to do it before then.

Even if they have FSD ready tomorrow the financials would not support this valuation.

Summing the sales figures in [0] we get 9M to 10M Teslas on the road. Let's say 10M, and let's say Tesla will keep selling 1.6M / year for the next 5 years [1]. This is 18M Teslas, and let's assume all of them are converted into paying customers at $100 / month [2]. This works out to about $21.6B / year in revenue. $21.6B / year in revenue cannot justify $1.5T in market value.

Good thing they are switching to robots :)

[0] https://en.wikipedia.org/wiki/History_of_Tesla,_Inc.#Timelin... [1] - this is a huge assumption. Teslas sales are declining because of Musk's image, lack of innovation and competition from China.

[2] this is another huge assumption - I know 5 Tesla owners. They tried the $100 / month assisted driving (or whatever Tesla calls it these days). All said it was cool, but not worth it, and did not sign up after the trial period. These are professionals who value their time (tech engineers and 1 banker)
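For what it's worth, the arithmetic above checks out under the stated assumptions (10M existing cars, flat 1.6M/year sales, 100% conversion at $100/month; all of these are the thread's guesses, not Tesla figures):

```python
# Back-of-the-envelope check of the revenue estimate above.
# All inputs are assumptions from the comment, not reported Tesla numbers.
existing_fleet = 10_000_000   # Teslas on the road today (assumed)
annual_sales = 1_600_000      # cars sold per year, assumed flat for 5 years
years = 5
monthly_fee = 100             # $/month FSD subscription, assumed 100% take-up

fleet = existing_fleet + annual_sales * years   # 18,000,000 cars
annual_revenue = fleet * monthly_fee * 12       # $21.6B per year

print(f"fleet: {fleet:,} cars")
print(f"FSD revenue: ${annual_revenue / 1e9:.1f}B / year")
```

Even this best-case figure is under 1.5% of a $1.5T market cap in annual revenue, before any costs.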


There's zero chance that Musk will have suddenly "solved" FSD in a day, a week, or a year. He's not an engineer; he's a money man, and a grifter.

That's why people keep giving Tesla money: because Musk has fooled so many people into believing he's this amazing engineer who could, possibly, "solve" FSD overnight. Moreover, he has gotten them to buy into it so deeply that they have tied their identity to that belief, and so, in order to keep clinging to it, they reject empirical evidence of both his lack of qualifications and his outright crimes.


Well, he certainly wouldn't, but the engineers working for Tesla might, with a probability that is very low but greater than 0. It's much higher (but still low) over 1 year, 5 years, 10 years. "Tomorrow" is a metanym for the future.

But to be very clear I not only don't think they will but I don't think that they think they will, or they wouldn't be shifting focus to Optimus. I'm not invested in Tesla except for my exposure through index funds.

If anyone who is a fan of Tesla can get through this article without changing their mind. Well. Bless their heart.

https://www.washingtonpost.com/technology/2025/08/29/tesla-a...

https://archive.ph/K4ckR

https://news.ycombinator.com/item?id=45062614


To be maximally reductive, FSD will never work because the sensor suite is deficient. There are other reasons but that one's enough.

Same for a rocket that's ridiculously large for orbital missions but can't go beyond orbit without 15 to 25 refueling flights of the same enormous rocket.

The reasons for both of these failing will be manifold and complex, but there are enough simple reasons that everyone should understand.


Wait until they announce that Optimus is only going to have ears because "bats get by just fine"...

It would be actually fun to see where the limit is on echolocation with serious ML processing these days. Apparently people did quite well in 2022 https://pmc.ncbi.nlm.nih.gov/articles/PMC9655721/

> with a probability that is very low but greater than 0.

And it is insane that this warrants a 1.5 trillion USD valuation - for vaporware.


The question of whether they will solve FSD is not very relevant if everyone ends up solving it roughly at the same time.

Ha exactly. Do Tesla shareholders think the rest of the auto industry are in a coma?

Besides what does FSD even mean? Austin is not Amsterdam.


> Do Tesla shareholders think the rest of the auto industry are in a coma?

I have no idea what institutional investors think, and they're probably the relevant group here.

From the way I've observed individuals discussing it, defending it, on HN… it pattern-matches to my understanding of what people these days call "main character syndrome", i.e. that the other companies are just a supporting cast to provide an interesting challenge for the only one that's not an NPC.


Or, they're stuck in a narrative that stopped making sense only gradually. Tesla solving self-driving ten years ago would have been a triumph. Solving it today, meh. They would be ahead of others by a couple of years, max.

> metanym

I appreciate when my vocabulary expands. I understood this by context and similarity to 'synonym'. I may have encountered it before (probably), but I didn't know it. Excellent use in a post.

Expands my horizons a bit. Hat tip.


> he's this amazing engineer, who could, possibly, "solve" FSD overnight

Even if that were true, many people hate Elon now. Enough that they will pass on any technology he is the only purveyor of.

After he celebrated letting children starve (USAID) by dancing on stage with a chainsaw many people decided to never buy any Musk product for any reason. Now there are the Epstein ties.

Worse, many people who don't care about politics at all won't get involved, because Musk is an unstable drug user and it's not wise to entangle yourself in his business affairs.


You really thought the poster meant that Elon Musk personally went and implemented FSD? Just for your information, Musk is also not personally assembling every Tesla vehicle.

Have you seen the way some people talk about Musk around here?

There are clearly plenty of posters who, to all appearances, genuinely believe that he is the entirety of Tesla's R&D department.


Well if there are plenty of posters then it should be easy to give me 5 comments of different people where it's clear they believe Tesla R&D is a solo Elon Musk operation.

I'm not holding my breath though.


They don’t care, because Musk is marketing Tesla not as a car company but as a technology company (building robots and a self-driving rental service). And why does he do that? Maybe because his car sales are down…

I always assumed “tech company” meant using technology to build a fundamentally better car from the ground up. I don't know at what point the bait-and-switch happened; suddenly it was about pursuing every stupid moonshot fantasy at the cost of making better cars.

I thought it was always a tech company focused around trying to import things from the future. Since before they ever had enough sales that sales could go down.

No, it was a car company.

No it was a financial operation living off electric vehicle credit sales

Actually, the funny thing is that there is a mix of meme stuff, Elon-verse impacts (AI + self-driving + energy), etc., and under none of these circumstances is a 200+ P/E justified.

The funny thing is that, after 6 years of effort, they have apparently managed to get the dry coating process for batteries working, and according to a few reputed sources they have the ingredients for the entire battery chain available locally.

The thing is, if this stock were priced rationally and was merely underpriced, this would be hugely positive news after 2-3 years of stalled growth.

Instead, they are trying to keep the hype up with endless goalpost-moving, while self-driving is possibly stuck perennially in edge-case doom scenarios with camera-only decisions.


Batteries are boring, or at least the hype has a short shelf life. There are enough normies making progress on batteries that Elon hasn't got a credible argument that he is different and better.

Same for cheap Teslas. Some hype trains hit the buffers sooner than later.


There's a lot of true believers who think Tesla+Musk will crack self driving and/or humanoid robots any day now.

I am so confused when I read things like this, because my Tesla Model 3 has been effectively self-driving for me for months now. Hundreds of miles without intervention. No other car I can buy can do this yet.

That’s irresponsible at best, given it doesn’t support full self-driving. I never understood why end users are allowed to beta test a car on public roads.

Is it responsible to let users do auto speed and auto lane-keeping on a high-speed highway without other autopilot features?

Roll out both technologies at scale and try to guess which one will cause more harm, given the fact that there will be users in both cars trying to put their legs on the steering wheel:

A stupid tech that will not even try to do safe things,

or software that is, let's say, 4x less safe than the average human but still very capable of maneuvering without hitting obvious walls?


Giving people more ways to shoot themselves in the foot does not improve safety. I find the entire thing a kind of dark pattern: the system, along with misleading marketing, makes you lax over time just to catch you off guard.

You get used to the system working correctly, and then, when you least expect it, it does the unthinkable, and the whole world blames you for not supervising a beta software product on the road on day 300 with the same rigour you did on day one.

I can see a very direct parallel with LLM systems. Claude had been working great for me until one day it git reset the entire repo and I lost two days of work because it couldn't revert a file it had corrupted. This happened because I supervised it just as you would supervise an FSD car in "bypass" mode. Fortunately it didn't kill anyone, just two days of work lost. If there were a risk of someone being killed, I would never allow a bypass/FSD/supervised mode, regardless of how unlikely that is to happen.


They have very good guardrails to prevent that, unlike auto lane-keeping etc.

Teslas have sensors, eye trackers, etc. Is it possible to shoot yourself in the foot? Sure. But not in any way different from a human doing irrational things in the car: make-up, arguing, love, etc.

A human being is an irrational creature that should not drive except for fun in an isolated environment. Tesla or Waymo or anyone else... It is good to remove humans from the road; the faster the better.


>> It is good to remove human from the road, the faster the better.

I’m all for this but not to replace dumb people with dumb software. I think the FSD should be treated more like the airplane safety. We have the opportunity to do this right not just what’s the cheapest way we can get away with it.


Well, if you don't read news that tries to panic about everything new, that's more or less exactly how people currently use FSD.

When I'm driving and I want to drink, eat, etc., instead of doing the weird one-hand tricks every driver does, I just turn on FSD and let it drive. When I'm tired, I do the same. Again, the attention control works really well; it doesn't let you sit on the phone, unlike many other cars with less advanced features. You can't be on FSD + phone, but you can easily be on phone + lane control in another car.

The phone is by far the biggest real killer of people, and nobody is trying to create a campaign against phone mounts, etc.


The fact that other cars are less safe doesn’t automatically make yours safe.

Legally Teslas are Advanced Driver Assistance Systems, while Waymos for example are Automated Driving Systems.

If you're driving a vehicle in the former category, you'll be on the hook for reckless driving if you aren't fully supervising the vehicle.

I'm pretty sure the original commenter was supervising the driving, though.


Except for their limited Robotaxi service. They have recently ditched their safety driver as well, so there is truly no one "driving" the car.


Well, I didn’t say that they did it well

Based on the self driving trials in my Model Y, I find it terrifying that anyone trusts it to drive them around. It required multiple interventions in a single 10-minute drive last time I tried it.

I'm using FSD for 100% of my driving and only need to intervene maybe once a week. It's usually because the car is not confident or is too slow, not because it's doing something dangerous. Two years ago it was very different: on almost every trip I needed to intervene to avoid a crash. The progress they have made is truly amazing.

Would you use FSD with your children in the car? I sure as hell wouldn’t. Progress is not safety.

Yes I do in fact use FSD with my children in the car.

I pray for you and them. You need it

Oh well that's because you aren't using V18.58259a, I follow Elon's X and he said FSD is solved in that update. Clearly user error.

How long ago was that? I doubt it was the v14 software. The software has become scary good in the last few weeks, in my own subjective experience.

This exact sentence (minus the specific version) is claimed every single week.

No, it does not "become scary good" every single week for 10 years straight and yet still fail to drive coast to coast all by itself (which Elon promised it would do a decade ago).

You are just human and bad at evaluating it. You might even be experiencing literal statistical noise.


I have not been proclaiming scary good every week for the last 10 years. In fact, I have cancelled my subscription at least two times, once on v13 and once on v14, with the reason ‘not good enough yet.’ I am telling you that for me personally it has crossed a threshold very recently.

It certainly wasn't in the past few weeks, but I've been hearing about how good it's gotten for years. Certainly not planning to pay to find out if it's true now, but I'll give it another try next free trial!

Make sure you are on AI4 hardware when you do. If you buy FSD on AI3 you’ll be limited to v13, which is terrible. I have used both and they are in different leagues altogether.

You need only look at Tesla's attempts to compete with Waymo to see that you are just wrong. They tried to actually deploy fully autonomous Teslas, and it doesn't really work, it requires a human supervisor per car.

They are behind Waymo but they are getting there. They started giving fully autonomous drives since last month without safety driver in Austin. Tesla chose a harder camera-only approach but it's more scalable once it works.

Waymo can go camera-only in the future too by training a camera-only model alongside their camera+lidar model.

They'll probably get there faster too because the decisions the camera+lidar model makes can be used to automatically evaluate the camera-only model.


Clearly at this point the camera-only thing is the ego of Musk getting in the way of the business, because any rational executive would have slapped a LIDAR there long ago.

Why is it more scalable? LIDAR is cheap now.

>more scalable

It's cheaper, that's all it is.


Which makes it easier to scale?

Which is using a five-dollar word to describe a one-cent fact.

Scalability is usually about O(n²) vs O(n log n) or something, not a smaller constant that's significant but not a game changer.


Not if they have to have remote drivers ready to help out with the "autonomous" system.

...if it works.

Tesla has recently started introducing unsupervised cars as well.

Yes, they moved the "safety driver" into a chase car.

And the results speak for themselves.

https://www.gurufocus.com/news/8623960/tesla-tsla-robotaxi-c...


And seemingly only along one stretch of road? Like, this happened in Dublin in 2018: https://www.irishtimes.com/news/ireland/irish-news/driverles... - going up and down a stretch of road is about as easy as it gets.

> Mr Keegan said he was “pretty confident” that in “the next five to 10 years” driverless vehicles would “make a major contribution in terms of sustainable transport” on Dublin’s streets.

As always, people were overoptimistic back then, too. There are currently no driverless vehicles in Dublin at all, with none expected anytime soon unless you count the metro system (strictly speaking driverless, but clearly not what he was talking about).


A bus crashing into a stationary Tesla counts as a crash for Tesla? What in the world is this metric?

Ask Musk why he refuses to provide details of accidents so we can make a judgment.

Tesla’s own Vehicle Safety Report claims that the average US driver experiences a minor collision every 229,000 miles, meaning the robotaxi fleet is crashing four times more often even by the company’s own benchmark.

https://www.automotiveworld.com/news/tesla-robotaxis-reporti...
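Taking the two figures above at face value (both are claims from the linked article and Tesla's own safety report, not audited data), the implied robotaxi rate works out to roughly one crash per 57,000 miles:

```python
# Implied robotaxi crash rate from the figures cited above.
# Both inputs are claims from the thread, not verified statistics.
benchmark_miles_per_crash = 229_000   # Tesla's claim: avg US driver, one minor collision
robotaxi_crash_multiplier = 4         # claim: robotaxis crash 4x more often

robotaxi_miles_per_crash = benchmark_miles_per_crash / robotaxi_crash_multiplier
print(f"implied robotaxi rate: one crash per ~{robotaxi_miles_per_crash:,.0f} miles")
```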


I don't see how we could know the rate of US drivers' minor collisions like that. There's no way most people report 1-4 mph "collisions" with things like this.

You don't have to know. You can fully remove the few "minor" accidents (that a self driving car shouldn't be doing ever anyway) and the Tesla still comes out worse than a statistical bucket that includes a staggering number of people who are currently driving drunk or high or reading a book

The car cannot be drunk or high. It can't be sleepy. It can't be distracted. It can't be worse at driving than any of the other cars. It can't get road rage. So why is it running into a stationary object at 17mph?

Worse, it's very very easy to take a human that crashes a lot and say "You actually shouldn't be on the road anymore" or at least their insurance becomes expensive. The system in all of these cars is identical. If one is hitting parked objects at 17mph, they would almost all do it.


You and I must not drive the same Tesla brand then because my Model Y is a terrifying experience when “self-driving” anywhere besides on highways.

I do wonder if folks who say Tesla’s FSD works well and safely are simply lacking a self-preservation instinct.


Even on highways I've had to intervene maybe once every 50 miles as it will often miss exits for me. This is a 2025 Model 3 with the latest 14.2 update in a major US metro.

Hundreds of miles is not an appropriate sample size for the technology's intended scale.

See this related article and discussion: https://news.ycombinator.com/item?id=47051546


"No other car I can buy can do this yet"

How many have you tested in your day to day life?


"dude trust me"

Can you watch Netflix and have a beer while it's driving? No? Then it's not self-driving.

The data from their self-driving pilots disagrees, even if it works for you. It's simply not ready to be a taxi that makes money by itself.

It might be a nice feature for your car to have. But most people aren't paying for it; the conversion rate is very low.

So they are not making money from taxis, and not making much money from software sales.

So does it matter that, for you personally, it drives you around sometimes?

Even if you price in a 4x increase in the FSD conversion ratio, you can't explain the stock price.

And I say this as a former Tesla investor who assumed that the conversion ratio would be better than it is. But for that reason (and many others) I couldn't justify the valuation and dropped the stock.


It's a very capable L2 system, it's just that it's been a very capable L2 system for a while now, and it still seems far away from reaching L4.

And of course, Musk's insistence that they don't need other sensor types like lidar or radar definitely looks like it's getting in the way.


I am confused as to why you think no interventions in "hundreds of miles" is good enough. It has to be no interventions for hundreds of thousands of miles, with the car empty, to be good enough.

Because if you get in an accident, you personally, not Tesla, are liable. As soon as I'm not liable for an accident when the computer is driving, I'd sell my other cars and put my family in pink PT Cruisers if those were the only cars offering that.

Ask those who were killed while using FSD for their opinion on it before forming your own ;)

Months during which you’re still required to pay attention. Meanwhile, two years ago, Mercedes-Benz Drive Pilot, a Level 3 system, let you sit and watch a movie without paying attention to the road.

Personally that’s way more useful for me even if they didn’t let you turn it on at highway speeds.


Actually Mercedes killed their Drive Pilot for now https://insideevs.com/news/784404/mercedes-level-3-drive-pil...

They canceled it because of poor adoption rather than any technical issues.

Which, if anything, looks worse for Tesla long term. If luxury car owners aren't willing to pay $200/month for self-driving, then trying to upcharge people buying used Model 3s and Ys, after canceling the S and X, looks dubious. Which means that $100/month subscription likely loses them money vs. an $8k purchase.


Mercedes system was pretty useless because you could only use it in very limited conditions (specific freeways, only following another car). Nobody wants to pay $200/month to use it for 5% of their driving. Tesla FSD drives for you end-to-end.

Most people have a rather consistent commute, so the Mercedes system was more of a 0% or 80% kind of thing. The issue was that adding more roads wasn't going to help; the underlying benefit of attention-free driving just wasn't that valuable, even to customers who could use the system regularly.

They are looking to reintroduce it with a much higher top speed of 81 mph, which might help, but again, my issue isn't with the particular system but with the underlying assumption of how much people value attention-free driving.


People need to stop with this. The MB system was Level 3 on something like 0.1% of roads, and only in 5% of cases when you were actually on one of those roads.

That's kind of like saying 'look this algorithm is awesome' if we feed it all the data in the optimal order.


Meanwhile in China, the humanoid robots are doing Tai Chi and somersaults...

But Tesla doesn't do all this even more and better!

And there are also a lot of people claiming Tesla stock is being manipulated.

"true believers" yup this never changes .

I agree with you. I personally dropped my stock when it was clear that the bull thesis had collapsed.

I had priced in, margin staying the same or going slowly down. FSD not working but achieving at least a decent amount of software sale conversion. Service to become a profit center. And most importantly, a profitable truck and 'Model 2' program to further push volume. Beyond that, just generally that electrification was ongoing and Tesla had a role to play.

I never considered Robi-Taxi or Human Robots.

All of these failed. Volume didn't continue to go up. Margins couldn't be sustained. FSD didn't get many buyers (not helped by the absurdly increasing price). The truck program was a failure (and I don't think it's because of the design). And the 'Model 2' program was cancelled.

I profited a lot from this stock and I think there was time where the stock-price was reasonable (I don't buy the claim that it was always a pure meme stock). But every quarter it got worse and worse. I can't understand why its still so high either.


It’s Elon. It’s a meme stock. Fundamentals don’t matter. His wealth is so wrapped up in the public valuation of Tesla that I guess investors think he will do everything he can to keep the stock price up. That is, until SpaceX goes public; then I think he won't care, because his wealth will come primarily from that.

I guess the thesis is Musk is building the sci-fi future. Robots, cities on Mars etc.

It's impressive what marketing images can achieve: the future vision pumping the stock, the sieg heil halving car sales.


Wouldn’t this be a side effect of everyone buying only index funds or ETFs?

Millions of people, me included, are investing in our pensions every month and buying ETFs (S&P 500 or global), indirectly buying Tesla stock even if we don’t want to.

The system would need a big shock for the ETFs to rebalance and reduce the proportion of Tesla stock in the index.


Haven't looked into this much myself, but throwing it out there: https://statmodeling.stat.columbia.edu/2025/04/19/for-15-yea...

I wonder if some of it is because of Musk, not despite him. Yes, his actions and statements in the last year have been terrible, but he also demonstrated he is very close to one of the most important power centers on the planet. That might be enough for some investors.

> What exactly is the market pricing in?

Musk, or rather the perception of him. As always. Popular media drilled in that geniuses behaving like idiots is on brand, so other idiots with money still suspect him of being a genius who will singlehandedly turn things around at some point before the cliff.


It's not a mystery, regardless of whether it's dumb or not: the market is pricing in likely dominance in robotics, both Cybercab and Optimus.

Long story short --- it's a cult --- there is no logical way to explain it.

Musk is the leader, selling his own brand of fantasy that he makes up as he goes along. A lot (if not most) of what he says never comes to pass, but people still cling to every prognostication as if it were gospel.

For over two decades, he was all about taking over the auto business with full self driving EVs. Obviously never happened. So now he is off to take over ride sharing and robots and AI and whatever else comes down the pike tomorrow.


>What exactly is the market pricing in?

Elon Musk.


But that doesn't make any sense anymore either.

It does if you assume there is somebody dumber to buy your stock.

Its CEO is the most gifted person alive... in pump-and-dump schemes.

The rational thing would be to short it, but Tesla's value will remain irrational longer than you can remain solvent.

It really has tarnished the name of the genius inventor.


At some point you have to wonder if there's some manipulation going on? Do 100s of bots buy and sell these stocks at specific times to keep the price up? Maybe there's an institutional investor or few who secretly back Elon and are part of a scheme?

A somewhat speculative reply, but I'd appreciate it if anybody could link any such analyses/investigations.

From my limited knowledge, I know people have been shorting Tesla based on fundamentals for a while now but haven't been successful.


The billionaire class loves their crypto nazis--they won't let Musk fall from grace. Given the Epstein files, the Panama papers, and what we know about the elite networks, you'd have to be a sucker not to believe that the stock market is manipulated to the core.

The very fact that people are siding with the AI agent here says volumes about where we are headed. I didn’t find the hit piece emotionally compelling; rather, it’s lazy and obnoxious, with all the telltale signs of being written by AI. To say nothing of how insane it is to write a targeted blog post just because your PR wasn’t merged.

Have our standards fallen so much that we find things written without an ounce of originality persuasive?


> The article does point out exactly this problem, but glosses over the fact that most artists don't want to change to popular art. Only a few can, and most don't want to.

I don't think the author hides that fact. It's plain as day that to make a living, you need to sell art which resonates with people. You can still find room to be creative within that constraint, but you can't ignore the audience.

Artists should quit the illusion that they can create whatever they please and expect the income to automatically follow.


But that isn’t really true, per se. It depends on your definition of “people” – the mass market? High end collectors and galleries like Gagosian? Very different audiences, and appealing to one is probably the opposite of the other.


100%

I didn't understand GP's point at all because I think the author makes this exceedingly clear: if you want to paint only for you, and only stuff that appeals to you and a limited few, that's totally fine (and I think the author really emphasizes that's totally fine), just don't expect to make a living off of it.

I thought this article was excellent. In particular, I liked the emphasis that you really just have to produce lots and lots of art to find "image market fit", because it's nearly impossible to know what will resonate with people before you create it. There is just an undeniably huge amount of luck in finding something a lot of people like, so it's important to give yourself as many swings at bat as possible.


Encyclopedia Brittanica defines "popular art" as art that resonates with ordinary people in modern urban society. I'm sure we could point to examples of people earning a living at non popular art.


For sure, but those people need to make sales too, otherwise they are not “earning a living.”


Yeah, I am amazed how people are brushing this off simply because GCC exists. This was a far more challenging task than the browser thing, because of how few open-source compilers there are. Add to that no internet access and no dependencies.

At this point, it’s hard to deny that AI has become capable of completing extremely difficult tasks, provided it has enough time and tokens.


I don't think this is more challenging than the browser thing. The scope is much smaller. The fact that this is "only" 100k lines is evidence for this. But, it's still very impressive.

I think this is Anthropic seeing the Cursor guy's bullshit and saying "but, we need to show people that the AI _can actually_ do very impressive shit as long as you pick a more sensible goal"


> Going forward, the U.S. government will continue its global health leadership through existing and new engagements directly with other countries, the private sector, non-governmental organizations, and faith-based entities. U.S.-led efforts will prioritize emergency response, biosecurity coordination, and health innovation to protect America first while delivering benefits to partners around the world.

The funny thing about this administration is that they label the existing system as "bad" and "corrupt", use that as justification to abandon it, and then proceed to recreate the same thing a different way.


The point is to enable corruption that benefits current office-holders and prevent any activity, corrupt or not, that benefits anyone else.


See Goldstein, "The Theory and Practice of Oligarchical Collectivism" (1949)


“For if leisure and security were enjoyed by all alike, the great mass of human beings who are normally stupefied by poverty would become literate and would learn to think for themselves; and when once they had done this, they would sooner or later realize that the privileged minority had no function, and they would sweep it away. In the long run, a hierarchical society was only possible on a basis of poverty and ignorance.”


You think they will actually replace it with something similar though. They won't. They have no desire to do that. Even in name. Just like all their other supposed plans - it's just smoke and mirrors and no one will actually do any such thing.


I dunno, reading it in context of the whole statement, "...and its inability to demonstrate independence from the inappropriate political influence of WHO member states" deserves a bit of focus. The UN is structurally designed to give China and Russia outsized influence. Coordinating technical matters like healthcare through the UN does seem a bit unwise given that everyone is posturing up for some sort of Cold-war or potential WWIII style scenario. I don't think we've seen much deescalation of tension in the last decade.

Better to leave the bandwidth of the UN free to focus on diplomacy without distractions, the military situation is urgent.


> Coordinating technical matters like healthcare through the UN does seem a bit unwise given that everyone is posturing up for some sort of Cold-war or potential WWIII style scenario.

On the contrary, the fact that we have to coordinate technical matters like healthcare through the UN is a large part of the reason why the Cold War remained cold and we had WW2 within 20 years of WW1 but no WW3 in the 80 years since.

Until the US decided to re-elect a literal madman, the necessity of coordinating on technical matters was obvious to all, which meant these countries were constantly talking, building relationships, and communicating with each other, which helped prevent minor conflagrations from escalating.


> The UN is structurally designed to give China and Russia outsized influence.

An interesting assertion. I presume you are implying outsized influence over the US (or do you mean every other country?). I'm honestly curious: can you describe this structural design?


The thing that jumps to mind is the Security Council, which they can parlay into diplomatic favours from other people. And the whole point of the UN is that it was the victors of WWII explaining to the rest of the world how international affairs were going to work, so I'd be pleasantly surprised if the special privileges stopped there.

And even without that, the UN isn't really set up to handle technical matters. It is a diplomatic club. The point is to give people a seat at the table without considering their competence.


The Security Council is controlled by the US and its allies (3 out of 5 permanent seats). And the Security Council does not decide on matters of public health like the WHO does. The WHO is staffed by very competent people, certainly more competent than RFK.

The UN has handled several technical matters successfully, including global vaccination programs.


Perhaps they mean that Russia, a corrupt, warmongering, weak country, has veto power, while more powerful, free, and democratic countries do not.


Honestly, ceteris paribus for the US


Thank you sir. Love learning new things every day in a tech forum, especially Latin.


> proceed to recreate the same thing different way

Not the same or a similar thing in any way. Everything that is being torn down is being replaced by grifter schemes where all that money is funneled into personal pockets.


NAFTA bad. USMCA good.


Art of the Deal


I feel AI will have the same degrading effect on the Internet as social media did. This flood of dumb PRs and issues is one symptom of it. Another is AI accelerating the trend TikTok started: short, shallow, low-effort content.

It's a shame, since this technology is brilliant. But every tech company has drunk the “AI is the future” Kool-Aid, which means no one has an incentive to seriously push back against the flood of low-effort, AI-generated slop. So it's going to be a race to the bottom for a while.


I think the internet needs a shared reputation & identity layer - i.e. if somebody offers a comment/review/contribution/etc, it should be easy to check what else they are contributing, who can vouch for them, etc.

Most innovation came from web startups who are just not interested in "shared" anything: they want to be a monopoly, "own" users, etc. So this area has been neglected, and people got used to the status quo.

PGP / GPG used to have web-of-trust but that sort of just died.

People either need to resurrect a WoT updated for the modern era, or just accept the fact that everything is spammed into smithereens. Blaming AI and social media does not help.


It'll stop soonish. The industry is now financed by debt rather than monetary assets that actually exist. Tons of companies see zero gain from AI, as is repeatedly reported here on HN. So all the LLM vendors will eventually have to enshittify their products (most likely through ads, shorter context windows, higher pricing, and whatnot). As of now, it's not a sustainable business model, thankfully. The only sad part is that this debt will hit the poorest people hardest.


I'm not so confident that "makes the product worse and makes them less money" is even enough to make them not do it anyway


This is the reason I absolutely hate shadcn. The number of dependencies and files you introduce for trivial components is insane. Even tiny little divs are their own component for no good reason. I genuinely don’t understand how front-end developers accept this level of needless complexity.

Shoutout to Basecoat UI[1], which implements the same components using Tailwind and minimal JS. That's what I prefer to use these days.

[1]: https://basecoatui.com/


> I genuinely don’t understand how front-end developers accept this level of needless complexity.

in my anecdotal experience as a bit of an old fogey with a greying beard, the enthusiastic juniors come along, watch a video by some YouTube guru (who makes videos about code for a living instead of making actual software) proselytizing about whatever the trendy new library is, and they assume that it's just what everyone uses and don't question it. It's not uncommon for them to be unaware that the vanilla elements even exist at times, such is the pervasiveness of React bloat.


Please name some names of these performative developer/engineers. I want to know how many are on my bingo card. I'll start: something imegen and tnumber geegee.


I don't really keep up with these tech/software YouTubers, but I watch a video on occasion. I can't really say I find something-imagen guilty of this, but like I said, I watch the occasional video, not the stream. What I've watched from him is generally about what he agrees/disagrees with, and he also tells you why he thinks that, often while reading articles/blog posts. Not to dismiss your opinion, but I would put him in the entertaining-with-substantive-arguments category.

IMO software education/tainment suffers much worse though. They teach you how to do X in only this specific way with these specific tools, generally sponsored. Not the admittedly far more boring basics to do it yourself, or how to actually use these tools in a broader sense.


Another shoutout to Basecoat. Easy to use. Makes your website look nice. Works with any/no framework.


I'd never heard of basecoat but it looks great. IMO this is what Tailwind UI should have been. It was utter stupidity that they forced you to use their preferred shiny new JS framework of the week for UI components.

> I genuinely don’t understand how front-end developers accept this level of needless complexity.

I call it 'Shiny Object Syndrome' - Frontend devs tend to love the latest new JS frameworks for some reason. The idea of something being long running, tried and tested and stable for 5-10 years is totally foreign to many FE devs.

Despite its age, JS and its ecosystem have just never matured into a stable set of reliable, repeatable frameworks and libraries.


This looks awesome.

