Hacker News | mrwrong's comments

have you tried using $NEWEST_MODEL?

It’s because, depending on the person, the newest model crossed the line into being useful for them personally. It’s not that a single new version crosses the line for everyone; it happens gradually. With each version, more and more people come into the fold.

For me, Claude Code changed the game.


get ready to tick those numbers over to 2026!

> The variance is great

this strikes me as a very important thing to reflect on. when the automobile was invented, was the apparent benefit so incredibly variable?


> was the apparent benefit so incredibly variable?

Yes, lots of people were very vocally against horseless carriages, as cars were called at the time. Safety and public-nuisance concerns were widespread: the cars were noisy, fast, smoky, and unreliable. Old newspapers are filled with opinions about this, from people afraid of horseless carriages spooking others' horses, and so on. The UK restricted the adoption of cars at one point, and one Swiss canton even banned cars for a couple of decades.

Horseless carriages were commonly ridiculed as being just for "reckless rich hobbyists" and the like.

I think the major difference is that cars produced immediate, visible externalities, so it was easy for opposition to focus on public safety in public spaces. In contrast, AI's externalities are less physically visible, though they are as important as, or maybe even more important than, the ones cars introduced.


Is this a trick question? Yes, it was. A horse could go over almost any terrain, while a car could only really go over very specific terrain designed for it. We had to terraform the world in order to make the automobile so beneficial, and that terraforming had many unintended consequences. It's actually a pretty apt comparison to LLMs.

agreed. we should instead be sneering at the AI critics because "you're holding it wrong"

you are saying X, but a completely different group of people didn't say Y that other time! I got you!!!!

It’s fair to call out that both aspects are two sides of the same coin. I didn’t try to “get” anyone.

um, no, it's not. you have fallen into the classic web-forum trap of analyzing a heterogeneous mix of people with inconsistent views as one entity that should have consistent views

do you get scared when you hear other ghost stories too?

> Wouldn't the best way to spend a life be to go all in on one thing?

no


> a technical sense you are a stochastic parrot.

I am not. I'm sorry you feel this way about yourself. you are more than a next token predictor


If I am more than a next token predictor… doesn’t that mean I’m a next token predictor plus more? Do you not predict the next word you’re going to say? Of course you do; you do that and more.

Humans technically ARE next token predictors, and we are also more than that. That is why calling someone a next token predictor is a mischaracterization. I think we are in agreement; you just didn’t fully understand my point.

But the claim that LLMs are next token predictors is the SAME mischaracterization. LLMs are clearly more than next token predictors. Don’t get me wrong, LLMs aren’t human… but they are clearly more than just next token predictors.

The whole point of my post is that the term stochastic parrot is weaponized to dismiss LLMs and to mischaracterize and hide the current abilities of AI. The parent OP was using the technical definition as an excuse to wield the word for his own ends, namely being “against” AI. It’s a pathetic excuse. I think it’s clear the LLM has moved beyond a stochastic parrot, and there are just a few stragglers left who can’t see that AI is more than that.

You can be “against” AI, that’s fine, but don’t mischaracterize it. Argue and make your points honestly and in good faith. Using the term stochastic parrot, and even what the other poster did in attempting to accuse me of inflammatory behavior, is just tactics and manipulation.
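For what it's worth, "next token prediction" in the technical sense is just a sampling loop over a learned distribution. Here's a toy sketch of that loop; the bigram table is made-up illustrative data standing in for an LLM's learned model, not anything real:

```python
import random

# Hypothetical "model": a bigram table mapping a token to possible next
# tokens. In a real LLM this would be a learned probability distribution
# over the whole vocabulary.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["<eos>"],
    "ran": ["<eos>"],
}

def generate(start, max_tokens=10, seed=0):
    """Autoregressively predict one token at a time until <eos>."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], ["<eos>"])
        nxt = rng.choice(candidates)  # the "stochastic" step
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```

The mechanism says nothing by itself about what capabilities emerge from it, which is exactly the point of contention above.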


> But the claim that LLMs are next token predictors is the SAME mischaracterization. LLMs are clearly more than next token predictors. Don’t get me wrong, LLMs aren’t human… but they are clearly more than just next token predictors.

it simply isn't. I find this argument by analogy very lazy. you need to do the work to show what that "and more" is and how it's the same for humans and LLMs. you can't just hand-wave that it feels the same and leave it at that


> It’s surreal to read claims from people who insist we’re just deluding ourselves, despite seeing the results

just imagine how the skeptics feel :p


"X has a pasta dish" is an easily verifiable factual claim. "The pasta dish at X tastes good and is worth the money" is a subjective claim, unverifiable without agreeing on a metric for taste and taking measurements. they are two very different kinds of disagreements
