Hacker News

This article is garbage. I was half expecting, or hoping, for a nuanced analysis of regressions manifested in a specific leading model as a result of purported "upgrades", but instead found an idiot who doesn't understand how LLMs work, or even seem to care, really.

Idiots like this seem to want a robot that does things for them instead of a raw tool that builds sometimes useful context, and the LLM peddlers are destroying their creations to oblige this insatiable contingent.





A "robot that does things" is the overpromise that doesn't deliver.

I actually agree with the article that non-determinism is why generative AI is the wrong tool in most cases.

In the past, the non-determinism came from the user's inconsistent grammar and the game's poor documentation of its rigid rules. Now the non-determinism comes 100% from the AI no matter what the user does. This is objectively worse!


The different flavors of non-determinism are interesting.

There’s the chat-vs-API flavor: the same model answers differently depending on the input channel.

There’s also the statistical flavor. Once in a rare while, a response will be gibberish. Same prompt, same model, same input mode. 70% of the time, sane and similar answers. 0.01% of the time, gibberish. In between is a sliding scale — with a ‘cursed middle’ of answers that are mostly viable except for one poisoned thing that’s hard to auto-detect…
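The statistical flavor falls straight out of how tokens are sampled. A toy sketch (not any provider's actual implementation — the logits and vocabulary here are made up for illustration): softmax over logits with a temperature, then a random draw. A dominant token wins most of the time, but a long-tail "gibberish" token still gets picked at some tiny rate.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Draw one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for a 4-token vocabulary: one dominant token,
# two plausible alternatives, and a long-tail "gibberish" token.
logits = [5.0, 3.0, 2.0, -2.0]

# Same "prompt" (same logits) sampled repeatedly: mostly token 0,
# sometimes 1 or 2, and very rarely token 3.
rng = random.Random(0)
counts = [0, 0, 0, 0]
for _ in range(10_000):
    counts[sample_token(logits, temperature=1.0, rng=rng)] += 1
```

Raising the temperature flattens the distribution and fattens that tail; lowering it toward zero approaches greedy decoding, which is deterministic for fixed logits — but real serving stacks add further noise (batching, hardware non-associativity), so even temperature 0 isn't a guarantee.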



