
Has it been shown or admitted that the quotes were hallucinations, or is it the presumption that all made up content is a hallucination now?


Another red flag is that the article used repetitive phrases in an AI-like way:

"...it illustrates exactly the kind of unsupervised output that makes open source maintainers wary."

followed later on by

"[It] illustrates exactly the kind of unsupervised behavior that makes open source maintainers wary of AI contributions in the first place."


I used to be skeptical that AI generated text could be reliably detected, but after a couple years of reading it, there are cracks starting to form in that skepticism.


Gen AI only produces hallucinations (confabulations).

The utility is that the inferred output tends to be right far more often than wrong for mainstream knowledge.


You could read the original blog post...


How could that prove hallucinations? At best it could prove that they are not. If the quotes are in the original post, then they are not hallucinations. If they are not in the post, they could have been produced by something that is not an LLM.

Misquotes and fabricated quotes existed long before AI, and indeed long before computers.


[dead]


You've deeply misunderstood their argument in some way I can't quite figure out.

It's simple. We know the quotes are fake, but we don't know for sure if they're hallucinations. The blog post does not resolve this uncertainty.

And yes other answers are reasonably plausible.

You said in another comment that they're "retreating" and "refusing to read" and... no. Your insults are not justified at all.


There is no goalpost moving here.

I read the article.

My claim is as it has always been: even if we accept that the misquotes exist, it does not follow that they were caused by hallucinations. To tell, we would still need additional evidence. The logical thing to ask is: has it been shown or admitted that the quotes were hallucinations?


[flagged]


I think you're missing their point. The question you're replying to is: how do we know that this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's razor, but still, they're not claiming the quotes could be legit.


[dead]


You seem to be quite certain that I had not read the article, yet I distinctly remember doing so.

By what process do you imagine I arrived at the conclusion that the article suggested the published quotes were LLM hallucinations, when that was not mentioned in the article title?

You accuse me of performative skepticism, yet all I think is that evidence is better than assumptions, and that it is better to ask whether that evidence exists.

It seems a much better approach than making false accusations based on your own vibes. I don't think Scott Shambaugh went to that level, though.


https://news.ycombinator.com/item?id=47026071

https://arstechnica.com/staff/2026/02/editors-note-retractio...

>On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.


There is a third option: The journalist who wrote the article made the quotes up without an LLM.

I think calling the incorrect output of an LLM a “hallucination” is too kind on the companies creating these models even if it’s technically accurate. “Being lied to” would be more accurate as a description for how the end user feels.


The journalist was almost certainly using an LLM, and a cheap one at that. The quote reads as if the model was instructed to build a quote solely using its context window.

Lying is deliberately deceiving, but yes: to a reader, who is in effect a trusting customer paying with the part of their attention diverted to advertising, broadcasting a hallucination is essentially the same thing.



