
Y'all, we need to get away from calling everything written by an LLM "slop". To me, slop is text whose purpose is padding content or getting clicks. Whether this was written partly or entirely by an LLM, or by a human who sounds like one, the content here was interesting to think about, well organized, and easy to read. Maybe I'm the only person reading past the word choice and grammar to extract the ideas from the article, instead of playing a game of "human or AI" with every piece of writing I see.


If something is not worth writing, it is not worth reading.


On one hand, yes: expanding bullet points to slop makes things strictly worse.

On the other hand, if one uses AI but keeps the added content at zero (e.g. grammar fixes for non-native speakers) or even negative ("compress this repetitive paragraph"), I think it can be a useful net productivity boost.

Current AI can't really add information, but a lot of editing is subtracting, and as long as you check the output for hallucinations (and prompt-engineer a lot, since models like to add), imo LLMs can be a subtraction force multiplier.

Ironically: anti-slop; or perhaps, fighting slop with slop.


People are complaining about this article because of its lack of density.


Well, that's a completely wrong take.


For whatever it's worth, I felt that regardless of whether it was written by a human, or AI, or AI-then-human, it was poorly written. I was going to dismiss it until I saw the links to the papers at the bottom, which I found pretty interesting and well worth the read.

The essay kind of works for me as an impressionistic context for the three papers, but without those three papers I think it confuses more than it helps.


I would say that many of the sentences in this essay are not worth reading. Most of them are of the form described, i.e. "not x, but y".

E.g.:

> This suggests that the EM structure isn’t just an analogy — it’s the natural grain of the optimization landscape

I don't care if someone uses an LLM. But it shows a lack of care to do it in this blatant a way without noting it. E.g., at work I'll often link the prompt-response in docs as an appendix, but I will call out the provenance.

If you find those sentences helpful, great! I find they decrease the signal in the article and make me skim it. If you're wondering why people complain, it's because sharing a post intended to be skimmed, without saying "hey, you should skim this", is a little disrespectful of the reader's time.


> This suggests that the EM structure isn’t just an analogy — it’s the natural grain of the optimization landscape

As someone in the field, this means nothing to me, and I'm very suspicious of the article as a whole because it has so many sentences like this.



