
This is already the case: SEO content, sponsored comparison sites, influencer marketing are all about subtle framing. LLMs just supercharge the problem by making it easier and cheaper to scale.

The real issue isn't that LLMs lie; it's that they emphasize certain truths over others, shaping perception without saying anything factually incorrect. That makes them harder to detect than traditional ads or SEO spam.

Open-source LLMs and transparency in prompt+context will help a bit, but long term we probably need something like reputation scores for LLM output, tied to models, data sources, or even the prompt authors (rough sketch below).
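
To make that concrete, here's a minimal sketch of what such a score could look like. Everything here is made up for illustration: the reputation tables, the 0.3 default prior, and the weights are all assumptions, not a real system.

    from dataclasses import dataclass

    @dataclass
    class Provenance:
        model_id: str
        data_sources: list[str]
        prompt_author: str

    # Hypothetical reputation tables; in practice these would be
    # maintained by auditors or some community process.
    MODEL_REP = {"open-model-v1": 0.8}
    SOURCE_REP = {"example.org": 0.9, "sponsored-blog.example": 0.2}
    AUTHOR_REP = {"alice": 0.7}

    def reputation_score(p: Provenance) -> float:
        """Weighted average of model, source, and author reputation.
        Unknown entities default to a low prior (0.3)."""
        model = MODEL_REP.get(p.model_id, 0.3)
        sources = [SOURCE_REP.get(s, 0.3) for s in p.data_sources] or [0.3]
        author = AUTHOR_REP.get(p.prompt_author, 0.3)
        # Weights are arbitrary; picking them fairly is the hard part.
        return 0.4 * model + 0.4 * sum(sources) / len(sources) + 0.2 * author

    print(reputation_score(Provenance(
        "open-model-v1", ["example.org", "sponsored-blog.example"], "alice")))

The scoring math is trivial; the real difficulty is attaching trustworthy provenance to output in the first place, since anyone gaming the system will simply claim reputable sources.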


