Hacker News

It doesn't need to make up any.


It does: hallucination is a fundamental property of the current architecture, one that most everyone in AI is working to reduce.

If you don't want hallucinations, you can't use an LLM at the moment. People are using LLMs anyway, so giving the model grounding data, so that it hallucinates less, is the only practical answer to the problem they have.

If you see another approach that will work within the current system of search engines using AI, please propose it.

Don't take this as me defending anything. It's the reality of the current state of the tech, and the current state of search engines, which is the context of this thread. Pretending that search engines don't use LLMs that hallucinate data doesn't help anyone.

As always, we work within the playground that google and bing give us, because that's the reality of the web.


Use a database if you want something that doesn't make things up, not a neural net.


I didn’t choose to use a neural net; search engines, which are arguably critical and essential infrastructure, rug-pulled.


I'm on your side. Good advice for everyone.


But completely irrelevant to this thread, unrelated to the reality of search engines, and does nothing to help the grandparent.


Given how LLMs work, hallucinations still occur. If you don't want them to do so, give them the facts and tell them what (not) to extrapolate.
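The "give them the facts" approach above amounts to grounding the prompt. A minimal sketch of what that looks like, with the function name, wording, and example facts all illustrative rather than from any particular API:

```python
# Hedged sketch: build a "grounded" prompt that supplies the facts
# inline and tells the model not to extrapolate beyond them.
# All names and prompt wording here are illustrative assumptions.
def build_grounded_prompt(question: str, facts: list[str]) -> str:
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using ONLY the facts below. "
        "If the facts are insufficient, reply 'I don't know'. "
        "Do not extrapolate beyond them.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical usage: the fact list would come from a retrieval step
# (e.g. search results), not be hard-coded like this.
prompt = build_grounded_prompt(
    "What year was the product released?",
    ["The product was released in 2019."],
)
print(prompt)
```

This doesn't eliminate hallucination, but constraining the model to supplied facts and giving it an explicit "I don't know" escape hatch is the practical mitigation being described.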


How to draw an owl:

1. Start by drawing some circles.

2. Erase everything that isn't an owl, until your drawing resembles an owl.


Simpler: if you don't want them to do so, don't engage the LLM.



