
Not an evangelist for AI at all, I just love it as a tool for my creativity, research and coding.

What I’m saying is that there should be a disclaimer: hey, we’re testing these models for the average person, who has no idea about AI. People who actually know AI would never use them this way.

A better idea: educate people. Add a “Here’s the best way to use them, btw…” section to the report.

All I’m saying is, it’s a tool, and yes you can use it wrong. That’s not a crazy realization. It applies to every other tool.

We knew that the hallucination rate for GPT-4o was nuts. From the start. We also know that GPT-5 has a much lower hallucination rate. So there are no surprises here. I’m not saying anything groundbreaking, and neither are they.


