Hacker News

Would you feel the same if, in 2030, all the actions you describe worked most of the time but still produced questionable output requiring time to verify and fact-check, due to the probabilistic nature of the LLM engine? This is unsolvable with LLMs. I don't want an embedded or agentic AI, but do give me the option to pick a model of my choice and accept the risks when I want to. I don't want tainted generated summaries, replies, or code in certain critical areas.

