I'm curious how the model will handle intellectual tasks it can't resolve by deferring back to the user. Today most LLMs will give multiple answers to "what's the meaning of life?" and immediately hand the question back to the user. It would be interesting if they stayed with the question longer, dug deeper into the contradictions, and eventually admitted they don't know.
