
I think the general term for this is "context poisoning", which is related to but slightly different from what the poster above you is saying. Even with a "perfect" context, the LLM still can't infer intent.

