
I've been using LLMs for a long time, and I've thus far avoided memory features due to a fear of context rot.

So often, my solution when I'm stuck with an LLM is to wipe the context and start fresh. I'd be afraid that the hallucinations, dead ends, and rabbit holes would be stored in memory and hard to dislodge.
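
To make concrete what I mean by "wipe the context": a minimal sketch, assuming a chat API that takes the full message history on each call. call_llm is a hypothetical stand-in for whatever provider you use; everything here is just plain Python.

    def call_llm(messages):
        """Hypothetical stand-in: send the history, get a reply string back."""
        raise NotImplementedError

    SYSTEM = {"role": "system", "content": "You are a helpful assistant."}

    # Without a memory feature, the context lives only in this list.
    messages = [SYSTEM]

    def ask(prompt):
        messages.append({"role": "user", "content": prompt})
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        return reply

    def wipe_context():
        # Starting fresh: hallucinations and dead ends vanish with the list.
        # A memory feature would persist parts of old turns in a durable
        # store, so a wipe would no longer guarantee a clean slate.
        global messages
        messages = [SYSTEM]

The point is that with a persistent memory store, wipe_context stops being a full reset, which is exactly the failure mode I'm worried about.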

Is this an actual problem? Does the usefulness of the memory feature outweigh this risk?


