
I pretty much never clear my context window unless I'm switching to entirely different work; it seems to work fine with Copilot summarizing the convo every once in a while. I'm probably at 95% code written by an LLM.

I actually think it works better that way: the agent doesn't have to spend as much time rereading code it had previously just read. I do have several "agents" like you mention, but I use them one by one in the same chat so they share context. They all write to markdown in case I want to start fresh if things go in the wrong direction, but that doesn't happen very often.



I wouldn't take it for granted that Claude isn't re-reading your entire context each time it runs.

When you run llama.cpp on your home computer, it holds onto the key-value cache from previous runs in memory. Presumably Claude does something analogous, though on a much larger scale. Maybe Claude holds onto that key-value cache indefinitely, but my naive expectation would be that it only holds onto it for however long it expects you to keep the context going. If you walk away from your computer and resume the context the next day, I'd expect Claude to re-read your entire context all over again.

At best, you're getting some performance benefit from keeping this context going, but you're also subjecting yourself to context rot.

Someone familiar with running Claude or industrial-strength SOTA models might have more insight.


CC absolutely does not re-read the context on each run. For example, if you ask it to do something and then revert its changes yourself, it will think the changes are still there, leading to bad times.


It wouldn't re-read the context; it caches the tokens so far, which is like photographically remembering the context instead of re-reading it, until you see it compress the context, at which point it gives itself a prompt to recap the conversation so far:

https://www.anthropic.com/news/prompt-caching


When you say “revert its changes” do you mean undo the changes outside of CC? Does CC watch the filesystem?


Yes, reverting outside. This happens often when one isn't happy with CC's output: Esc + revert.


You can tell it that you manually reverted the changes.

That said, the fact that we're all curating these random bits of "LLM whisperer" lore is... concerning. The product is simultaneously amazingly good and terribly bad.


I know. Typically I'd let CC know with "I reverted these changes."


This article on OpenAI prompt caching was interesting https://platform.openai.com/docs/guides/prompt-caching

As someone who definitely doesn’t know what they’re talking about, I’m going to guess that some analogous optimizations might apply to Claude.

Something something… TPU slice cache locality… gestures vaguely
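For what it's worth, the main practical point in that OpenAI guide is that caching there is automatic and keyed on an exact prefix match, so you keep static content (system prompt, examples) at the front and variable content at the end. A rough sketch of that ordering, with placeholder prompt text and no API call:

```python
# Static parts go first so the prefix is byte-identical across requests,
# which is what prefix-matching prompt caches key on.
SYSTEM_PROMPT = "You are a coding assistant for project X."  # placeholder
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Example question"},
    {"role": "assistant", "content": "Example answer"},
]

def build_messages(user_turn: str) -> list[dict]:
    """Order messages so the static prefix never changes between requests,
    maximizing the chance of a cache hit on the provider side."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]  # stable prefix
        + FEW_SHOT_EXAMPLES                             # stable prefix
        + [{"role": "user", "content": user_turn}]      # varies per request
    )

m1 = build_messages("How do I add a route?")
m2 = build_messages("Why is this test flaky?")
# Everything before the final user turn is identical across calls.
assert m1[:-1] == m2[:-1]
```

Whether Claude's serving stack does exactly this is a guess, as the parent says, but prefix-keyed caching is common enough that the prompt-structuring advice probably transfers.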



Today I tested a mix of frequently clearing the context and running with long contexts. Copilot with Claude ended up producing good visual results, but the generated CSS was extremely messy.



