I’m not talking about git diffs. I’m talking about the summaries of context. Every commit, the AI needs to update the summaries and notes it took about the code.
Did you read the entirety of what I wrote? Please read.
Say the AI left a 5-line summary of a 300-line piece of code. You, as a human, update that code. What I am saying specifically is this: when you make the change, the AI sees it and updates the summary. So the AI needs to be interacting with every code change, whether or not you used it to vibe code.
The next time the AI needs to know what this function does, it doesn’t need to read the entire 300-line function. It reads the 5-line summary, puts it in the context window, and moves on with its chain of thought. Understand?
This is what shrinks the context. Humans don’t have unlimited context either. We have vague fuzzy memories of aspects of the code and these “notes” effectively make coding agents do the same thing.
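To make the idea concrete, here’s a minimal sketch of that note-keeping loop. Everything here is hypothetical: `summarize` is a stand-in for an actual LLM call, and the cache is keyed by a hash of the source, so a human edit invalidates the note and triggers a re-summary, while unchanged code is served as the 5-line note instead of the full body.

```python
import hashlib

def summarize(source: str) -> str:
    # Placeholder for an LLM call that produces a ~5-line summary.
    return "summary of: " + source.splitlines()[0]

class SummaryStore:
    def __init__(self):
        self._notes = {}  # function name -> (source_hash, summary)

    def get_context(self, name: str, source: str) -> str:
        digest = hashlib.sha256(source.encode()).hexdigest()
        cached = self._notes.get(name)
        if cached and cached[0] == digest:
            return cached[1]          # code unchanged: reuse the short note
        summary = summarize(source)   # code changed: regenerate the note
        self._notes[name] = (digest, summary)
        return summary

store = SummaryStore()
code = "def parse(raw):\n    ...\n"  # imagine 300 lines here
note = store.get_context("parse", code)   # first call summarizes
note2 = store.get_context("parse", code)  # later calls hit the cache
```

In practice you’d wire the re-summary step into whatever touches the code (a commit hook, a file watcher), so the notes stay current no matter who made the edit.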
False. Nobody does this. They hold pieces of context and summaries in their head. Nobody on earth can memorize an entire code base. This is ludicrous.
When you read a function to know what it does, then move on to another function, do you have the entire 100-line function perfectly memorized? No. You memorize a summary of the function’s intent when reading code. An LLM can be set up to do the same, rather than keeping all 100 lines of code as context.
Do you think when you ask the other person for more context he’s going to spit out what he wrote line by line? Not even he will likely remember everything he wrote.
You think anyone memorized Linux? Do you know how many lines of code are in the Linux source tree? Are you trolling?