You're projecting a deficiency of the human brain onto computers. Computers have advantages that our brains don't (perfect and large memory); there's no reason to think we should try to recreate how humans do things.
Why would you bother with all these summaries if you can just read and remember the code perfectly?
Because the context window of the LLM is limited, similar to humans. That's the entire point of the article. If the LLM has similar limitations to humans, then we give it similar workarounds.
Sure, you can claim LLMs have unlimited context, but then what are you doing in this thread? The title of this page says that context is the bottleneck.
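The workaround being argued about can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tooling: the token budget, the 4-chars-per-token estimate, and the function names are all made up for the example.

```python
# Hypothetical sketch of the summary-as-workaround idea:
# feed the model the full code only if it fits the context budget,
# otherwise fall back to a summary. All numbers are illustrative.

def fits_in_context(text: str, budget_tokens: int) -> bool:
    # Crude estimate: roughly 4 characters per token (a common rule of thumb).
    return len(text) / 4 <= budget_tokens

def choose_input(full_code: str, summary: str, budget_tokens: int) -> str:
    # If the whole codebase fits, no summary is needed -- "just read the code".
    # Otherwise the summary is the workaround for the limited context window.
    if fits_in_context(full_code, budget_tokens):
        return full_code
    return summary

code = "x = 1\n" * 100_000          # stand-in for a large codebase
summary = "Sets x to 1, repeatedly."

print(choose_input(code, summary, budget_tokens=8_000) is summary)  # large code: use summary
print(choose_input("x = 1\n", summary, budget_tokens=8_000) == "x = 1\n")  # small code: read it directly
```

If context really were unlimited, `choose_input` would always return the full code and the summaries would indeed be pointless; the disagreement is precisely over whether that branch is ever taken.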