I’m the author of aider, an AI pair programming tool. I use aider to develop aider, and it keeps track of how much of its own code it writes.
Aider wrote 58% of the code in the last release, and >40% of the previous few.
The release history page [0] plots this stat for each release over the last 12+ months. The overall trend is pretty cool, especially since Claude 3.5 Sonnet.
It’s not an enterprise code base, but it’s not a toy code base either.
I think what many people miss is how much better long-context LLMs are getting (needle in a needle stack) and how important context is.
With GitHub Copilot or with Continue for VS Code, the main issue lies in how they decide which context to give the model:
Ideally it's a graph of all function calls and class instantiations, so that wherever your cursor is, you can hop through the graph and pull in all the important pieces of code as context.
Currently Continue uses vector search over chunks, which is just a crutch. I'm not sure what Copilot or aider does, but the right context is key.
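Roughly what I mean by the graph, sketched with Python's stdlib ast module. This is a toy illustration, not how Copilot, Continue, or aider actually build context, and the sample code at the bottom is made up:

    import ast

    def index_definitions(source: str) -> dict[str, ast.AST]:
        """Map each function/class name defined in the source to its AST node."""
        tree = ast.parse(source)
        return {
            node.name: node
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        }

    def referenced_names(node: ast.AST) -> set[str]:
        """Names that are called or instantiated inside a definition."""
        return {
            n.func.id
            for n in ast.walk(node)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
        }

    def context_for(name: str, defs: dict[str, ast.AST], hops: int = 2) -> list[str]:
        """Hop through the reference graph, collecting source for related definitions."""
        seen, frontier, chunks = set(), {name}, []
        for _ in range(hops):
            next_frontier = set()
            for n in frontier:
                if n in seen or n not in defs:
                    continue
                seen.add(n)
                chunks.append(ast.unparse(defs[n]))
                next_frontier |= referenced_names(defs[n])
            frontier = next_frontier
        return chunks

    # Made-up example source: the definition under the cursor plus what it reaches.
    sample = '''
    class PaymentClient:
        def charge(self, amount): ...

    def handle_checkout(amount):
        client = PaymentClient()
        client.charge(amount)
    '''
    defs = index_definitions(sample)
    print("\n\n".join(context_for("handle_checkout", defs)))

A real tool would index the whole repo (and handle methods, imports, duplicate names), but the hop-through-the-graph idea is the same.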
Another way to improve a coding assistant is to move away from simple paradigms toward agent-based workflows that can deploy sub-agents and break down tasks, all running autonomously in the background while you code, and then surfacing suggestions or changes. This will become possible with increasing inference speeds (Groq) and better agent frameworks.
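A very rough sketch of what I mean by that background loop. The llm() and plan() functions are placeholders I made up, not any real framework or API; the point is just the shape: plan, fan out sub-agents, queue up suggestions:

    from concurrent.futures import ThreadPoolExecutor
    from queue import Queue

    def llm(prompt: str) -> str:
        # Placeholder: swap in a real completion call (OpenAI, Anthropic, ...).
        return f"[model output for: {prompt[:40]}...]"

    def plan(task: str) -> list[str]:
        # Placeholder planner; a real one would ask the model for structured sub-tasks.
        return [f"{task} - step {i}" for i in range(1, 4)]

    def sub_agent(sub_task: str) -> str:
        return llm(f"Propose a code change for: {sub_task}")

    suggestions: Queue[str] = Queue()

    def run_in_background(task: str) -> None:
        # Sub-agents run concurrently; results queue up until the editor surfaces them.
        with ThreadPoolExecutor(max_workers=3) as pool:
            for result in pool.map(sub_agent, plan(task)):
                suggestions.put(result)

    run_in_background("add retry logic to the payment client")
    while not suggestions.empty():
        print(suggestions.get())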
Most devs at our company say LLMs are useless for coding and that GitHub Copilot is a glorified autocomplete costing 20 USD. I think the tech and the ecosystem will improve a lot over time, and they underestimate LLM abilities due to their bias.
To your point: check out Supermaven. I have no affiliation. But it has way more context than Copilot and is way better at suggestions that make sense in the codebase.
I kinda wonder what the percentage would be for my use of autocomplete. I only ever write out variable names once; the other times they are completed. Like I write "wacz" and it sees it matches WarehouseZoneController and fills that in.
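A toy version of that kind of fuzzy completion, just to show the idea; real completers score subsequences and usage frequency much more cleverly, and the candidate identifiers here are made up:

    from collections import Counter

    def fuzzy_match(abbrev: str, identifier: str) -> bool:
        # Keep an identifier if it contains every typed character (in any order).
        need, have = Counter(abbrev.lower()), Counter(identifier.lower())
        return all(have[ch] >= n for ch, n in need.items())

    def complete(abbrev: str, identifiers: list[str]) -> list[str]:
        # Rank the shortest (tightest) matches first.
        return sorted((i for i in identifiers if fuzzy_match(abbrev, i)), key=len)

    candidates = ["WarehouseZoneController", "WidgetCache", "ZoneAwareCache"]
    print(complete("wacz", candidates))  # both Zone* names match; WidgetCache has no 'z'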
I probably type less than half the code I commit, even without AI assistance?
Certainly. That statistic is misleading without knowing what exactly it measures. The author no doubt knows it. I am surprised you're the only person on HN pointing it out.
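For what it's worth, here's one plausible way such a number could be computed. This is my guess using git blame, not necessarily aider's actual script; as far as I know aider appends "(aider)" to the git author name on commits it makes, which makes blame-based counting easy:

    import subprocess

    def blame_authors(path: str) -> list[str]:
        # --line-porcelain emits an "author <name>" line for every source line.
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line[len("author "):] for line in out.splitlines()
                if line.startswith("author ")]

    def tool_share(paths: list[str], tag: str = "(aider)") -> float:
        authors = [a for p in paths for a in blame_authors(p)]
        return 100.0 * sum(tag in a for a in authors) / max(len(authors), 1)

    # Run inside a git checkout; this measures current lines, not just one release.
    files = subprocess.run(["git", "ls-files", "*.py"],
                           capture_output=True, text=True, check=True).stdout.split()
    print(f"{tool_share(files):.0f}% of lines blame to aider-tagged commits")

Whether the real stat counts lines, commits, or only the diff of a release is exactly the kind of detail that changes how impressive the number is.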
Ok, you’ve convinced me to give aider a try. But is there a way to feed it the documentation for a particular library or API? I want it to read the Stripe docs for me, for example, and then implement a feature.
[0] https://aider.chat/HISTORY.html