> skimming through an alien-looking codebase, scratching your head trying to figure out what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then
This is exactly how you learn to create better abstractions and write clear code that future you will understand.
You are right about the learning part. But I’ve been at this for 20 years. Even the best, most pristine and organized code I’ve seen has not been “clear”. The average LLM code today is a lot more clear than the average developer code.
I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.
But yeah my hunch is "the old way" - although not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
I've had a lot of enjoyment flipping the agentic workflow around: code manually and ask the agent for code review. Keeps my coding skills and knowledge of the codebase sharp, and catches bugs before I commit them!
It also writes lots of bugs, some of which it'll catch in an independent review chat.
This is bogus. If you think LLMs write less buggy software, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.
But why not just use the AI? Because you can still use the AI once you're seriously good.
This is definitely not correct in my opinion. You’re essentially saying, instead of a person actually getting better at the craft, just give up and let someone else do it.
IME, not really. When you prompt it to review its own written code, it will end up finding a bunch of stuff that should have been done otherwise. And then you can add different "dimensions" in your prompt as well, like performance, memory safety, idiomatic code, etc.
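As a rough illustration of the "dimensions" idea, here's a minimal sketch of wrapping a diff in a self-review prompt. `build_review_prompt` and the sample diff are made up for illustration, and the actual call to a model is deliberately left out since that part depends entirely on which tool or API you use:

```python
# Sketch of building a self-review prompt with extra review "dimensions".
# build_review_prompt is a hypothetical helper, not part of any real tool;
# sending the prompt to a model is omitted because APIs vary.
def build_review_prompt(diff: str, dimensions: list[str]) -> str:
    """Wrap a diff in a review request covering the given dimensions."""
    focus = ", ".join(dimensions)
    return (
        f"Review the following diff for: {focus}.\n"
        "List concrete problems with file/line references.\n\n"
        f"```diff\n{diff}\n```"
    )

prompt = build_review_prompt(
    "+ total = sum(xs) / len(xs)",
    ["bugs", "performance", "memory safety", "idiomatic code"],
)
print(prompt.splitlines()[0])
# -> Review the following diff for: bugs, performance, memory safety, idiomatic code.
```

The point is just that each dimension you append steers the review pass toward a different class of problem, so running the same diff through a couple of variants catches more than one generic "review this" prompt.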
Man, same here, those early days of Cursor were mindblowing; but since then autocomplete has stagnated, and even the new Cursor version is veering agentic like everything else.
I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.
On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.
LLM auto-complete is the most useful experience I've had with LLMs by quite a margin, even with the early GitHub Copilot versions. In terms of model quality and cost it overperformed. It wasn't always good, but it was more immediately useful than vibecoding and spec-driven development (or vibecoding-in-a-nice-dress).
I think most people "moved on" because they both thought the agent workflow was cooler and were told by other people that it works. The latter was false for quite some time, and is only correct now insofar as you can probably get something that does what you asked for, but executes it exceedingly poorly no matter how much SpecLang you layer on top of the prompting problem.
In some codebases, autocomplete is the most accurate and efficient way to get things done, because "agentic" workflows only produce unmaintainable mess there.
I know that because there have been several times where I completely removed generated code and coded it by hand instead.
Why? I thought it was pretty good: it often just completes the rest of your function, with no context switching to type to an agent or whatever. It happens immediately, and if it's wrong you just keep typing until it isn't. You can still use an agent for more complex things.
I just wish I knew of a good Emacs AI auto complete solution.
I can see the logic behind "manual coding" but it feels like driving across country vs taking the airplane. Once I've taken the airplane once, it's so hard to go back...
Can't understand this mentality. If I had the time I would much rather never set foot in an airport again. I would drive everywhere. And I would much rather write my own code than pilot an LLM too
The fact so many people think businesses need to do do do, faster faster faster, now now now, at all costs is a major reason everything sucks, everything is fucked up, everyone is exploited.
No, they are not. Even ignoring businesses where using AI would have consequences for you (medical is one example), there are plenty of "normal" software companies that value quality over slop.
Figma's stock has been on a sharp downward trend over the last year. This isn't a noticeable change to their stock price at all. They're down 30% just in the last month, with many days seeing -5% to -10% moves.
There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.
Anthropic is the exact same way, I think they're just trying to avoid having 5 different subscription tiers visible. Probably needing 20x is very niche
This was due to Claude Code, the agent harness. 4.6 was trained to use tools and operate in an agent environment. This is different from there being a huge bump in the underlying model's intelligence.
The takeaway here I think is that the "breakthrough" already happened and we can't extrapolate further out from it.