In the picture right at the top of the article, the top of the bell curve is using 8 agents in parallel, and yada yada yada.
And then they go on to talk about how they're using 9 agents in parallel at a cost of 1000 dollars a month for a 300k line (personal?) project?
I dunno, this just feels like as much effort as actually learning how to write the code yourself and then just doing it, except, at the end... all you have is skills for tuning models that constantly change under you.
And it costs you 1000 dollars a month for this experience?
every one of the snarky comments like this, on a myriad of HN threads like this one:
1. assumes most humans write good code (or even better than LLMs)
2. assumes whoever wrote it will stick around to maintain it
after 30 years in the industry, the last 10 as a consultant, I can tell you fairly definitively that #1 couldn't be further from the truth, and #2 is a frequent cause of consultants getting gigs: no one understands what “Joe” did with this :)
I think it's hard to judge, as he's written a huge article about coding with agents with no code examples.
You can go look at his GitHub, but it's a bewildering array of projects. I've had a bit of a poke around at a few of the seemingly more recent ones. Bit odd, though, as in one he's gone heavy on TS classes and in another heavy on functions. Might be he was just contributing to one of them, as it was under a different account.
And a lot of them seem to be tools that wrap other CLI tools. There is a ton of scaffolding code to handle a ton of CLI options, and a LOT of logger statements; one file I randomly opened had a logger statement every other line.
So it's hard to judge. I found it hard to wade through the code, as it's basically just a bunch of option handling for tool calls. It didn't really do much. But necessary, probably?
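To give a flavour of what I mean, here's a rough sketch of my own (not his actual code, just the pattern as I read it): option handling plus a logger call at nearly every step, all wrapping another CLI.

    // A made-up TypeScript sketch of the shape I'm describing: mostly option
    // plumbing and logging wrapped around a call to another CLI tool.
    import { spawnSync } from "node:child_process";

    interface FetchLogsOptions {
      server: string;
      since?: string;
      verbose: boolean;
    }

    function parseArgs(argv: string[]): FetchLogsOptions {
      const opts: FetchLogsOptions = { server: "", verbose: false };
      for (let i = 0; i < argv.length; i++) {
        const arg = argv[i];
        console.log(`parsing arg: ${arg}`);
        if (arg === "--server") {
          opts.server = argv[++i];
          console.log(`server set to ${opts.server}`);
        } else if (arg === "--since") {
          opts.since = argv[++i];
          console.log(`since set to ${opts.since}`);
        } else if (arg === "--verbose") {
          opts.verbose = true;
          console.log("verbose logging enabled");
        } else {
          console.warn(`unknown option: ${arg}`);
        }
      }
      return opts;
    }

    function fetchLogs(opts: FetchLogsOptions): void {
      console.log(`fetching logs from ${opts.server}`);
      // Wraps an external CLI call (ssh + journalctl as a stand-in here).
      const result = spawnSync("ssh", [
        opts.server,
        "journalctl",
        "--since",
        opts.since ?? "1 hour ago",
      ]);
      console.log(`child exited with status ${result.status}`);
      if (result.stdout) process.stdout.write(result.stdout);
    }

    fetchLogs(parseArgs(process.argv.slice(2)));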
Just very different code than I need to write.
And there are some weird tells that make it hard to believe.
For example, he talks about refactoring useEffect in React, but I KNOW GPT5 is really rubbish at it.
Some code it's given me recently was littered with useEffect and useMemo where they weren't needed. Then, when challenged, it got rid of some, then changed other stuff to useEffect when, again, it wasn't needed.
And then it got all confused and basically blew its top.
Yet this person says he can just chuck a basic prompt at his Codex CLI running GPT5, and it magically refactors the bad useEffects?
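To be concrete, the pattern I'm complaining about looks something like this (my own illustrative sketch, not code from the article or a GPT5 transcript):

    // State synchronised via useEffect when a plain derived value would do.
    import { useEffect, useState } from "react";

    function FilteredListBad({ items, query }: { items: string[]; query: string }) {
      const [filtered, setFiltered] = useState<string[]>([]);

      // Unnecessary: this effect only derives state from props, one render late.
      useEffect(() => {
        setFiltered(items.filter((item) => item.includes(query)));
      }, [items, query]);

      return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
    }

    // The refactor I'd expect: compute the value during render,
    // and reach for useMemo only if the filtering is measurably expensive.
    function FilteredListGood({ items, query }: { items: string[]; query: string }) {
      const filtered = items.filter((item) => item.includes(query));
      return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
    }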
Personally, my experience with Codex is the same as yours; no way I would ever use Codex for TS projects, and especially not React. I don't know this mate personally, but if we were talking about this over a beer I would probably tell you (after the 3rd one, when I am more open to being direct) that I trust this blog about as much as I trust the President (this one or previous ones) to tell the truth :)
My comment was more geared towards the insane number of comments on a myriad of "AI" / "agent coding" posts where soooooo many people will write "oh, such AI slop", assuming that the average SWE would write it better. I don't know many things, but having worked with these tools heavily over the last year or so (and really heavily the last 6 months), I'll take their output over the general average SWE's every day of the week and twice on Sunday (provided that I am driving the code generation myself, not just accepting general AI-generated code...)
(OP) The current project is closed source. If you look at my CLI tools, that's pure slop; all I care about is that it works, so reviewing that code will for sure show some weird stuff. Does it matter? It's a tool to fetch logs from a server. I run it locally. As long as it does that reliably, idk about the code.
1. Humans are capable of writing good code. Most won't, but at least it's possible. If your company needs good code to survive, would you take a 5% chance or a 0% chance?
2. Even when humans write crappy code, they typically can maintain it.
This sounds like a wild take. So what about those trying LLM code, then deciding it isn't good enough, and going back and writing it from scratch themselves, with what they perceive to be better results? They're just wrong and the LLM was just as good?
Higher level abstractions are built on rational foundations, that is the distinction. I may not understand byte code generated by a compiler, but I could research the compiler and understand how it is generated. No matter how much I study a language model I will never understand how it chose to generate any particular output.
COBOL developers may have claimed that higher-level language developers didn't understand what was happening under the hood. However, they never suggested those developers couldn't understand the high-level code itself (what's going on here)—only what lay beneath it.
But COBOL developers resisted modern tooling. A coworker of mine tells the story of when he was working alongside an old mainframe hand more than 25 years ago, and was trying to explain to him how modern IDEs work. The mainframe guy gave him a disdainful look and said "That ain't how computing is done, kid."
Now, what the guys above the programmers' pay grade knew was that the aim of software development wasn't really code; it was value delivered to the customer. If 300k lines of AI slop deliver that value quickly, they can be worth much more than 20k lines of beautiful human-written code.
I'd suspect folk with a terminal-first approach probably have a much stronger understanding of what is going on under the hood, which makes approaching new repositories a lot easier, if nothing else.
Alternatively, maybe folk who're exposed to more codebases are the best off.
By "modern IDE" I meant something like Turbo Pascal, as compared with the (at best) ISPF-based editor the mainframe guy was using. This took place in the early 90s.
Then you'd love "Real Programmers Don't Use PASCAL", by Ed Post. It's about Fortran vs PASCAL, though it does mention COBOL in passing. It's copyright 1983!
This is like having a normal distribution where the average is a 1x engineer, a standard deviation below is a 0x engineer, and a standard deviation above is a 10x engineer. That would make more sense, because someone running 9 agents with multiple git checkouts and whatnot is just managing a team of synthetic vibe coders.