I think it's kind of a double whammy: on the one hand, working with AI leaves a lot of 5-15 minute breaks, perfect for squeezing in a comment on an HN thread, while on the other it supplants the sort of work that would typically lead to interesting ideas or projects, substituting work that isn't that interesting to talk about (or at least hasn't been thought about long enough to have interesting things to say).
This resonates. I build products on top of LLMs, and the most interesting work I do has nothing to do with AI; it's designing structured methodologies, figuring out what data to feed in before a conversation starts, deciding what to do when the model gives a weak answer. The AI is plumbing.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.
Causal graphs are interesting, but in my experience, the bottleneck isn't the representation; it's getting the model to actually follow through on weak signals instead of moving on to the next topic. A graph won't help if the system doesn't know what to do when it hits a node that doesn't resolve cleanly.
What's your experience been with them?
It's more like Zeno's paradox. You take one step and get 90% of the way to the finish line. Now you look ahead, and there's still a bunch of distance in front of you. You take another step and get 90% of the way there. You look ahead again, and there's still more distance ahead of you, and so on.
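The "90% each step" pattern can be sketched numerically (a toy illustration of the geometric shrinkage, using the 90% figure from above):

```python
# Each step covers 90% of the *remaining* distance,
# so some distance is always left over.
remaining = 1.0
for step in range(1, 6):
    remaining *= 0.1  # 90% of what's left gets covered
    print(f"after step {step}: {remaining:.5f} of the distance left")
# The remaining distance shrinks geometrically but never hits zero.
```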
Preemptive betrayal is a terrible strategy if there are more than two parties in the game and they are allowed to cooperate.
You have to be one heck of a smooth conversationalist to convince them to take a number and patiently wait in line to be the ones to be attacked next.
If you're the guy that the others in the room know shoots first, you're also the guy the others in the room will shoot the moment you reach for something in your jacket pocket.
The prisoner's dilemma leads to mutual defection as the equilibrium in the one-shot version, since defecting is the dominant strategy. Cooperation emerges as an equilibrium under repetition. The Han Solo gunfight is literally the one-shot version. When countries go to war, that calculation is more complicated.
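A minimal sketch of the one-shot logic (the payoff numbers are the standard textbook values, chosen here purely for illustration):

```python
# One-shot prisoner's dilemma: payoff[(me, them)] = (my_payoff, their_payoff).
# C = cooperate, D = defect.
payoff = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def best_response(their_move):
    # Pick my move that maximizes my payoff, given their fixed move.
    return max("CD", key=lambda my: payoff[(my, their_move)][0])

# Defecting is the best response to either move, so (D, D) is the
# equilibrium of the one-shot game -- even though (C, C) pays both
# players more.
print(best_response("C"), best_response("D"))
```

Repetition changes this because today's defection can be punished in future rounds, which is what makes cooperation sustainable.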
This is true, but it ignores the fact that Claude constantly pushes the code toward more complexity.
Any given problem has a spectrum of solutions, ranging from simple and straightforward to the most cursed Rube Goldberg machine you've ever seen. Claude biases toward the latter.
When working on larger code bases, especially poorly factored ones (like the ones Claude tends to build unsupervised), its default mode of operation is to build a cursed Rube Goldberg machine. It doesn't take long before it starts visibly floundering when you ask it to make changes to the software.
Complexity management is something human software engineers do constantly. Pushing back against complexity and technical debt is the primary concern for a developer working on a brownfield project. Everything you do has to take this into account.
In Claude's world, every user is a generational genius up there with Gauss and Euler, and every new suggestion, no matter how banal, is a mind-boggling Copernican turn that upends epistemology as we know it.
Yeah, it's very much the opposite of how Claude Code tends to approach a problem it hasn't seen before: it constructs an elaborate Rube Goldberg machine by inserting more and more logic until it manages to produce the desired outcome. You can coax it into simplifying its output, but it's very time-consuming to get something that is of a professional standard and doesn't introduce technical debt.
Especially in brownfield settings, if you do use CC, you really should spend something like a day refactoring the code for every 15 minutes it spends implementing new functionality. Otherwise the accumulation of technical debt will make the code base unworkable, by both human and Claude hands, in fairly short order.
I think overall it can be a force for good, and a source of high quality code, but it requires a significant amount of human intervention.
Claude Code operating unsupervised on Claude-written code fairly rapidly generates a mess not even Claude Code can decode, resulting in a sort of technical-debt Kessler syndrome, where the low quality makes the edits worse, which makes the quality worse, rinse and repeat.
I've tested it in both w3m and dillo, should work fine as long as your browser renders noscript tags. It's very much designed from the ground up to handle browsers like that. Just requires you to manually wait a few seconds and then press the link.
One configuration that might break is if you're running something like chrome or firefox, and rigging it to not run JS. But it's really hard to support those types of configurations. If it works in w3m, it's no longer a "site requires JS" issue...
Thanks a lot for considering no-JS browsers like Dillo; in the current web hellscape that's certainly a difficult task. I checked, and it works well in Dillo on my end.