What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not really sure how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
there's currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). certainly a mix of duplicates / feature requests, but 'buggy mess' seems appropriate
maybe we don't have AGI to prevent all bugs. but surely some of these could have been caught with some good old-fashioned elbow grease and code review.
> Meanwhile I'm watching a community of mostly young people building and using tools like copilot, cursor, replit, jacob etc and wiring up LLMs into increasingly more complex workflows.
And yet, I don't see much evidence that software quality is improving, if anything it seems in rapid decline.
I don't see much evidence that software quality is improving
Does it matter? Ever since FORTRAN and COBOL made programming easier for the unwashed masses, people have argued that all these 'noobs' entering the field are causing software quality to decline. I'm seeing novice developers in all kinds of fields happily solving complex real-world problems and making themselves more productive using these tools. They're solving problems that only a few years ago would have required an expensive team of developers and ML experts to pull off. Is the software a great feat of high-quality software engineering? Of course not. But it exists and it works. The alternative to them kludging something together with LLMs and OpenAI API calls isn't high-quality software written by a team of experienced software engineers; it's not having the software at all.
Even if that were true (and I'd challenge that assumption[0]), there's no dichotomy here.
Software quality, for the most part, is a cost center, and as such will always be kept to the bare minimum people can tolerate.
As the civil engineering saying goes, any fool can build a bridge that stands; it takes an engineer to build a bridge that barely stands.
And anyway, all of those concerns are orthogonal to the tooling used, in this case LLMs.
[0] Things we now take for granted, such as automated testing, safer languages, CI/CD, etc., make for far better software than when we used to roll our own crypto in C.
I realise that, and I like the article. I was trying to convey in my response that devs should have these things in their toolkit, not that you "did the wrong thing"[1] somehow by using treesitter for this.
PyCharm understands pytest fixtures and if this is really just about a single fixture called "database", it takes 3 seconds to do this refactoring by just renaming it.
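For anyone unfamiliar with why this rename is more than a find-and-replace: pytest wires a fixture to a test through the test's parameter name, so renaming the fixture means renaming every parameter that requests it. A minimal sketch (the "database" fixture and test names here are hypothetical stand-ins):

```python
import pytest

# A fixture named "database"; in a real suite this would set up
# e.g. a test DB connection and tear it down afterwards.
@pytest.fixture
def database():
    return {"users": []}

def test_add_user(database):
    # The parameter name "database" is what binds this test to the
    # fixture above, so a fixture rename must touch this line too --
    # which is exactly the cross-file refactoring the IDE automates.
    database["users"].append("alice")
    assert database["users"] == ["alice"]
```

A dumb text rename of just the decorated function would leave every `def test_...(database)` signature requesting a fixture that no longer exists, which is why fixture-aware refactoring matters.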
To add some balance to everyone slating Jinja in the comments, I've personally found it great to use.
Sure, you CAN write unmaintainable business-logic spaghetti in your templates, but that doesn't mean you SHOULD (most of the criticism appears to come from this angle).
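The usual way to avoid that spaghetti is to do the filtering/computation in Python and hand the template data that's already ready to render. A minimal sketch (the `active_users` helper and the data are made up for illustration):

```python
from jinja2 import Template

def active_users(users):
    # Business logic lives in Python, where it can be unit tested...
    return [u["name"] for u in users if u["active"]]

# ...and the template only formats the result.
template = Template("Active: {{ names | join(', ') }}")

users = [
    {"name": "alice", "active": True},
    {"name": "bob", "active": False},
]
print(template.render(names=active_users(users)))  # → Active: alice
```

The template stays a dumb formatter, which is the style Jinja works well for; it's only when the `{% if %}`/`{% for %}` blocks start encoding business rules that it gets painful.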
The defaults it ships out of the box make the shell actually usable. I'm not sure I could ever go back to a regular bash/zsh prompt.
A lot of people will tell you this is slow and you've got to use X, Y, Z instead. If you're new, I'd strongly recommend just sticking with this; it's much easier to configure.