Jackevansevo's comments | Hacker News

If I ran an OS upgrade and was greeted by something like this, I'd immediately be swapping OS.


It’s optimistic to assume that there’ll be any better options left.


Because Anthropic and the rest of them are lying to you about the sophistication of these tools.

The fact that Claude Code is still a buggy mess is a testament to the quality of the dream they're trying to sell.


> claude code is still a buggy mess

What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not really sure how to quantify the amount of use; maybe "ccusage" is a good metric? That says I've used $964 over the last month, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...


There are currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). Certainly a mix of duplicates and feature requests, but 'buggy mess' seems appropriate.


You can also judge for yourself based on the changelog: https://github.com/anthropics/claude-code/blob/main/CHANGELO...

Maybe we don't have AGI to prevent all bugs, but surely some of these could have been caught with some good old-fashioned elbow grease and code review.


For the longest time I've been using Vim's built-in `compiler` feature with tartansandal/vim-compiler-pytest, combined with tpope/vim-dispatch.
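For anyone unfamiliar with the pieces involved, a minimal sketch of this kind of setup (the mapping is illustrative, not from the comment; assumes both plugins are installed):

```vim
" vim-compiler-pytest provides a :compiler definition that sets
" 'makeprg' and 'errorformat' so Vim can parse pytest output
compiler pytest

" vim-dispatch's :Dispatch runs 'makeprg' asynchronously and loads
" any test failures into the quickfix list
nnoremap <leader>t :Dispatch<CR>
```

From there the usual quickfix commands (`:copen`, `:cnext`, `:cprev`) jump between failing tests.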


Could you share an example of a workflow using the built in feature to run tests in Vim?


Sure, here's a recorded example: https://www.youtube.com/watch?v=TUeousvp4PQ


Interesting approach! Can you share more about this?


Rather than writing something up or linking to a bunch of articles, I recorded a quick screen capture: https://www.youtube.com/watch?v=TUeousvp4PQ


> Meanwhile I'm watching a community of mostly young people building and using tools like copilot, cursor, replit, jacob etc and wiring up LLMs into increasingly more complex workflows.

And yet, I don't see much evidence that software quality is improving; if anything, it seems to be in rapid decline.


> I don't see much evidence that software quality is improving

Does it matter? Ever since FORTRAN and COBOL made programming easier for the unwashed masses, people have argued that all these 'noobs' entering the field are causing software quality to decline.

I'm seeing novice developers in all kinds of fields happily solving complex real-world problems and making themselves more productive using these tools. They're solving problems that only a few years ago would have required an expensive team of developers and ML experts to pull off. Is the software a great feat of high-quality software engineering? Of course not. But it exists and it works. The alternative to them kludging something together with LLMs and OpenAI API calls isn't high-quality software written by a team of experienced software engineers; it's not having the software.


Even if that were true (and I'd challenge that assumption[0]), there's no dichotomy here.

Software quality, for the most part, is a cost center, and as such will always be kept at the minimum bearable level.

As the civil engineering saying goes: any fool can build a bridge that stands; it takes an engineer to build a bridge that barely stands.

And anyway, all of those concerns are orthogonal to the tooling used, in this case LLMs.

[0] Things we now take for granted, such as automated testing, safer languages, CI/CD, etc., make for far better software than when we used to roll our own crypto in C.


Author here: I'm super familiar with this kind of find-and-replace syntax inside Vim or with sed. Usually it works great!

But in this specific case it was tricky to handle matches spanning multiple lines and to prevent accidental renames.
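The multi-line limitation is easy to demonstrate; a small sketch using GNU sed and a made-up `old_name` identifier (BSD/macOS sed lacks the `-z` flag used below):

```shell
# sed matches within single lines, so a rename split across lines is missed:
printf 'old_name(\n    arg)\n' > demo.py
sed -i 's/old_name(arg)/new_name(arg)/' demo.py   # no effect: '(' and 'arg' are on different lines
grep -c 'old_name' demo.py                        # still prints 1

# GNU sed can treat the whole file as one record with -z (NUL-separated),
# letting the pattern match across the newline:
sed -z 's/old_name(\n    arg)/new_name(\n    arg)/' demo.py
```

Even then, a plain textual match can't tell a function named `old_name` from an unrelated string or attribute with the same spelling, which is where the accidental-rename risk comes from.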


For those tricky situations, there's "sledgehammer and review" and the second-order git-diff trick:

https://blog.moertel.com/posts/2013-02-18-git-second-order-d...
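Roughly, the idea in that post is to commit each attempt at the mechanical rewrite and then diff the diffs, so you only review what changed between attempts. A hedged shell sketch (branch state, filenames, and the `old_name` rename are all made up for illustration):

```shell
# Attempt the mechanical rewrite, commit it, and save its diff:
sed -i 's/old_name/new_name/g' src/*.py
git commit -am 'bulk rename, attempt 1'
git diff HEAD~1 > attempt1.patch

# Refine the transformation, redo it from the same starting point, save again:
git reset --hard HEAD~1
sed -i 's/\bold_name\b/new_name/g' src/*.py    # word boundaries this time
git commit -am 'bulk rename, attempt 2'
git diff HEAD~1 > attempt2.patch

# Review only how the two attempts differ, not the whole sweep:
diff attempt1.patch attempt2.patch
```

The second-order diff is usually tiny even when each sweep touches thousands of lines, which makes the review tractable.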


I realise that, and I like the article. I was trying to convey in my response that devs should have these things in their toolkit, not that you "did the wrong thing"[1] somehow by using treesitter for this.

[1] like that's even possible in this situation


This is super cool! I wish I'd known about this.


Author here, I'm not aware of any IDE that can do this specific refactor


PyCharm understands pytest fixtures, and if this is really just about a single fixture called "database", the refactoring takes 3 seconds: just rename it.


To add some balance to everyone slating Jinja in the comments, I've personally found it great to use.

Sure, you CAN write unmaintainable business-logic spaghetti in your templates, but that doesn't mean you SHOULD (most of the criticism appears to come from this angle).


Mozilla is forever determined to do anything but actually improve its core product.

I know it's opt-in, but nobody is going to switch to a browser because they ship this kinda stuff.


The defaults it ships out of the box make the shell actually usable. I'm not sure I could ever go back to a regular bash/zsh prompt.

A lot of people will tell you this is slow and that you've got to use X, Y, or Z instead. If you're new, I'd strongly recommend just sticking with this; it's much easier to configure.

