Hacker News | freedomben's comments

As long as you force it to use the Pro model and not Flash, it's pretty usable. If you go with the default settings, though, it will use Flash aggressively, which results in pretty bad code. I use it exclusively with Pro now.

Even with Pro, I have caught it going off the rails a few times. The most frustrating was when I asked it to do translations and it decided there were too many, so it wrote a Python script that ran locally and used some terrible library to do literal translations, some of which were downright offensive and sexual in nature. For translations, though, Gemini is the best, but you have to have it do a sentence or two at a time. If you provide the context around the text, it really knocks it out of the park.


Flash is the fast (duh) model though. It's not always beneficial to use Pro. In practice: 1/ set to Flash 3.1; 2/ force to Pro... sometimes, mainly when the CLI fails to predict which model to use.

Note that it will sometimes fall back to Flash 2, which sucks.


Flash will absolutely destroy a complex codebase. It's like a drunk junior programmer. Don't trust it with anything more complex than autocomplete.

Pro is expensive, but good. However, they've decreased the pitiful stipend they used to include in even the Ultra plan to the point where it's barely usable. I pivoted back to ChatGPT Pro after the recent downgrade they gave Ultra users. Google's Ultra plan costs 2.5x as much and delivers about half the usage.


I got really burned by that quality reduction. I subscribed to the AI Pro level and was using it quite a bit, but I stopped because I had to be super attentive to the output; it would make simple mistakes. It was a real shame, because for a while there Gemini was the best, and the AI Pro level allowed enough usage to use it throughout the day as long as you weren't hammering it.

Did you mean the first "compressed" to be "uncompressed"?

I did that too, and ended up just skipping helm and using envsubst to interpolate the values I need at runtime from env vars. Nearly everyone preferred that approach. YMMV of course.
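The mechanics are roughly this (the file name and variables here are made-up examples); restricting envsubst to a known variable list keeps it from clobbering any stray ${...} that actually belongs in the manifest:

    # deploy.tmpl.yaml contains plain shell-style placeholders, e.g.
    #   image: registry.example.com/myapp:${IMAGE_TAG}
    #   replicas: ${REPLICAS}
    export IMAGE_TAG=v1.2.3 REPLICAS=3
    envsubst '${IMAGE_TAG} ${REPLICAS}' < deploy.tmpl.yaml | kubectl apply -f -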

I think that's a good approach if it works for your use case. Sometimes you might want something slightly more sophisticated like basic logic (loops/conditionals). In those cases, you can still use helm but you have an extremely simple template and avoid many of the "can't read this template" helm pitfalls.
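As an illustration (all names made up), a deliberately tiny template with one loop and a default can stay readable:

    # templates/deployments.yaml -- one loop, no helpers, no includes
    {{- range .Values.services }}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .name }}
    spec:
      replicas: {{ .replicas | default 1 }}
      selector:
        matchLabels: { app: {{ .name }} }
      template:
        metadata:
          labels: { app: {{ .name }} }
        spec:
          containers:
            - name: {{ .name }}
              image: {{ .image }}
    {{- end }}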

BTW, this envsubst-style substitution is what Flux V2 (GitOps) supports as well, and it's what we do at our company. It's worked well and covered almost all our use cases.
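Concretely, Flux's Kustomization resource does the substitution at apply time via postBuild; a sketch (resource names here are examples):

    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: myapp
      namespace: flux-system
    spec:
      interval: 10m
      path: ./deploy
      sourceRef:
        kind: GitRepository
        name: myapp
      # ${IMAGE_TAG} etc. in the rendered manifests get replaced here
      postBuild:
        substitute:
          IMAGE_TAG: v1.2.3
        substituteFrom:
          - kind: ConfigMap
            name: cluster-vars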

Same here. I've tried three or four times to make it work, including one attempt that just translated compose.yaml into k8s YAML, and every time I came away thinking, "just use k8s". K8s YAML looks complex and can start to feel very boilerplate, but attempts to hide the complexity often just lead to something not flexible enough, because they encode convention over configuration. Inevitably some project runs into the limitations, and pretty soon you've built an abstraction layer that either leaks or is equally complex/verbose, and now you have to learn something new.

Just use k8s and follow similar patterns is the conclusion I've arrived at personally.


Agree you don't need a 3rd-party framework on top of it, but Laravel has been a joy to use (though to be fair, I'm a big Rails and Phoenix fan, so I've been infected by prior art).

Laravel is great for when you know you're gonna onboard/offboard a bunch of developers over months/years, and you want them to feel right at home as fast as possible.

For more long-term business, I'd always recommend going with "chosen libraries put together well" over "framework everyone knows", as the developer churn will be lower, and having more control over your design and architecture tends to be more important (and applicable) when people stick around for longer.


As a solo dev, I’ve found myself spinning up little servers for various things and then just letting them run for months between needing to make changes.

At first (and for admittedly way too long), I used this as a way to try out fun new frameworks - Node+Express for one thing, Phoenix for another, SvelteKit for a third.

I noticed it was a huge pain to dive back into these things once every 6 months. I'd forgotten how each one worked, and for some of them at least, I could look up docs and examples.

My Node+Express thing was the worst because it was all homegrown. There’s very little convention in that world, and you have to make your own. No docs were coming to save me, and this was in the Before Times, like 3 years ago pre-LLM.

Anyway, I ported everything to Rails and it's wonderful. I know how it works, there's over 20 years of examples online and they even mostly still work, and LLMs are great at it too.

Lots of power in a good framework, in a situation that’s a good fit for it!


Symfony scales better than Laravel when you need to go big over the long term. This has been my experience anyway.

I realize Laravel is built on Symfony, but using Symfony directly is a different experience.


Symfony kind of fits the “well-chosen libraries” approach as each of its components can be pulled in individually with no bearing on the architecture of your application.
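For example, pulling in a single component is just an ordinary Composer dependency (the component choices here are arbitrary):

    composer require symfony/http-foundation
    composer require symfony/console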

Barman[1] and Wal-G[2] are the two I've seen recommended most. They do things a little differently, though, so neither is drop-in by any means.

I've just transitioned to using a full compressed/encrypted SQL dump from a cron job (rough sketch below the links). It's been more convenient anyway when I want to restore. But incremental backups are hard that way, so if your database is big it's not a great solution. It's also not my primary backup (I use a managed pg with point-in-time backups), just a snapshot backup, so that's worth considering as well.

[1] https://github.com/EnterpriseDB/barman

[2] https://github.com/wal-g/wal-g
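Roughly, the cron job boils down to something like this (the connection URL, paths, and age key are placeholders; this is a sketch, not a prescription):

    #!/bin/sh
    # Nightly logical snapshot: dump, compress, encrypt, write one file.
    set -eu
    STAMP=$(date +%Y%m%d-%H%M%S)
    # $PGURL is something like postgres://user:pass@host/db
    pg_dump "$PGURL" \
      | gzip -9 \
      | age -r "$AGE_RECIPIENT" \
      > "/backups/db-$STAMP.sql.gz.age"

Restore is the same pipeline in reverse: age -d -i key.txt < dump.sql.gz.age | gunzip | psql "$PGURL".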


Hiring the original author is surely more expensive than using Claude to just maintain the current feature set. The latter is one of the possible approaches we came up with for continuity on our own clusters. It's not a great solution and is far outside our business, but it's better than nothing.

In an ideal world I'd love to just hire/sponsor the author, but we're a small non-profit that can't even afford to hire the devs we need for our core product, so hiring the author just to maintain our backup solution is out of the question.


See also the original project shutdown announcement: https://news.ycombinator.com/item?id=47919997

------------------------------------------------------------

Edit: It looks like the author may have changed his mind and might revive the original project:

From README.md in https://github.com/pgbackrest/pgbackrest:

> MAINTENANCE UPDATE: After I announced that I am no longer maintaining pgBackRest my inbox blew up. It took a while to sort through the messages — many of them were well wishes and thank-yous for my work over the years.

> But a pattern soon emerged. It is clear that many pgBackRest users, especially those with pgBackRest users of their own to support, would prefer the project to continue with me as the primary maintainer. I would like nothing more, but after months of fundraising I had just decided it wasn't going to happen.

> Now the situation has changed, and it appears all but certain that I will be able to secure enough funding to continue the project. This time pgBackRest will be funded by a coalition of sponsors so that a single acquisition will no longer affect my ability to continue work on the project. We should also be able to bring on another maintainer to distribute the workload and provide continuity in the future.

> I know this has been a shock and there is a lot of uncertainty. Please be patient — the current version of pgBackRest works, and there are no critical outstanding bugs or security issues so there is no need to immediately fork the project.

> I expect to make a more definitive announcement by the end of the week. Until then, please hold tight and know that we are actively working to revive pgBackRest.


> I doubt anyone would willingly leave the co-author advertisement (because that's what it is, an advertisement) on display in all their commits unless they've gone all-in on the fad and are actively proud of the fact that they're not writing any code themselves.

Agree, though once a commit is pushed it's too late to remove it without rewriting history, which is a sin much worse than forgetting to remove it. I frequently use Claude to commit work that I have written, because LLMs are really, really good at writing commit messages. Early on, my muscle memory sometimes ran gp (my alias for git push) instead of gca (my alias for git commit --amend), and I unintentionally pushed. Even though I had written the changes myself (I hadn't used Claude for the code), it made it look like I vibed it, which really pissed me off, btw. I'm still mad about it. I despise a company injecting ads into my work.
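A guard I wish I'd had back then, sketched as a hypothetical pre-push hook (not a Claude feature, just plain git): refuse to push any commit that still carries the trailer.

    #!/bin/sh
    # .git/hooks/pre-push -- git feeds one line per ref being pushed:
    #   <local ref> <local sha> <remote ref> <remote sha>
    zero=0000000000000000000000000000000000000000
    while read -r local_ref local_sha remote_ref remote_sha; do
      [ "$local_sha" = "$zero" ] && continue   # deleting a remote ref
      if [ "$remote_sha" = "$zero" ]; then
        range="$local_sha"                     # new branch: check everything reachable
      else
        range="$remote_sha..$local_sha"        # only the outgoing commits
      fi
      if git log --format=%B "$range" | grep -qi 'Co-Authored-By: Claude'; then
        echo "pre-push: outgoing commit still has a Claude co-author trailer" >&2
        echo "pre-push: fix it with git commit --amend before pushing" >&2
        exit 1
      fi
    done
    exit 0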

