(Author here) I found that over time I spend more and more time stripping away someone's badly designed abstractions to get to the real functionality. LLMs are surprisingly good at figuring this out: plowing through the code and documentation and discovering that a 100MB library is in reality an HTTP client for 7 REST endpoints, or something like that.
(Author here) I meant "design" as in designing physical objects — and all our "programming" is "design" by this definition, because "manufacturing" is done by compilers and bundlers.
And I wouldn't have written this article 3 months ago. Since then the quality of the output has jumped significantly; it is now possible to put the agent into a proper harness (plan/edit/review/test) and the output is good — and if it's not, you discard it and try again, or point out a detail for the next cycle of improvements.
Yes, this requires a lot of forethought to set up, but it works.
I'm not talking only about "web things"; I'm working on a project that involves engineering calculations and a lot of optimization of hot paths, both CPU and GPU.
My favourite configuration pattern for SaaS code: all the configuration for every target, from local development setup, to unit tests, to throwaway CI deployments, to production, lives in a single Go package. The current environment is selected by a single environment variable.
Need something else configured beyond your code? Write Go code that emits configs for the current environment, invoked in a "gen-config some-tool && some-tool" stanza.
I have had a similar experience with a team member who was quietly unhappy about a rule. Instead of raising a discussion about the rule (like the rest of the team members did), he tried to quietly ignore it in his work, usually by requesting reviews from less stringent reviewers.
As a result, after a while I started documenting every single instance of his sneaky rule-breakage, sending every instance straight to his manager, and the person was out pretty soon.
Linus caught Kent when he tried to sneak non-bugfixes into an RC, and berated him.
After that (not before, this is a critical distinction) Kent said "I don't want to abide by the rules, because I have my concerns".
This is very similar to the situation I described, except that in Linux it was Linus who was skipping reviews on Kent's code, trusting him not to subvert the rules, whereas in the situation I described the team collectively trusted each other not to subvert the rules.
You've explained everyone is unhappy with it and that you worked to get the one person who actually acted upon it fired. It's hilarious but in a pretty sad way that you're portraying this as an inevitability. It wasn't, it was just you. You had a choice, and you chose to do this. It wasn't inevitable.
I didn't make myself quite clear — the others were raising points on _other_ rules, and as a result we tuned the rules quite often, as we discovered what works better and what works worse.
A PostgreSQL server is a single process that starts in under 100ms on a developer's laptop.
In the company I work for we use real PostgreSQL in unit tests — it's cheap to start one at the beginning of a suite, load the schema and go, then shut it down and discard its file store.
I keep thinking of moving that file store to tmpfs when run on Linux, but it's nowhere near the top of the performance improvements for the test suite.
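The suite setup can be sketched roughly like this in Go — assuming `initdb`, `pg_ctl`, and `psql` are on PATH, and with the data directory, user name, and `schema.sql` path all as placeholder examples, not our actual harness:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes an external command and aborts the harness if it fails.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s failed: %v\n", name, err)
		os.Exit(1)
	}
}

func main() {
	// Skip the demo gracefully when PostgreSQL binaries aren't installed.
	if _, err := exec.LookPath("initdb"); err != nil {
		fmt.Println("PostgreSQL binaries not on PATH; skipping")
		return
	}

	// Throwaway data directory — the file store we discard at the end.
	dir, err := os.MkdirTemp("", "pgtest")
	if err != nil {
		panic(err)
	}

	// Initialize a fresh cluster and start it on a Unix socket inside the
	// data directory, so parallel suites never fight over a TCP port.
	run("initdb", "-D", dir, "-U", "test", "-A", "trust", "--no-sync")
	run("pg_ctl", "-D", dir, "-o", "-F -k "+dir+" -c listen_addresses=", "-w", "start")

	// Load the schema; the actual tests would run at this point.
	run("psql", "-h", dir, "-U", "test", "-d", "postgres", "-f", "schema.sql")

	// Shut down immediately and throw the file store away.
	run("pg_ctl", "-D", dir, "-m", "immediate", "stop")
	os.RemoveAll(dir)
}
```

Pointing the temp directory at a tmpfs mount would be the one-line change mentioned above, but as noted, it's nowhere near the top of the suite's bottlenecks.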
So: no more mocks or substitute databases with their tiny inconsistencies.