We switched to 'software engineer' to encapsulate that, I think. You can receive requirements and churn out code or you can go up a level and think about the solution. Go another level up and think about the problem. Another level and it's the context of the problem. Further than that and it's the priority of it. And even higher up is how it fits in the product roadmap and the architectural decisions.
At some point you stop developing and start weighing up the requirement against your understanding of the system and the environment it works in.
'yes, you should' needs to be reconciled with 'it's f*g expensive' and 'risk is low'.
Nowadays, 'risk is low' isn't true anymore, and it's actually cheaper to have a robot spit out a reimplementation of the 5.4% of what you need from your dependencies instead of auditing the 100%.
People have for years. The real question is do people enjoy not putting any thought into their super convenient JavaScript stack too much to actually do anything about it. Delaying updating to new packages assuming the vulnerability will be discovered in two days or whatever is putting a knee brace on a leg that needs to be amputated. Sooner or later there will be a vulnerability good enough to not be caught in a couple days, or a zero-day damaging enough that not updating immediately is a huge risk. Assuming they won’t be in anything critical enough to disastrously compromise your stack is wishful thinking at its finest.
The part that always gets me is I tend to only install a few packages like React and maybe some kind of data access layer. But let that recurse down a few levels and suddenly you've installed a thousand packages, some of them hopelessly obsolete, some of them for patently stupid things that are 1 line of code, etc, etc. I.e. you can't choose to be thoughtful if the main entry points into the language are all built on a pile of garbage.
Oh yeah, for sure. The problem (mostly) isn’t people installing packages willy-nilly: it’s that the attack surface is fractal, which is just plain nuts.
Now that npm supports --before, yarn supports npmMinimumAge, and pnpm supports minimumReleaseAge, it's quite possible to stay safe and avoid occasional bleeding-edge upgrades. Stay a couple months in the past, give testers time to look at newer releases and vet their safety (or report an exploit attempt).
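For illustration, here's roughly what that setup looks like. The exact keys and units vary by package-manager version (e.g. pnpm's setting is in minutes, if I recall correctly), so treat these values as a sketch rather than copy-paste config:

```
# npm: only resolve versions published before a given date
#   npm install --before="2025-08-01"

# pnpm-workspace.yaml: require releases to be at least a week old
# (minimumReleaseAge is expressed in minutes)
minimumReleaseAge: 10080
```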
npm's immaturity is arguably demonstrated by the fact it is always catching up.
Please correct me if I'm wrong, but signed packages are still impractical in NPM, which is why supply chain attacks still work by editing existing versions or pushing new point releases without a signature.
Or if you put all of your credentials in GitHub Actions, you have an even wider attack vector: it's even more trivially exploitable through the Actions marketplace, because it's just git behind a thin proxy.
In principle I agree, but chrome has an auto-update setup and using that mechanism to download several GBs of data that is not critical to the app itself is cause for question.
Chrome is not entitled to my disk space just because I installed it and Microsoft has been excoriated for the exact same behaviour with AI.
>Chrome is not entitled to my disk space just because I installed it
When you install any program it becomes entitled to your disk space, by the definition of installation. If you don’t like the program, you can just uninstall it and it’ll no longer take up your disk space.
It's entitled to what is a reasonable usage of disk space, which you generally know by the size of the installer. Some install mechanisms bypass that because they give you a minimal installer that then downloads the full package. It's not entitled to unlimited usage.
Using that same mechanism to pull in several GBs worth of extra data without any warning is sketchy. If this happened and did not respect any settings for running on a metered network then it is even worse.
Other applications where this entitlement is better understood usually have a mechanism to purge the space they use. E.g. Docker will consume whatever space you give it, but you have commands to purge that space, or to limit how much it consumes if it goes through a VM.
I really don't know why anyone would try to defend a tech company on what is a table-stakes expectation for being a good actor in the ecosystem. It's really lowering the bar for the supplier's sake instead of keeping the bar high for the consumer.
As a counterpoint, Call of Duty (the game) was mocked for requiring a good 200+GB of disk space, and the conspiracy theory was that they did it to push other games out of your storage. The market response there is easy: don't buy COD and don't install it.
I don't think it's quite the same for a browser that abuses network effects to stay useful. In which case Chrome is to Google what IE6 was to MS. A separate topic but we know that not all browsers are considered equal on the web.
You can also yell "hey Alexa add an open crotch G-string to my basket" and it'll be funny for the first couple of times but once it becomes a meme it's just annoying and is filtered out.
You could just as well say "Sir, this is a Wendy's. To shreds you say? Don't call me Shirley" and the model would ignore it
I would love it if coding agents didn't default to GitHub for their deep VCS integration.
If I could get the same bells and whistles by wiring up another forge, so long as it offered a decent API and/or sent events over a webhook, I'd have everything self-hosted.
The agents would need to expose an interface on their own end, but as long as you implemented it with a plugin, it'd remove the dependency on GitHub, and you could use MCP or skills for the rest of it.
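To sketch what that plugin seam might look like: a hypothetical forge-agnostic interface the agent could accept, with GitHub reduced to just one implementation among many. All names here are illustrative, not any real agent's API:

```typescript
// Hypothetical interface a coding agent could expose for forge plugins.
// Any forge with a decent API/webhooks could implement it.
interface ForgePlugin {
  // Open a PR/merge request; returns an identifier for it.
  openPullRequest(opts: { title: string; branch: string }): Promise<string>;
  // List open issues for the repository.
  listIssues(): Promise<{ id: number; title: string }[]>;
  // Subscribe to forge events delivered over a webhook.
  onWebhook(event: string, handler: (payload: unknown) => void): void;
}

// Stub implementation standing in for a self-hosted forge.
class SelfHostedForge implements ForgePlugin {
  async openPullRequest(opts: { title: string; branch: string }): Promise<string> {
    return `pr:${opts.branch}`; // would call the forge's REST API here
  }
  async listIssues(): Promise<{ id: number; title: string }[]> {
    return [{ id: 1, title: "example issue" }];
  }
  onWebhook(_event: string, _handler: (payload: unknown) => void): void {
    // would register a webhook endpoint here
  }
}
```

The agent only ever talks to `ForgePlugin`, so swapping forges is a configuration change rather than a rewrite.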
The neat thing about Tangled is it's built on an open protocol (https://atproto.com). This allows us to build an effectively API-free system, since all data on Tangled can be ingested via the AT Protocol firehose.
Which is to say, this is perfect for agents given they don't need any bespoke SDK from us: simply write Tangled records for issues, pulls, whatever to your PDS and it'll show up on Tangled. We plan to start working on some exemplar agents first-party that would 1. enhance Tangled itself, 2. showcase cool things you can do with an open data firehose.
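To make that concrete, a hypothetical issue record written to a user's PDS might look something like this. The collection name and fields here are illustrative guesses, not Tangled's actual lexicon:

```json
{
  "$type": "sh.tangled.issue",
  "repo": "at://did:plc:example/sh.tangled.repo/my-project",
  "title": "Build fails on ARM",
  "createdAt": "2025-01-01T00:00:00Z"
}
```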
You do realise that writing Tangled records for issues, pulls, whatever constitutes both a spec and an API.
The fact that you use a protocol to define it is beside the point. You still have to define what a Tangled record is, and the interface that accepts it, and the mechanism to resolve it on the client.
How else do you define what a 'tangled' is even if the underlying structure is git.
I would really just like the quirkier internet of old.
Flamewars these days are created by shit-stirrers in another country pumping out rage bait from a massive array of smartphones. It's not even an impassioned flamewar; it simply exists to aggravate.
Using AI to forcefully disengage by simply suppressing that content would be nice and also have the secondary effect of depriving various internet resources of ad revenue.
I'd argue the issue is people have figured out that "shit stirring" can make actual meaningful differences to reality, be they foreign or local.
When the limit of effect a flamewar would have is if Star Trek or Star Wars got the top billing, or Vim was recommended to new programmers instead of Emacs, it was a fun novelty.
But now there's real money and power resulting from this shit stirring, so of course people will use it as a means to an end. They've optimised professional shit-stirring because it's so valuable now.
I know the 4 player version from the Yakuza games. Until then I only knew the solitaire version, from a demo on Net Yaroze on the PlayStation, where you basically got some weird games alongside the demos on a new demo disc every month.
Reminds me of poker.
Also I miss the excitement of a new issue of a magazine with a demo disc of a few new games.
Usually by the time a PR has been submitted it's too late to dig into aspects of the change that come from a poor understanding of the task at hand without throwing out the PR and creating rework.
So it's helpful to shift left on that and discuss how you intend to approach the solution. Especially for people who are new to the codebase or unfamiliar with the language and, thanks to AI, show little interest in learning.
Obviously not for every situation, but time can be saved by talking something through before YOLOing a bad PR.
Yes, it should be cheap to throw out any individual PR and rewrite it from scratch. Your first draft of a problem is almost never the one you want to submit anyway. The actual writing of the code should never be the most complicated step in any individual PR. It should always be the time spent thinking about the problem and the solution space. Sometimes you can do a lot of that work before the ticket, if you're very familiar with the codebase and the problem space, but for most novel problems, you're going to need to have your hands on the problem itself to get your most productive understanding of them.
I'm not saying it's not important to discuss how you intend to approach the solution ahead of time, but I am saying a lot about any non-trivial problem you're solving can only be discovered by attempting to solve it. Put another way: the best code I write is always my second draft at any given ticket.
More micromanaging of your team's tickets and plans is not going to save you from team members who "show little interest in learning". The fact that your team is "YOLOing a bad PR" is the fundamental culture issue, and that's not one you can solve by adding more process.
I don't disagree that a practical spike is a good way to grasp a novel problem (or work with a lack of internal knowledge because it's legacy code) but there is still something to be said for attempting to work things out in the abstract too, and not necessarily by adding process, but by redeveloping that internal knowledge and getting familiar with the business domain.
In a greenfield project I will have a lot of patience for a team that doesn't grasp the problem space too well yet, and needs to feel around it by experimenting and prototyping. You have to encourage that or you might not even be building anything innovative.
For a longer-term legacy project, the team can't really afford to have people going down rabbit holes, and it's more beneficial to approach things in the abstract and reduce the problem as much as possible. Especially with junior or mid-level engineers, who can see an old codebase as a goldmine for refactoring if left unattended.
As for the fundamental culture issue... maybe. AI increases the frequency of low quality PRs and puts a bigger burden on the reviewer. I can live with this in the short term if people take lessons from it and keep building up their own skillset. I feel this issue is not unique to my team and LLM-driven development is still novel enough that we're all figuring out the best way to tackle it.
Asking a more junior developer, or someone who "shows little interest in learning", to discuss their approach with you before they've spent too much time on the problem, especially if you expect them to take the wrong approach, seems like the right way to do things.
Throwing out someone's PR when they don't expect it would be quite unpleasant, especially coming from someone more senior.
This is how I try to approach it. I don't think it's a new thing for a new hire to come in hot and try to figure things out themselves rather than spending time with the team. Or getting lost down rabbit holes.