I didn't read much of the page - I was just scrolling a bit to see what the fuck that thing is doing, and that was more than enough to know that I'll never touch whatever those people are doing.
> On the one hand, moving from assembly language to C made programmers less effective in some ways and more effective in others. On the other hand, the transition from writing code by hand to using AI is arguably a bigger shift,
I think that's a bad comparison, mainly for two reasons:
We still have people dealing with assembly for building the compilers. We don't quite have that for AI - which became very clear with the Claude Code leak: we just have people trying to get the correct behaviour, but without understanding why.
The second reason has more practical implications: I generally rely on my compiler to be deterministic, and not introduce security issues (which they sometimes do, by optimising away safeguards placed by the programmer - but that is relatively rare, and we can trace and fix that). But generally as a developer I can rely on my compiler to produce machine or byte code I don't have to think about.
The same is not true for AI - they regularly produce insecure code. This can be partially mitigated by having another AI review it - but it's not a proper solution. Until we get AI which can actually understand things, we need somebody in the loop who can understand the code.
I'd recommend every developer get one or more colourblind friends. I have some, and regularly send them screenshots of what I'm working on to get feedback on what they can see and what they can or can't read.
They've been absolutely invaluable for making sure their kind of people can't use my apps properly.
8% of the male population has some form of colorblindness (for women it’s around 0.5%). I have deuteranomaly colorblindness. If you search for images on the internet related to that type of colorblindness you’ll find representations of how we see color and how we see the world.
It is not a fun condition to have, and it leads to lots of problems in my everyday life. This blog post accidentally accentuated that issue, since the colors are (from what I can tell) very similar looking to me as a colorblind person.
1 in 12 men and 1 in 200 women go through the same sorts of experiences, and it’s worth it, if you aren’t color deficient, to try out some of the colorblindness sites and see the world as we do.
> 1 in 12 men and 1 in 200 women go through the same sorts of experiences,
Almost everyone loses some colour definition in their vision as they age, even those lucky enough to have excellent colour vision to start with; some lose a lot more than others, and it is gradual, so mostly not noticed at first. This is one of the reasons many grandparents have the saturation oddly high on their TVs (the other main reason, of course, being that they've just never changed it from the default, which is picked to make the display “pop” under bright show-room lighting conditions).
Thank you both for sharing your lived experience as well as concrete examples for understanding. I, like I'm sure many others, live a richer life knowing what others are going through and how I can make tiny adjustments, even if it's just awareness, to account for how others different from me in one way or another go through life.
Had one of those happen in high school — science teacher talking about colour blindness and shows students the colour blindness tests, one student assumes he’s being trolled and that one of the test images was a solid colour.
Pro-tip: there are browser extensions able to simulate various kinds of color blindness.
That is better than a random friend, because a.) there are various kinds of colorblindness and b.) you won't ask the random friend to work for your company for free.
On the other hand, the random friend generally has a great deal of experience with what interventions help and don't help.
A filter shader generally won't tell you that substituting white for green in a red/green indicator is a great option, or that a colour that they can “see” is still ambiguous when you have to describe it: “I'm clicking the purple button, but it's not doing anything” “Purple? There's no purple in the app!”
Ubisoft released some tools to simulate colorblindness. It's easy to extract the color transformation from their shader directly.
I'm not colorblind, but I use that quite often to check roughly whether the color palette I chose is fine
https://github.com/ubisoft/Chroma
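A minimal sketch of what such a simulation boils down to, in Python (this uses a commonly circulated linear-RGB approximation of deuteranopia; the exact matrix Chroma's shader uses may differ):

```python
# Each row maps the simulated (R', G', B') channel from linear-RGB input.
# Matrix values are a widely used deuteranopia approximation, not Chroma's.
DEUTERANOPIA = [
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.700],
]

def simulate_deuteranopia(rgb):
    """Apply the 3x3 color-deficiency matrix to one linear-RGB triple."""
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in DEUTERANOPIA)
```

Running your palette's swatches through a transform like this (or the browser extensions mentioned above) quickly shows which color pairs collapse toward each other.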
It's obviously a typo (or an excellently delivered joke) but I did get a chuckle out of the idea of someone going out of their way to ask color blind friends for feedback just to do the opposite out of spite for some reason.
.. to get an idea of the impact of your UI design on color-limited folks out there ..
I used this a few times to great effect, it was very revealing to see that my carefully selected teals and ambers were incomprehensible to some folks I really wanted to use my apps .. didn't take much iteration to come to a happy palette though, just needed a bit of care.
For a lot of tasks smaller models work fine, though. Nowadays the problem is less model quality/speed and more that it's a bit annoying to mix them into one workflow with easy switching.
I'm currently making an effort to switch to local for stuff that can be local - initially standalone tasks, longer term a nice harness for mixing. One example would be OCR/image description - I have hooks from dired to throw an image at a local translategemma 27b, which extracts the text, translates it to English as necessary, adds a picture description, and - if it feels like it - extra context. Works perfectly fine on my MacBook.
Another example would be generating documentation - local qwen3 coder with a 256k context window does a great job at going through a codebase to check what is and isn't documented, and prepare a draft. I still replace pretty much all of the text - but it's good at collecting the technical details.
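As a rough sketch of what wiring such a standalone task to a local model can look like (assuming an Ollama-style HTTP API on localhost; the model name, prompt, and port are illustrative - the commenter's actual hooks live in Emacs/dired, not Python):

```python
import json
import urllib.request

def build_payload(image_b64, model="translategemma:27b"):
    """Request body for an Ollama-style POST /api/generate endpoint."""
    return {
        "model": model,
        "prompt": ("Extract any text from the image, translate it to English "
                   "if necessary, and add a short picture description."),
        "images": [image_b64],  # base64-encoded image data
        "stream": False,        # one JSON response instead of a token stream
    }

def run_local(payload, host="http://localhost:11434"):
    """Send the request to the local model server and return its text."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Everything stays on the machine; swapping the `model` string is all it takes to point the same task at a different local model.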
I haven’t tried it yet, but Rapid MLX has a neat feature for automatic model switching. It runs a local model using Apple’s MLX framework, then “falls forward” to the cloud dynamically based on usage patterns:
> Smart Cloud Routing
>
> Large-context requests auto-route to a cloud LLM (GPT-5, Claude, etc.) when local prefill would be slow. Routing based on new tokens after cache hit: `--cloud-model openai/gpt-5 --cloud-threshold 20000`
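A toy illustration of that kind of threshold routing (my own sketch, not Rapid MLX's actual code; the idea is that only tokens which miss the prompt cache count toward the threshold, since cached tokens don't need a slow local prefill):

```python
def pick_backend(new_tokens_after_cache_hit, cloud_threshold=20000):
    """Route large prefills to a cloud model, small ones to the local model."""
    if new_tokens_after_cache_hit > cloud_threshold:
        return "cloud"  # e.g. whatever --cloud-model names, like openai/gpt-5
    return "local"
```

The appeal is that long follow-ups in an already-cached conversation can still run locally, because only the new tokens count.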
How is that surprising? We've been taking that into account for any LLM-related tooling for over a year now: we either can drop it, or have it designed in a way that lets us switch to a self-hosted model when throwing money at hardware would pay for itself quickly.
It's just another instance of cloud dependency, and people should've learned something from that over the last two decades.
Not so much that it was surprising, rather that we looked at a competitor’s site and noticed that a) their prices went way up and b) their branding changed to be heavily AI-first.
So we thought, hmm, “wonder if they are increasing prices to deal with AI costs,” and then projected that into a future where costs go up.
We don’t have this dependence ourselves, so this seems to be a competitive advantage for us on pricing.
> Or, maybe, don’t: when people do, they take much more than they eat. Compared with ordering from the menu, all-you-can-eat breakfasts waste more food—up to twice as much, according to one study.
Is that a cultural thing? We have pretty much zero food waste at any buffet, as you can easily take only what you actually want to eat. It's just basic good upbringing to be considerate with resources, especially food - and I rarely see people taking more than they actually eat, so it's not just an "our family" thing. If you throw away a lot of food at a buffet you're just an inconsiderate asshole - and if a restaurant location has significant food waste from that, they should just start charging for leftovers.
I think it might be partially cultural, American buffets have a lot of leftover food, and people tend to take a lot of food and throw away a lot of food. There's a variety of reasons why it has to be thrown away, but it is.
It's paywalled so I didn't read more than the first paragraph. But maybe the waste comes from overestimation of the amount of food to produce? Even if everyone eats a perfect portion for themselves, if you overestimate the total then you'll have food waste if the food can't be preserved.
> Even if everyone eats a perfect portion for themselves, if you overestimate the total then you'll have food waste if the food can't be preserved.
That'd just be poor planning on the part of the hotel/restaurant. It'd be a valid excuse when starting out, but after a few weeks that should be under control.
If you only do breakfast buffets it's a bit harder - but you monitor the situation, and as breakfast time approaches its end you reduce the items you can't store or reuse otherwise. Pretty much every hotel I've been to in the last few years has let those kinds of items run out without restocking them when we had a late breakfast.
If you also do lunch/dinner buffets you have some more options, and can have some dishes reusing the leftovers. I've also seen that regularly - they had the planned dishes, and a few smaller pots with something they came up with to reuse whatever was left over.
Yubikeys (and Nitrokeys and other HSMs) are technically smart cards, which perform crypto operations on the card. This can be an issue when doing lots of operations, as the interface is quite slow.
Anything PKCS#11 you can proxy. I'm using that on some systems - I have an old notebook with a Nitrokey HSM at home. It binds pkcs11-proxy to a local WireGuard interface, so I register the systems that should be able to use those keys to that notebook's WireGuard. They still need a PIN to unlock a session as well.
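Roughly, a setup like that looks as follows (a config sketch only - module paths, addresses, and the exact pkcs11-proxy environment variable names may differ between builds and distributions):

```shell
# On the notebook with the HSM: serve the real PKCS#11 module,
# bound to the WireGuard address only (10.0.0.1 is illustrative).
PKCS11_DAEMON_SOCKET="tcp://10.0.0.1:2345" \
  pkcs11-daemon /usr/lib/opensc-pkcs11.so

# On a registered client: point the proxy module at the notebook
# over WireGuard; tools then load the proxy library instead of a
# local card driver, and key operations happen on the remote HSM.
export PKCS11_PROXY_SOCKET="tcp://10.0.0.1:2345"
pkcs11-tool --module /usr/lib/libpkcs11-proxy.so --login --list-objects
```

The private key never leaves the HSM; the client only ships PKCS#11 calls across the tunnel, which is also why the session PIN is still required on the client side.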
As a German, after encountering Russian bureaucracy once, I commented to my wife that the main difference between Russian and German bureaucracy is that in Russia at least you can pay your way out.