Extensions choose which sites they're active on and whether they expose any web-accessible assets (e.g. some extensions modify the CSS of a website by injecting their own stylesheet, so that asset is public). Any website where the extension is active can then call fetch("chrome-extension://<extension_id>/whatever/file/needed.css") if it knows the extension ID (fixed for each extension) and the file path to such an asset... if the fetch result is 404, it can assume the extension is not installed; if the result is 200, it can assume the extension is installed.
This is what LinkedIn is doing... they have their own database of extension IDs plus a known working file path for each, and they just run these fetches... they've been doing it for years. I noticed it a few years back when I was developing a Chrome extension that also worked with LinkedIn, but back then fewer than 100 extensions were scanned, so I just assumed they wanted to detect specific extensions which break their site or their terms of use... now it's apparently 6000+ extensions...
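The probing trick described above can be sketched in a few lines. This is a hedged illustration, not LinkedIn's actual code: the function names are mine, and a real probe would need the real extension ID and a path listed in that extension's web_accessible_resources.

```typescript
// Sketch of the fetch-probe technique. Extension IDs and asset paths
// here are placeholders; only a browser page can actually resolve
// chrome-extension:// URLs, and only for web-accessible resources.
async function isExtensionInstalled(extId: string, assetPath: string): Promise<boolean> {
  try {
    const res = await fetch(`chrome-extension://${extId}/${assetPath}`);
    return res.ok; // 200 → the asset was served, so the extension exists
  } catch {
    return false; // fetch rejected (404-equivalent / blocked) → assume not installed
  }
}

// A site could sweep a whole database of known extensions this way:
async function sweep(db: Array<{ id: string; path: string }>): Promise<string[]> {
  const hits = await Promise.all(
    db.map(async (e) => ((await isExtensionInstalled(e.id, e.path)) ? e.id : null)),
  );
  return hits.filter((id): id is string => id !== null);
}
```

Outside a browser (or for an extension that isn't installed) the fetch rejects, so the probe fails closed to "not installed".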
You can decouple Controllers from Views using React. That's the basis of the "original" Flux/Redux architecture used by React developers 10+ years ago, when React was just beginning to get traction.
A Flux/Redux "Store" acts as the Model: it contains all the global state and determines exactly what gets rendered. A Flux/Redux "Dispatcher" acts as the Controller. And React "Components" (the Views) get their props from the Store and send events to the Dispatcher, which in turn modifies the Store and forces a redraw.
Of course they aren't "entirely decoupled", because the view still has to call the controller functions, but the same controller action can be called from multiple views, and you can still design the architecture from the Model, through the Controller (which properties can change under what conditions), and only then design the Views (where the interactions can happen).
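The Store/Dispatcher wiring described above can be sketched minimally. All names here (CounterStore, dispatch, the action types) are illustrative, not any particular Flux library's API:

```typescript
// Minimal Flux-style wiring: Store holds state, Dispatcher routes events,
// views only read state and emit actions.
type Action = { type: "increment" } | { type: "reset" };
type Listener = () => void;

class CounterStore {
  private count = 0;
  private listeners: Listener[] = [];

  getState(): number {
    return this.count; // views read their "props" from here
  }
  subscribe(fn: Listener): void {
    this.listeners.push(fn); // views re-render when notified
  }
  // Only the dispatcher calls this; views never mutate the store directly.
  handle(action: Action): void {
    if (action.type === "increment") this.count += 1;
    if (action.type === "reset") this.count = 0;
    this.listeners.forEach((fn) => fn());
  }
}

const store = new CounterStore();

// The dispatcher plays the Controller role: any view can send it events.
function dispatch(action: Action): void {
  store.handle(action);
}
```

A component would render from store.getState() and call dispatch({ type: "increment" }) in its click handler — the same action can be dispatched from any number of views.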
I was asking more in the abstract. Web UI frameworks usually sit on top of considerable abstraction (in the form of the DOM, eventing system, etc), so I'm not sure your reply exactly answers my question.
My experience is similar, but I feel I'm actually thinking way harder than I ever was before LLMs.
Before LLMs, once I was done with the design choices as you mention them - risks, constraints, technical debt, alternatives, possibilities, ... - I cooked up a plan, and with that plan I could write the code without having to think hard. Actually writing code was relaxing for me, and I feel I need that kind of relaxation between hard thinking sessions.
Nowadays we leave the code writing to LLMs because they do it far faster than a human could, but then we have to think hard to check whether the code the LLM wrote satisfies the requirements.
Reviewing junior developers' PRs has also become harder now that they use LLMs. Juniors powered by AI are more ambitious and more careless. The AI often suggests complicated code the juniors themselves don't understand; they just see that it works and commit it. Sometimes it suggests new library dependencies the juniors wouldn't have thought of themselves, and of course it's the senior's role to decide whether the dependency is warranted and worth including. Average PR length has also increased. And since juniors work much faster with AI, we spend more time doing PR reviews.
I feel like my whole job has somehow collapsed into reviewing code from both sides: on one side the code my AI writes, on the other the code the juniors' AI wrote, the amount of which has only increased. And even though I like reviewing code, it's the hardest part of my profession, and I liked it more when it was balanced with tasks which required less thinking...
> 4. Ends up in test environment for, what, a month.. nothing using getaddrinfo from glibc is being used to test this environment or anyone noticed that it was broken
"Testing environment" sounds to me like a real network that real user devices use (like the network inside Cloudflare's offices). That's what I would do if I were developing a DNS server, anyway - beyond unit tests (which obviously wouldn't catch this unless they were explicitly written for this case) and maybe integration/end-to-end tests, which might be running in Alpine Linux containers and as such using musl rather than glibc. If that's indeed the case, I can easily imagine how no one noticed anything was broken. First look at this line:
> Most DNS clients don’t have this issue. For example, systemd-resolved first parses the records into an ordered set:
Now think about what real end-user devices are running: Windows/macOS/iOS obviously aren't using glibc, and Android has its own C library even though it's Linux-based, so they all presumably fall under "Most DNS clients don't have this issue."
That leaves GNU/Linux, where we could reasonably expect most software to use glibc for resolving queries, so presumably anyone using Linux on their laptop would catch this, right? Except most distributions now use systemd-resolved (the most notable exception is Debian, but not many people use that on desktops/laptops), which is a local caching stub resolver and as such acts as a middleman between glibc software and the network-configured DNS server: it would resolve the 1.1.1.1 queries correctly and then return results from its cache, ordered by its own ordering algorithm.
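The middleman effect described above can be sketched in a few lines. This is purely illustrative: the lexicographic sort below stands in for whatever ordering systemd-resolved actually applies, which is not what's shown here — the point is only that the cache's order, not the upstream server's, is what glibc clients see.

```typescript
// Illustrative caching-middleman behavior: whatever order (or duplication)
// the upstream DNS server sent, the cache deduplicates and imposes its own
// deterministic order before answering local clients.
function cacheRecords(upstreamAnswers: string[]): string[] {
  const ordered = [...new Set(upstreamAnswers)]; // drop duplicate records
  ordered.sort(); // stand-in for the resolver's own ordering algorithm
  return ordered;
}
```

So even if the upstream response would have tripped up glibc's parser, desktop Linux users behind such a cache only ever saw the cache's normalized view.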
For the output of Cloudflare’s DNS server, which serves a huge chunk of the Internet, they absolutely should have a comprehensive byte-by-byte test suite, especially for one of the most common query/result patterns.
I used pyenv for a decade before uv, and it wasn't a "major pain" either. But compared to uv it was infinitely more complex, because uv manages Python versions seamlessly.
If the Python version changes in a uv-managed project, you don't have to do any extra step: just run "uv sync" as you normally do when you want to install updated dependencies. uv automatically detects it needs a new Python, downloads it, re-creates the virtual environment with it, and installs the deps, all in one command.
And since that's the command everyone runs whenever a dependency update is required, no dev is going to panic about why the app isn't working after we merge in new code that requires a newer Python because they missed the Python-update memo.
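The workflow above boils down to one command; the version numbers here are made-up, but requires-python is the real pyproject.toml field uv consults:

```
# Someone bumps the constraint in pyproject.toml, e.g.:
#   requires-python = ">=3.13"
# After pulling that change, everyone just runs:
uv sync
# uv notices the current interpreter no longer satisfies the constraint,
# downloads a matching Python, recreates .venv, and reinstalls the deps.
```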
This picture does display differently in Chrome and Safari, but if I analyze it using the methods you did, I arrive at a different result: I don't see an iHDR chunk there; instead I see a gAMA chunk, and if I remove it with pngcrush the image shows normally in Chrome.
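For anyone who wants to repeat this kind of inspection without pngcrush, listing a PNG's chunk names is straightforward. This is a rough sketch I wrote for illustration (it trusts the declared lengths and skips CRC verification), based on the PNG layout of an 8-byte signature followed by chunks of 4-byte big-endian length, 4-byte ASCII type, data, and CRC:

```typescript
// List the chunk type names in a PNG byte stream.
function listChunks(png: Uint8Array): string[] {
  const names: string[] = [];
  let off = 8; // skip the 8-byte PNG signature
  while (off + 8 <= png.length) {
    // 4-byte big-endian data length
    const len =
      (((png[off] << 24) | (png[off + 1] << 16) | (png[off + 2] << 8) | png[off + 3]) >>> 0);
    // 4-byte ASCII chunk type, e.g. "IHDR", "gAMA", "IEND"
    const name = String.fromCharCode(
      png[off + 4], png[off + 5], png[off + 6], png[off + 7],
    );
    names.push(name);
    off += 12 + len; // length field + type + data + CRC
    if (name === "IEND") break;
  }
  return names;
}
```

Note that chunk names are case-sensitive: a lowercase first letter (as in gAMA) marks an ancillary chunk, which is why a tool can strip it without breaking decoding.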
There have been ads in the App Store for a long time. The upcoming change is that they will also appear further down in search results; right now they only show at the top...
There are GPUs from three different generations in that list... the Quadro 6000 is an old Fermi card from 2010, the Quadro RTX 6000 is Turing from 2018, and the RTX 6000 Ada is Ada Lovelace from 2022...
Oh, and there's also the RTX PRO 6000 Blackwell, which is Blackwell from 2025...
I gave up understanding GPU names a long time ago. Now I just hope the efficient market hypothesis is at least moderately effective and as long as I buy from a reputable retailer the price is at least mostly reflective of performance.
They've hyper-optimized all these marketing buzzwords to the point that I'm basically forced into the moral equivalent of buying GPUs by the pound, because I have no idea what these marketers are trying to tell me anymore. The only stat I really pay attention to is VRAM size.
(If you are one of those marketers, this really ought to give you something to think about. Unless obfuscation is the goal, which I definitely cannot exclude based on your actions.)