1. Is each of the JS processes running in its own process, with its own mailbox? (I assume from the description that each runtime instance is its own process.)
2. can the BEAM scheduler pre-empt the JS processes?
3. How is memory garbage collected? Does each JS process garbage-collect individually?
4. Are values within JS immutable?
5. If they are not immutable, is there a risk of memory errors? And if there is a memory error, would it crash the JS process without crashing the rest of the system?
1. Yes. Each runtime is a GenServer (= own process + mailbox). There's also a lighter-weight Context mode where many JS contexts share one OS thread via a ContextPool, but each context still maps 1:1 to a BEAM process.
2. No. JS runs on a dedicated OS thread, outside the BEAM scheduler. But there's an interrupt handler (JS_SetInterruptHandler) that checks a deadline on every JS opcode boundary — pass timeout: 1000 to eval and it interrupts after 1s, runtime stays usable. For contexts there's also max_reductions — QuickJS-NG counts JS operations and interrupts when the budget runs out, closest analog to BEAM reductions.
3. QuickJS-NG uses refcounting with cycle detection. Each runtime/context has its own GC — one collecting doesn't touch another. When a Runtime GenServer terminates, JS_FreeContext + JS_FreeRuntime release everything.
4. No, standard JS mutability. But the JS↔Erlang boundary copies values — no shared mutable state across that boundary.
5. QuickJS-NG enforces JS_SetMemoryLimit per-runtime (default 256 MB) and JS_SetContextMemoryLimit per-context. Exceeding the limit raises a JS exception, not a segfault. It propagates as {:error, ...} to the caller. Since each runtime is a supervised GenServer, the supervisor restarts it. There are tests for OOM in one context not crashing the pool, and one runtime crashing not affecting siblings.
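To make the answers above concrete, here is a rough Elixir-flavored sketch. The module, function, and option names (`Runtime`, `eval`, `:memory_limit`, the exact error reason) are assumptions based on the description in this thread, not the library's actual API:

```elixir
# Hypothetical API — names are illustrative, per the thread's description.
{:ok, rt} = Runtime.start_link(memory_limit: 256 * 1024 * 1024)

# A runaway loop gets interrupted at the 1s deadline (the interrupt
# handler checks it on JS opcode boundaries), and the runtime stays
# usable afterwards rather than crashing.
{:error, _reason} = Runtime.eval(rt, "while (true) {}", timeout: 1000)

# Subsequent evals still work; values are copied across the
# JS<->Erlang boundary, so no mutable state is shared.
{:ok, 3} = Runtime.eval(rt, "1 + 2")
```

Since each runtime is a supervised GenServer, anything that does take the process down is handled the usual OTP way: the supervisor restarts it without touching sibling runtimes.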
Part of the technical assessment I use when hiring new platform engineers involves troubleshooting a service hosted in a headless Linux VM.
Troubleshooting and fluency on the command line are among what I consider core skills. Being able to dig through abstraction layers is not just essential for when things go wrong; it is essential for building infrastructure, and it really tells you whether an architecture is fit for purpose.
When I was interviewing people on behalf of a client, I was surprised at the number of people who didn't even know what SSH was. This was for a mid-level software developer role, not a junior one, and the candidates all came with glowing resumes.
They all insisted that it was essential to have a CI/CD process but didn't know what the "CD" part even did. Apparently you just "git push" and the code magically gets onto the server. There are many ways to do deployments; a CI/CD process isn't always suitable and can take many forms, in my opinion, but I was happy to discuss any and all. It's difficult to do that without the basics, though. As you said, before I was commissioned the platform had no documentation, was crumbling under tech debt, and was failing constantly, so something like getting onto the server to at least figure out what was going on was essential.
I interviewed for a senior sysadmin role, and they asked me to debug a website in the browser that was only visible on localhost; SSH was available.
They asked me to double-check that step because they assumed I had skipped it; apparently I was the first candidate who didn't need help with an SSH tunnel.
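For anyone following along, the tunnel in question is a one-liner; the host and ports here are placeholders, not anything from the actual interview:

```shell
# Forward local port 8080 to port 80 on the remote machine's loopback,
# then browse http://localhost:8080 on your own machine.
# -N: don't run a remote command, just hold the tunnel open.
ssh -N -L 8080:localhost:80 user@remote-host
```

The `localhost` in `-L` is resolved on the remote side, which is exactly what you want for a service that only listens on the server's loopback interface.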
There’s a lot of that going around, lately. I recently had an interviewer admit I was not in the first round of candidates sent for in-person finals, but they had all bombed out on very basic SSO questions despite having a decade managing Entra; I was a “second choice” candidate and the first one to correctly answer the broad strokes of setting up an SSO app, despite not having touched Entra since it was called Azure AD.
I suspect this is AI’s doing, but cannot be sure. It’s really critical that technical interviewers weed out the over-inflaters though, now more than ever.
This predates AI. I've been interviewing candidates (SRE/DevOps) since 2018, and so many candidates who claim extensive experience with things completely fall apart when you put them in front of a terminal.
Gathering and mapping unfamiliar systems is part of that skillset. I’m also looking at being able to think laterally, being able to descend abstraction layers, and understanding architectural characteristics and constraints (Roy Fielding’s dissertation), which recur at each level of abstraction.
From a professional perspective, this is a solid question. And yeah, between the basic tool suite (top/cd/ls -l/df -H/grep/pipe '|'/ssh) and some common sysadmin/engie knowledge, I could get by with Linux just fine. "Just fine" doesn't cut it for troubleshooting sludgepipes and Kubernetes though, and my skills with PowerShell finally gave me the confidence boost to take CLI/TUI seriously on Linux.
And man, zero regrets. It's nice having an OS not fight me tooth and nail to do shit, even if it means letting me blow my feet off with some commands (which is why, to any junior readers out there, we always start with a snapshot when troubleshooting servers).
Now to finish my mono-compose for my homelab and get back to enjoying the fruits of my labor...
It's the management structure focused on short-term gains and promotion cycles, combined with a corporate culture focused very much on the same as management with the added twist of politicking, backstabbing, and undercutting other teams.
I've spent much of my life inside Microsoft's ecosystems. Not merely my career, but my technological life itself started with Win 3.11 on a parental laptop. I've spent so long in their orbit that I can generally infer what their latest thing does and how it works from an IT POV based on its product name alone, because I understand how Microsoft thinks from a marketing and engineering perspective.
As you say, they have some truly brilliant folks in their ranks. Those few diamonds are buried under mountains of garbage and slop from above, though. I mean, this is the company that pioneered full-fat PC handhelds 20 years before the Steam Deck, the smartwatch a full decade before Apple, the home media ecosystem years before streaming apps dominated, smartphones before the iPhone; I can go on and on. The problem isn't the engineers so much as corporate mismanagement, but they somehow survive like a cockroach on install base alone.
Sure... but, I’ve got decades of experience doing that stuff, just not frequently enough to keep it in my head, these days. I usually want a small project server to just do shit and the less there is between that and booting up a fresh Linux install, the better. For example, I don’t keep firewall command line syntax in my head, but I know what needs to be done, and I always seem to need it with small home projects. I lose nothing by having a trustworthy gui do it. I’d give this a shot. I doubt I’d use it in a professional environment, but that’s not really my use case these days.
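As an example of the kind of syntax that's easy to look up but annoying to keep memorized, here's the sort of thing a firewall GUI is wrapping, assuming ufw (Ubuntu's default frontend; ports are illustrative):

```shell
# Allow SSH and a project web port, then turn the firewall on.
sudo ufw allow 22/tcp
sudo ufw allow 8080/tcp
sudo ufw enable
sudo ufw status verbose
```

Knowing what needs to be done (default-deny inbound, allow-list the few ports you use) matters far more than remembering whether it's ufw, firewalld, or raw nftables syntax on any given box.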
Which goes to show, experience and maturity change how people use tools. The person I was responding to was at an earlier maturity stage and realized it was hampering their growth.
I am more of a TUI person anyway. I have never found web-based server management to be as responsive as a TUI, the same reason I prefer attaching directly over live-tailing in a web tool.
I configure my router through a web interface and not the command line either. It isn’t something I want to mess with on my downtime.
Meanwhile, in BEAM land, this is a solved problem, and I am just watching different languages converge towards the same set of primitives and constraints. It is hard to replicate BEAM’s preemptive scheduler, but I would not be surprised if someone ends up thinking they invented something novel by adding queues (mailboxes), or failing that, something like Go channels.
And even then, once you have a workable set of primitives, it turns out that some orchestration patterns recur over and over again, so people will converge towards OTP once the primitives are there.
I don’t have my LLMs generate literate programming. I do ask it to talk about tradeoffs.
I have full examples of something that is heavily commented and explained, including links to any schemas or docs. I have gotten good results when I ask an LLM to use that as a template, that not everything in there needs to be used, and it cuts down on hallucinations by quite a bit.
I remember someone mentioning a system that operated with ASTs like this in the 70s or 80s. One of the affordances is that the source base did not require a linter. Everyone reading the code can have it formatted the way they like, and it would all still work with other people’s code.
I read other people’s code all the time. I work as a platform engineer with sre functions.
Gemini 3 by itself is insufficient. I often find myself tracing through things or testing during runtime to understand how things behave. Claude Opus is not much better for this.
On the other hand, pairing with Gemini 3 feels like pairing with other people. No one is going to get everything right all the time. I might ask Gemini to construct gcloud commands or look things up for me, but we’re trying to figure things out together.
I think mistercheph is right to be concerned. This bill applies to all "operating system providers", defined thusly:
(g) “Operating system provider” means a person or entity that develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device.
Regarding penalties:
1798.503. (a) A person that violates this title shall be subject to an injunction and liable for a civil penalty of not more than two thousand five hundred dollars ($2,500) per affected child for each negligent violation or not more than seven thousand five hundred dollars ($7,500) per affected child for each intentional violation, which shall be assessed and recovered only in a civil action brought in the name of the people of the State of California by the Attorney General.
>This bill applies to all "operating system providers", ...
Not really.
>...for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store.
So the OS has to provide an age signal to apps from a "covered application store" defined as:
e) (1) “Covered application store” means a publicly available internet website, software application, online service, or platform that distributes and facilitates the download of applications from third-party developers to users of a computer, a mobile device, or any other general purpose computing that can access a covered application store or can download an application.
(2) “Covered application store” does not mean an online service or platform that distributes extensions, plug-ins, add-ons, or other software applications that run exclusively within a separate host application.
It doesn't say "only if there's a covered application store present on the system". But maybe everyone in power will interpret this non-logically in exactly the right way that this doesn't become abusive.
Wouldn’t that classification apply to Linux package managers as well?
They are publicly available online services that distribute and facilitate the download of applications from third party developers to users of a general purpose computing device.
That may not be the intent, but it seems like it would still apply. Many of the “app stores” on Linux are just front ends for the package manager in some way.
I assume the people behind this don’t know things like apt or dnf exist, so it likely wasn’t considered.
There are a large number of things that the people behind this don't know about. That doesn't mean that they screwed up. It means that the law does not apply to those things.