Everybody says that bots take websites down, while marketing-oriented folks are starting to practice AO (agent optimization) to make their offerings even more available and far-reaching.
JS runtimes are heavyweight, so embedding one instantly adds at least 30-50 MB of RAM usage. Imagine doing this just for one specific function (JSON processing) when your total RAM budget for a whole pod is around 256 MB.
No doubt this approach works reasonably well on machines with plenty of RAM, but I can see why it becomes a bottleneck when scaled to N instances. RAM is expensive, and when you multiply those 50 extra megabytes by N, your total costs climb quickly.
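The back-of-envelope math, using only the rough figures from the comments above (these are estimates from the thread, not measurements):

```python
# Rough estimates from the comments above, not measurements.
POD_RAM_MB = 256           # total RAM budget for the whole pod
JS_RUNTIME_MB = 50         # upper-end overhead of an embedded JS runtime

# Share of the pod's budget eaten by the runtime alone.
share = JS_RUNTIME_MB / POD_RAM_MB

def fleet_overhead_gb(n_instances: int) -> float:
    """Extra RAM across N instances, in GB, just for the embedded runtimes."""
    return n_instances * JS_RUNTIME_MB / 1024

print(f"{share:.0%} of the pod")   # ~20% of each pod gone before any work is done
print(fleet_overhead_gb(1000))     # ~48.8 GB of pure overhead across 1000 instances
```

At a thousand instances, that is tens of gigabytes of RAM bought solely to parse JSON, which is the scaling cost the comment is pointing at.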
I presume it's a difference between Windows editions. I turned it off too, but it wasn't as easy as one click: I had to use Group Policies to decrapify the adjacent aspects of the system. Group Policies aren't available in the Home edition of Windows, for example.
I just went into the widget settings, turned off the "Start" or "News" experience or whatever it is, and it's never come back, just using the toggle it offers in settings. While I was at it, I went into the taskbar settings and turned off the widgets altogether; they never came back. I wonder if sometimes we assume the worst and resort first to "hack" ways of disabling these things, and when a new update comes along from your company or whoever, they get re-enabled, instead of just using the built-in functionality for turning things off.
No worries, this is just how manipulative relationships work: they always aim for unidirectional communication.
You're obliged to consume the most important news from the most important entity on planet Earth (Microsoft/Facebook/X/...), eat the piles of informational crap dumped onto you, waste your emotional energy processing the whole thing, participate in drama, and show your admiration. Why? Because in that state you're very convenient: malleable and coercible into whatever action the entity wants from you, without it saying so directly.
But when it's time to listen to you and your concerns - surprise, surprise - nobody's home. It's one way only, see you next time, maybe.
P.S. Forcefully installing an attention-polluting app like News in the Start Menu is nothing less than a way to control you. And for an insatiable ego, the sense of control is everything. This is why it keeps coming back, again and again, as if it were a lucky recurring coincidence. A Windows Update repairs the system? Yes, plus it repairs the system of control. Security patches are a very convenient vehicle for that: once you eat one, you'll be served special dishes you never asked for.
While we have `sandbox-exec` in macOS, we still don't have a proper Docker for macOS. Instead, the current Docker runs on macOS as a Linux VM, which is useful, but only as far as a Linux machine goes.
Having real macOS Docker would solve the problem this project solves, and 1001 other problems.
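For anyone who hasn't used it: `sandbox-exec` takes a profile written in Apple's Scheme-like sandbox profile language. It's officially deprecated but still ships and works. A minimal deny-by-default profile looks roughly like this (the binary and paths here are purely illustrative):

```scheme
;; deny everything, then whitelist only what the tool actually needs
(version 1)
(deny default)
(allow process-exec (literal "/usr/bin/jq"))   ; the sandboxed binary (example)
(allow file-read* (subpath "/usr/lib")         ; dyld and system libraries
                  (subpath "/System/Library"))
(allow file-read* (subpath "/tmp/input"))      ; the one directory it may read
(deny network*)                                 ; no network access at all
```

Run it with `sandbox-exec -f profile.sb /usr/bin/jq . /tmp/input/data.json`. It's per-process confinement on the shared host kernel, which is exactly the gap between it and "a real macOS Docker" that the comment describes.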
Why not? They're definitely not perfect security boundaries, but neither are VMs. I think containers provide a reasonable security/usability tradeoff for a lot of use cases including agents. The primary concern is kernel vulnerabilities, but if you're keeping your kernel up-to-date it's still imo a good security layer. I definitely wouldn't intentionally run malware in it, but it requires an exploit in software with a lot of eyes on it to break out of.
It's certainly better than nothing. Hence "probably doesn't matter too much in this context" - but of course it always matters what your threat model is. Your own agents under your control with aligned models and not interacting with attacker data? Should be fine.
But too many people just automatically equate Docker with strong, secure isolation and... well, it can be, sometimes, depending on a hundred other variables. Hence the reminder, to foster conversations like this.
Counter-intuitively, the fact that Docker on the Mac requires a Linux-based VM makes it safer than it otherwise would be. But your point stands in general, of course.
> Having real macOS Docker would solve the problem
I'm very slowly working on a mock Docker implementation for macOS that uses an ephemeral VM to launch a true macOS guest and run commands per a Dockerfile, copy files, etc. I use it internally for builds. No public repo yet, though. Not sure if there is demand.
If you expect macOS to behave like Linux, you are asking the wrong OS to do the job. Docker and runtimes like runc depend on Linux kernel primitives such as namespaces and cgroups that XNU does not provide, and macOS adds System Integrity Protection, TCC, signed system frameworks, and launchd behaviors that make sharing the host kernel for arbitrary workloads technically hard and legally messy.
A practical path is ephemeral macOS VMs using Apple's Virtualization.framework, coupled with APFS copy-on-write clones for fast provisioning, or limited per-process isolation via seatbelt and the hardened runtime. That respects Apple's licensing, which restricts macOS VMs to Apple hardware, and gives strong isolation at the cost of higher RAM and storage overhead compared with Linux containers.
What would native containers bring over Linux ones? The performance of VZ emulation is good, existing tools have great UX, and using a virtualized kernel is a bit safer anyways. I regularly use a Lima VM as a VSCode remote workspace to run yolo agents in.
Sometimes you just have to run native software. In my case, that means macOS build agents using Xcode and Apple toolchains which are only available on macOS.
It's not a pleasure to run them in a mutable environment where everything has a floating state as I do now. Native Docker for macOS would totally solve that.
VZ has been exceptional for me. I have been running headless VMs with Lima and VZ for a while now with absolutely zero problems. I just mount a directory I want Claude Code to be able to see and nothing more.
> What would native containers bring over Linux ones?
What would a Phillips screwdriver bring over a flathead screwdriver? Sometimes you don't want or need the flathead, simple as that. There are macOS-specific jobs you need to run on macOS, such as Xcode toolchains. You can try cross-compiling, but it's a pain, and ridiculous given that every other major OS supports containers natively (including Windows). It's clear to me that Apple is trying to keep the jobs-per-Mac-Mini ratio as small as possible.
Go is just one language, while a Dockerfile gives you access to the whole universe, with myriad tools and options from the early 1970s up to the future. I don't know how you can compare or even "replace" Docker with Go; they belong to different categories.
thanks :) the braiding approach is super clever too, this was one of those weird moments where you find something and then have to triple check your results because how could i accidentally find something better than the algorithm that hasn't been touched in decades...
the part i really like is that it gives us a small improvement over pclmul too. the non-accelerated algorithm doesn't really stand a chance against the accelerated opcode on newer hardware, so it probably isn't going to see much use in practice. however... i think hardware implementations could benefit (e.g. ethernet cards)
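For context, the "non-accelerated algorithm" baseline here is essentially the classic bit-at-a-time reflected CRC-32, which pclmul and the slicing/braiding variants all speed up. A minimal sketch, checked against `zlib.crc32` and the standard check value:

```python
import zlib

CRC32_POLY = 0xEDB88320  # reflected CRC-32 polynomial (the one zlib/Ethernet use)

def crc32_bitwise(data: bytes, crc: int = 0) -> int:
    """Plain one-bit-at-a-time CRC-32: the slow baseline the fast variants beat."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32: "123456789" -> 0xCBF43926
assert crc32_bitwise(b"123456789") == 0xCBF43926
assert crc32_bitwise(b"hello pclmul") == zlib.crc32(b"hello pclmul")
```

Table-driven slicing-by-N processes several bytes per step instead of one bit, and pclmul folds whole 128-bit blocks at once, which is why this loop loses so badly on modern CPUs.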
The user [1] you've mentioned has 160 points from a total of four bland messages, which is far outside the normal statistical distribution. And this gives away why they do it: the long-term aim is to cultivate voting rings to influence the narratives and rankings in the future. For now, this is only my theory but it may be a real monetization strategy for them.
I'd be interested to know why those comments were flagged actually. They don't scream AI and no-one has replied calling them out as AI, etc. But the vast majority are dead.
That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.
The account was immediately shadowbanned after re-awakening from a long period of inactivity.
I agree it doesn't seem obviously AI. The early comments are all in the same writing style and smell human. Lots of strong opinions e.g.
"logged in after years away and had basically the same experience. the feed is just AI slop and engagement bait now, none of it from people I actually followed." [about Facebook]
HN has a big problem with silently shadowbanning accounts for no obvious reason. Whether it's an attempt to fight bots gone wrong or something else isn't clear. By the very nature of shadowbanning, there is no feedback loop that can correct mistakes.
Pretty sure they weren't shadowbanned immediately, since people replied to some of those [dead] comments. Most likely the shadowban was applied retroactively after posting the more obviously generated stuff.
>And this gives away why they do it: the long-term aim is to cultivate voting rings to influence the narratives and rankings in the future. For now, this is only my theory but it may be a real monetization strategy for them.
I don't think it's clear at all why people do this. I suspect a large amount of it, at least on a site like HN, is just hapless morons who think it's "cool".
Probably this is what's happened here. Either the OP's domain was previously used for shady activities, or the almost-free stigma puts the whole TLD on a grey list of high-risk assets. It probably also explains the nuclear behavior of the registrar (suspension).
Imagine mass-produced AI chips with all human knowledge packed into chinesium epoxy blobs, running off CR2032 batteries in children's toys. Given the progress in density and power consumption, that's not far away.
Good luck banning yourself from the future.