Is there a Clang-based build for Windows? I’ve been slowly moving my Windows builds from MSVC to Clang, which still uses the Microsoft STL implementation.
So far I think using clang instead of MSVC compiler is a strict win? Not a huge difference mind you. But a win nonetheless.
Uhhh what? Isn’t the whole point of Bazel that it’s a monorepo with all dependencies so you don’t need effing docker just to build or run a bloody computer program?
It drives me absolutely batshit insane that modern systems are incapable of either building or running computer programs without Docker. Everyone should be profoundly embarrassed and ashamed by this.
I’m a charlatan VR and gamedev that primarily uses Windows. But my deeply unpopular opinion is that Windows is a significantly better dev environment and runtime environment because it doesn’t require all this Docker garbage. I swear that building and running programs does not actually have to be that complicated!! Linux userspace got pretty much everything related to dependencies and packages very very very wrong.
I am greatly pleased and amused that the most reliable API for gaming in Linux is Win32 via Proton. That should be a clear signal that Linux userspace has gone off the rails.
You’re covering a lot of ground here! The article is about producing container images for deployment, which has no relation to Bazel building stuff for you - if you’re not deploying as containers, you don’t need this?
On Linux vs Win32 flame warring: can you be more specific? What specifically is very very wrong with Linux packaging and dependency resolution?
> The article is about producing container images for deployment
Fair. Docker does trigger my predator drive.
I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.
> What specifically is very very wrong with Linux packaging and dependency resolution?
Linux userspace for the most part is built on a pool of global shared libraries and package managers. The theory is that this is good because you can upgrade libfoo.so just once for all programs on the system.
In practice this turns into pure dependency hell. The de facto workaround is to use Docker, which completely nullifies the entire theoretical benefit.
Linux toolchains and build systems are particularly egregious at just assuming a bunch of crap is magically available in the global search path.
Docker is roughly correct in that computer programs should include their gosh darn dependencies. But it introduces so many layers of complexity, each of which gets solved by adding yet another layer. Why do I need estargz??
If you’re going to deploy with Docker then you might as well just statically link everything. You can’t always get down to a single exe. But you can typically get pretty close!
> I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.
Not every dependency in Bazel requires you to "first invent the universe" locally. Lots of examples of this like toolchains, git_repository, http_archive rules and on and on. As long as they are checksum'ed (as they are in this case) so that you can still output a reproducible artifact, I don't see the problem
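For illustration, a pinned external dependency looks something like this (the name, URL, and hash here are placeholders, not from the article):

```starlark
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "some_dep",  # hypothetical dependency
    urls = ["https://example.com/some_dep-1.2.3.tar.gz"],
    strip_prefix = "some_dep-1.2.3",
    # The checksum pins the exact bytes: if the mirror ever serves
    # anything else, the build fails instead of silently changing.
    sha256 = "deadbeef...",  # placeholder
)
```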
Everything belongs in version control imho. You should be able to clone the repo, yank the network cable, and build.
I suppose a URL with a checksum is kinda sorta equivalent. But the article adds a bunch of new layers and complexity to avoid “downloading Cuda for the 4th time this week”. A whole lot of problems don’t exist if the binary blobs live directly in the monorepo and local blob store.
It’s hard to describe the magic of a version control system that actually controls the version of all your dependencies.
Webdev is notorious for old projects being hard to compile. It should be trivial to build and run a 10+ year old project.
If you did that, Bazel would work a lot better. Most of the complexity of Bazel is because it was originally basically an export of the Google internal project "Blaze," and the roughest pain points in its ergonomics were pulling in external dependencies, because that just wasn't something Google ever did. All their dependencies were vendored into their Google3 source tree.
WORKSPACE files came into being to prevent needing to do that, and now we're on MODULE files instead because they do the same things much more nicely.
That being said, Bazel will absolutely build stuff fully offline if you add the one step of running `bazel fetch //...` in between cloning the repo and yanking the cable, with some caveats depending on how your toolchains are set up and of course the possibility that every mirror of your remote dependency has been deleted.
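A sketch of that workflow, assuming a Bazel workspace is already checked out (these are standard Bazel commands, but whether they suffice depends on your toolchain setup):

```shell
# While online: prefetch every external repository the targets need
# into Bazel's local repository cache.
bazel fetch //...

# After yanking the cable: build without touching the network at all.
# --nofetch makes any attempted fetch an error instead of a download.
bazel build --nofetch //...
```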
Making heavy use of mostly remote caches and execution was one of the original design goals of Blaze (Google's internal version) iirc in an effort to reduce build time first and foremost. So kind of the opposite of what you're suggesting. That said, fully air-gapped builds can still be achieved if you just host all those cache blobs locally.
> So kind of the opposite of what you're suggesting.
I don’t think they’re opposites. It seems orthogonal to me.
If you have a bunch of remote execution workers then ideally they sit idle on a full (shallow) clone of the repo. There should be no reason to reset between jobs. And definitely no reason to constantly refetch content.
> Game and engine devs simply don't bother anymore to optimize for the low end
All games carefully consider the total addressable market. You can build a low end game that runs great on total ass garbage onboard GPU. Suffice to say these gamers are not an audience that spend a lot of money on games.
It’s totally fine and good to build premium content that requires premium hardware.
It’s also good to run on low-end hardware to increase the TAM. But there are limits. Building a modern game and targeting a 486 is a wee bit silly.
If Nvidia gamer GPUs disappeared and devs were forced to build games capable of running on shit-ass hardware, the net benefit to gamers would be very minimal.
What would actually benefit gamers is making good hardware available at an affordable price!
Everything about your comment screams “tall poppy syndrome”. </rant>
I don't think it's insane. In that hypothetical case, it would be a slightly painful experience for some people: the top end would be a bit curtailed for a few years while game developers learn to target other cards, hopefully in some more portable way. But feeling hard done by because your graphics hardware is stuck at 2025 levels for a bit is not that much of a hardship really, is it? In fact, if more time is spent optimising for non-premium cards, perhaps the premium card that you already have will work better than the next upgrade would have.
It's not inconceivable that the overall result is a better computing ecosystem in the long run - the open source space in particular, where Nvidia has long been problematic. Or maybe it'll be a multi-decade gaming winter, but unless gamers stop being willing to throw large amounts of money at chasing the top end, someone will want that money even if Nvidia doesn't.
There is a full order of magnitude of difference between a modern entry-level discrete GPU and a high-end card. Almost two orders of magnitude (100x) compared to an older (~2019) integrated GPU.
> In fact, if more time is spent optimising for non-premium cards, perhaps the premium card that you already have will work better than the next upgrade would have.
Nah. The stone doesn’t have nearly that much blood to squeeze. And optimizations for ultralow-end may or may not have any benefit to high end. This isn’t like optimizing CPU instruction count that benefits everyone.
The swirly background (especially on the main screen), shiny card effects, and the CRT distortion effect would be genuinely difficult to implement on a system from that era. Balatro does all three with a couple hundred lines of GLSL shaders.
(The third would, of course, be redundant if you were actually developing for a period 486. But I digress.)
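For a sense of what's involved, a CRT-style barrel distortion boils down to something like this (an illustrative GLSL sketch with made-up names, not Balatro's actual shader):

```glsl
// Warp the screen texture outward toward the edges, CRT-style.
uniform sampler2D screen_tex;  // hypothetical: the rendered frame
varying vec2 uv;               // hypothetical: [0,1] screen coords

void main() {
    vec2 c = uv * 2.0 - 1.0;        // remap [0,1] -> [-1,1], origin at center
    c *= 1.0 + 0.07 * dot(c, c);    // push samples outward, more near the edges
    vec2 warped = c * 0.5 + 0.5;    // remap back to texture space
    gl_FragColor = texture2D(screen_tex, warped);
}
```

Trivial on anything with programmable shaders; a per-pixel texture warp like this would be brutal on a period CPU rasterizer.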
I always chuckle when I see an entitled online rant from a gamer. Nothing against them, it's just humorous. In this one, we have hard-nosed defense of free market principles in the first part worthy of Reagan himself, followed by a Marxist appeal for someone (who?) to "make hardware available at an affordable price!".
And that is how easily we lose agency to AI. Suddenly even checking the commands that a technology (unavailable until 2-3 years ago) writes for us is perceived as some huge burden...
The problem is that it genuinely is. One of the appeals of AI is that you can focus on planning instead of actually running the commands yourself. If you're educated enough to be able to validate what the commands are doing (which you should be if you're trusting an AI in the first place), then if you have to individually approve pretty much everything the AI does, you're not much faster than just doing it yourself. In my experience, not running in YOLO mode negates most advantages of agents in the first place.
AI is either an untrustworthy tool that sometimes wipes your computer for a chance at doing something faster than you would've been able to on your own, or it's no faster than just doing it yourself.
Only Codex. I haven't found a sane way to let it access, for example, the Go cache in my home directory (read only) without giving it access EVERYWHERE. Now it does some really weird tricks to have a duplicate cache in the project directory. And then it forgets to do it and fails and remembers again.
With Claude the basic command filters are pretty good and with hooks I can go to even more granular levels if needed. Claude can run fd/rg/git all it wants, but git commit/push always need a confirmation.
I mean the direction of the AIs general tasking, it will do the command correctly but what it's trying to achieve isn't going in the right direction for whatever reason. You might be tempted to suggest a fix, but I truly mean for "whatever reason". There's dozens of different ways the AI gets onto a bad path, I would rather catch it early rather than come back to a failed run and have to start again.
I suppose the real question here is “how often should I check on the AI and course correct”.
My experience is that if you have to manually approve every tool invocation, then we're talking an approval every 3 to 15 seconds. This is infuriating and makes me want to flip tables. The worst possible cadence.
Every 5 or 15 minutes is more tolerable. Not too long for it to have gone crazy and wasted time. Short enough that I feel like I have a reasonable iteration cadence. But not too short that I can’t multi-task.
lol no. There are literally a hundred plus Unix tools and commands. I couldn’t tell you what 90% of them mean. I sure as hell couldn’t have told you what sed stood for. And if you asked me tomorrow I also wouldn’t be able to tell you.
C programmers are great. I love C. I wish everything had a beautiful pure C API. But C programmers are strictly banned from naming things. Their naming privileges have been revoked, permanently.
It's `xaf`, because the modern world is way too complex for simple Germanic rules to solve it.
But GNU tar was never the issue. It's almost completely straightforward; the only problem it has is people confusing the tar file with the target directory. If you use some UNIX tar, you will understand why everybody hates it.
Someone once tried this on me during Friday drinks and I successfully conquered the challenge with "tar --help". The challenger tried in vain to claim that this was not valid, but everyone present agreed that an exit code of zero meant that it was a valid solution.
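At least with GNU tar, the exit code backs this up:

```shell
# GNU tar accepts --help as a valid invocation:
# it prints its usage text and exits with status 0.
tar --help > /dev/null
echo $?
```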
Some drunks in a gnu-shaped echo chamber concluded that the world is gnu-shaped. That's not much of a joke, if there is one here. Such presently popular axioms as "unix means linux" or "the userland must be gnu" or "bash is installed" can be shown to be poor foundations to reason from by using a unix system that violates all those assumptions. That the xkcd comic did not define what a unix system is is another concern; there are various definitions, some of which would exclude both Linux and OpenBSD.
I seem to remember "tar xvf filename.tar" from the 1990s, I'll try that out. If I'm wrong, I'll be dead before I even notice anything. That's better than dying of cancer or Alzheimer's.
z requires that it's compressed with gzip, and is likely a GNU extension too (it was j for bzip2, iirc). It's also important to keep f last, because it takes a parameter: the filename should follow.
So I'd always go with c (create) instead of x (extract), as the latter assumes an existing tar file (zx or xz even a gzipped tar file too; not sure if it's smart enough to autodetect compress-ed .Z files vs .gz either): with create, higher chances of survival in that xkcd.
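Either way, the create/extract pair round-trips cleanly (paths here are made up for the example):

```shell
# c(reate) a gzipped archive from a directory, then e(x)tract it elsewhere.
mkdir -p demo && echo "survived" > demo/note.txt
tar czf demo.tar.gz demo       # c: create, z: gzip, f: archive name follows
mkdir -p out
tar xzf demo.tar.gz -C out     # x: extract, -C: change to directory first
cat out/demo/note.txt          # prints "survived"
```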
`tar xvzf file.name` is always a valid command, whether file.name exists or not. When the file doesn't exist, tar will exit with status '2', apparently, but that has no bearing on the validity of the command.
Compare these two logs:
$ tar xvzf read.me
tar (child): read.me: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
$ tar extract read.me
tar: invalid option -- 'e'
Try 'tar --help' or 'tar --usage' for more information.
Do you really not understand the difference between "you told me to do something, but I can't" and "you just spouted some meaningless gibberish"?
The GGP set the benchmark at "returns exit code 0" (for "--help"), and even with XKCD, the term in use is "valid command" which can be interpreted either way.
The rest of your slight is unnecessary, but that's your choice to be nasty.
Like I said, I was operating on a lot of zipped tars. Not sure what you are replying about.
The other commenter already mentioned that the xkcd just said "valid", not "returns 0" (which, to be fair, is what the original non-xkcd challenge required, so I guess fair on the mix-up).
Oh, just funny mental gymnastics if we are aiming for survival in 10 seconds with a valid, exit code 0 tar command. :)
As tar is a POSIX (ISO standard for "portable operating system interfaces") utility, I am also highlighting what might get us killed as all of us are mostly used to GNU systems with all the GNU extensions (think also bash commands in scripts vs pure sh too).
Hehe fair enough in that case. Tho nothing said it had to work on a tar from like 1979 ;)
To me at least POSIX is dead. It's what Windows (before WSL) supported with its POSIX subsystem so it could say it was compatible but of course it was entirely unusable.
Initial release July 27, 1993; 32 years ago
Like, POSIX: Take the cross section of all the most obscure UNICES out there and declare that you're a UNIX as long as you support that ;)
And yeah I use a Mac at work so a bunch of things I was used to "all my life" so to speak don't work. And they didn't work on AIX either. But that's why you install a sane toolchain (GNU ;) ).
Like sure I was actually building a memory compactification algorithm for MINIX with the vi that comes with MINIX. Which is like some super old version of it that can't do like anything you'd be used to from a VIM. It works. But it's not nice. That's like literally the one time I was using hjkl instead of arrow keys.
WebDevs who have build systems that take ten minutes and download tens of megabytes of JS and have hundreds of milliseconds of lag are sooooooooooooo not allowed to complain about game devs ever.
Oh, at first I thought you were talking about websites doing that and I was going to say "sure, those people can't complain, but the rest of us can".
Then I realized you said build systems and eh, whatever. It's not good for build systems to be bloated, but it matters a lot less than the end product being bloated.
And you seem to be complaining about the people that are dealing with these build systems themselves, not inflicting them on other people? Why don't they get to complain?
Download bloat is net less impactful than build-time bloat imho. Game download and install size bloat is bad, but it is mostly a one-time cost. Build-time bloat doesn't directly impact users, but iteration time is GodKing, so bad build times indirectly hurt consumers.
But that’s all beside the point. What I was really doing was criticizing the <waves hands wildly> HN commenters. HN posters are mostly webdevs because most modern programmers are webdevs. And while I won’t say the file bloat here wasn’t silly, I won’t stand for gamedev slander from devs who commit faaaaaaaaaaaaaar greater sins.
Web devs are not a hivemind. That kind of criticism doesn't work well at all when pointed at the entirety of the site.
> Download bloat is net less impactful than build time bloat imho.
Download bloat is a bad problem for people on slow connections, and there's a lot of people on slow connections. I really dislike when people don't take that into account.
And even worse if they're paying by the gigabyte in a country with bad wireless prices, that's so much money flushed down the drain.
Believe you me I wish every website worked on 2G. HN is great at least.
For consoles total disk space is an even bigger constraint than download size. But file size is a factor. Call of Duty is known to be egregious. It’s a much more complex problem than most people realize. Although hopefully not as trivial a fix as Helldivers!
In any case HN has a broadly dismissive attitude towards gamedevs. It is irksome. And deeply misplaced. YMMV.
> When debugging a vexing problem one has little to lose by using an LLM — but perhaps also little to gain.
This probably doesn't give them enough credit. If you can feed an LLM a list of crash dumps, it can do a remarkable job producing both analyses and fixes. And I don't mean just for super obvious crashes. I was most impressed with a deadlock where numerous engineers had tried and failed to understand exactly how to fix it.
After the latest production issue, I have a feeling that opus-4.5 and gpt-5.1-codex-max are perhaps better than me at debugging. Indeed my role was relegated to combing through the logs, finding the abnormal / suspicious ones, and feeding those to the models.