Loss32: Let's Build a Win32/Linux (loss32.org)
344 points by akka47 1 day ago | 434 comments




This might offend some people, but even Linus Torvalds thinks that ABI compatibility in Linux distros is not good enough, and this is one of the main reasons Linux is not popular on the desktop. https://www.youtube.com/watch?v=5PmHRSeA2c8&t=283s

To quote a friend: "Glibc is a waste of a perfectly good stable kernel ABI"

Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself; it is just wrapped in a set of very stable userland exposures (Win32, UWP, etc.), and it’s those exposures that Windows executables rely on. A theoretical Windows PE binary that was 100% statically linked (and so directly contained NT syscalls) wouldn’t be at all portable between different Windows versions.

Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.

I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
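A minimal sketch of that pinning, assuming Docker and using placeholder image/binary names:

  # run an old dynamically-linked binary against an equally old userland,
  # on top of today's kernel
  docker run --rm -v "$PWD:/work" ubuntu:14.04 /work/old-app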

(Which, now that I think of it, makes me wonder how exactly Windows containers work. I’m guessing each one brings its own NTOSKRNL, which gets spun up under Hyper-V if the host kernel ABI doesn’t match the guest?)


IIRC, Windows containers require that the container be built with a base image that matches the host for it to work at all (like, the exact build of Windows has to match). Guessing that’s how they get a ‘stable ABI’.

…actually, looks like it’s a bit looser these days. Version matrix incoming: https://learn.microsoft.com/en-us/virtualization/windowscont...


The ABI has been stabilised for backwards compatibility since Windows Server 2022, but it is not stable for earlier releases.

> Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself

This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.

> makes me wonder how exactly Windows containers work

I guess containers do their syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, it's probably the WSL syscalls, which, I guess, are stable.


> NT kernel ABI isn’t even all that stable itself

Can you give an example where a breaking change was introduced in NT kernel ABI?


https://j00ru.vexillium.org/syscalls/nt/64/

(One example: hit "Show" on the table header for Win11, then use the form at the top of the page to highlight syscall 8c)


Changes in syscall numbers aren't necessarily breaking changes, as you're supposed to use ntdll.dll to call the kernel, not direct syscalls.

That was his point exactly.


The syscall numbers change with every release: https://j00ru.vexillium.org/syscalls/nt/64/

Syscall numbers shouldn't be a problem if you link against ntdll.dll.

So now you're talking about the ntdll.dll ABI instead of the kernel ABI. ntdll.dll is not the kernel.

NTDLL is NT’s kernel ABI, not syscalls. Nothing on Windows uses syscalls to call the kernel.

NTDLL isn’t some higher level library. It’s just a series of entry points into NT kernel.


Yes, the fact that functions in NTDLL issue a syscall instruction is a platform-specific implementation detail.

...isn't that the point of this entire subthread? The kernel itself doesn't provide the stable ABI, userland code that the binary links to does.

No. On NT, the kernel ABI isn't defined by the syscalls but by NTDLL. Win32 and all other APIs are wrappers on top of NTDLL, not syscalls. Syscalls are how NTDLL implements kernel calls behind the scenes; it's an implementation detail. The original point of the thread was about Win32, UWP and other APIs that build a new layer on top of NTDLL.

I argue that NT doesn't break its kernel ABI.


NTDLL APIs are very stable[0] and you can even compile and run x86 programs targeting NT 3.1 Build 340[1], which will still work on Win11.

[0] as long as you don't use APIs they decided to add and remove in a very short period (longer read: https://virtuallyfun.com/2009/09/28/microsoft-fortran-powers...)

[1] https://github.com/roytam1/ntldd/releases/tag/v250831


macOS and iOS too — syscalls aren’t stable at all, you’re expected to link through shared library interfaces.

Apparently there are 3 kinds of Windows containers: one using Hyper-V, and the others sharing the kernel (like Linux containers).

https://thomasvanlaere.com/posts/2021/06/exploring-windows-c...


Docker on Windows isn't simply a glorified virtual machine running Linux (a.k.a. Windows Subsystem for Linux v2).

At least glibc uses versioned symbols. Hundreds of other widely-used open source libraries don't.

Versioned glibc symbols are part of the reason that binaries aren't portable across Linux distributions and time.

Only because people aren't putting in the effort to build their binaries properly. You need to link against the oldest glibc version that has all the symbols you need, and then your binary will actually work everywhere(*).

* Except for non-glibc distributions of course.


But to link against an old glibc version, you need to compile on an old distro, on a VM. And you'll have a rough time if some part of the build depends on a tool too new for your VM. It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.

Check out `zig cc`. It lets you target specific glibc versions. It's a pretty amazing C toolchain.

https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
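The glibc version simply rides along in the target triple; a minimal sketch (2.17 here is just an arbitrary old version):

  # cross-compile against an older glibc without having it installed
  zig cc -target x86_64-linux-gnu.2.17 -o hello hello.c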


It's actually doable without an old glibc, as was demonstrated by the Autopackage project: https://github.com/DeaDBeeF-Player/apbuild

That never took off though; containers are easier. With distrobox and other tools this is quite easy, too.


> It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.

It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
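A minimal sketch of the trick, assuming x86-64 (where GLIBC_2.2.5 is the oldest version tag; memcpy is the classic example because a new default version appeared in glibc 2.14; the header name here is made up):

  # force-include a header that pins memcpy to the old version tag,
  # so the binary also loads on pre-2.14 glibc
  printf '__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");\n' > old_syms.h
  gcc -include old_syms.h -o app app.c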


Huh? Bullshit. You could totally compile and link in a container.
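A rough sketch of that, with the image and file names as placeholders:

  # build inside an older userland so the result links against its older glibc
  docker run --rm -v "$PWD:/src" -w /src centos:7 \
      sh -c 'yum -y -q install gcc && gcc -o myapp myapp.c'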

Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”.

Of course, you both leave out that you could do it “on real hardware”.

But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.


I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM.

But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container.

As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.

But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.


> But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task.

But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?


Only for people who want to use it as a desktop replacement for Windows or MacOS I guess? There are no end of people complaining they can't get their wifi or sound card or trackpad working on (insert-obscure-Linux-distribution-here).

Like many others, I have Linux servers running with over 2000-3000 days of uptime. So I'm going to say no, it doesn't, not really.


>As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.

You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run. In fact the exact opposite would happen. It's not implausible for an Arch or Gentoo user to end up compiling their code on a five hour old release of glibc and thereby maximize glibc incompatibility with every other distribution.


If it requires effort to be correct, that's a bad design.

Why doesn't glibc use the version tag to do the appropriate mapping?


I think even calling it a "design" is dubious. It's an attribute of these systems that arose out of the circumstance, nobody ever sat down and said it should be this way. Even Torvalds complaining about it doesn't mean it gets fixed, it's not analogous to Steve Jobs complaining about a thing because Torvalds is only in charge of one piece of the puzzle, and the whole image that emerges from all these different groups only loosely collaborating with each other isn't going to be anybody's ideal.

In other words, the Linux desktop as a whole is a bazaar, not a cathedral.


I don’t understand why this is the case, and would like to understand. If I want only functions f1 and f2 which were introduced in glibc versions v1 and v2, why do I have to build with v2 rather than v3? Shouldn’t the symbols be named something like glibc_v1_f1 and glibc_v2_f2 regardless of whether you’re compiling against glibc v2 or glibc v3? If it is instead something like “compiling against vN uses symbols glibc_vN_f1 and glibc_vN_f2” combined with glibc v3 providing glibc_v1_f1, glibc_v2_f1, glibc_v3_f1, glibc_v2_f2 and glibc_v3_f2… why would it be that way?

> why would it be that way?

It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.


Individual functions may have a lot of different versions. They only update them if there is an ABI change (so you may have e.g. f1_v1, f1_v2, f2_v2, f2_v3 as symbols in v3 of glibc), but there's no easy way to say 'give me v2 of every function'. If you compile against v3 you'll get f2_v3 and f1_v2, and so it won't work on v2.
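You can see this directly in the shipped library. For example, on an x86-64 system (the path and the versions vary by distro):

  # realpath kept its old entry point when its behaviour changed in glibc 2.3
  objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' realpath$'
  # ... (GLIBC_2.2.5) realpath   <- compat version for old binaries
  # ...  GLIBC_2.3    realpath   <- default version that new links bind to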

Why are they changing? And I presume there must be disadvantages to staying on the old symbols, or else they wouldn’t be changing them—so what are those disadvantages?

> Only because people aren't putting in the effort to build their binaries properly.

Because Linux userland is an unmitigated clusterfuck of bad design that makes this really, really hard.

GCC/Clang and glibc make it almost impossible to do this on their own. The only ways you can actually do it are:

1. create a userland container from the past

2. use Zig, which moved oceans and mountains to make it somewhat tractable

It's awful.


> You need to link against the oldest glibc version that has all the symbols you need

Or at least the oldest one made before glibc's latest backwards incompatible ABI break.


Yeah, and nothing ever lets you pick which versions to link to. You're going to get the latest ones and you'd better enjoy that. I found this out the hard way recently when I just wanted to do the perfectly normal thing of distributing precompiled binaries for my project. I ended up using whatever "Amazon Linux" is because it uses an old enough glibc but has a new enough gcc.

You can choose the version. There was apgcc from the (now dead) Autopackage project which did just that: https://github.com/DeaDBeeF-Player/apbuild

It's not at all straightforward; it should be the kind of thing that's just a compiler flag, as opposed to needing to restructure your build process to support it.

Yeah that's what I meant. I also came across some script with redefinitions of C standard library functions that supposedly also allows you to link against older glibc symbols. I couldn't make it work.

Any half-decent SDK should allow you to trivially target an older platform version, but apparently doing trivial-seeming things without suffering is not The Linux Way™.


> Hundreds of other widely-used open source libraries don't.

Correct me if I'm wrong but I don't think versioned symbols are a thing on Windows (i.e. they are non-portable). This is not a problem for glibc but it is very much a problem for a lot of open source libraries (which instead tend to just provide a stable C ABI if they care).


> versioned symbols are a thing on Windows

There’re quite a few mechanisms they use for that. The oldest one: call a special API function on startup, like InitCommonControlsEx, and other API functions will resolve DLLs differently or behave differently. A similar tactic: require an SDK-defined magic number as a parameter to some initialization functions, with different magic numbers switching symbols from the same library; examples are WSAStartup and MFStartup.

Around Win2k they did side-by-side assemblies, or WinSxS. Include a special XML manifest in an embedded resource of your EXE, and you can request a specific version of a dependent API DLL. The OS now keeps multiple versions internally.

Then there’re compatibility mechanics, both OS-builtin and user-controllable (right-click on an EXE or LNK, Compatibility tab). The compatibility mode is yet another way to control the versions of DLLs used by the application.

Pretty sure there’s more and I forgot something.


> There’re quite a few mechanics they use for that. The oldest one, call a special API function on startup [...]

Isn't the oldest one... to have the API/ABI version in the name of your DLL? Unlike on Linux which by default uses a flat namespace, on the Windows land imports are nearly always identified by a pair of the DLL name and the symbol name (or ordinal). You can even have multiple C runtimes (MSVCR71.DLL, MSVCR80.DLL, etc) linked together but working independently in the same executable.


Linux can do this as well; the issue is that it just multiplies how many versions you need to have installed, and in the limit it's not that different from having a container anyway. Symbol versioning means you can have just the latest version of the library and it remains compatible with software built against old versions. (Especially because when you have multiple versions of a library linked into the same process you can wind up with all kinds of tricky behaviour if they aren't kept strictly separated. There are a lot of footguns in Windows around this, especially with the way DLLs work to allow this kind of separation in the first place.)

There’s also API Sets, where DLLs like api-win-blah-1.dll act as proxies for other DLLs, both literally (with forwarder exports) and figuratively (with a system-wide in-memory hashmap between API set and actual DLL).

IIRC this is both for versioning, but also so that some software can target Windows and Xbox OSes whilst “importing” the same API-set DLL? Caused me a lot of grief writing a PE dynamic linker once.

https://bookkity.com/article/api-sets


I did forget to mention something important. Since about Vista, Microsoft has tended to replace or supplement the C WinAPI with IUnknown-based object-oriented ones. Note that IUnknown doesn’t necessarily imply COM; for example, Direct3D is not COM: no IDispatch, IPC, registration or type libraries.

IUnknown-based ABIs expose methods of objects without any symbols exported from DLLs. Virtual method tables are internal implementation details, not public symbols. By testing SDK-defined magic numbers, like the SDKVersion argument of the D3D11CreateDevice factory function, the DLL implementing the factory function may create very different objects for programs built against different versions of the Windows SDK.


I only learned about glibc earlier today, when I was trying to figure out why the Nix version of a game crashes on SteamOS unless you unset some environ vars.

Turns out that Nix is built against a different version of glibc than SteamOS, and for some reason, that matters. You have to make sure none of Steam's libraries are on the path before the Nix code will run. It seems impractical to expect every piece of software on your computer to be built against a specific version of a specific library, but I guess that's Linux for you.
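The workaround itself is tiny; something like the following, with the exact variable names being whatever Steam happens to set:

  # hypothetical: hide Steam's library paths so the Nix binary resolves its own
  env -u LD_LIBRARY_PATH -u LD_PRELOAD ./the-game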


Ask your friend if he would CC0 the quote or similar (not sure if that's possible, but still); I can imagine this being a quote on t-shirts xD

Honestly I might buy a T-shirt with such a quote.

I think glibc is such a pain that it's the reason we have so many vastly different package managers. I feel like a non-glibc base would really simplify the package-management approach on Linux. That approach feels solved, but there are definitely still issues with it, and I think we should all still be looking for ways to solve the problem.


Non-glibc distros (musl, uclibc...) with package managers have been a thing for ages already.

And they basically hold under 0.01% of Linux marketshare and are completely shit.

AppImage, theoretically, solves this problem (or FlatPak I guess). The issue would really be in getting people to package up dead/abandoned software.

https://zapps.app/ is another interesting thing in the space.

AppImage has some issues/restrictions, like the fact that an image can't run on an older Linux than the one it was compiled on (so people compile on the oldest PCs they can), plus a few more quirks.

AppImages are really good, but zapps are good too. I once tried to build something on top of zapps, but it's a shame the project went down the crypto/IPFS route or something, and I don't really see any development on it now. It would be interesting if someone could add zapps' features to AppImage, or pick up the project and build something similar.


This is really cool. Looks like it has a way for me to use my own dynamic linker and glibc version *.

At some point I've got to try this. I think it would be nice to have some tools to turn existing programs into zapps (there are many such tools for making AppImages today).

* https://github.com/warptools/ldshim


> At some point I've got to try this. I think it would be nice to have some tools to turn existing programs into zapps (there are many such tools for making AppImages today).

Looks like you've met the right guy, because I have built this tool :)

Allow me to show my project, Appseed (https://nanotimestamps.org/appseed): it's a simple fish script I prototyped (with Claude) some 8-10 months ago to solve exactly this.

I have a YouTube video on the website, and the repository is open source on GitHub too.

It actually worked fantastically for a lot of different binaries that I tested it on. I had posted it on Hacker News as well, but nobody really responded; perhaps this might change that :p

What Appseed does is take a binary and convert it into two folders: one is the dynamic-library part, and the other is the binary itself.

You can then use something like tar to package it up and run it anywhere. I could of course produce a single ELF64 as well, but I wanted to make it more flexible, so that we can have more dynamic-library-like behaviour, or caching, or other ideas; and this kept things simple for me too.

ldshim is a really good idea too, although I can't say I understand it yet; I will try to. I would really appreciate it if you could tell me more about ldshim! Perhaps take a look at Appseed too; I think there might be some similarities, except I tried to just create a fish script which can convert (usually) any dynamic binary into a static one of sorts.

I just want more people to take ideas like Appseed or zapps and run with them to make the Linux ecosystem better, man. I just prototyped it with LLMs to see if it was possible, since I don't have much expertise in the area. So I can only imagine what would be possible if people with expertise worked on it, and this is why I created and shared it in the first place.

Let me know if you are interested in discussing anything about Appseed. My memory's a little rusty about how it worked, but I would love to talk about it if I can be of any help :p

Have a nice new year man! :p


Can you build GUI programs with this? I'm thinking anything that would depend on GPU drivers. Anything built with SDL, OpenGL, Vulkan, whatever.

No. In my experimentation I tried to convert OBS to static and its GUI didn't work; I'm not exactly sure of the reason. I haven't tested SDL, OpenGL, etc., so in the current stage they may or may not work (not sure). It should definitely be possible to make them work, though, because CLI applications work just fine (IO and everything). I'm not really sure what caused my OBS Studio error, but perhaps you can try it and then let me know if you need any help, or share the results!

Check Detour out: https://github.com/graphitemaster/detour?tab=readme-ov-file#...

I suspect that with a combination of Detour & Zapps it could be possible.


Interesting. I've had a hell of a time building AppImages for my apps that work on Fedora 43. I've found bug reports of people with similar challenges, but it's bizarre because I use plenty of AppImages on F43 that work fine. I wonder if this might be a clue

I can only speak for Flatpak, but I found its packaging workflow and restricted runtime terrible to work with. Lots of undocumented/hard to find behaviour and very painful to integrate with existing package managers (e.g. vcpkg).

Yeah, flatpak has some good ideas, and they're even mostly well executed, but once you start trying to build your own flatpaks or look under the hood there's a lot of "magic". (Examples: Where do runtimes come from? I couldn't find any docs other than a note that says to not worry about it because you should never ever try to make your own, and I couldn't even figure out the git repos that appear to create the official ones. How do you build software? Well, mostly you plug it into the existing buildsystems and hope that works, though I mostly resorted to `buildsystem: simple` and doing it by hand.) For bonus points, I'm pretty sure 1. flatpaks are actually pretty conceptually simple; the whole base is in /usr and the whole app is in /app and that's it, and 2. the whole thing could have been a thin wrapper over docker/podman like x11docker taken in a slightly different direction.

Not sure what you're talking about, Flatpak runtimes are easy to find and contribute to: https://docs.flatpak.org/en/latest/available-runtimes.html

I wasn't directly involved, but the company I worked for created its own set of runtimes too, and I haven't heard any excessive complaints in internal chats, so I don't think it's as arcane as you make it sound either.


Well, Flatpak was started pre-OCI. But its core is just ostree + bwrap: bwrap does the sandboxing and ostree handles the storage and mounts. There are still a few more pieces, but these two are the equivalent of Docker. Bwrap is also used by Steam and for some other sandboxing use cases. Ostree is the core of Fedora Silverblue. Runtimes are special distros in a way, but since the official ones build pretty much everything from source, the repos tend to be messy, with build scripts for everything.

You can build your own flatpak by wrapping bwrap, because that is what Flatpak does. Flatpak seems to have some "convenience things" like the various *-SDK packages, but I don't know how much convenience that provides.

The Flatpak ecosystem is problematic in that most packages are granted too many rights by default.


AppImage maybe, but don’t say Flatpak, because whenever I update my Arch system Flatpak gets broken, and I have to fix it by updating or reinstalling.

While true in many respects (still), it's worth pointing out that this take is 12 years old.

Maybe it's better now in some distros. I'm not sure about others, but I don't like Ubuntu's Snap packages. Snap packages typically start slower, use more RAM, require sudo privileges to install, and run in an isolated environment only on systems with AppArmor. Snap also tends to slow things down somewhat at boot and shutdown. People report issues like theming mismatches and permissions/file-access friction; Firefox theming complaints are a common example. It's almost like running a Docker container for each application. Flatpaks seem slightly better, but still a band-aid. It's just that nobody is going to fix the compatibility problems in Linux.

Ubuntu was getting too good so it had to snap half of its value out of existence.

You can still get firefox as a .deb though.

https://launchpad.net/~mozillateam/+archive/ubuntu/ppa


I think he still considers this to be the case. He was interviewed on Linus Tech Tips recently, and he bemoaned in passing the terrible application ecosystem on Linux.

It makes sense. Every distribution wants to be in charge of the set of libraries available on its platform. And they all have their own way of managing software. Developing applications on Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for Windows and macOS. For Linux, you need an rpm and a dpkg and so on.

I use DaVinci Resolve on Linux. The Resolve developers only officially support Rocky Linux, because anything else is too hard. I use it on Linux Mint anyway. The application has no title bar and recording audio doesn’t work properly. Bleh.


I agree 100% with Linus. I can run a WinXP exe on Win10 or 11 almost every time, but on Linux I often have to chase down versions that still work with the latest Mint or Ubuntu distros. Stuff that worked before just breaks, especially if the app isn’t in the repo.

Yes, and even the package-format thing is a hell of its own. Even on Ubuntu you have multiple package formats, and sometimes there are even multiple app stores (a GNOME one and an Ubuntu-specific one, if I remember correctly).

Ultimately this boils down to a lack of clear technical and community leadership from Canonical: too unwilling to say "no" to vanity/pet projects that end up costing all of us, as they make the resulting distribution a moving target too difficult to support in the enterprise - at least not with the skill set of the average desktop support hire these days.

I want to go to the alternate timeline where they just stuck with a set of technologies... ideally KDE... and just matured them until they were the idealized version of the original plan, instead of always throwing things away to rewrite them for ideological or technical purity of design.


You can also run a WinXP exe on any Linux distribution almost every time. That's the point of the project and of Linus's quip: the only stable ABI around on MS Windows and Linux is Win32. (BTW, I do not agree with this.)

I think it's not unlikely that we reach a point in a couple of decades where we are all developing Win32 apps while most people are running some form of Linux.

We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion.


That’s actually an intentional nudge to get software packaged by the distro, which usually implies that it is open source.

Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.


So every Linux distribution should compile and distribute packages for every single piece of open source software in existence, both the very newest stuff that was only released last week, and also everything from 30+ years ago, no matter how obscure.

Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.


Those users will either check the source code and compile it themselves, with all the proper options to match their system, or rely on a software distribution to do it for them.

People who are complaining would prefer a world of isolated apps downloaded from signed stores, but Linux was born at an optimistic time when the goal was software that cooperate and form a system, and which distribution does not depend on a central trusted platform.

I do not believe that there is any real technical issue discussed here, just drastically different goals.


No. People would prefer the equivalent of double-click `setup.exe`. Were you being serious?

I am not an expert on this, but my question is: how does Windows manage to achieve it? Why can't Linux do the same?

Because they care about ABI/API stability.

And they have an ever-decreasing market share in the desktop, hypervisor and server space. The API/ABI stability is probably the only thing stemming the customer leakage at all. It's not the be-all and end-all.

Decreasing market share in the desktop?

Yes. Windows is so bad that it's even been losing desktop market share.

Your tone makes it sound like this is a bad thing. But from a user’s perspective, I do want a distro to package as much software as possible. And it has nothing to do with user freedom. It’s all about being entitled as a user to have the world’s software conveniently packaged.

What if you want to use a newer or older version of just one package without having to update or downgrade the entire goddamn universe? What if you need to use proprietary software?

I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with.


That is the point of Flatpak or AppImage, but even before those you could do it by shipping the libraries with your software and using LD_LIBRARY_PATH to link your software to them.

That is what most well-packaged proprietary software used to do when installing into /opt.
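The classic shape of that is a launcher script; a sketch, with a hypothetical /opt/myapp layout:

  #!/bin/sh
  # /opt/myapp/myapp: prefer the bundled libs, then exec the real binary
  here="$(dirname "$(readlink -f "$0")")"
  export LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
  exec "$here/bin/myapp.bin" "$@"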


I know you are trying to make a point about complexity, but that is literally what NixOS allows for.

Software installed from your package manager is almost certainly provided as a binary already. You could package a .exe file and that should work everywhere WINE is installed.

That's not my point. My point is that if executable A depends on library B, and library B does not provide any stable ABI, then the package manager will take care of updating A whenever updating B. Windows has fanatical commitment to ABI stability, so the situation above does not even occur. As a user, all the hard work dealing with ABI breakages on Linux are done by the people managing the software repos, not by the user or by the developer. I'm personally very appreciative of this fact.

Sure, it's better than nothing, but it's certainly not ideal. How much time and energy is being wasted by libraries like that? Wouldn't it be better if library B had a stable ABI or was versioned? Is there any reason it needs to work like this?

And you can also argue how much time and energy is being wasted by committing to a stable ABI such that the library cannot meaningfully improve. Remember that even struct sizes are part of the ABI; so you either cannot add new fields to a struct, or you expose pointers only and have to resort to dynamic allocation rather than stack allocation most of the time.

Opinions may differ, but personally I think a stable ABI wastes more time and energy than an unstable ABI because it forces code to be inefficient. Code is run more often than it is compiled. It’s better to allow code to run faster than to avoid extra compilations.


Actually the solution to this on Windows is for programs to package all their dependencies except for Windows. When you install a game, that game includes a copy of every library the game uses, except for Windows

Not sure if it's the right solution, but yes, it's a description of what happens right now in practice.

It also makes support more or less impossible.

Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all the other material variety, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:

"I followed your instructions and it doesn't run".

Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.

Distribution as source fails because there are too many unknown, interdependent parts.

Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack.


Yep. But docker doesn’t help you with desktop apps. And everything becomes so big!

I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.

People don’t seem to mind downloading a 30 MB executable, so long as it actually works.


What do you mean, Docker doesn’t help you with desktop apps? I run complicated desktop apps like Firefox inside containers all the time. There are also apps like Citrix Workspace that need such specific dependency versions that I’ve given up on running them outside containers.

If you don’t want to configure this manually, use distrobox, which is a nice shell script wrapper that helps you set things up so graphical desktop apps just work.


Then you only support 1 distro. If anyone wants to use your software on an unsupported distro they can figure out the rest themselves.

No, they come online and whine that you didn't package your software for <obscure distro>, that your software is shit and you're incompetent.

And being 100 things is completely unavoidable when freedom is involved. You can force everyone to use the same 1 thing, if you make it proprietary. If people have freedom to customize it, of course another 99 people will come along and do that. We should probably just accept this is the price of freedom. It's not as bad as it seems because you also have the freedom to make your system present like some other system in order to run programs made for that system.

That's Guix.

Even open-source software has to deal with the moving target that is ABI and API compatibility on Linux. OpenSSL’s API versioning is a nightmare, for example, and it’s the most critical piece of software to dynamically link (and almost everything needs a crypto/SSL library).

Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) is not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.


The reason you're getting downvoted is that what you're saying implies a shit-ton of work for the distros -- that's expensive work that someone has to pay for (but nobody wants to, and think of the opportunity cost).

But you're not entirely wrong -- as long as you have API compatibility then it's just a rebuild, right? Well, no, because something always breaks and requires attention. The fact is that in the world of open source the devs/maintainers can't be as disciplined about API compat as you want them to be, and sometimes they have to break backwards compatibility for reasons (security, or just too much tech debt and maint load for obsolete APIs). Because every upstream evolves at a different rate, keeping a distro updated is just hard.

I'm not saying that statically linking things and continuing to run the binaries for decades is a good answer though. I'm merely explaining why I think your comment got downvoted.


Everyone is mentioning ABI, but this is really an API problem, so "you only need API compatibility at that point" is a very big understatement.

This might be why OpenBSD looks attractive to some. Its kernel and all the different applications are fully integrated with each other -- no distros! It also tries to be simple, I believe, which makes it more secure and overall less buggy.

To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions:

  multi-processing, context switching, tree-like file systems, multiple users, access privileges,
haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like

  tree-like file systems, WIMP GUIs, per-user privileges, the fuzziness of what an
  "operating system" even is and its role,
are perhaps even arbitrary, but can serve as a mature foundation for better-conceived ideas, such as:

  ZFS (which implements in a very well-engineered manner a tree-like data storage that's
  been standard since the '60s) can serve as a foundation for
  Postgres (which implements a better-conceived relational design)
I'm wondering why OSS - which according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. It's produced an

  anarchy of packaging systems, breaking upgrades and updates,
  unstable glibc, desktop environments that are different and changing seemingly
  for the sake of it, sound that's kept breaking, power management iffiness, etc.

> tree-like file systems, multiple users, access privileges,

Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes?

If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all, by launching and passing handles to files/pipes to any other process, under control of the user.

Could be done by typing file names, or selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format so that e.g. a text based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition.

I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed.

But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol.

And get rid of "everything is text". For a computer, parsing text is like for a human to read a book over the phone, with an illiterate person on the other end who can only describe the shape of each letter one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data!


Indeed.

AmigaOS was the pinnacle of personal computing OS design. Everything since has been a regression. Fite me.


What about BeOS?

Not very likely, but what if the BeOS API emerged as "the standard" on Linux?

https://cosmoe.org/

It would not solve the ABI problem, but it would at least give an opinionated end-to-end API that was at some point the official API of an OS. Its design has received some praise, too.


It was more about everything since the Amiga being a regression. BeOS was sometimes called a successor (in spirit) to the Amiga: a fun, snappy, single-user OS.

I regularly install HaikuOS in a VM to test it and I think I could probably use it as a daily driver, but ported software often does not feel completely right.


OK, point.

I like FreeBSD for the same reason. The whole system is sane and coherent. Illumos is the same.

I wish either of those systems had the same hardware & software support. I’d swap my desktop over in a heartbeat if I could.


OpenBSD—all the BSDs really—have an even more unstable ABI than Linux. The syscall interface, in particular, is subject to change at any time. Statically linked binaries for one Linux version will generally Just Work with any subsequent version; this is not the case for BSD!

There's a lot to like about BSD, and many reasons to prefer OpenBSD to Linux, but ABI backward-compatibility is not one of them!

One of Linux's main problems is that it's difficult to supply and link versions of library dependencies local to a program. Janky workarounds such as containerization, AppImage, etc. have been developed to combat this. But in the Windows world, applications literally ship, and link against, the libc they were built with (msvcrt, now ucrt I guess).


Because Linux is not an OS. The flagship OSS OS is Ubuntu, and it's mostly pretty stable. But OSS inherently implies the ability to make your own OS that's different from someone else's OS, so a bunch of people did just that.

Is it the flagship of Linux distros right now? I thought RHEL (the one I most commonly see paid software packaged for) would be up there, alongside its offshoots Rocky and Fedora.

Ubuntu still suffers the same kind of breakage, though. You can't take a moderately complex GUI application that was built on Ubuntu in 2014 and run it on the latest version. Heck, there's a good chance you can't even build it on the newer version without needing to update it somehow. It's a property of the library ecosystem around Linux, not the behaviour of a given distro.

(OK, I have some experience with vendors whose latest month-old release has a distro-support list where the most up-to-date option is still 6 months past EOL, and I have managed to hack something together to get it to work on the newer release, but it's extremely painful and very much not what either the distros or the software vendors want to support.)


Isn't the kernel responsible for the ABI?

ABI is a far larger concept than the kernel UAPI. Remember that the OS includes a lot of things in userspace as well. Many of these things are not even stable between the various contemporary Linux distros, let alone older versions of them. This might include dbus services, fs layout, window manager integration, and all sorts of other things.

Yeah, it's almost like the complete OS should be called something other than "Linux".

Thanks

Android makes a sport of breaking ABI compatibility, and it hasn't stopped it from being the most popular mobile OS.

The reason being the Jetpack libraries, which abstract over whatever Android version is being used.

That's outright not true though.

What are you even talking about? Android is by far the most popular mobile OS worldwide. It's only in the US where iPhones are dominant.

Nvm I can't read

What's interesting to think about is Conway's law, monorepos, and the Linux kernel and userland. If it were all just one big repo, then breaking changes wouldn't really be breaking: you'd fix every user in the same tree. The whole ifconfig > ip debacle is an example of where one giant monorepo would have changed how things happened.

It's really just glibc

It's really just not. GTK is on its fourth major version. Wayland broke backwards compatibility with tons of apps.

Multiple versions of GTK or Qt can coexist on the same system. GTK 2 is still packaged on most distros; I think, for example, GIMP only switched to GTK 3 last year or so.

GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so it's not the right argument. When people say GTK backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by the breaking of the deprecated code. (Anyway, before Wayland support became important, people stuck with GTK+ 2, which was simple, stable, and still supported at the time; and everyone had it installed on their computer alongside GTK+ 3.)

Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.


The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc, if you want to be able to resolve hostnames or users, because of NSS modules.

Static linking itself doesn't prevent modules. There's https://github.com/pikhq/musl-nscd for example

Not inherently, but static linking to glibc will not get you there without substantial additional effort, and static linking to a non-glibc C library will by default get you an absence of NSS.

Can't we just freeze glibc, at least from an API version perspective?

We definitely can, because almost every other POSIX libc doesn’t have symbol versioning (or MSVC-style multi-version support). It’s not like the behavior of “open” changes radically all the time such that you need to know exactly which source symbol it linked against. It’s really just an artifact of decisions from decades ago, and the cure is way worse than the disease.

The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs.

glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain.
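Today the closest you get is checking, after the fact, what a build ended up requiring (the binary name is a placeholder):

  # the highest tag listed is the minimum glibc the binary will load against
  objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1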


Yes, so that's why freezing the glibc symbol versions would help. If everybody uses the same version, you cannot get conflicts (at least after it has rippled through and everybody is on the same version). The downside is that we can't add anything new to glibc, but I'd say given all the trouble it produces, that's worth accepting. We can still add bugfixes and security fixes to glibc, we just don't change the APIs of the symbols.

It should not be necessary to freeze it. glibc is already extremely backwards compatible. The problem is people distributing programs that request the newest version even though they do not really require it, and this then fails on systems having an older version. At least this is my understanding.

The actual practical problem is not glibc but the constant GUI / desktop API changes.


In principle you can patch your binary to accept the old local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist:

  # point the binary at the system's stock dynamic linker and library directory
  patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
  patchelf --set-rpath /lib "$APP"

> [...] brave or foolhardy, [...]

Heed the above warning as down this rpath madness surely lies!

Exhibit A: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit B: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit C: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you...

To save everyone a click here are the first two bullet points from Exhibit A:

* If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored!

* This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...]

Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B but I can feel the madness clawing at me again so will stop here. :)


Yes, you can do this. Thanks for mentioning it; I was interested and checked how you would go about it.

1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary)

2. Replace libc.so with a fake library that has the right version symbol, via a version script, e.g. a version.map containing `GLIBC_2.29 { global: *; };`

With an empty fake_libc.c `gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c`

3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way, I'm unclear on this part)

Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol, not sure why it doesn't (technically it's the same thing normally, so performance?).


Ah you can use https://github.com/NixOS/patchelf/pull/564

So you can do e.g. `patchelf --remove-needed-version libm.so.6 GLIBC_2.29 ./mybinary` instead of replacing glibc wholesale (steps 2 and 3), and assuming all of the glibc the executable uses is ABI-compatible, this will just work (it worked for a small binary of mine; YMMV).


That's exactly what apgcc from Autopackage provided (20 years ago). https://github.com/DeaDBeeF-Player/apbuild

But compiling in a container is easier and also solves other problems.


Or just pre-install all the versions on each distro and pick the right one at load-time

Crazy how, thanks to Wine/Proton, Linux is now more compatible with old Windows games than Windows itself. There are a lot of games from the 90s and even the 00s that require jumping through a lot of hoops to run on Windows, but through Steam they're click-to-play on Linux.

Wine works on Windows too. It's used by the Shorthorn project to get software for newer versions of Windows to run under XP.

Whoa. Can it be used to run older software on newer Windows too?

Yep, for some games like Elden Ring it even fixes Windows-specific performance hiccups: https://youtu.be/vAooLiCy7rE

My gaming PC isn't compatible with Windows 11, so it was the first to get upgraded to Linux. Immediate and significant improvement in experience.

Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was... being difficult.

Everything just works on Linux, except some games on Proton have some sound issues that I still need to work out.


>> some sound issues

Is this 1998? Linux is forever having sound issues. Why is sound so hard?


Sound (OSS, ALSA, PulseAudio, PipeWire...), Bluetooth and WiFi are eternal problematic Linux paper cuts.

As always, it is not Linux's fault, but it is Linux's problem.

It's one of the reasons why I moved to macOS + a Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a 128 GB unified-memory MacBook Pro M4 Max is way beyond anything else on the market.


I think the situation has flipped in the past few years. Since Pipewire came out, I haven't had any problems with audio on Linux and I can dial the latency down to single-digit ms. Meanwhile, on Mac audio has gotten far worse, especially since Tahoe. The latency is tens of ms and I get crackling and skipping when there's high CPU usage.

Audio is still broken pretty regularly in DaVinci Resolve on Linux. Sometimes I need to restart the application to make audio work. And I can’t record sound within Resolve at all.

It doesn’t help that they only officially support Rocky Linux. I use Mint. I assume there are some magic PipeWire / ALSA / PulseAudio commands I could run that would glue everything together properly. But I can’t figure it out. It just seems so complicated.


This sounds like a hardware / firmware problem specific to your particular sound chip / card.

Similarly, Bluetooth on my Thinkpad T14 is slightly wonky, and it sometimes fails to register a Bluetooth mouse on wake-up (I have to switch the mouse off and back on). This mouse registers fine on my other Linux machines. The logs show a report from a kernel driver saying that the BT chip behaved weirdly.

Binary-blob firmware, and physical hardware, do have bugs, and there's little an OS can do about that, Linux or otherwise. Macs have less hardware variety and higher prices, which makes their hardware errata lists shorter, but not empty.


That’s possible, but the hardware (a Rodecaster Pro 2 connected over USB) works just fine in other Linux apps. I can record audio in Audacity. And I can play back audio in Resolve. I just can’t record audio in Resolve.

I think it’s a software issue in how Resolve uses the Linux audio stack. But I have no idea how to get started debugging it. I’ve never had any problems with the same hardware on Windows, or the same software (Resolve) on macOS.


It is hard to blame Linux if only one proprietary app has sound issues.

FWIW, I lost sound completely 3 times in the last 2 months on my work Windows laptop, and it would only come back after a reboot. I assumed it was a driver crash.


Yep, adding onto this: Bitwig's native Linux app has amazing PipeWire integration. It works like an ASIO driver plugged right into your desktop's audio, letting you attach channels to windows or apps and handle complex monitor/performance/mixing outputs.

It depends on having a properly good implementation, which will come eventually for most apps.


In some games I get a crackle in the audio which I don't get in any native application, nor in some other games run with Proton. I don't know if that's what he means, but it hasn't bothered me enough to figure it out. I use Bluetooth headphones anyway; I'm relatively insensitive to audio fidelity.

If you run pw-top, you might see errors accumulating. This is usually due to an underrun from the game requesting an audio quantum that’s too low.

The fix is:

    mkdir -p ~/.config/pipewire/pipewire.conf.d && echo "context.properties = {default.clock.min-quantum = 1024}" | tee ~/.config/pipewire/pipewire.conf.d/pipewire.conf
Basically, just force the quantum to be higher. Often it defaults to 64, which is around 1ms.

Linux sound is fine at least for me. The problem is running Windows games in proton. Sound will suddenly stop, then come back delayed. Apparently a known issue on some systems.

PipeWire + a low-latency kernel fixes 99% of sound issues.

To be fair, you can have sound issues on Windows too. It's not usually an issue on Linux anymore either, though.

The problem is games run through Wine/Proton doing weird things with the sound, not the sound itself on modern Linux. Heck, I have fewer issues using audio stuff, or just changing the audio volume, on Linux than on crappy Windows.

> There are a lot of games from the 90s and even the 00s that require jumping through a lot of hoops to run on Windows

What are some examples?


Pretty much all the RenderWare-based GTAs have issues these days that only community-made patches can mitigate.

A recent example is that in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer. All of it due to a bug that's always been in the game, but only the recent changes in Windows caused it to show up. If anybody's interested, you can read the investigation on it here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...


I remember seeing a thread about that bug here on HN a while ago, that was a fun read.

I remember not getting Close Combat 2 (from 1997) running on Windows 10 some years ago, but I did get it running under Wine, albeit with some tweaks.

Whether that was a Windows compatibility issue or potentially some display driver thing, I'm not sure. (90's Windows games may have used some DirectDraw features that just don't get that much attention nowadays, which I think may have been the issue, but my memory's a bit spotty.)


Red Alert 2. Then there are games like Dark Forces II that run but don't do hardware rendering out of the box, so they look like crap. I've also had games like GRID complain I didn't have enough VRAM (because I had more than 2GB), and games that were tricky to get working because I use a 4K monitor (The Sims 2, Crysis 2). And there are games where the original release is borked but a newer version on GOG is okay, like Alpha Centauri.

The last time I tried to run Tachyon: The Fringe was on Windows 10, and it failed. IIRC I could launch it and play, but there was a non-zero chance that an FMV cutscene would cause it to freeze.

I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.

0: https://steamcommunity.com/sharedfiles/filedetails/?id=29344...


Lemmings Revolution. Apparently running it on anything that isn't Windows 95/98/Me requires an unofficial .EXE patch that you could download from some shady website. The file is now nowhere to be found.

It's a great game, unfortunately right now I am not able to play it anymore :( even though I have the original CD.

Unfortunately, Wine is of no help here :(

Also original Commandos games.


Anything DirectX 10 or older has issues on Windows these days.

One popular example is GRID 2; another is Morrowind. Both crash on launch unless you tweak a lot of things, and even then it doesn't always work.

Need for Speed II: SE is "platinum" on Wine, and pretty much impossible to run at all on Windows 11.


Isn’t this because the Wine db has those tweaks preconfigured?

Windows used to be half operating system, half preconfigured compatibility tweaks for all kinds of applications. That's how it kept its backwards compatibility.

More a case of DirectX radically changing how it worked [0].

[0] https://learn.microsoft.com/en-us/windows/win32/direct3darti...


It's because Wine's OS selector actually tries to match, bug for bug, the OS version you set, whereas Windows' own compatibility modes gave up after Windows 7.

Anything written with SafeDisc DRM, e.g. Medal of Honor: Allied Assault.

It kinda works both ways. Just yesterday I tried to play the Linux-native version of 8bit.runner and it didn't work; I had to install the Windows (beta) version and run it through Proton.

Funny story: I use Anki (the flashcard program), and I run it on my NixOS laptop. There is a NixOS/nixpkgs package for Anki. It doesn't work. You know how I run Anki, which has a native GNU/Linux version and even an actual nixpkgs package, on my GNU/Linux NixOS laptop? Yeah, I run AnkiDroid, the Android version, through Waydroid. Because the Android version works.

Anki seems to be a habitual offender, I was never able to install it reproducibly and in an obvious way on several distros and always ended up building it from source.

Can somebody explain:

1. The exact problem with the Linux ABI

2. What causes it (the issues that makes it such a challenge)

3. How it changed over the years, and its current state

4. Any serious attempts to resolve it

I've been on Linux for maybe two decades at this point. I haven't noticed any issues with the ABI so far, perhaps because I use everything from the distro repo or build and install things through the package manager. If I don't understand it, there are surely others who want to know too. (Not trying to brag here; I'm referring to the time I've spent on it.)

I know that this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well organized perspective of it, as well as some invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. Blog is probably the best format for this.


The kernel is stable, but all the system libraries needed to make a graphical application are not. Over the last 20 years, we've gone from GTK 2 to 4, X11 to Wayland, Qt 4 to 6, with compatibility breakages at each change. Building an unmodified 20-year-old application from source is very likely not to work; running a 20-year-old binary, even less so.
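(A quick hedged illustration: you can see which toolkit generation an old binary expects from the libraries it links; ./oldapp here is hypothetical.)

    # show which toolkit generation a binary was linked against
    ldd ./oldapp | grep -Ei 'libgtk|libqt'
    # e.g. libgtk-x11-2.0.so.0 means GTK 2, libQt5Widgets.so.5 means Qt 5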

There is no ABI problem. The problem is a lack of standardization for important APIs and infrastructure. There once was a serious effort to solve this: the Linux Standard Base: https://en.wikipedia.org/wiki/Linux_Standard_Base Standardization would of course be the only way to fix this, instead of inventing ever more packaging formats which fragment the ecosystem even further. LSB died due to lack of interest. I assume also because various industrial stakeholders are more interested in gaining a little bit of control over the ecosystem than in the overall success of Linux on the desktop.

The other major problem is that it is no fun to maintain software, which leads to what was described as CADT: https://www.jwz.org/doc/cadt.html As you see with Wayland and the Rust rewrites, CADT still continues today, always justified with some bullshit arguments about why the rewrites are really necessary.

Together this means that basically nobody implements applications anymore. For commercial applications, the market is too fragmented and it is too much effort. Open-source applications need time to grow, and if all the underpinnings get changed all the time, this is too frustrating. Only a few projects survive this, and even those struggle. For example, GIMP took a decade to be ported from GTK 2 to 3.


Archive link for your CADT reference: https://archive.ph/t5m32

I wish websites weren't allowed to know what site a user is coming from.


Hm right, sorry. Somehow this does not happen to me (maybe because of ublock)

Linux API/ABI doesn't cover the entire spectrum that Windows API covers. There is everything from lowest level kernel stuff to the desktop environment and beyond. In Linux deployments, that's achieved by a mix of different libraries from different developers and these change over time.

You never ran into a GLIBC version problem?

In the distant past, if it happened at all; I can't recall an instance. Perhaps that's the advantage of using a distro?

Wasn't there also DLL hell on Windows?

My understanding is that very old statically linked Linux images still run today because, paraphrasing Linus, "we don't break user space".
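A minimal sketch of what that means in practice (hypothetical file names; any libc with static linking support works):

    # a fully static binary depends only on the kernel's syscall ABI
    printf 'int main(void){return 0;}\n' > tiny.c
    gcc -static tiny.c -o tiny
    file tiny     # reports "statically linked"
    ldd ./tiny    # reports "not a dynamic executable"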


> we don't break user space

The kernel doesn't break user space. User space breaks on its own.


Unfortunately you can't really statically link a GUI app.

Also, if you happened to have linked that image to a.out it wouldn't work if you're using a kernel from this year, but that's probably not the case ;)


> Unfortunately you can't really statically link a GUI app.

But is there any fundamental reason why not?

> Also, if you happened to have linked that image to a.out it wouldn't work if you're using a kernel from this year, but that's probably not the case ;)

I assume you're referring to the retirement of a.out support (in favor of ELF). I would argue that how long this obsolete format remained supported was actually quite impressive.


The model of patching+recompiling the world for every OS release is a terrible hack that devs hate and that users hate. 99% of all people hate it because it's a crap model. Devs hate middlemen who silently fuck up their software and leave upstream with the mess, users hate being restricted to whatever software was cool and current two years ago. If they use a rolling distro, they hate the constant brokenness that comes with it. Of the 1% of people who don't hate this situation 99% of those merely tolerate it, and the rest are Debian developers who are blinded by ideology and sunk costs.

Good operating systems should:

1. Allow users to obtain software from anywhere.

2. Execute all programs that were written for previous versions reliably.

3. Not insert themselves as middlemen into user/developer transactions.

Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.

The answers to your questions are:

(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change their API all the time (GTK 1->2->3->4; Qt does this too). It's also not forwards compatible: compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem; Microsoft/Apple/everyone else does. This is the origin of the glibc symbol versioning errors everyone experiences sometimes (see the sketch after this list).

(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.

(3) It hasn't changed, and it's still bad.

(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
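To make (1) concrete, a hedged sketch of how the symbol versioning problem shows up (./myapp is a hypothetical binary):

    # list which glibc symbol versions a binary demands
    objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV
    # a binary built on a new distro may demand e.g. GLIBC_2.34, which an
    # older distro's libc.so.6 cannot satisfy, hence the familiar
    # "version `GLIBC_2.34' not found" error

And for (4), the Docker "solution" in one line, again with a hypothetical legacy-app binary:

    # run a legacy dynamically-linked app against the old userland it was built for
    docker run --rm -v "$PWD/legacy-app:/opt/legacy-app" ubuntu:14.04 /opt/legacy-app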


It seems like Linux's ethos is also its biggest problem. It's a bunch of free software people reinventing, not just the wheel, but every part of the bus. When someone shows up and wants to install a standard cup holder, it's hard when none of your bus is standard.

> If they use a rolling distro, they hate the constant brokenness that comes with it.

Never happens for me on Arch, which I've run as my primary desktop for 15 years.


Maybe you are running a desktop environment which never changes but Gnome has been constantly broken in many different ways for the last 5+ years. At times it felt more like a developer playground than a usable desktop environment. KDE is more stable nowadays but it still breaks in mysterious ways from time to time. I also had major issues for some time when Qt6 started rolling out.

And Arch itself also needs manual interventions on package updates every so often; just a few weeks ago there was a major change to the Nvidia driver packaging.


I've been running GNOME. I've never had breakage from upgrading. Of course there's the fact that GNOME neutered itself, removing many of its own features, but that's a different story and has nothing to do with ABIs or upgrading.

> And Arch itself also needs manual interventions on package updates every so often, just a few weeks ago there was a major change to the NVidia driver packaging.

If you're running a proprietary driver on a 12 year old GPU architecture incapable of modern games or AI, yeah... so I actually haven't needed to care about many of these. Maybe 2 or 3 ever...


Building GUI utilities based on VB6 instead of status quo web technologies might actually be more stable and productive.

I would pick Delphi (with which you can build Windows, Linux, macOS, Android, and iOS apps - https://www.embarcadero.com/products/delphi)

Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.


Yes, you can build cross-platform GUI apps with Delphi. However, that requires using FireMonkey (FMX). If you build a GUI app using the VCL in Delphi, it's limited to Windows. If you build an app with Lazarus and the LCL, you CAN have it work cross-platform.

I thought the point was that Windows apps will run on Linux under Wine (and macOS?) so using VCL is a cross-platform GUI development environment.

I made the clarification because the comment I replied to mentioned Android, iOS, and macOS. There are many who used Delphi before FMX appeared and I thought it would be helpful to point out that VCL only makes Windows executables.

You might as well use Lazarus and LCL. It'll give the best of all worlds.

> Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.

Wait you can make Android applications with Golang without too much sorcery??

I just wanted to convert some Go CLI applications into GUIs for Android, but I ended up giving up on the project and just started recommending people use Termux instead.

Please tell me if there is a simple method for Go which can "just work" as the VB-style glue between a CLI and a GUI.
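For what it's worth, GUI toolkits like Fyne advertise roughly this workflow: write a normal Go program against their widget API, then cross-build an APK with their packaging tool. A hedged sketch, going by the Fyne docs rather than personal testing (the appID is a made-up example, and you need the Android SDK/NDK installed):

    # install the fyne packaging CLI (module path per the Fyne v2 docs)
    go install fyne.io/fyne/v2/cmd/fyne@latest
    # cross-compile the Go project in the current directory into an APK
    fyne package -os android -appID com.example.mycli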


> Wait you can make Android applications with Golang without too much sorcery??

Why don't you try it out: https://www.remobjects.com/elements/gold/


It's really pricey, and I'm not sure whether I could create applications for F-Droid if they aren't open source, or how that would go with something like remobjects.com/gold/

One of the key principles of F-Droid is that builds must be reproducible (I think), or at least open source and buildable on F-Droid's servers, but I suppose reproducibility would require this software, which is paid in this case.


I started with VB6 so I'm sometimes nostalgic for it too but let's not kid ourselves.

We might take it for granted, but the React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular, there's no difference between an initial render and a re-render, and updating state is enough for everything to propagate down. That's why it went beyond the web, and why all modern native UI frameworks have a similar model these days.


> and why all modern native UI frameworks have a similar model these days.

Personally I much rather the approach taken by solidjs / svelte.

React’s approach is very inefficient - the entire view tree is rerendered when any change happens. Then they need to diff the new UI state with the old state and do reconciliation. This works well enough for tiny examples, but it’s clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in react is like 200kb of javascript or something like that. (Smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is also pure overhead. It’s simply not needed.

The solidjs / svelte model uses the compiler to figure out how variables changing results in changes to the rendered view tree. Those variables are wrapped up as “observed state”. As a result, you can just update those variables, and exactly and only the parts of the UI that need to change will be redrawn. No overrendering. No diffing. No virtual DOM and no reconciliation. Hello world in Solid or Svelte is minuscule: 2kb or something.

Unfortunately, SwiftUI has copied React, and not the superior approach of the newer libraries.

The Rust “Leptos” library implements this same fine-grained reactivity, but it’s still married to the web. I’m really hoping someone takes the same idea and ports it to desktop / native UI.


>React’s approach is very inefficient - the entire view tree is rerendered when any change happens.

That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints. Although with React Compiler it's actually pretty good at automatically adding those so in practice it mostly re-renders along the actually changed path.

>And the code to do diffing and reconciliation is insanely complicated.

It's really not, the "diffing" is relatively simple and is maybe ~2kloc of repetitive functions (one per component kind) in the React source code. Most of complexity of React is elsewhere.

>The solidjs / svelte model uses the compiler to figure out how variables changing results in changes to the rendered view tree.

I actually count those as "React-like" because it's still a declarative, componentized, top-down model, unlike say VB6.


> That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints.

React only skips over stuff that's provably unchanged. But in many (most?) web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that? I've worked on several React projects, and I don't think I've ever seen anyone manually add memoization hints.

To be honest it seems a bit like Electron. People who really know what they're doing can get decent performance. But the average person working with react doesn't understand how react works very well at all. And the average react website ends up feeling slow.

> Most of complexity of React is elsewhere.

Where is the rest of the complexity of react? The uncompressed JS bundle is huge. What does all that code even do?

> I actually count [solidjs / svelte] as "React-like" because it's still declarative componentized top-down model unlike say VB6.

Yeah, in the sense that SolidJS and Svelte iterate on React's approach to application development. They're kind of React 2.0. It's fair to say they borrow a lot of ideas from React, and they wouldn't exist without React. But there are also a lot of differences. SolidJS and Svelte implement React's developer ergonomics while having better performance and a web app download size that is many times smaller. Automatic fine-grained reactivity means no virtual DOM, no vDOM diffing, and no manual memoization or anything like that.

They also have a trick that React is missing: your component can just have variables again. SolidJS looks like React, but your component is only executed once per instance on the page. Updates don't throw anything away. As a result, you don't need special React state / hooks / context / redux / whatever. You can mostly just use actual variables. It's lovely. (Though you will need a SolidJS store if you want your page to react to variables being updated.)


>React only skips over stuff that's provably unchanged. But in many - most? web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that?

Even without any hints, it doesn't re-render "the entire view tree" like your parent comment claims, but only stuff below the place that's updated. E.g. if you're updating a text box, only stuff under the component owning that text box's state is considered for reconciliation.

Re: manual memoization hints, I'm not sure what you mean — `useMemo` and `useCallback` are used all over the place in React projects, often unnecessarily. It's definitely something that people do a lot. But also, React Compiler does this automatically, so assuming it gets wider adoption, in the longer run manual hints aren't necessary anyway.

>Where is the rest of the complexity of react?

It's kind of spread around, I wouldn't say it's one specific piece. There's some complexity in hydration (for reviving HTML), declarative loading states (Suspense), interruptible updates (Transitions), error recovery (Error Boundaries), soon animations (View Transitions), and having all these features work with each other cohesively.

I used to work on React, so I'm familiar with what those other libraries do. I understand the things you enjoy about Solid. My bigger point is just that it's still a very different programming model from VB6 and such.


Thanks for your work on React. I just realised who I’m talking to *sweats*. I agree that the functional reactive model is a very different programming model than VB6. We all owe a lot to React, even though I personally don’t use the React library itself any more. But it does seem a pity to me how many sloppy, bloated websites out there are built on top of React, and how SwiftUI and others seem to be trying to copy React rather than its newer, younger siblings, which had a chance to learn from some of React’s choices and iterate on them.

UI libraries aside, I’d really love to see the same reactive programming pattern applied to a compiler. Done well, I’m convinced we should be able to implement sub-millisecond patching of a binary as I change my code.


Sure, but the parent's point was more about declarative UIs than React. SolidJS and Svelte are declarative.

Dioxus is halfway between React and Svelte, and is working on its own native renderer. Might be worth considering.

> That's why it went beyond web, and why all modern native UI frameworks have a similar model these days.

It's more the other way around: this model started on the desktop (e.g. WPF), and then React popularized it on the web.


I would vote for Delphi/FreePascal, but share the sentiment.

I only had limited exposure to Delphi, but from what I experienced, it's a big thumbs-up.

But if you liked that, consider that C# was in many ways a spiritual successor to Delphi, and MS still supports native GUI development with it.


Except for the AOT experience and low-level programming, which only started being taken seriously during the last five years.


Performance?

If there was sufficient interest in it, most performance issues could be solved. Look at Python or Javascript, big companies have financial interest in it so they've poured an insane amount of capital into making them faster.

Do you think that "most performance issues" in Python are solved?

Isn’t python still the slowest mainstream language?

Being slower than other mainstream languages isn't really a problem in and of itself if it's fast enough to get the job done. Looking at all the ML and LLM work that's done in Python, I would say it is fast enough to get things done.

As pointed out already, most of that uses C code or GPU code to do the work and not slow Python code.

A pretty good portion uses Fortran code, too.

No. Ruby exists.

Ruby is now faster than Python, last I saw a comparison, though it used to be the other way around.

LuaJIT can be extremely fast

While being stuck in a language version forever.

It's what everyone uses, anyways. The other versions have their own pitfalls.

And more performant. Software written for 2005-era Windows runs super fast on today's systems.

Sometimes I install Office 97 for kicks and marvel at how much I can do with it, yet it asks so little of my system. <2MB of RAM for Word 97!

Same, I used to use Office 2003 because it was so quick and the UI was exactly what you needed.

Later versions had animated cursor positions which felt slow, lethargic spellcheck squiggles, and convoluted menus.

That said, I've given up and mostly use Google Docs/Sheets now because of the features and cross platform support.


Only if I don't need to do anything beyond the built-in widgets and effects of Win32. If I need to go beyond that, then I don't see myself being more productive than if I were using a mature, well-documented, and actively maintained application runtime like the web.

That's not really true. Even in the 90s there were large libraries of 3rd party widgets available for Windows that could be drag-and-dropped into VB, Delphi, and even the Visual C++ UI editor. For tasks running the gamut from 3D graphics to interfacing with custom hardware.

The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.


That sounds nice. I agree that losing UI editors for making apps is a step back. However, you seem to be discussing mostly in the past tense.

Maybe one day something like Lazarus or Avalonia will catch up, but today I feel that Electron is best at what it does.


If it is made to allow C code to be combined with VB6 code easily, and a FOSS version of VB6 (and the other components it might use) is made available on ReactOS (and Wine; it would also run on Windows), then it might be better than using web technologies (and is probably better in a lot of ways). (There are still many problems with it, although it would avoid many problems too.)

Or VB.NET? In some ways it's actually easier than VB6

Honestly, it’s probably faster and less resource intensive through emulation than your average Electron app :-/

Wine Is Not an Emulator (WINE). It provides win32 APIs; your CPU will handle the instructions natively. There is no “probably” about it.

Traditionally Wine uses QEMU on Apple Silicon to execute x86 binaries on an ARM CPU, so while I’m aware Wine Is Not an Emulator, there’s likely emulation happening in a lot of cases.

Whenever people bring this up I find it somewhat silly. Wine originally stood for "Windows Emulator". See old release notes ( https://lwn.net/1998/1112/wine981108.html ) for one example: "This is release 981108 of Wine, the MS Windows emulator."

The name change was made for trademark and marketing reasons. The maintainers were concerned that if the project got good enough to frighten Microsoft, they might get sued for having "Windows" in the name. They also had to deal with confusion from people such as yourself who thought "emulation" automatically meant "software-based, interpreted emulation" and therefore that running stuff in Wine must carry some significant performance penalty.

Other Windows compatibility solutions like SoftWindows and Virtual PC used interpreted emulation and were slow as a result, so the Wine maintainers wanted to emphasize that Wine could run software just as quickly as the same computer running Windows.

Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.


Sure, you can call it an emulator in that sense but how does that imply anything at all about performance? That is what I was responding to.

The problem is the word "emulator" itself. It's a very flexible word in English, but when applied to computing, it very often implies emulating foreign hardware in software, which is always going to be slow. Wine doesn't do that and was wise to step away from the connotations.

Someone please create a Windows 7-like user interface, or even an XP-like one, and you've got yourself a serious fan.

I might seriously recommend it to newbies. There's just this love I have for Windows 7; even though I didn't really use it for much, it's so much more elegant in its own way than Windows 10.

It could be a really fun experiment, and I'd be interested to see how it would pan out.


It stuns me that a polished 1:1 2K/XP/7 clone DE (with which one it mimics being a setting) hasn't existed for 10+ years already. It's such an obvious target for a mass-appeal Linux desktop that many techies and non-techies alike would happily use.

Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.

Building a DE explicitly as a clone of a specific fixed environment would also do wonders to prevent feature creep and encourage focus on fixing bugs and optimization instead of bells and whistles, which is something that modern software across the board could use an Everest sized helping of.


Yea, you raise some good points. Perhaps your comment/this discussion can help get someone interested in this. I'm clearly not that educated about DE creation, but I'm sure some people could create this.

I think one source of friction could be ideological, if anything, since most Linux users love open source and hate Windows, so perhaps they might not want to build anything that even replicates its UI.

Listen, I hate Windows just as much as the next guy, but gotta give props: I feel nostalgic for Windows 7. If they provide both perfect .exe support and perfect Linux binary support, things could be really good. I hope somebody does it, and perhaps even adds it to loss32; it would be an interesting update.


The problem with cloning the exact look is fear of copyright/IP issues with Microsoft. You can be pretty sure they won't look away if such a desktop becomes really popular. Remember how Apple sued Samsung over using rounded corners on icons?

It is becoming obvious that some people didn't live through the Lindows era.

I remember Lindows, but I think their problem had more to do with their branding and marketing and the fact that it was sold commercially than it did with its UI resembling Windows, and all of those mistakes are trivial to avoid.

That said, even if the UI looking the same is an issue, it’s not that difficult to come up with a look and feel that is legally distinct but spiritually aligned and functionally identical… random amateurs posting msstyle themes for XP/Vista/7 on DeviantArt did that numerous times.


This is how every open source project GUI feels.

You should try KDE with https://github.com/ivvil/aerothemeplasma

The screenshots could easily fool me into believing it actually is Windows 7 :p


Damn, you got me. I am not a big fan of KDE (currently using Niri), but I can try KDE + aerothemeplasma with NixOS as a dual boot (I used to have a KDE NixOS dual boot until I accidentally wiped that disk and ended up using the glorious tool testdisk to save it), so I will try it some day, thank you!

There is also AnduinOS, which I think doesn't try to replicate Windows 7, but it definitely tries to look like Windows 10, or perhaps 11, IIRC.


There's usually an "uncanny valley" feeling to this kind of project, but damn, this is good.

> it can be a really fun experiment and I would be interested to see how that would pan out.

It would fail, and just be another corpse in the desktop OS graveyard.

https://en.wikipedia.org/wiki/Hitachi_Flora_Prius

https://www.osnews.com/story/136392/the-only-pc-ever-shipped...

https://en.wikipedia.org/wiki/Linspire

Unless you ship your own hardware or get a vendor to ship your OS (see the above), and set up so the user can actually use it, you have to get users to install it on Windows hardware. So now your company is debugging broken consumer hardware without the help of the OEM. So that hopefully someone will install it on exactly that configuration for free.

This is not a winning business model.


Hm, I see the confusion. What I proposed was for something like loss32 to have a window manager / desktop environment which looks like Windows 7.

Loss32 is itself a Linux distro, and thus there should technically be nothing stopping it from shipping everywhere.

I think you were assuming that I meant creating a whole kernel from scratch or something, but I am merely asking for a loss32 reskin which looks like Windows 7. That is definitely possible without any company debugging consumer hardware, or even the need for a company at all, I suppose, considering I was proposing an open source desktop environment which just behaves like Windows 7 by default, as an example.

I don't really understand why we need a winning business model out of it; there isn't really a winning model for Niri, Hyprland, Sway, KDE, XFCE, LXQt, GNOME, etc. They are all open source projects run with the help of donations.

There might be a misunderstanding between us but I hope this clears up any misunderstanding.


I think fundamentally I disagree with your optimism. I've seen a number of these come and go over the decades. I do not think making something that looks like Windows would be sufficient to be successful.

> you were assuming that I meant create a whole kernel from scratch or something

No, making Linux run reliably on random laptops is already a monumental challenge.


Agreed, but there have been some real strides in innovation recently in Linux, definitely worth checking out :)

Regarding success: well, they already are. ZorinOS is an OS which looks like Windows 7, or at least has some similarities to it, and it's sort of recommended to beginners, though Linux Mint is usually the most recommended distro.

> No, making Linux run reliably on random laptops is already a monumental challenge.

Not sure about this, but I ran Linux on a 15-year-old Dell Mini like it's no big deal, so I can only assume support has gotten better. I can assure you that Linux support is really good for most laptops, in my observation.


I am a huge fan and user of Linux.

The problem is slapping Linux on some random bit of Windows kit and expecting it to work as though it had shipped with Linux, with support to back it. The more recent, the worse it will be.

If you want to run Linux, buy Linux computers that ship with Linux and have a support number you can call. Just like you'd not expect to be able to slap OSX on some random Dell and have it work.


Sir, I am just saying that we can have Linux (which works on almost all devices) and then Wine on top, which is just a software layer, so it should work on most hardware. What Wine does is translate Windows API calls into POSIX calls on the fly, and those POSIX calls are still handled by the Linux kernel and its drivers.

This is how loss32 works. I am just saying, sir, that instead of the Win95 design loss32 uses, perhaps we could modernize the style a little towards something like Windows 7, as a good balance?

Sir, of course, if you are worried about the software emulation aspect of things, you are worried about loss32 itself and not my idea of "hey, let's reskin it to look like Win7". We can have a discussion on loss32 itself if you want and weigh some pros and cons. It certainly isn't something I would use as a main driver, but since Linux is built on ideas of freedom, having loss32 isn't really that bad. It's an experiment of sorts even right now; people will test it out because they are curious, and we will hear what those who try it think.

I love Linux just as much as you do, but I'll admit I never really got into the Windows ecosystem that much, so I set out to learn Linux really well and took it as a challenge to conquer (mission accomplished).

Many people might not come with that mindset; they may instead feel that Microsoft is treating them really badly, with moral dilemmas on top, so having something which can cater to them isn't bad.

I also want to say that something like this might be good. Yes, people tell others to just use Linux Mint, but I never really found it a good option, not for Gen Z. I think Zorin can be an answer, or perhaps AnduinOS, but we definitely need more young people on Linux, and I will tell you, as a young guy, what's happening.

People want the freedom but aren't able to articulate it. They are worried about AI, but they just can't do anything about it, and to be honest they are right; how much can you or I do about the RAM crisis? Maybe there is something we can do but we just don't know (like, did you know there's a way to convert laptop RAM to desktop RAM, with its gotchas?).

They simply don't know about the open source side of things, since they just weren't exposed to it. To us, it may be the core feature, but to them it's one word among the other words for features they want to use.

So, like, I don't really know. Pardon me, I don't fully understand your side of the discussion, and I am trying to find common ground.

Do you find an issue with the loss32 architecture itself? Or with the idea of a reskin towards Win7?

I presume it's the loss32 architecture, but I don't know what to tell you except that it uses Wine, and Wine just works; so much so that the original title of this, I think, was about how Win32 is the most stable ABI even on Linux, and that's only possible due to Wine.

Not sure what you meant by support there, sir; perhaps you are a Red Hat user with a company license or similar, and of course this isn't targeted at that sector but at niche users at home who just want to try out what "Linux" is, perhaps :) I find the idea of loss32 very interesting, as I had thought of designing something similar, so I am glad that it exists, and I will probably watch it from afar.

I'd love a discussion about it, because I think we are making the same point from different angles, and perhaps I can rephrase: what I mean is completely open source and all Linux-y, but with Windows applications running easily and a Win7-like UI (really similar), and that's it. Everything is Linux, and Wine just translates those programs' calls into POSIX syscalls. Perhaps I am missing your point of concern, and we can talk about it, since clearly nothing's better than talking about Linux (oh, the joy) with another Linux user! I may be misinterpreting some things; if so, pardon me, but I can't see how hardware plays a role in what I said, and I'd be interested if you could tell me more about it. Have a nice day, sir; I've used up my quota for the day (or the year) of talking about Linux, haha!


It's cool. If we ever meet in person, I'll buy you a beer and we can discuss Linux. :)

Haha, of course! (Although I don't drink beer and never will: 1, I am still a minor, lol, and 2, I just don't ever want to drink beer, like, ever.) But I get the sentiment!

As for beverages, I usually just drink cold drinks, but at this time of year I'd freeze to death if I did something like that (the cold is crazy out here) xD

But yeah, there have been instances where I talk about Linux to people my age, maybe IRL, and it definitely frustrates me that sometimes they don't understand it.


https://github.com/SerenityOS/serenity is just that, except it's a whole OS that's Win2k styled. If it ever gets good hardware support it might have a chance.

Or maybe ReactOS, the actual Windows clone, gets finished. Rumours put a first release date some time after Hurd.


XFCE plus a windows theme would get you pretty far. Is there anything specific you're thinking of which that plus some pre-configured Wine wouldn't hit?
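As a concrete (hedged) example of that route: the community Chicago95 project restyles XFCE after Windows 95/98, and going by its README the manual install is roughly copy-into-place:

    # fetch the theme and drop it into the per-user theme/icon directories
    git clone https://github.com/grassmunk/Chicago95.git
    mkdir -p ~/.themes ~/.icons
    cp -r Chicago95/Theme/Chicago95 ~/.themes/
    cp -r Chicago95/Icons/* ~/.icons/
    # then pick it under Settings -> Appearance and Window Manager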

I 100% agree with your comment.

Pro tip: if someone wants to create their own ISO as well, they can probably just customize things imperatively in MX Linux, even by just booting it up in RAM, and then they have the magnificent option of basically snapshotting it and converting that into an ISO. So it's definitely possible to create an ISO tweaked down to your configuration without any hassle (trust me, it's the best way to create ISOs without too much hassle; if one wants hassle, Nix or bootc seems to be the way to go).

Regarding why it wouldn't hit: I don't know. I already build some of my own ISOs, and I could build a Windows-styled one (on the MX Linux principle) and upload it for free on Hugging Face, perhaps, but the idea here is mass appeal.

Yes, I can do that, but I would prefer an ISO which just did that out of the box, one I could share with a person new to Linux. And yes, I could have the new person make the changes themselves, but why? There's really no reason to, IMO; this feels like low-hanging fruit which nobody has touched, which is why I was curious too.

But also, as the other comment pointed out: sure, we can do this thing, but there are genuine reasons why it hasn't taken off; they give some good reasons, and I agree with them overall.

Like, if you ask me, it would be fun to have more options, especially considering this is Linux, where freedom is celebrated :p


I'm back to running Windows because of the shifting sands of Python and wxWidgets that broke WikidPad, my personal wiki. The .exe from 2012 still works perfectly though, so I migrated back from Ubuntu to be able to use it without hassle.

It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.


Server 2003 was the last release supervised by Cutler, so would have my vote. It's even source-available... technically.

Cutler himself wrote code for Vista/Longhorn though. I don't know what you mean by "supervising" it. He also led the efforts for "PatchGuard" kernel protection mechanism that was introduced with Vista.

Source: I reviewed Cutler's lock-free data structure changes in Vista/Longhorn to find bugs in them, failed to find any.


Sometimes I have problems like this on Debian. I have a reliable solution: debootstrap and snapshot.debian.org.

I haven't gone more than a decade into the past yet, so I can't promise forever, and GPU-accelerated things probably still break, but X11 is very backwards compatible.
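A hedged sketch of the combo (substitute a timestamp that actually exists in the snapshot archive's index):

    # bootstrap a 2017-era Debian stretch userland from the snapshot archive
    sudo debootstrap stretch ./stretch-root \
        https://snapshot.debian.org/archive/debian/20170617T212740Z/
    sudo chroot ./stretch-root /bin/bash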


> It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.

Meanwhile, in 2025, with 64GB RAM and solid state drives, we hear, "Windows 11 Task Manager really, really shouldn't be eating up 15% of my CPU and take multiple seconds to fire up."


I see my comment was downvoted, and I apologize.

I meant to agree entirely with the parent comment by showing one specific way in which Win2K SP4 is far superior to Windows 11.

In Win2K, Task Manager takes less than a second to start on a 200 MHz, single core Pentium II with 64MB of RAM and a 5400 RPM IDE HDD.


I like the idea of it, but Linux hardware support is still crap, and will get worse as ARM becomes more entrenched.

What boggles my mind is why Google hasn't gotten more serious about making Android a desktop OS. Pay the money needed to get good hardware support, control the OS, and now you're a Microsoft/Apple competitor for devices. Yes there is the Chromebook, but ChromeOS is not a real desktop OS, it's a toy. Google could control both the browser market and the desktop computing market if they seriously tried. (But then again that would require listening to customers and providing support, so nevermind)


> I like the idea of it, but Linux hardware support is still crap, and will get worse as ARM becomes more entrenched.

Linux arguably has better compatibility than Windows, but it's nuanced, as it depends on what devices you're interested in.

> What boggles my mind is why Google hasn't gotten more serious about making Android a desktop OS.

Google is seriously working on making Android a desktop OS; Android 16 is only the first steps towards it.

> Yes there is the Chromebook, but ChromeOS is not a real desktop OS, it's a toy.

ChromeOS is very much not a toy; it's pretty great if it can facilitate your work.

> But then again that would require listening to customers and providing support, so nevermind

Google has consistently provided good support for all their hardware products; listening to customers is not their cup of tea, though.

Google is absolutely no saint. I don't like their business model, how they're closing more and more of Android, how they keep killing services, how GCP can nuke you, that they "own" web standards, ... But they're not all bad; they've also contributed greatly to much of the web and surrounding technologies.


> but ChromeOS is not a real desktop OS, it's a toy.

ChromeOS is a better development environment than macOS in many ways. When was the last time you actually used one of these things, 2013?


> Linux hardware support is still crap

What are you talking about? The majority of hardware is supported by only Linux at this point.


There is plenty of hardware that is either unsupported or poorly supported. I have personally run into a dozen different devices and several architectures that were unsupported. And I'm just one person buying normal stuff in stores.

Sure but this is true of every OS. I can't install macOS or Windows on most of the hardware around me and have it support all the hardware either.

> but Linux hardware support is still crap

What are you talking about? Everything for desktops works out of the box unless you have something weird and proprietary, and even then most distros have support anyway.


By desktop I include laptops (many don't work out of the box) but larger systems can be weird too. Just the choice of CPU can decide whether hibernate or suspend works at all. There's a large ecosystem of accessories which have no Linux support. Video cards have been a nightmare on Linux for decades, famously the reason Torvalds gave Nvidia the finger. Even when something's technically supported, it may require obscure undocumented boot flags, bit-twiddling, userland apps which may not work on the same distro as the kernel you want to use, and of course there's the Wayland debacle (abandoning X extensions that lots of devices used to use to control features from touchpads to input pens)

> Video cards have been a nightmare on Linux for decades, famously the reason Torvalds gave Nvidia the finger.

What are you talking about... The situation is the same as on Windows: an officially supported and maintained proprietary driver from Nvidia. Unless you're trying to run a 12+ year old card, it'll work fine. AMD, on the other hand, is amazing and works perfectly, with an officially supported and maintained open source driver. I LOVE it.

> bit-twiddling

Never happened.



> Video cards have been a nightmare on Linux for decades,

Again, I question your experience in this regard. Do you actually use dGPUs on Linux, or are you repeating a 14-year-old meme?

GPU support on Linux is more comprehensive than macOS, and if you don't need DirectX it's arguably better than Windows too. Mesa drivers are unparalleled by Apple or Microsoft, in a myriad of ways.


My experience is using Linux as my primary desktop OS for 25 years, for gaming, 3D rendering, and web browsing. I'm also a programmer and systems engineer, and I've created Linux distributions, as well as contributed over a thousand packages and ports to other distros, and patched/backported drivers in the kernel. I'm not going to detail every single video driver issue I've run into, as I don't want to write a book just to prove to a random person on the internet that Linux does, in fact, have a history of issues with graphics cards and video subsystems. A simple Google search can provide more than enough examples.

But more than that, it's simple logic: hardware manufacturers often don't release specs, or ship only proprietary firmware blobs, forcing kernel hackers to reverse engineer in order to support a device, which is often too difficult; not to mention there are only so many kernel hackers, and a lot of devices and hardware revisions. There's a famous YouTube video of the most famous kernel hacker telling Nvidia to go fuck itself for this very reason.


Unironically, yes. It's time that Microsoft taste their own medicine of embrace, extend, and extinguish.

Hear me out: Microsoft switches to the Linux kernel for Windows 13.

(also Microsoft has been heavily embracing Linux and open source in the last decade)


When WSL first came out, I realized that Windows might be Linux + Wine in 20 or 30 years.

Nowadays, with the Windows team barely able to produce a functional UI, what's happening with the NT kernel? Is it all graybeards back there? When they retire, the stability of Windows is going to be in trouble, and that stability is important for the things that really pull in the money. It'll get real bad, then they'll give up and move to an open source base, just like Edge.


The NT kernel continues to evolve. The recent examples I can think of are VBS, HVCI, and Kernel DMA Protection.

No reason to dump a very good kernel.


Idle curiosity, but: does Linux have similar offerings to HVCI?

Why would you want to switch to what is, in many cases, an inferior kernel? NTOS is the golden piece of Windows -- it's Win32 that's hot garbage.

Cool. Having major distributions default to using binfmt_misc to register Wine for PE executables (EXE files) would be nice though. Next steps would obviously be for Windows apps to have their own OS-level identity, confined and permissioned per app using normal Linux security mechanisms, run against a reproducible and pinned Wine runtime with clearly managed state, integrated with the desktop as normal applications (launching, file associations, icons), and produce per-app logs and crash information, so they can be operated and managed like native programs. We have AI now, this should not be rocket science or require major investments. Only viable way Linux is replacing Windows.
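For reference, the registration itself is tiny. A sketch using the kernel's documented binfmt_misc format (the "MZ" magic matches PE/DOS executables; assumes binfmt_misc is mounted, as it is on most systemd distros):

    # register Wine as the interpreter for PE binaries, until reboot
    echo ':DOSWin:M::MZ::/usr/bin/wine:' | sudo tee /proc/sys/fs/binfmt_misc/register
    # persistent variant: put the same line (sans echo) in /etc/binfmt.d/wine.conf,
    # which systemd-binfmt reads at boot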

>Cool. Having major distributions default to using binfmt_misc to register Wine for PE executables (EXE files) would be nice though

This is something that is very much needed to make Linux much more user friendly for new users.



Reference to the famous “Win32 Is the Only Stable ABI on Linux” post

https://blog.hiler.eu/win32-the-only-stable-abi/

https://news.ycombinator.com/item?id=32471624


> The late-90's-to-early-2010's PC desktop experience was great for power users, especially creative users. Let's keep the dream alive.

It sure was. If you were already bored by Windows 3.11/95 and were getting into Linux, it was fantastic. You were getting in on the ground floor with skills that could keep you in a good career for most of the rest of your life.


Wine on top of X / Wayland isn't good enough. It needs to be Wine (or loss32) directly on top of the Linux kernel, started as an init.

Yea! I love the spirit. Compatibility in computing is consternating. If my code is compiled for CPU arch X, the OS should just provide it with (using Rust terminology) standard library tools (networking, file system, allocator, etc.), de-conflict it with other programs, and get out of the way. The barriers between OSes, including between various Linux dependencies, feel like a problem we (idealistically speaking) shouldn't have.

Rather than API/ABI stability, I think the problem is the lack of coherence and too many fragile dependencies. Like, why should a component as essential as systemd have to depend on a non-essential service called D-Bus? Which in turn depends on an XML parser lib named libexpat. Just D-Bus and libexpat combined take a few megabytes. Last time I checked, the entire NT kernel, and probably the Linux kernel image as well, is no more than single-digit MBs in size. And by the way, systemd itself doesn't use XML for configuration; it has an INI-style configuration format.
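The D-Bus-to-expat link, at least, is easy to verify on a typical distro (a quick sketch; paths vary):

    # dbus-daemon links libexpat to parse its XML policy/config files
    ldd "$(command -v dbus-daemon)" | grep -i expat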

That's why they are doing Varlink now.

Alternatively, one could use OneCore-patched XP with MSYS2/MinGW/Cygwin, with Bash, GNU tooling, and the pacman package manager. One could compile most necessary software by hand. It runs a modern Firefox, LibreOffice, and Windows 7 games. Perhaps most of the Python, Rust, and Node ecosystems would run. Or, if one really needs a Linux/WSL-light alternative, one could run VirtualBox, QEMU, or coLinux (up to the ancient kernel 2.6.33). Who needs 64-bit if the lean and mean 32-bit suffices, and the Windows Classic theme is included? Small LLMs would probably not work, while they would with Loss32.

Starting with FreeBSD might be easier than starting with Debian then removing all the GNUisms. But perhaps not as much Type II fun.

Using Linux gets you much more hardware compatibility, especially for the consumer desktop and laptop systems this is targeted towards.

True

I think Linux is the better choice for replacing the entire userland. From what I've seen, the BSDs don't have such an accessible userspace/kernelspace split. With some effort, on Linux you could probably just run an exe as your init.
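Purely as a hypothetical sketch of the mechanics (none of this is a real loss32 component, and real Wine needs far more environment than this): the kernel's init= parameter can point at any executable, so a wrapper could mount the essentials and exec Wine:

    #!/bin/sh
    # hypothetical /sbin/wine-init; boot with init=/sbin/wine-init
    mount -t proc proc /proc
    mount -t sysfs sys /sys
    mount -t devtmpfs dev /dev
    exec /usr/bin/wine explorer.exe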

+1

A 1:1 recreation of the Windows XP or Windows 7 user experience with the classic theme would be killer.

I say this with love: I have used KDE extensively, and I still find it more janky than Windows XP. GNOME is "better" (especially since v40) in that it's consistent and has a few nicer utilities, but it also has worse UX (at least for power users) than Windows XP.


Nice. It would be good if winetricks could install the ReactOS userland; explorer.exe and friends barely exist in upstream Wine.

There were some great efforts to build these out in ReactOS a few years ago.


I like this idea and know at least a few who would love to use this if you can solve for the:

'unfortunate rough edges that people only tolerate because they use WINE as a last resort'

Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this, just for the tenacity it shows. It reminds me of projects like Asahi and Cosmopolitan Libc.

Now, if we're to do something to actually solve GNU/Linux desktops not having a stable ABI, I think one solution would be a compatibility layer like Wine's, but implementing Ubuntu's ABIs. Then, as long as an app runs on supported Ubuntu releases, it will run on any system with this layer. I just hope it wouldn't be a buggy mess like Flatpak is.


Technically it's the only stable macOS ABI, too. The only way to run a legacy 32-bit binary on macOS today is a win32 exe running under Wine.

What cool stuff do you run?

I think this project actually has merit and highlights the core issue.

We have gone through one perceived reason after the other to try and explain why the year of the Linux desktop wasn’t this one.

Uncharitably, Linux is too busy breaking and deprecating itself to ever become more than a server OS, and even that only works because companies sponsor most of the testing and code that makes those parts work. Desktop in all its forms is an unmitigated shit show.

With linux, you’re always one kernel/systemd/$sound system/desktop upgrade away from a broken system.

Personal pains: Nvidia drivers, OSS->ALSA, ALSA->PulseAudio, PulseAudio->PipeWire, init.d to Upstart to systemd, anything DKMS ever, bash to dash, GTK 2 to GTK 3, KDE 3 to KDE 4 (basically a decade?), GNOME 2 to GNOME 3, some 10 GNOME 3 releases breaking plugins I relied on.

It should be blindingly obvious: Windows can shove ads everywhere from the tray bar to the Start menu and even the damned lock screen, on enterprise editions no less, and STILL have users. This should tell you that Linux is missing something.

It’s not the install barrier (it’s never been lower; corporate IT could issue Linux laptops, and Linux laptops are available from several vendors).

It’s also not software, the world has never placed so many core apps in the browser (even office, these days).

It’s not gaming. Though it’s telling that, in the end, the solution from Valve (Proton) incidentally solves two issues: porting (stable) Windows APIs to Linux, and packaging a complete mini-Linux, because we can’t interoperate between distros or even releases of the same distro.

I think the complete and utter disdain in Linux for stability, from libraries through subsystems to display servers, UI toolkits, and the very desktops themselves, is the core problem. And solving everything through package management, with the ensuing fragmentation across distros, a close second.


Pretty sure it's Linux not being the default option

It is not a popularity issue. If it were, company after company would have switched as soon as they could make it work (Office 365, Outlook online, whatever SaaS they use; none care about their desktop, only the browser, and all major browsers are available on Linux).

From there, popularity outside the organization is irrelevant; internal support and the user base are on some version of Linux.

As this would spread, we would eventually see global usage increase and global popularity become a non-issue.


Doesn't explain why Chrome beat IE. Or why MacOS has higher market share on the desktop than Linux.

Wine and Proton should have levelled the playing field. But they haven't. Also, if you've only just started using Linux, I recommend you wait a few years before forming an opinion.


I absolutely love this. I need a live CD/USB ASAP please!

I build a gaming VM and decided to go with Windows because the latest AMD drivers (upscaling etc..) only works there for now.

I wanted to be nice and entered a genuine Windows key that's still in my laptop's firmware somewhere.

As a thank you, Microsoft pulled dozens of features out of my OS, including remote desktop.

As soon as these latest FSR drivers are ported over I will swap to Linux. What a racket, lol.


It still puzzles me, decades later, how MS built the most functional, intuitive and optimised desktop environment possible and then simply threw it away.

It still is if you're an enterprise customer. The retail users aren't Microsoft's cash cows, so they get ads and BS in their editions. The underlying APIs are still stable and MS provides the LTSC & Server editions to businesses which lack all that retail cruft.

I'm an enterprise user and I find Windows 11 a complete disaster. They've managed to make something as trivial as right-clicking a slow operation.

I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.


If anything, right click is faster thanks to dumping the ability for 3rd parties to pollute it with COM controls that need to be init'ed.

In my day job, Explorer still freezes every second day, GUI interactions take several seconds and the sidebar is full of tabloid headlines and ads.

At least with regard to the last point, your enterprise admins must be doing a bad job.

Everything after Win 2000 was a bad idea. Enterprise or not.

Windows 2000 was the last version where Dave Cutler was fully in charge of Windows.

Things started going downhill after that.


Windows 2000 was a bug-riddled, poorly architected punching bag for malware.

Things definitely went up-hill AFTER Windows 2000.

What on earth would cause someone to say Windows 2000 was a good release? It wasn't even a good release when it came out, and it definitely didn't stand the test of time.


7 was pretty good. But I may be looking through the glasses of nostalgia and my love for the Frutiger Aero style.

XP was arguably better.


Do you mean Windows 1x Pro/Enterprise?

Yes. Enterprise, Pro, and Home are the enshittified, retail editions. Enterprise just adds a few more features IIRC but still has ads. The other versions I mentioned above don't have any of that.

Enterprise is not retail and is usually licensed via volume licensing, but without any additional configuration it probably has that stuff intact.

But you can use group policy etc. freely. I don't know how Win 11 is though


FWIW, ChatGPT advised against LTSC or Server editions for a dev workstation and recommended Enterprise, as you do. However, I can’t find Enterprise from a reputable EU vendor. Do you know of any? Is Enterprise available to end users?

The problem with Windows after Windows 7 isn't really ads; it's the blatantly stupid use of web views to do the most mundane things, hogging hundreds of MBs or even GBs for silly features, and those are still present in enterprise versions.

Start menu search requires 7 web browser processes that consume ~350 MB of RAM to be constantly running.

I don't know why they use Electron for everything; they literally built the UI stack themselves, and C# is insanely good at building UIs (if they'd stop trying to reinvent UI frameworks in C#, that is).

The pivot point was Windows 95.

Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?

Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than coming with the hardware. So, if you could make Windows 95 run in 4 MB of RAM rather than 8 MB, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize, because now the customer was the OEMs, not the end user. Not optimizing aggressively followed naturally, because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.

UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.

The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.

And that's why MS threw Windows away. It simply isn't a valuable asset anymore.


It's quite common for a company to build a good product and then once the initial wave of ICs and management moves on, the next waves of employees either don't understand what they're maintaining or simply don't care because they see a chance to extract short term gains from the built-up intellectual capital others generated.

It's functional - yes, intuitive - maybe, but optimized is highly debatable.

The price of maintaining a highly functional and stable OS is piles and piles of backwards-compatibility misery for the devs.

You want a Windows 9? Sorry, some code out there checks whether the OS name string starts with "Windows 9" to determine if the OS is Windows 95 or 98.
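
The pattern, as a hypothetical C sketch (the actual offenders were reportedly version checks of roughly this shape, in various languages):

    /* Hypothetical sketch of the infamous check: anything whose OS name
       starts with "Windows 9" is treated as Windows 95/98. A real
       "Windows 9" would have matched too, hence the jump to 10. */
    #include <stdio.h>
    #include <string.h>

    static int is_win9x(const char *os_name) {
        return strncmp(os_name, "Windows 9", 9) == 0;
    }

    int main(void) {
        printf("%d\n", is_win9x("Windows 95")); /* 1 */
        printf("%d\n", is_win9x("Windows 98")); /* 1 */
        printf("%d\n", is_win9x("Windows 9"));  /* 1: the bug */
        return 0;
    }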


Millions of total computer noobs hit the ground running with Windows 95. It was a great achievement in software design.

He was talking about the user interface, not app compatibility.

He mentioned the desktop environment; I assume he means all the parts, not just the UI.

Piracy. The consumer versions are filled with ads because most people don't pay for them.

Is this really the case? I feel like most Windows users just bought a laptop with Windows already on it. Even if all home users were running pirated versions they would still become entrenched in the world of Windows/Office, which would then lead to enterprise sales.

> Is this really the case? I feel like most Windows users just bought a laptop with Windows already on it.

This is largely true in North America, UK and AUS/NZ, less true in Europe, a mixed bag in the Middle East and mostly untrue everywhere else.


If you were able to wave a magic wand today and remove piracy, Microsoft would not remove ads.

This is a really cool idea. My only gripe is that Win32 is necessarily built on x86. AArch64/ARM is up and coming, and other architectures may arise in the future.

Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.


There's not much x86-specific about Win32, and you've been able to make native ARM Windows programs for years already. Windows NT was designed to be portable from the start. Windows on ARM comes with a Rosetta-like system and can run Intel binaries out of the box.

Not sure on Windows, but with Wine you can totally use Win32 on ARM.

Valve certainly seems to be making progress on it with Proton.

They are helping x86 apps run on ARM, but if you wanted, you could already run Windows ARM apps on Linux/ARM without any problem.

Lol I didn't realize that my joke is actually real https://news.ycombinator.com/item?id=46366998#46368990

I’m slowly coming around to it.

This is weird; I only use Wine for games, but the name's clever. All my other software is natively Linux, even Steam itself.

As of the time of writing, the first hundred or so comments are on tangents, so TLDR: this is about making a "Linux distribution" of which all the userland software is Win32 software running on Wine. The idea, among others, is to recreate the experience of '90s..'10s versions of Windows. It's at an early stage.

> What is this? A dream of a Linux distribution where the entire desktop environment is Win32 software running under WINE.

I might unironically use this. The Windows 2000 era desktop was light and practical.

I wonder how well it performs with modern high-resolution, high-DPI displays.


Xfce already exists and has less impedance mismatch. It’s almost as good in some ways, probably better in a few tiny ones.

I already use xfce, of course!

I've also had the same thought...

I’m in if this is happening

But would you want to run this Win32 software on Linux for daily use? I don't.

Depends on what task you're doing, and to a certain extent how you prefer to do it. For example, sure, there are plenty of ways to tag/rename media files, but I've yet to find anything under Linux that matches the power of Mp3tag in a GUI.

Have you tried kid3 (https://kid3.kde.org)? It has both a GUI and a CLI.

From a quick glance at the feature lists it looks quite comparable.


I just did, have you actually tried using them side-by-side? It's hard for me to look favorably on kid3. I actually gave myself 5-10m to try and learn kid3 and a lot of what seems like obvious ways to accomplish a task like 'rename these files using their tags' didn't do anything. I even broke out the manual which didn't help/explain if there was a different mindset I need to adopt. I could manage to manually edit tags/rename file by file, but that seems like table stakes for anything that handles media files (even a file manager) let alone an application that is meant to be a specialist in that area, and we're not into any advanced functionality yet.

More generally though it's not about one specific type of tool, it's that windows and linux have been different ecosystems for decades and that has encouraged different strengths and weaknesses. To catch up would mean a lot of effort even if you're just aiming to be equivalent, or use projects like WINE to blur the lines and use the win32 tool as though the specific platform doesn't matter so much.


Gamers have no other option, and thanks to Valve, game studios have no reasons left to bother with native Linux clients.

Just target Windows, business as usual, and let Valve do the hard work.


> Gamers have no other option, and thanks to Valve, game studios have no reasons left to bother with native Linux clients

But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.


CDProjekt releases native Linux builds.

I don't think Witcher 3 or Cyberpunk 2077 have Linux builds available for the common folk? Cyberpunk has an ARM64 Mac build, though.

Huh, I could have sworn Witcher 3 did, but maybe I am misremembering it merely releasing without DRM.

Witcher 2 had a Linux native build, but never Witcher 3.

Not really, most leave that to Valve.

> ...game studios have no reasons left to bother with native Linux clients.

How many game studios were bothering with native Linux clients before Proton became known?


That's exactly the point. They weren't, so a Linux user didn't have an option to run a native Linux client in preference to a Win32 version.

That goes back to address the original question of "But would you want to run these Win32 software on Linux for daily use?"


More than now, I own a few from the Loki Entertainment days.

Well, not having Proton definitely didn't work to grow gaming on Linux.

Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.


For making music: as much as I love the free audio ecosystem, there are some very unique audio plugins with specific sounds that will never be ported. Thankfully, bridging with Wine works fairly well nowadays.

I knew a guy whose main editor for his day to day was Notepad++ running in Wine.

I use some cool ham radio software, a couple SDR applications, and a lithophane generator for my 3D printer. It all works great. If you have a cool utility or piece of software, why wouldn't you want to?

You son of a bitch, im in!

Love this idea. Love where it is coming from.


I think there's a quote from Linus himself saying this.

I love the idea of ending the Wayland vs X argument by supplanting them with GDI+ (kind of implied, though not explicitly stated, by this proposal).

This is going to be a bold claim but here goes.

This will never work, because it isn't a radical enough departure from Linux.

Linux occupies the bottom of a well in the design space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.

The forcing factors that pull you back down:

1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME Fedora.

2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)

3. Kernel churn. People will want to run this thing on their brand-new gaming laptop. That likely means they'll need a recent kernel. And while they "never break userspace" in theory, in practice you'll need a new set of drivers, Mesa, and other add-ons that WILL break things. Especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?

If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.


Whatever, at least it can be a desktop alternative to GNOME and KDE where you can also run exes.

Damn, they didn't miss the chance to sneak in a Loss comic reference.

https://en.wikipedia.org/wiki/Loss_(Ctrl%2BAlt%2BDel)


Thank you. I was contemplating the logo but my brain could not make the connection.

I've heard worse ideas. Not much, but some. An AI-driven Linux, for instance.

Are the people behind this project the same as the Free95 team?

Isn't that the OS from the 1990s that never got anywhere, and then the same people made ReactOS?

googles

Ah, no, that was FreeWin95. What on earth is Free95, it feels like history repeating itself…


One month ago their site went dark

https://github.com/versoft-software/free95/

They also forked the Uinxed kernel so as to run their userland.

I believe they are the same. Still, it makes sense what they are trying to do.


To be honest, this seems like a project by some kid or someone who has absolutely no idea what they're doing. It's a little painful to witness.

Interesting concept. If it works, why not?

There is a ton of useful FOSS for Windows and maybe it is a good push to modernize abandoned projects or make Win32 projects cross-compilable.


Thus reinforcing development tools that target the Windows desktop even further; the OS/2 lesson repeats itself.

And failing everything else, Microsoft is in a position to put WSL front and center, and yet again, those are the laptops that normies will buy.


Not to worry, Microsoft can't escape Win32 either. They've tried, with UWP and others, but they're locked in to supporting the ABI.

It's not a moving target. Proton and Wine have shown it can be achieved with greater compatibility than even what Microsoft offers.


While true, people should note that WinRT, the technology infrastructure for UWP, nowadays lives in Win32 and is what powers anything Copilot+ PC, Windows ML, the Windows Terminal rewrite, new Explorer extensions, the updated context menu on Windows 11, ...
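
To make "lives in Win32" concrete: WinRT classes are reachable even from plain C through flat Win32-style exports. A minimal sketch (assumes MSVC linking against runtimeobject.lib; Windows.Data.Json.JsonObject is assumed here as a class with a default constructor):

    /* Sketch: activate a WinRT class from plain C via the flat
       RoInitialize/RoActivateInstance exports (combase.dll). */
    #include <windows.h>
    #include <roapi.h>
    #include <winstring.h>
    #include <inspectable.h>
    #include <wchar.h>
    #include <stdio.h>

    int main(void) {
        HSTRING cls;
        IInspectable *obj = NULL;
        RoInitialize(RO_INIT_MULTITHREADED);
        WindowsCreateString(L"Windows.Data.Json.JsonObject",
                            (UINT32)wcslen(L"Windows.Data.Json.JsonObject"),
                            &cls);
        HRESULT hr = RoActivateInstance(cls, &obj); /* plain COM-style call */
        printf("activation hr = 0x%08lx\n", (unsigned long)hr);
        if (obj) obj->lpVtbl->Release(obj);
        WindowsDeleteString(cls);
        RoUninitialize();
        return 0;
    }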

It is a moving target. Proton is mostly stuck in the Windows XP world, from before most new APIs became a mix of COM and WinRT.

Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.


FWIW, Wine 8.0 introduced some WinRT support, specifically Windows.Gaming.Input.

It's a start.


While this might appeal to retro enthusiasts, I could see a Linux-based drop-in replacement for Windows 10/11 getting traction amongst mainstream users, especially if it had a good UI/UX.

Your average user might not even know it's Linux.


Thing is, I want the opposite. I want the NT/2k/Win7 kernel with XFCE on top. The NT kernel is infinitely better designed and has much better support on the latest Intel/AMD hardware than Linux. And XFCE is much better than the modern Windows UI.

> has much better support on the latest Intel/AMD hardware

In what way?


Close a Linux laptop. Wait 1 week. Open it: 100% dead. Do the same to a Windows laptop: 60% chance it's alive, so better. MacBook? 100%.

That's really not the case overall. My experience: Linux - sure, the battery won't last a week in sleep, but it will revert to hibernation and start anyway. Windows - got 2 laptops which just won't go to sleep automatically since w11 - either force hibernation, or it will overheat in a bag and drain in hours. Mac - constant wakeups to connect to the internet (that I can't disable even though I tried many times) drain the battery in days.

https://github.com/jart/cosmopolitan/issues/35#issuecomment-...

The idea of "fuck it, let's do Windows everywhere" was introduced by Justine Tunney as an April Fools' joke in the Cosmopolitan repository.

That's it. An April Fools' joke.


I mean... isn't that just a lightweight X11 window manager (like IceWM) with binfmt_misc enabled?

It's like having the dream of running Visual Studio 2026 on Linux: COMPLETELY RETARDED.
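
To be fair, the binfmt part really is about that simple. A minimal sketch of registering Wine as the binfmt_misc handler for PE binaries, so ./something.exe launches through Wine directly (needs root and a mounted binfmt_misc; the "DOSWin" name and the Wine path are assumptions):

    /* Sketch: tell binfmt_misc to hand any file starting with the "MZ"
       magic (PE/DOS executables) to Wine. Equivalent to echoing the
       same string into the register file. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/fs/binfmt_misc/register", "w");
        if (!f) { perror("binfmt_misc register"); return 1; }
        /* format: :name:type:offset:magic:mask:interpreter:flags */
        fputs(":DOSWin:M::MZ::/usr/bin/wine:", f);
        fclose(f);
        return 0;
    }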

"Yes. I can't tell you how many times the ability to just download a goddamn .exe file and run it in WINE has saved my ass. Seemingly every creative project I undertake eventually requires downloading some piece of software which is either impossible or impractical to rebuild myself, and whose Linux and macOS ports no longer work or never existed. There's more than three decades of Win32 software — .exe files! — that can run in WINE or (of course) on Windows. No other ABI has that kind of compatibility record. WINE can even run Win16 stuff too.

The really cool thing about Win32 is it's also the world's stable ABI. There's lots of fields of software where the GNU/Linux and POSIX-y offerings available are quite limited and generally poor in quality, e.g. creative software and games. Win32 gives you access to a much larger slice of humanity's cultural inheritance."

What a pile of bullshitting.


This is only ever relevant for proprietary software. Free software does not require a stable ABI. It's great that Wine exists, but ideally it would be unnecessary.

(That, and Linux doesn't implement Win32, and Wine doesn't exclusively run on Linux.)


Stable interfaces and not being in versioning hell (cough libc) would actually be good for FOSS as well.

If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution for this is to ship your own userspace. That's just insane.
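
For a concrete taste of the versioning hell: a binary built on a current distro silently picks up versioned glibc symbols that older distros don't have. One long-standing workaround, sketched here assuming x86-64 glibc (where GLIBC_2.2.5 is the baseline symbol version set) and GCC, is pinning symbol versions at build time:

    /* Sketch: pin memcpy to an old glibc symbol version so a binary
       built on a new distro still loads on older ones. Without the pin,
       the linker records the newest version it sees, e.g.
       memcpy@GLIBC_2.14. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[6];
        memcpy(dst, "hello", 6); /* resolved via the pinned version */
        return dst[0] != 'h';
    }

That a trick like this is needed per symbol, per architecture, is exactly why people give up and ship a whole userspace instead.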


Agreed... I'm kind of a fan of AppImage/Flatpak/Snap (less Snap, but still)... even then, I don't use a lot of apps, and most of my variety is usually via Docker.

It's much more bloated than it should be, but it's the best way to reliably run old/new software on any given Linux.


Free software can still benefit from a stable ABI. If I want to run the software, it's better to download it in a format my CPU can understand, rather than download the source, figure out the dependencies, wait for it to compile (let's say it's a large project like Firefox or Chromium that takes hours to compile), and so on.

> If I want to run the software, it's better to download it in a format my CPU can understand, rather than download the source, figure out the dependencies, wait for it to compile (let's say it's a large project like Firefox or Chromium that takes hours to compile), and so on.

It's not really a choice between downloading a binary that depends on a stable ABI and compiling the source. The way most Linux software gets installed is downloading a binary that has been compiled for your OS version (from repos), and the next most common way of installing is compiling source through a system that figures out the dependencies for you (source-based distros and repos).


We exist in a world where proprietary software exists, and always will exist. I want to be able to run said software if it's the best tool for the job, not be hobbled by an idealistic stance of "all software should be free so we don't bother to support proprietary software".

Then you are quite simply part of the problem.

The difference between Win32 and Linux is that the latter never realized an operating system is more than a kernel and a number of libraries and systems glued together: it is, indeed, a stable ABI (even for kernel modules, so old drivers remain usable forever), plus a default, unique and stable API for user interface, audio, and so forth. Linux failed completely, not technologically, but in understanding what an OS is from the POV of a product.

Linux didn't aim to be an OS in the consumer sense (it is entirely an OS in an academic sense - in scientific literature OS == kernel, nothing else). The "consumer" OS is GNU/Linux or Android/Linux.

> it is entirely an OS in an academic sense - in scientific literature OS == kernel, nothing else

No, the academic literature distinguishes between the kernel and the OS as a whole. The OS is meant to provide hardware abstractions to both developers and the user. The Linux world shrugged and said 'okay, this is just the kernel for us, everyone else be damned'. In this view Linux is the complete outlier, because every other commercial OS comes with a full suite of user-mode libraries and applications.


There really isn't that much GNU on a modern Linux system, proportionately.

Exactly, Gnome/Linux or KDE/Linux would make a lot more sense.

Both are being baked

https://distrowatch.com/table.php?distribution=gnomeos

https://distrowatch.com/table.php?distribution=kdelinux

The question is if either will catch any interest and if so, what will happen to regular distributions.


Except that it can be both and more: you can have GNOME, KDE, and other DEs and libraries installed and use apps based on all of them simultaneously.

Sure, although every distro has a default.

systemd/Linux maybe? Lots of things are more significant than GNU, either way.


This is amusing but infeasible in practice because it would need to be behaviorally compatible with Windows, including all bugs along with app compatibility mitigations. Might as well just use Windows at that point.

You have full control of a Linux system. Win32/Linux respects your rights in a way Microsoft doesn't. That's the difference.

That is irrelevant to the feasibility of reimplementing the Win32 API on Linux.

WINE has been reimplementing the Win32 ABI (not API) for decades. It already works pretty well; development has been driven by both volunteers and commercial developers (CodeWeavers) for a long time.
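
For anyone who hasn't written one: the surface being reimplemented is ordinary Win32 calls into DLLs like user32. A minimal sketch (using the mingw-w64 cross-compiler is an assumption; any Win32 toolchain works):

    /* hello.c: build with e.g.
       x86_64-w64-mingw32-gcc hello.c -o hello.exe -mwindows
       The .exe imports MessageBoxW from user32.dll; the same binary
       runs on Windows or under Wine, because Wine ships its own
       user32 implementing the same ABI. */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev,
                       LPSTR cmdLine, int nShow) {
        MessageBoxW(NULL, L"Hello from Win32", L"hello.exe", MB_OK);
        return 0;
    }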

There are many programs that still do not work properly in WINE, even though it has been developed for decades. This in itself demonstrates the infeasibility of reimplementing Win32 as a stable interface on par with Windows. The result after all this effort is still patchy and incomplete.

There are many programs that do not work properly in Windows 11, so using Windows to run Windows programs doesn't work either.

It's already been done, though. Wine has been around for 30 years and has excellent compatibility at this point.

5341 of the 16491 applications listed in the Wine AppDB have a compatibility rating of "garbage". This is not excellent compatibility.

How many of those entries have been tested with recent versions of wine or proton? Seems a poor metric.

Better to consider is the Proton verified count, which has been rocketing upwards.

https://www.protondb.com/


I would hazard a guess that most of those apps are garbage on Windows, too.

Relative to (64-bit) Windows 11, it might be.

I'll check back every few years to see if either this project, Wine or ReactOS can run Visual Studio 2026 (or 2022) and .NET Framework 4.

Not talking about the cross-platform versions of .NET and VS-Code. I'm specifically talking about the Windows-specific software I mentioned above.

I don't see this happening, despite the fact that by now, these types of porting efforts were supposed to be trivial because of AI. Yeah, I'll wait.


You better change software.


