Only because people aren't putting in the effort to build their binaries properly. You need to link against the oldest glibc version that has all the symbols you need, and then your binary will actually work everywhere(*).
But to link against an old glibc version, you need to compile on an old distro, in a VM. And you'll have a rough time if some part of the build depends on a tool too new for your VM. It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.
> It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.
It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
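For instance, here's a minimal sketch that pins `memcpy` to its old version. The version tags assume x86-64 glibc, where the original `memcpy` sits at GLIBC_2.2.5 and a newer default was introduced in 2.14; other architectures use different base versions.

```c
/* Minimal sketch: force the linker to bind memcpy to the old
 * GLIBC_2.2.5 version instead of the newer default, so the
 * resulting binary also loads on distros shipping an older glibc.
 * (Version tags assume x86-64; adjust for other architectures.) */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    memcpy(buf, "hi", 3);   /* this call now binds to memcpy@GLIBC_2.2.5 */
    puts(buf);
    return 0;
}
```

You can check the result with `objdump -T` on the binary. The catch is that you have to do this for every affected symbol you use, which is why most people give up and build on an old distro instead.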
Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”.
Of course, you both leave out that you could do it “on real hardware”.
But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.
I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM.
But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore, and hardly anyone is using really old Linux distributions either. And the people who are aren't really going to complain about having to use a VM or container.
As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.
> But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task.
But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?
Only for people who want to use it as a desktop replacement for Windows or macOS, I guess? There is no end of people complaining they can't get their wifi or sound card or trackpad working on (insert-obscure-Linux-distribution-here).
Like many others, I have Linux servers with uptimes of 2000-3000 days. So I'm going to say no, it doesn't, not really.
> As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run; in fact, the exact opposite would happen. It's not implausible for an Arch or Gentoo user to end up compiling their code against a five-hour-old release of glibc, thereby maximizing incompatibility with every other distribution.
I think even calling it a "design" is dubious. It's an attribute of these systems that arose out of circumstance; nobody ever sat down and said it should be this way. Even Torvalds complaining about it doesn't mean it gets fixed. It's not analogous to Steve Jobs complaining about something, because Torvalds is only in charge of one piece of the puzzle, and the whole image that emerges from all these different groups loosely collaborating with each other isn't going to be anybody's ideal.
In other words, the Linux desktop as a whole is a Bazaar, not a Cathedral.
We are using Nix to do this. It’s only a few lines of code. We build a gcc 14 stdenv that uses an old glibc.
But I agree that this should just be a simple target SDK flag.
I think the issue is that the Linux community is generally hostile towards proprietary software, and it’s less of an issue for FLOSS because free software can always be recompiled against the latest glibc.
I don’t understand why this is the case, and would like to understand. If I want only functions f1 and f2, which were introduced in glibc versions v1 and v2, why do I have to build with v2 rather than v3? Shouldn’t the symbols be named something like glibc_v1_f1 and glibc_v2_f2 regardless of whether you’re compiling against glibc v2 or glibc v3? If it is instead something like “compiling against vN uses symbols glibc_vN_f1 and glibc_vN_f2” combined with glibc v3 providing glibc_v1_f1, glibc_v2_f1, glibc_v3_f1, glibc_v2_f2 and glibc_v3_f2… why would it be that way?
It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as an argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.
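To illustrate, here's roughly what that looks like from the library author's side. All names here are hypothetical; a single `@` marks a compat-only version, `@@` marks the default that newly linked binaries bind to, and the VERS_* nodes would be declared in a linker version script passed via `-Wl,--version-script`:

```c
/* Hypothetical libfoo.c: two ABIs of f1 exported side by side.
 * Old binaries keep resolving f1@VERS_1; anything linked against
 * this build of the library binds to f1@@VERS_2 by default. */
struct point_v1 { int x, y; };      /* old layout */
struct point_v2 { int x, y, z; };   /* layout changed in v2 */

int f1_compat(struct point_v1 *p)  { return p->x + p->y; }
int f1_current(struct point_v2 *p) { return p->x + p->y + p->z; }

__asm__(".symver f1_compat, f1@VERS_1");    /* compat-only version  */
__asm__(".symver f1_current, f1@@VERS_2");  /* default for new links */
```

glibc does essentially this internally, which is why a newer libc can still satisfy binaries built against older versions, while the reverse direction fails.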
Individual functions may have a lot of different versions. They only get a new version when there is an ABI change (so you may have e.g. f1_v1, f1_v2, f2_v2, f2_v3 as symbols in v3 of glibc), but there's no easy way to say 'give me v2 of every function'. If you compile against v3 you'll get f2_v3 and f1_v2, so the binary won't run on v2: the dynamic linker will refuse to load it with a 'version not found' error.
Why are the ABIs changing in the first place? And I presume there must be disadvantages to staying on the old symbols, or else they wouldn’t be changing them; so what are those disadvantages?