Making an executable “request” older symbol versions is incredibly painful in practice. Basically every substantial piece of binary software either compiles against an ancient Debian sysroot (which itself needs workarounds precisely because it’s ancient) or somehow ships a separate glibc copy from the base system (Flatpak, etc.). The first greatly complicates building the software; the second recreates Windows’ infamous DLL hell.
Both are way more annoying than anything the platforms without symbol versioning suffer from because of its lack. I’ve never encountered anyone who has packaged binaries for both Linux and Windows (or macOS, or the BSDs) that missed anything about Linux userspace ABIs when working with another platform.
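For anyone who hasn’t seen it, the “request an older version” mechanism is a per-symbol assembler directive, which is part of why nobody does it at scale. A rough sketch (the memcpy version tags are the well-known x86-64 glibc ones; check objdump -T against your libc):

    /* Sketch: pin memcpy to the pre-2.14 version on x86-64 glibc.
       List the available versions with:
       objdump -T libc.so.6 | grep memcpy */
    #include <string.h>

    /* Bind memcpy to GLIBC_2.2.5 instead of the newest default
       (memcpy@GLIBC_2.14 on x86-64). */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[8];
        memcpy(dst, "hi", 3);  /* resolves to the old version at runtime */
        return 0;
    }

You’d need a directive like that for every versioned symbol you pull in, including ones dragged in by fortified/inline wrappers, which is why the sysroot approach wins despite its pain.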
It has to be as ancient as the oldest glibc you want to support, usually a Red Hat release with a very old glibc and manual security backports. These can have nearly decade-old glibc versions, especially if you care about extended support contracts.
You generally have difficulty actually running contemporary build tools on such a thing, so the workaround is to use --sysroot against what is basically a chroot of the old distro, as if cross-compiling. Even then, more workarounds are needed if the version is old enough. Chrome has a shorter support window than some Linux binaries, but you can see the gymnastics they go through to create their sysroot in some Python scripts in the Chromium repo.
On Windows, you install the latest SDK and pass a target version flag when setting up the compiler environment. That’s it. macOS is similar.
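Concretely, it’s roughly just this before including the headers (0x0601 is Windows 7, purely as an example):

    /* Target Windows 7 APIs even when building with the newest SDK;
       declarations newer than this are compiled out of the headers. */
    #define _WIN32_WINNT 0x0601
    #define WINVER       0x0601
    #include <windows.h>

macOS’s equivalent is the -mmacosx-version-min flag (or the MACOSX_DEPLOYMENT_TARGET environment variable), plus availability attributes that warn when you call anything newer than your deployment target.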
We definitely can, because almost every other POSIX libc doesn’t have symbol versioning (or MSVC-style multi-version support). It’s not like the behavior of “open” changes so radically, so often, that you need to know exactly which symbol version a binary linked against. It’s really just an artifact of decisions from decades ago, and the cure is way worse than the disease.
Even open-source software has to deal with the moving target that is ABI and API compatibility on Linux. OpenSSL’s API versioning is a nightmare, for example, and it’s the most critical piece of software to dynamically link (and almost everything needs a crypto/SSL library).
Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) are not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc even though it doesn’t version the symbol for fopen the way glibc does. It just requires commitment and forethought.
Would the police actually try to investigate where the jammer came from? Might the competing firm even finance an investigation themselves privately? And if so, would the police then accept the evidence?
The victim firm would definitely notice, they’d tell the FCC, and their investigators would show up with a device that literally points them at wherever the jammer is. If you do this for stupid, silly reasons you will get fined[1]; if you do it in the commission of another crime you will probably be made an example of. It doesn’t matter how evil you are - it’s hilariously easy to get caught doing this.
> “Mr. Bojczak claimed that he installed and operated the jamming device in his company-supplied vehicle to block the GPS … system that his employer installed in the vehicle,” the FCC decision stated.
I'm not surprised that somebody would try to do this. However, it's just so stupid at every level.
Sorry, you are correct. As soon as the subject of HFT came up I was thinking about London and the things they do to reduce latency to the exchanges in North America. It's too late to edit or remove my previous message.
The parent was saying HFT firms would do this to other HFT firms. They would care about the consequences of doing this kind of thing - it’s not treated as a mere white-collar crime. And foreign adversaries would care about getting caught doing this during peacetime, especially for such unclear benefit.
I’ve used RocksDB for this kind of thing in the past with good results. It’s very thorough from a data corruption detection/rollback perspective (this is naturally much easier to get right with LSMs than B+ trees). The Rust bindings are fine.
It’s worth noting too that B+ tree databases are not a fantastic match for ZFS - they usually require extra tuning (record/block sizes, WAL commit behavior) to get performance comparable to XFS/ext4. LSMs, on the other hand, naturally fit ZFS’s CoW internals like a glove.
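To make the corruption-detection point concrete, here’s a rough sketch of the knobs I mean, via RocksDB’s C API (the Rust crate exposes the same options; the path and error handling are just illustrative):

    /* Sketch: open RocksDB with corruption detection turned up.
       Build with: cc demo.c -lrocksdb */
    #include <stdio.h>
    #include <rocksdb/c.h>

    int main(void) {
        char *err = NULL;
        rocksdb_options_t *opts = rocksdb_options_create();
        rocksdb_options_set_create_if_missing(opts, 1);
        /* Verify checksums aggressively and fail hard on detected
           corruption instead of silently serving bad data. */
        rocksdb_options_set_paranoid_checks(opts, 1);

        rocksdb_t *db = rocksdb_open(opts, "/tmp/demo-db", &err);
        if (err) { fprintf(stderr, "open: %s\n", err); return 1; }

        rocksdb_writeoptions_t *wo = rocksdb_writeoptions_create();
        rocksdb_writeoptions_set_sync(wo, 1);  /* fsync the WAL per write */
        rocksdb_put(db, wo, "key", 3, "value", 5, &err);
        if (err) { fprintf(stderr, "put: %s\n", err); return 1; }

        rocksdb_writeoptions_destroy(wo);
        rocksdb_close(db);
        rocksdb_options_destroy(opts);
        return 0;
    }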
Somewhat unrelated, but I just looked at the RustFS docs intro[1] after seeing it here. It has this statement:
> RustFS is a high-performance, distributed object storage software developed using Rust, the world's most popular memory-safe language.
I’m actually something of a Rust booster, and have used it professionally more than once (including working on a primarily Rust codebase for a while). But it’s hard to take a project’s docs seriously when it describes Rust as “the world’s most popular memory-safe language”. Java, JavaScript, Python, even C# - these all blow it out of the water in popularity and are unambiguously memory safe. I’ve had a lot more segfaults in Rust dependencies than I have in Java dependencies (though both are minuscule in comparison to e.g. C++ dependencies).
Thanks for the reality check on our documentation. We realize that some of our phrasing sounded more like marketing hype than a technical spec. That wasn’t our intent, and we are currently refining our docs to be more precise and transparent.
A few points to clarify where we’re coming from:
1. The Technical Bet on Rust: Rust wasn’t a buzzword choice for us. We started this project two years ago with the belief that the concurrency and performance demands of modern storage—especially for AI-driven workloads—benefit from a foundation with predictable memory behavior, zero-cost abstractions, and no garbage collector. These properties matter when you care about determinism and tail latency.
2. Language Safety vs. System Design: We’re under no illusion that using a memory-safe language automatically makes a system “100% secure.” Rust gives us strong safety primitives, but the harder problems are still in distributed systems design, failure handling, and correctness under load. That’s where most of our engineering effort is focused.
3. Giving Back to the Ecosystem: We’re committed to the ecosystem we build on. RustFS is a sponsor of the Rust Foundation, and as we move toward a global, Apache 2.0 open-source model, we intend to contribute back in more concrete ways over time.
We know there’s still work to do on the polish side, and we genuinely appreciate the feedback. If you have specific questions about our implementation details or the S3 compatibility layer, I’m happy to dive into the technical details.
I agree that it is a bad idea to describe Rust this way, but they likely meant memory safety in the sense used in https://www.ralfj.de/blog/2025/07/24/memory-safety.html - that there is no memory safety without thread safety, i.e. unsynchronized shared mutation has to be ruled out. I am unsure about Java and JavaScript, but I think almost every language on the popular memory-safe list fails this test.
Again, the statement is probably still untrue and bad marketing, but I suspect this kind of reasoning was behind it.
Of course Rust technically fails too since `unsafe` is a language feature
I don't have an issue with `unsafe` - Java has the mythical Unsafe object, C# has its own unsafe keyword, Python has FFI, etc. The title of that blog post - that there is no memory safety without thread safety - is not quite true, and it acknowledges how Java, C#, and Go have strong memory safety while not forbidding races. Even the "break the language" framing seems like special pleading; I'd argue that Java permitting a torn read of a long (64-bit) integer due to a data race does not break the language the same way writing to a totally unintended memory area or smashing the stack does, and that this distinction is useful. Java data races that cause actual exploitable vulnerabilities are very, very rare.
It's hard to take a project seriously if it focuses so much on the language it's written in. As a user, I don't care. Show me the results (a bug tracker with a low rate of issues) - that's what I care about. Whether you program in Rust or C or Java or assembly or PHP.
As a potential user of an open source project, I care a fair bit what language it is implemented in. When picking a project, I prefer languages and ecosystems I am familiar and comfortable with: I may need to fix bugs, add features, or otherwise contribute back, and so I am more likely to pick a solution in a language I am comfortable with than in one I am not, given my other needs and priorities are met.
I agree, although I’m guessing they’re measuring “most popular” as in “most beloved” and not as in “most used.” That’s the metric that StackOverflow puts out each year.
Debian has been doing this for decades, yes, but it is largely a volunteer effort, and it's become a meme how slow Debian is to release things.
I've long wanted this approach (backporting security fixes) to be commercialized, instead of the always-up-to-date-even-if-incompatible push; beyond Red Hat, SUSE, and Canonical (with LTS), nobody had been doing it for product teams until recently (Chainguard seems to be doing this).
But, if you ignore speed, you also fail: others will build less secure products and conquer the market, and your product has no future.
The real engineering trick is to be fast and build new things, which is why we need commoditized supply-chain stewards (for a fee) that will solve this problem for you and others at scale!
But then you as a consumer/user of Debian packages need to stay on top of things when they change in backwards-incompatible ways.
I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.
> But then you as a consumer/user of Debian packages need to stay on top of things when they change in backwards-incompatible ways.
If you need latest packages, you have to do it anyway.
> I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.
That works if the company can build packages properly. Also, OS dependencies that are too old sometimes throw a wrench in the works.
Tho frankly, "latest Debian Testing" has a far smaller chance of breaking something than "latest piece of software that couldn't figure out how to upstream to Debian".
The difference is between staying on stable and cherry-picking the latest for what you really do need, and being on everything latest.
The latter has a huge maintenance burden; the former is, as I said already, the sweet spot. (And let's not talk about combining stable/testing - any machine I tried that on quickly got into a non-upgradeable mess.)
I am not saying it is easy, which is exactly why I think it should be a commercial service that you pay for, so that it can actually survive.
I agree with this, but the open source licenses allow anyone who purchases a stewarded implementation to distribute it freely.
I would love to see a software distribution model in which we could pay for vetted libraries, from bodies that we trust, which would become FOSS after a time period - even a month would be fine.
There are flaws in my argument, but it is a safer option than the current normal practices.
When it is tailored to one customer, the dependency being maintained for you is probably a very particular version you care about. So while you can always reshare copylefted code, the real value is in the timeliness and the binary package archives.
Not dismissing your point, but looking at the article, the bug is in Rust unsafe code. Which seems to me to show that the rest of the Rust code is fine, but the place where they turned off the static safety the language provides is exactly where they got bit.
Hey! Can't I just enjoy my schadenfreude in peace?
I guess the takeaway is that - doubly so - trusting Rust code to be memory safe simply because it is Rust isn't sensible. All its protections can simply be invalidated, and an end user would never know.
No non-embedded libc will actually return NULL in practice. Very, very little practical C code relies only on behavior the spec guarantees and would work with literally any compliant C compiler on any architecture, so I don’t find this particularly concerning.
Usefully handling allocation errors is very hard to do well, since it infects literally every error handling path in your codebase. Any error handling that calls a function that might return an indirect allocation error needs to not allocate itself. Even if you have a codepath that speculatively allocates and can fallback, the process is likely so close to ruin that some other function that allocates will fail soon.
It’s almost universally more effective (not to mention easier) to keep track of your large/variable allocations proactively, and then maintain a buffer for the little “normal” allocations that should have an approximately constant bound.
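In practice that looks something like this (a sketch; the names are made up):

    /* Sketch: guard only the big, variable-size allocation; treat the
       small bookkeeping ones as effectively infallible. */
    #include <stdio.h>
    #include <stdlib.h>

    static void *xmalloc(size_t n) {
        void *p = malloc(n);
        if (!p) abort();  /* no useful recovery this deep in the stack */
        return p;
    }

    int load_blob(size_t size) {
        /* The one allocation whose size the input controls: check it
           and degrade gracefully instead of threading ENOMEM through
           every error path. */
        unsigned char *buf = malloc(size);
        if (!buf) {
            fprintf(stderr, "blob too large (%zu bytes), skipping\n", size);
            return -1;
        }
        /* ... fill and process buf, using xmalloc() for small stuff ... */
        free(buf);
        return 0;
    }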
This is just a Linux ecosystem thing. Other full size operating systems do memory accounting differently, and are able to correctly communicate when more memory is not available.
There are functions on many C allocators that are explicitly for non-trivial allocation scenarios, but what major operating system malloc implementation returns NULL? MSVC’s docs reserve the right to return NULL, but the actual code is not capable of doing so (because it would be a security nightmare).
I hack on various C projects on a linux/musl box, and I'm pretty sure I've seen musl's malloc() return 0, although possibly the only cases where I've triggered that fall into the 'unreasonably huge' category, where a typo made my enormous request fail some sanity check before even trying to allocate.
> There are functions on many C allocators that are explicitly for non-trivial allocation scenarios, but what major operating system malloc implementation returns NULL?
Solaris (and FreeBSD?) have overcommitting disabled by default.
Solaris, AIX, *BSD and others do not offer overcommit, which is a Linux construct, and they all require enough swap space to be available. Installation manuals provide explicit guidelines on the swap partition sizing, with the rule of thumb being «at least double the RAM size», but almost always more in practice.
That is the conservative design used by several traditional UNIX systems for anonymous memory and MAP_PRIVATE mappings: the kernel accounts for, and may reserve, enough swap to back the potential private pages up front. Tools and docs in the Solaris and BSD family talk explicitly in those terms. An easy way to test it out in a BSD would be disabling the swap partition and trying to launch a large process – it will get killed at startup, and it is not possible to modify this behaviour.
Linux’s default policy is the opposite end of that spectrum: optimistic memory allocation, where allocations and private mappings can succeed without guaranteeing backing store (i.e. swap), with failure deferred to fault time and handled by the OOM killer – that is what Linux calls overcommit.
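You can watch the policy difference with a toy program; under Linux’s default heuristic the reservations below typically all succeed, while strict accounting (vm.overcommit_memory=2, or the Solaris/AIX-style reservation described above) makes malloc start returning NULL at the commit limit:

    /* Sketch: reserve far more address space than RAM+swap without
       touching it. Default Linux happily hands it all out; faulting
       the pages in later is what can summon the OOM killer. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t chunk = (size_t)1 << 30;  /* 1 GiB per request */
        for (int i = 0; i < 512; i++) {
            char *p = malloc(chunk);
            if (!p) {
                printf("malloc returned NULL after %d GiB\n", i);
                return 0;
            }
            /* Deliberately never written to: no pages faulted in. */
        }
        printf("reserved 512 GiB without touching a page\n");
        return 0;
    }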
Please explain how it adds to the discussion about different ways to broaden supported Rust target architectures. Because both have the word Rust in them?