As a de facto maintainer of an obscure open source game, I see devs come and go. I just merge all the worthwhile contributions. Some collaborators go pretty deep with their features, with a variety of coding styles, in a mishmash of C and C++. I'm not always across the implementation details, but in the back of my mind I'm thinking, man, anyone could just code up some real nasty backdoor and the project would be screwed. Luckily the game is so obscure and the attack surface minuscule, but the thought did stop me from any temptation to sign Windows binaries out of a sense of munificence.
This xz backdoor is just the most massive nightmare, and I really feel for the og devs, and anyone who got sucked in by this.
> but in the back of my mind I'm thinking, man, anyone could just code up some real nasty backdoor and the project would be screwed
That's true of course, but it's not a problem specific to software. In fact, I'm not even sure it's a "problem" in a meaningful sense at all.
When you're taking a walk on a forest road, any car that comes your way could just run you over. Chances are the driver would never get caught. There is nothing you can do to protect yourself against it. Police aren't around to help you. This horror scenario, much worse than a software backdoor, is actually the minimum viable danger that you need to accept in order to be able to do anything at all. And yes, sometimes it does really happen.
But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has. The real problem is the fantasy that if code review was just a little tighter, if more linters, CI mechanisms, and pattern matching were employed, if code signing was more widespread, if we verified people's identities, and so on, then such scenarios could be prevented. It's symptomatic of the insane Silicon Valley vision that the world can and should be managed and controlled at every level of detail. Which is a "cure" that would be much worse than any disease it could possibly prevent.
> When you're taking a walk on a forest road, any car that comes your way could just run you over. Chances are the driver would never get caught. There is nothing you can do to protect yourself against it.
Sure you can. You can be more vigilant and careful when walking near traffic. So maybe don't have headphones on, and engage all your senses on the immediate threats around you. This won't guarantee that a car won't run you over, but it reduces the chances considerably to where you can possibly avoid it.
The same can be said about the xz situation. All the linters, CI checks and code reviews couldn't guarantee that this wouldn't happen, but they sure would lower the chances that it does. Having a defeatist attitude that nothing could be done to prevent it, and that therefore all these development practices are useless, is not helpful for when this happens again.
The major problem in the xz case was that it had two maintainers: one who was mostly absent, and another who gradually gained control over the project and introduced the malicious code. No automated checks could've helped here, where there were no code reviews and no oversight over what got merged at all. But had there been some oversight and thorough review from at least one other developer, the chances of this happening would have been lower.
It's important to talk about probabilities here instead of absolute prevention, since it's possible that even in the strictest of environments, with many active contributors, malicious code could still theoretically be merged in. But without any of it, this approaches 100% (minus the probability of someone acting maliciously to begin with, having their account taken over, etc.).
It's not defeatist to admit and accept that some things are ultimately out of our control. And more importantly, that any attempt to increase control over them comes with downsides.
An open source project that imposes all kinds of restrictions and complex bureaucratic checks before anything can get merged is a project I wouldn't want to participate in. I imagine many others might feel the same. So perhaps the loss from such measures would be greater than the gain. Without people willing to contribute their time, open source cannot function.
> It's not defeatist to admit and accept that some things are ultimately out of our control.
But that's the thing: deciding how software is built and which features are shipped to users _is_ under our control. The case with xz was exceptionally bad because of the state of the project, but in a well maintained project having these checks and oversight does help with delivering better quality software. I'm not saying that this type of sophisticated attack could've been prevented even if the project was well maintained, but this doesn't mean that there's nothing we can do about it.
> And more importantly, that any attempt to increase control over them comes with downsides.
That's a subjective opinion. I personally find linters and code reviews essential to software development, and if you think of them as being restrictions or useless bureaucratic processes that prevent you from contributing to a project then you're entitled to your opinion, but I disagree. The downsides you mention are simply minimum contribution requirements, and not having any at all would ultimately become a burden on everybody, lead to a chaotic SDLC, and to more issues being shipped to users. I don't have any empirical evidence to back this up, so this is also "just" my opinion based on working on projects with well-defined guidelines.
I'm sure you would agree with the Optimistic Merging methodology[1]. I'd be curious to know whether this has any tangible benefits as claimed by its proponents. At first glance, a project like https://github.com/zeromq/libzmq doesn't appear to have a more vibrant community than a project of comparable size and popularity like https://github.com/NixOS/nix, while the latter uses the criticized "Pessimistic Merging" methodology. Perhaps I'm looking at the wrong signals, but I'm not able to see a clear advantage of OM, while I can see clear disadvantages of it.
libzmq does have contribution guidelines[2], but a code review process is unspecified (even though it mentions having "systematic reviews"), and there are no testing requirements besides patches being required to "pass project self-tests". Who conducts reviews and when, or who works on tests is entirely unclear, though the project seems to have 75% coverage, so someone must be doing this. I'm not sure whether all of this makes contributors happier, but I sure wouldn't like to work on a project where this is unclear.
> Without people willing to contribute their time, open source cannot function.
Agreed, but I would argue that no project, open source or otherwise, can function without contribution guidelines that maintain certain quality standards.
> But that's the thing: deciding how software is built and which features are shipped to users _is_ under our control. The case with xz was exceptionally bad because of the state of the project, but in a well maintained project having these checks and oversight does help with delivering better quality software. I'm not saying that this type of sophisticated attack could've been prevented even if the project was well maintained, but this doesn't mean that there's nothing we can do about it.
In this particular case, having a static project or a single maintainer rarely releasing updates would actually be an improvement! The people/sockpuppets calling for more/faster changes to xz and more maintainers to handle that is exactly how we ended up with a malicious maintainer in charge in the first place. And assuming no CVEs or external breaking changes occur, why does that particular library need to change?
Honestly, this is why I think we should pay people for open source projects. It is a tragedy-of-the-commons issue: all of us benefit a lot from this free software, and it's done for free. Pay doesn't exactly fix the problems directly, but it does decrease the risk. Pay means people can work on these projects full time instead of on the side. Pay means it is harder to bribe someone. Pay also makes the people contributing feel better, and more like their work is meaningful. Importantly, pay signals to these people that we care about them. I think big tech should pay; we know the truth is that they'll pass the costs on to us anyway. I'd also be happy to pay taxes for it, but that's probably harder. I'm not sure what the best solution is, and this is clearly only a part of a much larger problem, but I think it is very important that we actually talk about how much value OSS has. If we're going to talk about how money represents the value of work, we can't just ignore how much value is generated from OSS and only talk about what's popular and well known. There is a ton of critical infrastructure in every system you could think of (traditional engineering, politics, anything) that is unknown. We shouldn't just pay for things that are popular; we should definitely pay for things that are important. Maybe the conversation can be different when AI takes all the jobs (lol)
I get why, in principle, we should pay people for open source projects, but I guess it doesn't make much of a difference when it comes to vulnerabilities.
First off, there are a lot of ways to bring someone to "the dark side". Maybe it's blackmail. Maybe it's ideology ("the greater good"). Maybe it's just pumping their ego. Or maybe it's money, but not that much, and extra money can be helpful. There is a long history of people spying against their country or hacking for a variety of reasons, even if they had a job and a steady paycheck. You can't just pay people and expect them to be 100% honest for the rest of their life.
Second, most (known) vulnerabilities are not backdoors. As any software developer knows, it's easy to make mistakes, and this goes for vulnerabilities too. Even as a paid software developer, you can definitely mess up a function (or method) and accidentally introduce an off-by-one vulnerability, forget to properly validate inputs, or reuse a supposedly one-time cryptographic quantity.
I think it does make a difference when it comes to vulnerabilities and especially infiltrators. You're doing these things as a hobby. Outside of your real work. If it becomes too big for you it's hard to find help (exact case here). How do you pass on the torch when you want to retire?
I think money can help alleviate pressure from both your points. No one says that money makes people honest. But if it's a full-time job, you are less likely to just quickly look and say LGTM. You make fewer mistakes when you're less stressed or tired. It's harder to be corrupted, because people would rather have a stable job and career than a one-time payout. Pay also makes it easier to trace.
Again, it's not a 100% solution. Nothing will be! But it's hard to argue that this wouldn't alleviate significant pressure.
The difference is that software backdoors can affect billions of people. That driver on the road can't affect too many without being caught.
In this case, had they been a bit more careful with performance, they could have affected millions of machines without being caught. There aren't many cases where a lone wolf can do so much damage outside of software.
>But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has.
Wholeheartedly agree. Fundamentally, we all assume that people are operating with good will and establish trust with that as the foundation (granted to varying degrees depending on the culture, some are more trusting or skeptical than others).
It's also why building trust takes ages and destroying it only takes seconds, and why violations of trust at all are almost always scathing to our very soul.
We certainly can account for bad actors, and depending on what's at stake (eg: hijacking airliners) we do forego assuming good will. But taking that too far is a very uncomfortable world to live in, because it's counter to something very fundamental for humans and life.
> But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has.
> It's symptomatic of the insane Silicon Valley vision that the world can and should be managed and controlled at every level of detail. Which is a "cure" that would be much worse than any disease it could possibly prevent.
My personal opinion is that if something banned is going to find a way to conduct itself in secret anyway (at high risk and cost), it is always better to just suck it up, permit it, and regulate it in the open instead. Trafficked people are far easier to discover in an open market than in a black one. The effects of anything (both positive and negative) are far easier to assess when the thing being assessed is legal.
Should we ban cash because it incentivizes mugging and pickpocketing and theft? (I've been the victim of pickpocketing. The most valuable thing they took was an irreplaceable military ID I carried (I was long since inactive)... Not the $25 in cash in my wallet at the time.) I mean, there would literally be far fewer muggings if no one carried cash. Is it thus the cash's "fault"?
Captain's Log: This entire branch of comments responding to OP is not helping advance humanity in any significant way. I would appreciate my statement of protest being noted by the alien archeologists who find these bits in the wreckage of my species.
I think the role of drunk driving as an oil that keeps society lubricated should not be understated.
Yes, drunk driving kills people and that's unacceptable. On the other hand, people going out to eat and drink with family, friends, and co-workers after work helps keep society functioning, and the police respect this reality because they don't arrest clearly-drunk patrons coming out of restaurants to drive back home.
This is such a deeply American take that I can't help but laugh out loud. It's like going to a developing nation and saying that, while emissions from two-stroke scooters kill people, there's no alternative for getting your life's errands done.
It certainly isn't just America, though we're probably the most infamous example.
I was in France for business once in the countryside (southern France), and the host took everyone (me, their employees, etc.) out to lunch. Far as I could tell it was just an everyday thing. Anyway, we drove about an hour to a nearby village and practically partied for a few hours. Wine flowed like a river. Then we drove back and we all got back to our work. So not only were we drunk driving, we were drunk working. Even Americans usually don't drink that hard; the French earned my respect that day, they know how to have a good time.
Also many times in Japan, I would invite a business client/supplier or a friend over for dinner at a sushi bar. It's not unusual for some to drive rather than take the train, and then of course go back home driving after having had lots of beer and sake.
Whether any of us like it or not, drunk driving is an oil that lubricates society.
Except they weren't irresponsible. We all drove back just fine, and we all went back to work just as competently as before like nothing happened.
It takes skill and maturity to have a good time but not so much that it would impair subsequent duties. The French demonstrated to me they have that down to a much finer degree than most of us have in America, so they have my respect.
This isn't to say Americans are immature, mind you. For every drunk driving incident you hear about on the news, hundreds of thousands if not millions of Americans drive home drunk without harming anyone for their entire lives. What I will admit is that Americans would still refrain from drinking so much during lunch when we have a work day left ahead of us; that's something we can take lessons from the French on.
Life is short, so those who can have more happy hours without compromising their duties are the real winners.
As someone who knows people who died in a crash with another drunk driver, it is hard for me to accept your view. Certainly, at a bare minimum, the penalties for drunk driving that results in fatality should be much harsher than they are now -- at that point there is hard empirical evidence that you cannot be trusted to have the "skill and maturity" necessary for driving -- but we can't even bring ourselves to do that, not even for repeat offenders.
Eventually I am optimistic that autonomous driving will solve the problem entirely, at least for those who are responsible drivers. In an era of widely available self-driving cars, if you choose to drive drunk, then that is an active choice, and no amount of "social lubrication" can excuse such degenerate behavior.
I think the real problem is that people are really poor at assessing risk. And I think we can make some headway there, educationally, and it might actually affect how people reason around drunk driving (or their friends, assuming they still have their faculties).
Let's take the example of driving home drunk without hurting anyone or having an accident. Suppose that (being optimistic) there's a 1% chance of an accident and a 0.1% chance of causing a fatality (including to self). Seems like an easy risk to take, right? But observe what happens if you drive home drunk 40 times:
99% chance of causing no accident each time, to the 40th power = 0.99^40 is roughly 67% chance that none of those 40 times results in an accident. 80 times? 45% chance of no accident. Now you're talking about flipping a coin to determine whether you cause an accident (potentially a fatal one, we'll get to that) at all over 80 attempts. (I feel like that is optimistic.)
If I have a 99.9% chance of not killing someone when drunk-driving one time, after 80 times I have a 92% chance of not killing someone (that is, an 8% chance of killing someone). Again, this seems optimistic.
Try tweaking the numbers to a 2% chance of an accident and a 1.2% chance of causing a fatality.
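The compounding is easy to check for yourself. A quick sketch in Python, using the per-trip probabilities assumed above (illustrative numbers, not real accident statistics):

```python
def cumulative_risk(per_trip_risk, trips):
    """Chance of at least one bad outcome over repeated independent trips."""
    return 1 - (1 - per_trip_risk) ** trips

# Assumed 1% accident risk and 0.1% fatality risk per drunk trip:
print(f"{cumulative_risk(0.01, 40):.0%}")    # ~33% chance of an accident over 40 trips
print(f"{cumulative_risk(0.01, 80):.0%}")    # ~55% over 80 trips
print(f"{cumulative_risk(0.001, 80):.0%}")   # ~8% chance of a fatality over 80 trips
```

Plug in the 2% and 1.2% figures and the picture gets considerably grimmer.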
Anyway, my point is that people are really terrible at evaluating the whole "re-rolling the dice multiple times" angle, since a single hit is a HUGE, potentially life-changing loss.
(People are just as bad at evaluating success risk, as well, for similar reasons- a single large success is a potentially life-changing event)
I'm certainly not trying to understate the very real and very serious suffering that irresponsible drunk drivers can and do cause. If any of this came off like that then that was never my intention.
When it comes to understanding drunk driving and especially why it is de facto tolerated by society despite its significant problems, it's necessary to consider the motivators and both positive and negative results. Simply saying "they are all irresponsible and should stop" and such with a handwave isn't productive. After all, society wouldn't tolerate a significant problem if there wasn't a significant benefit to doing so.
One of the well known effects of alcohol is impaired judgment. You're expecting people with some level of impaired judgment to make correct judgment calls. Skill and maturity can help, but are not a solution to that fundamental problem.
Would you be okay with a surgeon operating on you in the afternoon drinking at lunch and working on you later while impaired? Is it okay for every person and job to be impaired, regardless of the responsibility of their situation? If not, why is operating a few thousand pound vehicle in public that can easily kill multiple people when used incorrectly okay?
If it's American to make counterarguments based on reason instead of ridicule, then hell, I'd much prefer to be an American than whatever the hell your judgmental buttocks is doing.
And no, there is currently no substitute for a legal removal of your repression so that you can, say, get on with some shagging. I would love to see a study trying to determine what percentage of humans have only come into existence because of a bit of "social lubrication"
You can laugh out loud all you want, but there are mandatory parking minimums for bars across the USA.
Yes, bars have parking lots, and a lot of spaces.
The intent is to *drive* there, drink and maybe eat, and leave in some various state of drunkenness. Why else would the spacious parking lots be required?
What is more depressing is how we can acknowledge that reality and continue to do absolutely nothing to mitigate it but punish it, in many cases.
The more people practically need to drive, the more people will drive drunk and kill people, yet in so many cases we just sort of stop there and say "welp, guess that's just nature" instead of building viable alternatives. The other theoretical possibility, however, is that if people didn't need to drive, they might end up drinking more.
Indeed, that "bias" is a vital mechanism that enables societies to function. Good luck getting people to live together if they look at passersby thinking "there is a 0.34% chance that guy is a serial killer".
> What "cure" would you recommend?
Accepting that not every problem can, or needs to be, solved. Today's science/tech culture suffers from an almost cartoonish god complex seeking to manage humanity into a glorious data-driven future. That isn't going to happen, and we're better off for it. People will still die in the future, and they will still commit crimes. Tomorrow, I might be the victim, as I already have been in the past. But that doesn't mean I want the insane hyper-control that some of our so-called luminaries are pushing us towards to become reality.
The late author of ZeroMQ, Pieter Hintjens, advocated for a practice called Optimistic Merging[1], where contributions would be merged immediately, without reviewing the code or waiting for CI results. So your approach of having lax merging guidelines is not far off.
While I can see the merits this has in building a community of contributors who are happy to work on a project, I always felt that it lets the project grow without a clear vision or direction, and ultimately places too much burden on maintainers to fix the contributions of others in order to bring them up to some common standard (which I surely expect any project to have, otherwise the mishmash of styles and testing practices would make working on the project decidedly not fun). It also delays the actual code review, which Pieter claimed does happen, to some unknown point in the future, when it may or may not be exhaustive, and when it's not clear who is actually responsible for conducting it or fixing any issues. It all sounds like a recipe for chaos where there is no control over what eventually gets shipped to users. But then again, I've never worked on ZeroMQ or another project that adopted these practices, so perhaps you or someone else here can comment on what the experience is like.
And then there's this issue of malicious code being shipped. This is actually brought up by a comment on that blog post[2], and Pieter describes exactly what happened in the xz case:
> Let's assume Mallory is patient and deceitful and acts like a valid contributor long enough to get control over a project, and then slowly builds in his/her backdoors. Then careful code review won't help you. Mallory simply has to gain enough trust to become a maintainer, which is a matter of how, not if.
And concludes that "the best defense [...] is size and diversity of the community".
Where I think he's wrong is that careful code review _can_ indeed reduce the chances of this happening. If all contributions are reviewed thoroughly, regardless of whether they're authored by a trusted or an external contributor, then strange behavior, and commits that claim to do one thing but actually do another, are more likely to be spotted sooner rather than later. While OM might lead to greater community size and diversity, which I think is debatable considering how many projects have a thriving community of contributors while also having strict contribution guidelines, it doesn't address how or when a malicious patch would be caught. If nobody is in charge of reviewing code, there are no testing standards, and maintainers have the additional work of keeping some kind of control over the project's direction, how does this actually protect against this situation?
The problem with xz wasn't a small community; it was *no* community. A single malicious actor got control of the project, and there was little oversight from anyone else. The project's contribution guidelines weren't a factor in its community size, and this would've happened whether it used OM or not.
> The problem with xz wasn't a small community; it was no community. A single malicious actor got control of the project, and there was little oversight from anyone else.
So because of this a lot of other highly used software was importing and depending on unreviewed code. It's scary to think how common this is. The attack surface seems unmanageable. There need to be tighter policies around what dependencies are included, ensuring that they meet some kind of standard.
> There need to be tighter policies around what dependencies are included, ensuring that they meet some kind of standard.
This is why it's good practice to minimize the number of dependencies, and to add dependencies only when absolutely required. Taking this a step further, doing a cursory review of each dependency and checking the transitive dependencies it introduces is also beneficial. Of course, it's impractical to do this for the entire dependency tree, and at some point we have to trust that the projects we depend on follow the same methodology, but a lax attitude toward dependency management is part of the problem that caused the xz situation.
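To illustrate the transitive point, here's a toy walk over a made-up, in-memory dependency graph (all package names are invented); declaring "one" dependency can quietly pull in a long tail:

```python
# Hypothetical package -> direct dependencies mapping.
DEPS = {
    "myapp":  ["libfoo"],
    "libfoo": ["libbar", "libbaz"],
    "libbar": ["libqux"],
    "libbaz": [],
    "libqux": [],
}

def transitive(pkg, seen=None):
    """All direct and transitive dependencies of pkg."""
    seen = set() if seen is None else seen
    for dep in DEPS.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive(dep, seen)
    return seen

# Adding a single dependency on libfoo actually brings in four packages:
print(sorted(transitive("myapp")))  # ['libbar', 'libbaz', 'libfoo', 'libqux']
```

Real package managers expose the same information (lockfiles, `--tree` style commands), but the point is to actually look at it before merging the dependency.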
One thing that I think would improve this is "maintenance scores": a service that would scan projects on GitHub and elsewhere, and assign each project a score indicating how well maintained it is. It would take into account the number of contributors in the past N months, development activity, community size and interaction, etc. Projects could showcase this in a badge in their READMEs, and it could be integrated into package managers and IDEs, which could warn users when they add a dependency with a low maintenance score. Hopefully this would dissuade people from using poorly maintained projects, and encourage them to use better maintained ones, or to avoid the dependency altogether. It would also encourage maintainers to improve their score, and projects that are struggling but have a large user base would gain visibility as potentially more vulnerable to this type of attack. And then we could work towards figuring out how to provide the help and resources they need to improve.
Does such a service/concept exist already? I think GitHub should introduce something like this, since they have all the data to power it.
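For what it's worth, a scoring heuristic like that could be sketched roughly as follows. The metrics, weights, and saturation constants below are entirely invented for illustration; they are not any real service's formula:

```python
from dataclasses import dataclass

@dataclass
class RepoStats:
    active_contributors_6mo: int  # distinct committers in the last 6 months
    commits_6mo: int
    issues_closed_6mo: int
    issues_opened_6mo: int

def saturate(x: float, scale: float) -> float:
    """Map [0, inf) to [0, 1) so no single metric can dominate the score."""
    return x / (x + scale)

def maintenance_score(s: RepoStats) -> float:
    """Crude 0-100 maintenance score from recent activity signals."""
    contributors = saturate(s.active_contributors_6mo, 3)
    activity = saturate(s.commits_6mo, 50)
    responsiveness = saturate(s.issues_closed_6mo, max(s.issues_opened_6mo, 1))
    return round(100 * (0.4 * contributors + 0.3 * activity + 0.3 * responsiveness), 1)

# An actively maintained project scores far above a one-person, near-idle one:
print(maintenance_score(RepoStats(20, 400, 90, 100)))  # 75.7
print(maintenance_score(RepoStats(1, 5, 2, 30)))       # 14.6
```

The hard part, as the reply below notes, is that any automatable proxy like this can be gamed; the sketch only shows what the plumbing might look like.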
That's not an effective idea, for the same reason that lines of code is not a good measure of productivity. It's an easy measure to automate, but it's purely performative, as it doesn't score the qualitative value of any of the maintenance work. At best it encourages you to use only popular projects, which is its own danger (a software monoculture is cheaper to attack) without actually resolving the danger: this attack was sufficiently sophisticated and underhanded that it could have slipped through almost any code review.
One real issue is that xz’s build system is so complicated that it’s possible to slip things in which is an indication that the traditional autoconf Linux build mechanism needs to be retired and banned from distros.
But even that’s not enough because an attack only needs to succeed once. The advice to minimize your dependencies is an impractical one in a lot of cases (clearly) and not in your full control as you may acquire a surprising dependency due to transitiveness. And updating your dependencies is a best practice which in this case actually introduces the problem.
We need to focus on real ways to improve the supply chain, e.g. having repeatable, idempotent builds with signed chains of trust that are backed by real identities that can be prosecuted and burned. For example, it would be a pretty effective counter-incentive for talent if we could permanently ban this person from ever working on lots of projects. That's typically how humans deal with members of a community who misbehave, and we don't have a good digital equivalent for software development. Of course, that's also dangerous, as blackball environments tend to become weaponized.
> We need to focus on real ways to improve the supply chain, e.g. having repeatable, idempotent builds with signed chains of trust that are backed by real identities that can be prosecuted and burned.
So, either no open source development, because nobody will vouch to that degree for others, or absolutely no anonymity, and you'll have to worry about everything you provide, because if you screw up and introduce an RCE, all of a sudden you'll have a bunch of people and companies looking to say it was on purpose so they don't have to own up to any of their own poor practices that allowed it to actually be executed on.
You don't need vouching for anyone. mDL is going to be a mechanism for having a government authority vouch for your identity. Of course a state actor like this one can forge an identity, but that forgery at least gives a starting point for the investigation to try to figure out who the individual is. There are other technical questions about how you verify that the identity really is tied in some real way to the user at the other end (e.g. not a stolen identity), but there are things coming down the pipe that will help with that (i.e. authenticated chains of trust for hardware that can attest the identity was signed on the given key in person, where you require that attestation).
As for people accusing you of an intentional RCE, that may be a hypothetical scenario but I doubt it’s very real. Most people have a very long history of good contributions and therefore have built up a reputation that would be compared against the reality on the ground. No one is accusing Lasse Collin of participating in this even though arguably it could have been him all along for what anyone knows.
It doesn’t need to be perfect but directionally it probably helps more than it hurts.
All that being said, this clearly seems like a state actor which changes the calculus for any attempts like this since the funding and power is completely different than what most people have access to and likely we don’t have any really good countermeasures here beyond making it harder for obfuscated code to make it into repositories.
Your idea sounds nice in theory, but it's absolutely not worth the amount of effort. To put it in perspective, think about the xz case: how would the number of contributors have prevented the release artifact (tarball) from being modified? Because other people would have used the tarball? Why? The only ones that use tarballs are the ones redistributing the code, and they will not audit it. The ones that could audit it would look at the version control repository, not at the tarballs. In other words, your solution wouldn't even be effective at potentially discovering this issue.
The only thing that would effectively do this is for people to stop trusting build artifacts and instead package directly from public repositories. You could figure out whether someone maliciously modified the release artifact by comparing it against the tagged version, but at that point, why not just shallow-clone the entire thing and be done with it?
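That tarball-vs-tag comparison is mechanical enough to automate: check out the release tag on one side, unpack the published tarball on the other, and diff the file hashes. A minimal sketch (the directory layout is hypothetical; you'd supply real paths from `git clone` + `tar -x`):

```python
import hashlib
import os

def tree_digest(root):
    """Map of relative path -> sha256 for every file under root, skipping VCS metadata."""
    digests = {}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # prune VCS dirs in place
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def diff_trees(tag_checkout, tarball_unpacked):
    """Return (files only in the tarball, files whose contents differ)."""
    a, b = tree_digest(tag_checkout), tree_digest(tarball_unpacked)
    only_in_tarball = sorted(set(b) - set(a))  # the xz backdoor hid in a tarball-only m4 file
    changed = sorted(p for p in a.keys() & b.keys() if a[p] != b[p])
    return only_in_tarball, changed
```

Release tarballs legitimately contain generated files (configure scripts, m4 macros) that aren't in git, which is exactly the gap the xz attacker exploited; a check like this at least forces those extras to be enumerated and looked at.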
Even if you mandated two code reviewers per merge, the attacker can just have three fake personas backed by the same single human and use them to author and approve malware.
Also, in a more optimistic scenario without sockpuppets, it's unlikely that malicious and underhanded contributions will be caught by anyone that isn't a security researcher.
Writing code like that is actually an art, but it's not impossible, and it will dodge cursory inspection. And it's possible to construct it with plausible deniability built in.
I'm not sure why my point is not getting across...
I'm not saying that these manual and automated checks make a project impervious to malicious actors. Successful attacks are always a possibility even in the strictest of environments.
What they do provide is a _chance reduction_ of these attacks being successful.
Just like following all the best security practices doesn't produce 100% secure software, neither does following best development practices prevent malicious code from being merged in. But this doesn't mean that it's OK to ignore these practices altogether, as they do have tangible benefits. I argue that projects that have them are better prepared against this type of attack than those that do not.
It never ceases to amaze me what great lengths companies go to securing the perimeter of the network, while their engineering staff just routinely brew install casks or vi/emacs/vscode/etc. extensions.
Rust is arguably the programming language and/or community with the most secure set of defaults that are fairly impossible to get out of, but even at “you can’t play games with pointers” levels of security-first, the most common/endorsed path for installing it (that I do all the time because I’m a complete hypocrite) is:
and that’s just one example, “yo dawg curl this shit and pipe it to sh so you can RCE while you bike shed someone’s unsafe block” is just muscle memory for way too many of us at this point.
It’s worse than that. build.rs is in no way sandboxed, which means you can inject all sorts of badness into downstream dependencies, not to mention do things like steal crypto keys from developers. It’s really a sore spot for the Rust community (to be fair, they’re not uniquely worse, but that’s a pretty poor standard to shoot for).
> yo dawg curl this shit and pipe it to sh so you can RCE while you bike shed someone’s unsafe block
Ahhh this takes me back to... a month ago...[0]
At least rustup's script wraps everything in a main function so you won't run a partial command, but that still doesn't mean there aren't other dangers. I'm more surprised by how adamant people are that there's no problem. You can see elsewhere in the thread that even piping to man could (who knows!) pose a risk. Especially when you consider how trivial the fix is, and that people are just copy-pasting the command anyway...
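The function-wrapping defence mentioned above is worth seeing concretely. A minimal sketch (the body is a placeholder, not rustup's actual installer): if the download is cut off mid-stream, sh sees an unterminated function definition and executes nothing, instead of running a half-written command like `rm -rf /tmp/instal`:

```shell
# Sketch of the "wrap everything in a function" pattern used by
# curl|sh installers such as rustup's. Nothing executes while the
# function body streams in; a truncated script simply fails to parse.
main() {
    echo "installing..."
    # ...real install work would go here (placeholder)...
    echo "done"
}

# Only this final line, arriving intact, triggers execution.
main "$@"
```

Note this only guards against truncation, not against a malicious server or a compromised script, which is the larger objection to the curl|sh pattern.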
It never ceases to amaze me how resistant people are to very easily solvable problems.
To be honest, it was just a matter of time until we found out our good-faith assumptions were being exploited.
Behaviors like "break fast and fix early" or "who wants to take over my project?" just ask for trouble, and yet it's unthinkable to live without them, because open source is an unpaid labor of love.
Sad to see this happen, but I'm not surprised. I hope we get better tools (also open source) to combat such bad actors.
Thanks to all the researchers out there who try to protect us all.
Don't you think that something as simple as a CLA (Contributor License Agreement) would prevent this type of thing? Of course it creates noise in the open source contribution funnel, but let's be honest: if you are dedicating yourself to something like contributing to OSS, signing a CLA should not be something unrealistic.
That's stretching the traditional definition. Usually CLAs are solely focused on addressing the copyright conditions and intellectual property origin of the contributed changes. Maybe just "contributor agreement" or "contributor contract" would describe that.
What exactly is a CLA going to do to a CCP operative (as appears to be the case with xz)? Do you think the party is going to extradite one of their state sponsored hacking groups because they got caught trying to implement a backdoor?
Or do you think they don’t have the resources to fake an identity?
There was a link in this thread pointing to an analysis of commit times, and it kinda checks out. Adding some cultural and outside-world context, I can guess which alphabet this three-, four-, or six-letter agency uses to spell its name, at least.
Case closed, you are right... it could of course make things a bit more difficult for someone not backed by a state sponsor. But if that's the case here, you are right.