AnthonyMouse's comments | Hacker News

It also seems to omit the possibility that the thing could be privately operated but not for profit.

Let's Encrypt is a solid example of something you could reasonably model as "tragedy of the commons" (who is going to maintain all this certificate verification and issuance infrastructure?) but then it turns out the value of having it is a million times more than the cost of operating it, so it's quite sustainable given a modicum of donations.

Free software licenses are another example in this category. Software frequently has a much higher value than its development cost, and incremental improvements decentralize well. So a license that lets you use the software for free but requires you to contribute back improvements tends to work: people find something that would work for them except for one missing piece, and it's cheaper to add that piece themselves, or pay someone to add it, than to pay for the whole thing to be developed from scratch.


I get the feeling it's the combination of Schelling points and surplus. If everyone else is being pro-social, i.e. there is a culture of it, and people aren't so hard up that they can't reasonably afford to do the same, then that's what happens, either by itself (Hofstadter's theory of superrationality) or with nothing more than light social pressure.

But if a significant fraction of the population is barely scraping by then they're not willing to be "good" if it means not making ends meet, and when other people see widespread defection, they start to feel like they're the only one holding up their end of the deal and then the whole thing collapses.

This is why the tendency for people to propose rent-seeking middlemen as a "solution" to the tragedy of the commons is such a diabolical scourge. It extracts the surplus that would allow things to work more efficiently in their absence.


The thing that annoys me more is the singular focus on memory safety as if nothing else matters. For example, by most definitions PHP is a "memory safe" language, but it's also full of poor design choices and the things written in it have a disproportionate number of security vulnerabilities. JavaScript is also classically modeled as a gelatinous mass of smoldering tires and npm seems to have been designed for the purpose of carrying out supply chain attacks.

So then we see an enormous amount of effort being spent to try to replace everything written in C with Rust when that level of effort should have been able to e.g. come up with something which is easy enough for ordinary people to use that it could plausibly displace WordPress but has a better security posture. Or improve the various legacy issues with distribution package managers so that people stop avoiding them even for popular packages in favor of perilous kludges like npm and Docker.


> JavaScript is also classically modeled as a gelatinous mass of smoldering tires

TypeScript exists? So I'm not too sure that everyone is focusing entirely on memory safety...

> So then we see an enormous amount of effort being spent to try to replace everything written in C with Rust when that level of effort should have been able to e.g. come up with something which is easy enough for ordinary people to use that it could plausibly displace WordPress but has a better security posture.

I feel like this is somewhat... inconsistent? At the risk of oversimplifying a bit (or more), Rust is "something which is easy enough for ordinary people to use that it could plausibly displace [C/C++] but has a better security posture" (not saying that it's the only option, of course). So now that all that effort has been expended in producing Rust, you want to just... forgo applying the solution and redirect that effort to working on solutions to other problems? What happens when you come up with solutions to those? Drop those solutions on the floor as well in favor of solving yet other issues?

I think another explanation for allocation of effort here is due to the difference between creating a solution and applying a solution. At the risk of oversimplifying yet again, "replace C with Rust" is applying a known solution with known benefits/drawbacks to a known problem. Can you say the same about "[i]mprov[ing] the various legacy issues with distribution package managers so that people stop avoiding them even for popular packages in favor of perilous kludges like npm and Docker", let alone coming up with an easy-to-use more secure WordPress replacement?


> TypeScript exists?

TypeScript is JavaScript with a moderate improvement to one of its many flaws. An actual solution would look like choosing/developing a decent modern scripting language and getting the web standards people to add it to browsers and have access to the DOM, which would in turn cause that to be the first language novices learn and temper the undesirably common practice of people using JavaScript on the back end because it's what they know.

> Rust is "something which is easy enough for ordinary people to use that it could plausibly displace [C/C++] but has a better security posture"

It's kind of the opposite of that. It's something that imposes strict constraints which enables professional programmers to improve the correctness of their software without sacrificing performance. But it does that by getting in your way on purpose. It's not an easy thing if you're new. And there's a place for that, but it's an entirely different thing.
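
A minimal sketch of the kind of friction meant by "getting in your way on purpose" (illustrative example only, not anyone's production code):

    fn main() {
        let mut names = vec![String::from("a"), String::from("b")];

        // The "obvious" ordering is rejected at compile time (E0502), because
        // pushing may reallocate the vector while a shared borrow is still live:
        // let first = &names[0];
        // names.push(String::from("c"));
        // println!("{first}");

        // The borrow checker forces you to reorder so no aliasing can occur:
        names.push(String::from("c"));
        let first = &names[0];
        println!("{first}");
    }

That discipline is valuable to a professional chasing correctness without losing performance, but it's exactly the friction a novice throwing together a website doesn't want.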

The problem with WordPress isn't that it's designed for performance over security. It's not fast, and a replacement with a better design could easily improve performance while doing significantly more validation. And it's full of low-hanging fruit in terms of just removing a lot of the legacy footguns.

> So now that all that effort has been expended in producing Rust, you want to just... forgo applying the solution and redirect that effort to working on solutions to other problems?

In general when you come up with some new construction methods that are better able to withstand earthquakes, you apply them whenever you build a new building, and maybe to some specific buildings that are especially important or susceptible to the problem, but it's not worth it to raze every building in the city just to build them again with the new thing. After all, what happens when you get the new new thing? Start all over again, again?


> TypeScript is JavaScript with a moderate improvement to one of its many flaws.

I'm certainly not going to say that nothing better could emerge, but nevertheless it's effort towards improving something that isn't memory safety.

In other words, I don't really agree that there's a "singular focus" on memory safety. Memory safety rewrites/projects get headlines, absolutely, but that doesn't mean everyone else has dropped what they were doing. Generally speaking, different groups, different projects, etc.

> It's kind of the opposite of that.

I don't think I quite agree? What I was thinking is that there have been efforts to make memory-safe dialects/variants/etc. of C/C++, but none of them really got significant traction in the domains Rust is now finding so much success in. I'm not saying this is because Rust is easy, but (at least partially) because it took concepts from those previous efforts and made them easy enough to be accessible to ordinary devs, and as a result Rust could become a plausible more-secure replacement for C/C++ where those earlier efforts could not.

> The problem with WordPress isn't that it's designed for performance over security. It's not fast, and a replacement with a better design could easily improve performance while doing significantly more validation. And it's full of low-hanging fruit in terms of just removing a lot of the legacy footguns.

Sure, and I'm not denying that. My point is just that unlike Rust vs. C/C++, as of this moment we don't know what an analogous plausible replacement for WordPress could be (or at least I don't know; perhaps you're more in-the-know than I am). Again, it's the difference between having a plausible solution for a problem in hand vs. sitting at the drafting desk with some sketches.

> In general when you come up with some new construction methods that are better able to withstand earthquakes, you apply them whenever you build a new building, and maybe to some specific buildings that are especially important or susceptible to the problem, but it's not worth it to raze every building in the city just to build them again with the new thing.

I feel like perhaps where the analogy breaks down is that unlike rebuilding a building, the Rust version of something can be built while the old version is still being used. Rust for Linux didn't require Linux and/or driver development to halt or for existing drivers to be removed in order to start and/or continue its development, Dropbox didn't have to tear out its old sync engine before starting work on the new one, etc.

And because of that, I feel like in general Rust is already mostly being used for new/important things? Or at the very least, I don't think "raze every building in the city just to build them again with the new thing" is an apt description of what is going on; it's more akin to building a "shadow" copy of a building in the same space using the new techniques with the possibility of swapping the "shadow" copy in at some point.

Or maybe I'm just too charitable here. Wouldn't be the first time.

> After all, what happens when you get the new new thing? Start all over again, again?

If the cost-benefit analysis points in that direction, sure, why not?


> Generally speaking, different groups, different projects, etc.

Well yes, but we're talking about the Rust people, which is why TypeScript was a red herring to begin with. The complaint is that they've got a new hammer and then start seeing nails everywhere.

> What I was thinking is that there have been efforts to make memory-safe dialects/variants/etc. of C/C++, but none of them really got significant traction in the domains Rust is now finding so much success in.

This was mostly because they didn't solve the performance problem. In the domains where that matters less, other languages did make significant inroads. Java, Python, etc. have significant usage in domains that before them were often C or C++.

> My point is just that unlike Rust vs. C/C++, as of this moment we don't know what an analogous plausible replacement for WordPress could be (or at least I don't know; perhaps you're more in-the-know than I am). Again, it's the difference between having a plausible solution for a problem in hand vs. sitting at the drafting desk with some sketches.

The primary thing WordPress needs is a fresh implementation that takes into account sound design principles the original never did, which at this point would mean compatibility-breaking changes: give each plugin its own namespace by default, have a sane permissions model, etc.

It doesn't require any great novelty, it's just a lot of work to re-implement a complex piece of software from scratch in a different language. But that's the analogous thing, with an analogous level of effort, to what's being proposed in rewriting a lot of software in Rust, where the existing code being replaced already has significantly fewer vulnerabilities than WordPress does.
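
To make the namespace/permissions idea concrete, here's a hypothetical sketch (the types and names are invented for illustration, not a real proposal or API):

    use std::collections::HashSet;

    // Capabilities a plugin must declare up front, instead of every plugin
    // getting full database and filesystem access by default.
    #[derive(Hash, PartialEq, Eq)]
    enum Capability {
        ReadPosts,
        SendHttp,
    }

    // Each plugin gets its own namespaced storage prefix and an explicit
    // capability set; the host checks capabilities before dispatching.
    struct PluginContext {
        namespace: String, // e.g. "plugins/seo-widget/"
        granted: HashSet<Capability>,
    }

    impl PluginContext {
        fn can(&self, cap: Capability) -> bool {
            self.granted.contains(&cap)
        }

        fn storage_key(&self, key: &str) -> String {
            // Keys are always prefixed, so plugins can't clobber each
            // other's options or core settings by accident.
            format!("{}{}", self.namespace, key)
        }
    }

    fn main() {
        let ctx = PluginContext {
            namespace: "plugins/seo-widget/".into(),
            granted: [Capability::ReadPosts].into_iter().collect(),
        };

        assert!(ctx.can(Capability::ReadPosts));
        assert!(!ctx.can(Capability::SendHttp));
        assert_eq!(ctx.storage_key("cache_ttl"), "plugins/seo-widget/cache_ttl");
    }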

> I feel like perhaps where the analogy breaks down is that unlike rebuilding a building, the Rust version of something can be built while the old version is still being used.

That has little to do with it. If you really wanted to rebuild every building in the city, you could build a new building on every available empty lot, move the people from existing buildings into the new buildings, raze the buildings they just moved out of to turn them into empty lots and then repeat until every building is replaced.

The reason that isn't done is that building a new thing from scratch requires a significant amount of resources, so it's something you only force outside of its natural replacement cycle if the incremental improvement is very large.

> If the cost-benefit analysis points in that direction, sure, why not?

The point is that it doesn't. Rewriting a large amount of old C code, especially if it doesn't have a lot of attack surface exposed to begin with, is a major cost with a smaller benefit. Meanwhile there are many other things that have medium costs and medium benefits, or large costs and large benefits, and those might be a better use of scarce resources.


> The complaint is that they've got a new hammer and then start seeing nails everywhere.

Ah, my apologies for misreading the original comment I replied to then.

> This was mostly because they didn't solve the performance problem. In the domains where that matters less, other languages did make significant inroads. Java, Python, etc. have significant usage in domains that before them were often C or C++.

Which is true! But even after Java/Python/etc. made their inroads the memory-safe dialects/variants/etc. of C/C++ still didn't attract much attention, since while Java/Python/etc. made memory safety easy enough for devs, as you said they didn't make performant memory safety easy enough, which left C/C++ their niche. While Rust is far from a perfect solution, it seems to have made performant memory safety easy enough to get to where it is today.

> If you really wanted to rebuild every building in the city, you could build a new building on every available empty lot, move the people from existing buildings into the new buildings, raze the buildings they just moved out of to turn them into empty lots and then repeat until every building is replaced.

I took "raze every building in the city just to build them again with the new thing" as specifically implying a destroy -> rebuild order of operations, as opposed to something more like "replace every building with the new thing". Too literal of a reading on my end, I guess?

> The reason that isn't done is that building a new thing from scratch requires a significant amount of resources, so it's something you only force outside of its natural replacement cycle if the incremental improvement is very large.

I mean, that's... arguably what is being done? Obviously different people will disagree on the size of the improvement, and the existence of hobbyists kind of throws a wrench into this as well since their resources are not necessarily put towards an "optimal" use pretty much by definition.

> The point is that it doesn't. Rewriting a large amount of old C code, especially if it doesn't have a lot of attack surface exposed to begin with, is a major cost with a smaller benefit. Meanwhile there are many other things that have medium costs and medium benefits, or large costs and large benefits, and those might be a better use of scarce resources.

That's a fair conclusion to come to, though it's evidently one where different people can come to different conclusions. Whether one stance or the other will be proven right (if the situation can even be summed up as such), only time will tell.

And again, I feel like I should circle back again to the "solution in hand vs. sitting at the drafting table" thing. Maybe an analogy to moonshot research a la Xerox PARC/Bell Labs might be better? One can argue that more resources into a WordPress replacement might yield more benefits than rewriting something from C to Rust, but there are much larger uncertainty bars attached to the former than the latter. It's easier to get resources for something with more concrete benefits than something more nebulous.


C# AOT is performant, is easy to use and has a small footprint. (Less than a megabyte executable without trickery. I am sure one could get much smaller if someone put effort into it.)

Fair point. It's a relatively recent thing, though, and even with the reduced footprint I think it and the GC at least would still make its use difficult at best for some of C/C++'s remaining niches.

That being said, I wouldn't be surprised if it (and similar capabilities from Graal, etc.) grabbed yet more market share due to making those languages more viable where they historically had not been.


Memory safety as a term of art in software security is about eradicating code execution bugs caused by memory corruption. It's not a cure-all for software security. Most vulnerabilities in the industry aren't memory safety bugs, but empirically memory safety vulnerabilities are inevitable in software built in C/C++.

My heresy is that processor ISAs aren't memory safe, so it's sort of foolish to pretend a systems language is safe. I feel things like pointer tagging are more likely to provide real returns.

I also remember a conversation with someone at Netscape about JS. The idea was partly that, as an interpreted language, it could be safe in a way binaries couldn't be. Considering pre-2000 hardware, running an arbitrary binary was a scary proposition. But it turned out not to be as easy as assumed.


> People simply love having all their eggs in one basket

It's more accurate to say that people don't like having twelve different interfaces that all do the same thing.

The proper way to do this is, of course, to have a single interface (i.e. a user agent) that talks to multiple services using a standard protocol. But every proprietary service wants you to use their app, and that's the thing people hate.

But the services are being dumb, because everyone except for the largest incumbent is better off to give the people what they want. The one that wins is the one with the largest network effect, which means you're either the biggest already or you're better off to implement a standard along with everyone else who isn't the biggest so that in combination you have the biggest network, since otherwise you won't and then you lose.
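
To sketch the "one user agent, standard protocol" shape (hypothetical trait and service names, not any real protocol):

    // Every service implements the same small protocol; the user agent
    // aggregates them behind one interface instead of twelve apps.
    trait MessagingService {
        fn name(&self) -> &str;
        fn fetch_messages(&self) -> Vec<String>;
    }

    struct ServiceA;
    struct ServiceB;

    impl MessagingService for ServiceA {
        fn name(&self) -> &str { "service-a" }
        fn fetch_messages(&self) -> Vec<String> { vec!["hello from A".into()] }
    }

    impl MessagingService for ServiceB {
        fn name(&self) -> &str { "service-b" }
        fn fetch_messages(&self) -> Vec<String> { vec!["hello from B".into()] }
    }

    // The user agent only knows the protocol, never any one vendor's app.
    fn unified_inbox(services: &[Box<dyn MessagingService>]) -> Vec<String> {
        services
            .iter()
            .flat_map(|s| {
                s.fetch_messages()
                    .into_iter()
                    .map(move |m| format!("[{}] {}", s.name(), m))
            })
            .collect()
    }

    fn main() {
        let services: Vec<Box<dyn MessagingService>> =
            vec![Box::new(ServiceA), Box::new(ServiceB)];
        for line in unified_inbox(&services) {
            println!("{line}");
        }
    }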


Yeah, that's a more generous way to put it. People are fine with the illusion of one basket. That's pretty much how any large website works.

The ideal would be for users to choose their front end and have backends hook into it via protocols, a la RSS feeds or email (to some extent). But the allure of being vertically integrated is too great, and users will rarely question it.

>But the services are being dumb, because everyone except for the largest incumbent is better off to give the people what they want.

Yup, agreed. At this point, it's really an issue regulation can fix. Before it's too late.


Apple is #4 in laptop sales. Lenovo, Dell and HP each have at least as much volume. Apple also has higher margins than those companies, implying that any cost savings they make on other components aren't making it into the price anyway.

It's probably just that it costs a little more to do it and most customers wouldn't pay a premium to have it.


>Apple is #4 in laptop sales. Lenovo, Dell and HP each have at least as much volume.

True, but they divide their sales among several models: gaming models, 2-in-1s, 13" to 17", and so on. Apple not only has fewer models, they often keep the same case design between generations, which also benefits economies of scale.


> If you’re lucky and leave enough extra space then you can design next generation parts to line up neatly with the thermal solution of last gen, then cap it at the limit of whatever last gen was designed for.

The mobile Ryzen 3/5/7/9 processors from the current year have a configurable TDP up to the same max (54W) as the earliest Ryzen "H" processors from 2017. The first generation mobile Core i7 from 2009 had a TDP up to 55W. The mobile Pentium 4 from 2003 had a TDP up to 76W (which appears to be the high water mark). In any given generation there were also lower end models using less power across a power range that seems to be fairly consistent over time.

Why does the thermal solution need to be redesigned if the heat output hasn't materially changed in decades?


It seems more like an incredible feat of bureaucratic perverse incentives. How is the thing that poisons people the default and the thing that doesn't is what requires specific government-imposed costs?

> If enough people stop believing in the law, it really threatens those in power.

I think this is why the thing judges hate the most is people admitting when the law gives them an unfair advantage.

A rule that unjustly benefits someone is fine as long as they don't break kayfabe. Big Brother loves you, that's why you can't install apps on your phone, it's to protect you from harm. The incidental monopolization, censorship and surveillance are all totally unintentional and not really even happening. Oceania has always been at war with Eurasia.

Whereas, declare that you're shamelessly exploiting a loophole? Orange jumpsuit.


I agree, but that's the uncharitable interpretation. The charitable one is that intent matters. Those in power being threatened tends to strongly correlate with societal instability and a distinct lack of public safety. I may not always agree with the status quo but I don't want to live in Somalia either.

"Intent matters" is the dodge.

There is an action you can take that does two things. One, it makes it marginally more expensive to commit fraud. Two, it makes it significantly more expensive for your existing customers to patronize a competitor. If you do it, which of these things was it your intent to do?

The answer doesn't change based on whether you announce it. You can fully intend to thwart competition without admitting it. And, of course, if the only way you get punished is if you admit it, what you really have is not a law against intending to do it but a law against saying it out loud. Which is poison, because then people knowingly do it without admitting it and you develop a culture where cheating is widespread and rewarded as long as the cheaters combine it with lying.

Whereas if the law is concerned with knowledge but not "intent" then you'd have a law against thwarting competition and it only matters what anyone would expect to be the result rather than your self-proclaimed unverifiable purpose.

But then it's harder to let powerful people get away with things by pretending they didn't intend the thing that everybody knew would be the result. Which is kind of the point.


FWIW, laws aren't merely abstract tools of oppression, they're what binds groups larger than ~100 people into societies. And the true fabric laws are made of, is one of mutually-recursive belief, everyone's expectation that everyone else expects they're subject to them. Threaten that belief, the system stops working. The system stops working, everyone starves, or worse.

The way you're supposed to do that is by having laws that are actually reasonable and uniformly applied.

Having laws that tilt the playing field and then punishing anyone who admits the emperor has no clothes is just censorship. People still figure it out. Only then they get rewarded for knowing about it and not saying anything, which causes the corruption to spread instead of being opposed, until the rot reaches the foundation. And that's what causes "everyone starves, or worse."


> And that's what causes "everyone starves, or worse."

I disagree. What you've described is certainly bad for much of society, but it represents a change from full participatory democracy to narrower and ultimately aristocratic governance. Many nations moved away from aristocracy and embraced democracy, but the difference in failure mode between "good for the people" and "good for the nation" does nevertheless exist (even when you can avoid the other problem democracy has, that "good for the people" and "popular" are also sometimes different).

When nobody can even "get rewarded for knowing about it and not saying anything", then you get all the examples of groupthink failure. Usually even this is limited to lots of people, rather than everyone, starving, but given the human response to mass starvation is to leave the area, I think this should count as "everyone starves" even if it's not literally everyone.

When everyone knows the rules are optional, or when they think facts and opinions are indistinguishable, then things like speed limits, red lights, which side of the road you're supposed to be on, purchasing goods and services rather than stealing them, all these things become mere suggestions. This is found in anarchies, or a prelude to/consequence of a civil war. There can be colossal losses, large scale displacement of the population to avoid starvation, though I think it would be fair to categorise this as "everyone starves" even if not literally for the same reason as the previous case.


> it represents a change from full participatory democracy to narrower and ultimately aristocratic governance.

I don't think that's the relevant distinction. "Benevolent dictatorship" is still one of the most efficient forms of governance, if you actually have a benevolent dictator.

The real problem is perverse incentives. If you have a situation where 0.1% of people can get 100 times as many resources as the median person through some minimal-overhead transfer mechanism, that's maybe not ideal, but it's a lot better than the thing where 0.1% of people can get 100 times as many resources as the median person by imposing a 90% efficiency cost. In the first case you lost ~10% of your resources so someone else could have 100 times as much, but in the second case you lost >90% of your resources only so that someone else could have 10 times as much as they'd have had to begin with, because now the pie is only 1/10th as big.
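
To put rough numbers on that (illustrative figures only, following the ratios above):

    fn main() {
        // 1000 people, one of them the 0.1% "elite", baseline of 1 unit each.
        let people = 1000.0;
        let baseline = 1.0;

        // Case 1: minimal-overhead transfer. The pie stays at 1000 units and
        // the elite takes 100x the median share.
        let pie_1 = people * baseline;
        let median_1 = pie_1 / (999.0 + 100.0); // ~0.91: others lose ~10%
        let elite_1 = 100.0 * median_1;         // ~91

        // Case 2: same 100x ratio, but achieved through mechanisms with a 90%
        // efficiency cost, so the pie shrinks to 100 units.
        let pie_2 = pie_1 * 0.10;
        let median_2 = pie_2 / (999.0 + 100.0); // ~0.09: others lose >90%
        let elite_2 = 100.0 * median_2;         // ~9: only ~10x their baseline

        println!("case 1: median {median_1:.2}, elite {elite_1:.1}");
        println!("case 2: median {median_2:.2}, elite {elite_2:.1}");
    }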

But the latter is what happens when corruption is tolerated but not acknowledged, because then someone can't just come out and say "I'm taking this because I can get away with it and if you don't like it then change the law" and instead has to make fanciful excuses for inefficiently blocking off alternative paths in order to herd everyone through their toll booth, at which point they not only get away with it but destroy massive amounts of value in the process.


The Wartime Prohibition Act was passed during the drawdown from World War I and the basis for upholding it was the wartime powers of Congress because of a scarcity of grain from the war.

The last Congressionally declared war was World War II, so if that was supposed to be the constitutional basis for the Controlled Substances Act, there would seem to be the obvious problems that the war was generations ago and nobody is diverting scanty wheat from the food markets to make MDMA.


> its development model cannot consistently provide this product feature.

The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.

The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.

A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of, rather than in addition to, S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently, so they need Modern Standby to paper over it by having updates run while the laptop is "asleep", and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.

