Arizona Government Suspends Uber's Self Driving Cars from Roads (chicagotribune.com)
374 points by ghshephard on March 27, 2018 | 320 comments


The article says nothing about Google/Waymo; it appears they're allowed to continue? Their self-driving technology seems far more advanced, and I think it would be unfair to punish all the players for the bad actions of a few.

In particular, Uber seems to have been operating self-driving cars recklessly and in an attempt to "move fast and break things". When those things are people's lives, that's unacceptable. I'm all for interesting technology, but sometimes existing rules are there for a reason.

I particularly hope this doesn't dissuade individual states from making leaps to embrace a particular technology. I think that's one benefit of the US: individual states have a lot of power to allow a new technology or permit testing that's illegal or in a gray area in the rest of the country. Arizona made a bet on self-driving technology. I applaud Arizona, even if the bet in this case might have turned out badly.


Arizona did something reckless and stupid. Unfortunately, somebody other than the decision maker paid the price. Arizona stupidly allowed Uber to operate after California kicked Uber out and after seeing the videos of Uber cars running red lights in SF.

For what? For a few temporary (the plan is to be fully autonomous quickly) low paying test driver jobs? The engineers making the big salaries stayed in California (or Pittsburgh in Uber's case).

https://www.google.com/amp/amp.fox10phoenix.com/news/arizona...


This is what's always confused me. The usual play is to suck up to a corporation in the hopes that a bunch of jobs or other investment will show up, but self-driving testing is kinda a perfect example of something that will not give any economic benefit to the testbed state at all. It takes very little local investment, and it's a temporary thing that will only yield benefit for the corporation doing the testing, and as soon as they have as much data as they want they can leave and use that data anywhere else.

And if what Arizona hoped for is to get self driving taxis early, well, other states can simply let the sucker states like Arizona do the testing, then wait to legalize self driving till the day after it proves itself, thus reaping close to 100% of the benefits with close to 0% of the disadvantages.

I honestly think Arizona politicians just blundered because they were following the typical playbook out of pure habit. I hope the explanation is not less innocent than that, but that's also very possible. Petty corruption is very common in USA state governments.


Now, I totally agree with your comment that other states can let Arizona do the testing then reap the rewards. But at the same time, I think there can be other indirect benefits: Arizona getting known for being innovative, for example. Arizona was getting in the news a lot for positive self-driving tech things, and that's great. It's bad that they didn't vet Uber carefully enough and thus a woman died, but the general attitude of encouraging innovation even without direct, immediate benefits is a great thing.


A very common play is to remove all regulatory requirements for an industry that you know will provide next-to-no jobs at a cost to public safety, because it looks good to Republican voters. See: Coal.


>For what?

To advance technology and make AZ attractive to other R&D. Not everything has to be tied to a direct economic boom.


California remains the center of autonomous car research and industry with carefully thought out regulation designed with public input. Laissez-faire regulation and rolling out the red carpet for bad actors in Arizona didn't change that. It just led to the predictable result.


California remains the center because the companies are already headquartered there due to decisions made 50 years ago by the military for radar technology: https://www.youtube.com/watch?v=ZTC_RxWN_xo That's despite its often idiotic regulations driven by populist ignorance.

A governance policy in favor of developing new technology most certainly can have an impact over time.

> Laissez-faire regulation and rolling out the red carpet for bad actors in Arizona didn't change that.

Oh, red carpet for bad actors? Where is it that Uber is headquartered again? And doesn't Waymo also test in Arizona?


> California remains the center because the companies are already headquartered there due to decisions made 50 years ago by the military for radar technology:

Poor argument. It doesn't explain why the auto industry didn't hire all the self driving car engineers to work in the Midwest or why the communications companies didn't hire them all to work on the East Coast. Google itself has offices all over the world, but it put the Chauffeur team in Mountain View. If you really think that the military industrial complex that started Silicon Valley gives it an insurmountable advantage for building a self driving car company, Arizona's decision to even try to pull them away looks even more stupid.

> Oh, red carpet for bad actors? Where is it that Uber is headquartered again?

In California, which reaps all the benefits from taxing its highly paid employees without incurring any of the costs of its out-of-control self driving car program, which Arizona's governor welcomed with much fanfare.

> And doesn't Waymo also test in Arizona?

It does. It also tests extensively in California and has engineered its vehicles to comply with the new California regulations that allow cars on public roads that have no human driver inside. Notably, all the highly paid engineers remain in California, and the advanced testing facility with highly paid testers (unlike the temp workers hired for the public road tests) was built in California too because advanced testing on private facilities is correctly less regulated. Arizona's problem is that it underregulated testing on public roads.


>If you really think that the military industrial complex

Please put some effort into reading before replying. The military funding for Stanford is one of the main reasons so many tech companies started there. There are now massive network effects that make it the current tech capital, but that doesn't mean new incentives can't start new tech hubs elsewhere.

You completely missed the boat about Uber being headquartered in California and still acting completely outside of sane ethical boundaries (their police spying program, etc). In other words, California rolls out the red carpet for these garbage tech companies already. The moral or consumer focused high-ground you are trying to imply doesn't exist. There is only NIMBYism.

Google built the testing headquarters in California because that's where their talent is at, nothing more, nothing less.


Advancing technology overall is nice, but does not benefit Arizona any more than if they let some other sucker state do the experimenting.

Not everything has to be tied to a direct effect, but if you think this, maybe you could explain at least one mechanism by which you think Arizona could actually indirectly get some benefit?


  maybe you could explain at least one mechanism
1. Arizona does things that are "business friendly" and "good for high-tech R&D" regardless of job creation.

2. It becomes widely known that Arizona is business friendly, and good for high-tech R&D.

3. This reputation attracts companies that will create jobs.

(I'm not claiming I think this will work - just that it's plausible a politician might think it could work.)


>Advancing technology overall is nice, but does not benefit Arizona any more than if they let some other sucker state do the experimenting.

If every government operated by this selfish model, we would have no basic science funding.


The police department was also quick to defend Uber by saying that the crash was 'unavoidable'.

From the reports it seems like Uber were heavily focused on putting in miles trying to make up ground. The NTSB report will hopefully expose the failures (if any) of the Uber self driving system or the organization.

They've said that self driving cars are critical to their survival and if it turns out they cut corners / mis-represented things then it'll be hard for them to get out of that hole.


I find it peculiar that you would quote the police department saying the crash was "unavoidable" when I'm unable to find such distinct phrasing used in any news articles. The most damning phrasing is: "it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway".



So I've read the article and watched the videos - I haven't seen any actual quotes where any spokesperson had said it was unavoidable outside of the headline and first paraphrased sentence. I only mentioned this initially because it seems like a sensationalized headline of a statement.


"Unavoidable"... I assume for a human. I'm not sure it was unavoidable for an automated car.


The decision to allow self-driving cars in Arizona is going to save far more lives than it will take. I firmly believe it was a massively good decision, but time will tell.


You've misunderstood my statement. California allows self-driving cars and has licensed 50 companies to test them in the state. California didn't allow Uber to operate on public roads because Uber didn't agree to the required oversight. Arizona allowed Uber to operate without reasonable oversight even after seeing videos of Uber autonomous cars running red lights. The result is that it took more lives than it saved, and that outcome was obvious to anybody paying attention.


I heard Phoenix took quite a big hit in the tech sector in 2008.

I live out in the boonies in Arizona. Going off of the recruiter spam I get, I'd say there's been an uptick in engineering positions related to the self-driving car companies.

Whether those positions are actually real I don't know, never actually replied to them.


> Arizona did something reckless and stupid. Unfortunately, somebody other than the decision maker paid the price.

Well, crossing the street in that manner wasn't the smartest thing to do either. The person shouldn't have been run over, but we accept some risks while walking bicycles across multi-lane highways at night without watching for oncoming traffic.


> we accept some risks while walking bicycles across multi-lane highways at night without watching for oncoming traffic.

The accident occurred on a surface street in a fairly busy area right down the street from ASU's main campus, where there tend to be a lot of pedestrians and bikers.


Sure, but every time you cross the street, you assume some risk. That's why little kids hold an adult's hand when they cross. And the risk is even higher if you're crossing the street at night. And it's even higher if you don't have a light, crosswalk, or stop sign to protect you.


I used to live in Arizona. The traffic lights and crosswalks are an insane distance apart. There are bright streetlights basically everywhere. All the roads are straight.

It's all designed for cars, and not for pedestrians. And it's really tempting to jaywalk if you're ever stuck walking somewhere.


I thought the same thing. Why was she so nonchalant about a car about to hit her?

And then I remembered: Pedestrians are simply not afraid enough of cars. They blithely assume the driver can see them, that the driver is a human (!), that the driver wants to avoid hitting them, that the driver has quick reactions, and that the car's brakes are in good working order. Those are a lot of assumptions to bet your life on.

It's much safer to cross streets assuming every car is trying to kill you.


Yes, because oncoming traffic at night is just so hard to see with those bright headlights and all.


Actually, it may be an instance of a "tragedy of the commons" but I think the most strategically optimal move for a state when there are 49 others is to ban it and let "sucker" states do the testing, and then immediately legalize it as soon as it shows promise.

Just like how I never upgrade to a new software version right away, letting the other "suckers" test it in production for me, creating value for me at their expense with no repercussions whatsoever, since the delay is very small.

Another strategically effective move might be to charge corporations an appropriate fee to capture back the value of the data they are collecting by experimenting on public roads. Given how valuable driverless technology could be, the fees could be large.


I don't believe this, because whilst there are costs to the state of allowing self-driving cars, there are also massive upsides. The CompSci jobs that are involved in self-driving cars are highly sought after, and by enticing testing of these cars Arizona is also likely bringing in high-value tech jobs that would normally go to somewhere like Silicon Valley. Once they've established a center of excellence for that skill set it becomes self-sustaining too. So you'll probably find that years after every state has self-driving cars, Arizona still has more of those jobs.


For testing self-driving cars on real streets, though, there's not really a big need for local developers. If the company is set up well at all, it can do 99% of the work remotely, with just a barebones support staff onsite for installing new software and sorting out integration issues, which can also be done at headquarters on engineering samples on their test tracks.


I had thought it was a brilliant move by Arizona. They were trying to be for autonomous vehicles like what Detroit was for cars back in the day, or Boston is now for biotech, or New York for finance.

But I never thought through: which jobs are moving to Arizona? Autonomous vehicle production and maintenance? LIDAR and other sensor R&D? AI software development? Or is it just the test driver and the cars get developed elsewhere?


It's a gamble to make the state look attractive for R&D shops/departments to either relocate or setup permanent testing sites. Automakers already use the desert conditions to test cars, so this is just an extension.


There was a person at the wheel.

That person wasn't paying full attention to the road. From the video of the crash, it looks like the woman at the wheel was looking at their phone.

Why is no one acknowledging that this woman was responsible for taking over the wheel if necessary for any reason. When I have cruise control on, I take my feet off of the pedals and rest them on the floor. But if, say, a homeless person with a bicycle walks out into the middle of the road at night with a black sweatshirt on in front of my car, then I step on the brakes.

And why is no one acknowledging that a homeless person walked out into the middle of the street at night in a black sweatshirt. I definitely don't know whether I, even if I wasn't on my phone like the car operator was, would've seen the person in the road with enough time to slam on the brakes. I might've killed that person walking across the road by mistake, too.

Like millions of other people on the road every single day, the car operator was on their smartphone while they were at the wheel. And like 3-5 million other people a year, the homeless person walking across the street was killed in a car accident.

The woman walking across the street and the woman at the wheel of the car broke the law. Uber didn't break any laws as far as I can tell. They were as responsible as any other company building self-driving cars as far as I can tell, too. I don't think it's fair to put the fault on Uber here.


  Why is no one acknowledging that this woman was
  responsible for taking over the wheel if necessary
  for any reason.
Nobody in the industry really believes safety drivers work, beyond the early point in testing where they're intervening regularly.

It is widely known [1, 2, 3] that it's extremely difficult for humans to remain vigilant when monitoring systems for rare events. Airport baggage screening systems perform "threat image projection" where they show the operator knives and guns from time to time, just to break up the monotony and reward attentiveness.
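
To make the idea concrete, here's a minimal sketch in Python of how that kind of synthetic-target injection and hit-rate tracking could work. All the names and the injection probability are made up; this is not any real screening system:

  import random

  class AttentivenessMonitor:
      def __init__(self, injection_probability=0.02):
          self.injection_probability = injection_probability
          self.injected = 0
          self.caught = 0

      def maybe_inject(self, frame):
          # Occasionally composite a fake knife/gun into the image so the
          # operator always has something to find.
          if random.random() < self.injection_probability:
              self.injected += 1
              return frame, True
          return frame, False

      def record_response(self, was_injected, operator_flagged):
          # Track how often the operator catches the synthetic targets.
          if was_injected and operator_flagged:
              self.caught += 1

      def hit_rate(self):
          return self.caught / self.injected if self.injected else None

In principle the same trick could be applied to safety drivers (e.g. injecting simulated hazards on the monitoring display), though I don't know of anyone actually doing that.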

Beyond early testing, safety drivers are there to make the public/politicians feel safer, for situations where a slow response is better than none at all, and for more cynical companies, so you can use them as scapegoats in situations like this one.

[1] https://pdfs.semanticscholar.org/ece2/465ed2258585ebb8055fd7... [2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633607/ [3] http://journals.sagepub.com/doi/full/10.1177/001872081350155...


"It is widely known that it's extremely difficult for humans to remain vigilant when monitoring systems for rare events."

It may be difficult to remain vigilant, but it's not difficult to stay off of your phone while you're at the wheel.

There are signs all over the road and commercials on tv telling people that texting and driving kills. Being on your phone while you're driving is illegal for that reason.

Having a self-driving car on the road with a car operator is legal. There are many companies with self-driving cars on the road that have higher intervention rates than Uber.

We'll never know whether that woman would've been saved if the car operator wasn't on their phone. We do know that what the car operator did was illegal.


How do you know she was on her phone? Was anything released in that regard? All we know is that she's looking down; I assumed it could be at some self-driving debug screen.


So when they tell me to open my luggage at the airport, it may be because they have just seen a fake knife or gun placed on the x-ray image by the system to enhance their attention? Amazing!


IIRC the system notifies the operator that the gun was fake BEFORE it gets to the random search part.


Even regardless of her lack of full attention, from the video it seems quite obvious a human driver would have failed to see the woman in time even with full alertness. The only question here, and why Uber is under scrutiny, is why a Lidar system which can "see in the dark" wasn't able to detect her and react in time.


> from the video it seems quite obvious a human driver would have failed to see the woman in time even with full alertness

The video is massively - and I'll wager intentionally - misleading. There's no chance the car was making decisions from a $20 dashcam.

Here's what a better camera (looks like a smartphone camera) sees in the same spot (at about 33 seconds - you can see the signs, lights, and terrain match the Uber video): https://www.youtube.com/watch?v=1XOVxSCG8u0


I've driven this area plenty of times, and something that is misleading about the YT video is that there are around two dozen cars driving around lighting that area up. I agree that the dashcam video makes it look very dark, but this video is on the other end of the spectrum and does not represent how dark that stretch gets when there isn't another car.

The moment another car passes the driver to the left is where the accident was. That part is typically very dark and has plants as a backdrop - I can see where it would be difficult to distinguish a human form under specific conditions. Regardless, the LIDAR and backup driver failed miserably here.


The YouTube video has plenty of sections in less well-lit areas. For example: https://imgur.com/a/9L53u

No cars, a light on only one side, and you can still see much further than the Uber video makes it seem. If the Uber car's sensors can't get a better idea of its surroundings than a smartphone video, Uber's management should be liable for criminally negligent homicide for letting them out on the streets.

Plus, the Uber car itself has headlights.


It looks to me that the relevant part of that video is at 2:06, where he mentions a news camera filming. That section is much darker than the earlier footage, but the camera is waving around making it difficult to get a clear idea.


Here, I screencapped the same scene in the two videos:

https://imgur.com/a/LiISl (note the signs on either side of the road, the street lights, the hill on the right-hand side, the building with lit sign in the background, the purple light on the left-hand side, etc.)

33 seconds in in the YouTube video, and 6 seconds in on the Uber video at https://www.theguardian.com/technology/2018/mar/22/video-rel....


mea culpa, you're right.


Human vision has much better dynamic range than the dashcam video they released.


I think it's incorrect to say that the woman was looking at her phone. She was looking down and to the right of the steering wheel, which is where Uber places a monitoring tablet (an iPad) in their self-driving vehicles (where the radio is).

Here's an article that shows the sort of thing she would be looking at, an animation of road conditions: http://www.businessinsider.com/uber-driverless-car-interior-...

If you actually examine her behavior, in the space of ten seconds, she manages to:

* Look down at the tablet and watch the animation

* Look up at the road

* Scan the lane to the left (she was in the right lane)

* Look forward again

* Look down at the tablet again

* Look up again just as the accident occurs

She is systematically scanning her environment and appears to be alert and engaged.

The problem is that there is only one test driver, who has to divide her attention between the tablet and the road. Often two drivers are used, and then one watches the tablet and the other watches the road.

I think it is unfair to accuse her of inattention when she appears to be doing the job exactly as it is designed.

The reason everyone says "she was looking at her phone!" is because they don't understand the setup of how Uber's self-driving car works. They are only familiar with being bored and looking at their phone while driving. They think the test driver's job is to just stare out of the window all day, which is not correct.


The tablet in that article is in the back seat, not up front, and the photos they show of the driver area don't suggest any screens that would require the driver to be looking down like the person was doing in the video.

I know that when I was taught to drive we were told to continually scan ahead of us, our side mirrors, and the rear view mirror and basically only take our eyes off the road to check our speed. --This is definitely NOT what the individual in the driver's seat was doing.


There are tablets in the front and back (https://cdn.geekwire.com/wp-content/uploads/2018/02/Front-_-...).

You were taught to drive manually, not in an Uber self-driving car.

If you can find a video of an Uber test driver who is driving as they were instructed to, they would look exactly like her.


This is not about legal or illegal. If the Uber program is unable to detect a large object like that in the middle of the road using lasers, they shouldn't be driving on public roads at lethal speeds. That's all there is to it. If autonomous cars mean that whenever a little kid runs out into the street after a ball and they aren't on a crosswalk, they get run down, or the cars smash full speed into any obstructions in the road, then I don't think they are ready for public testing, though I strongly suspect this is an Uber specific problem.

And not only are they apparently incapable of handling the most basic emergency you can think of, but they are trying to spin it like it's not their fault instead of immediately identifying what went wrong. This makes it appear that not only are they incapable but untrustworthy too. These are dangerous machines going down public roads; they have at least as much responsibility as the rest of us to not hit people with their cars. And I don't get to hit people with my car just because they are jaywalking, it's night, and I'm not paying attention. Half the people in NYC would be dead already. The autonomous car can't see what is ahead well enough to avoid a potential collision? Just like a human driver, it should slow down. Otherwise these things should still be getting refined on a private course.


>> And why is no one acknowledging that a homeless person walked out into the middle of the street at night in a black sweatshirt.

I'm guessing, because a lot of people wear black clothes and self-driving cars must be able to deal with that, too. If safety-critical systems can't react to a common real-world situation then they're just not safe enough to operate in the real world.

You might as well ask what was this woman doing crossing a road at night where even human drivers have reduced visibility. Well- people might cross roads at any time of day (legal issues aside- jaywalking is a very US-centric thing btw). Roads and cars must be designed so as to be safe under this assumption and drivers must be trained enough to deal with the situation.

What is the alternative? Make everyone wear high-visibility vests, or carry special self-driving car beacons when they go outside? I've actually heard both of those ideas since Elaine Herzberg was killed and they're both on the near side of the absurd (so I'm sure someone will come up with something even more out there).

But, if tech that is supposed to make our lives easier is too dangerous, the answer is not to make our lives harder to accommodate it. That would defeat the purpose of the technology in the first place.


>There was a person at the wheel. That person wasn't paying full attention to the road. From the video of the crash, it looks like the woman at the wheel was looking at their phone.

>Why is no one acknowledging that this woman was responsible for taking over the wheel if necessary for any reason.

The woman was part of their safety program, but ultimate responsibility for the effectiveness and failures of said program lies with them. For instance,

Other driver assist programs (e.g., Cadillac's) have eye tracking and hand sensors in the wheel. If your eyes leave the road or your hands leave the wheel, escalating alarms are initiated until driver assist is deactivated. This is a technology that takes into account normal human behavior. Uber did not do this.
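
As a rough sketch of what that kind of escalation logic looks like (the thresholds and action names below are invented; this is not Cadillac's or Uber's actual code):

  ESCALATION_STEPS = [  # seconds of inattention -> response (invented numbers)
      (2.0, "visual warning"),
      (4.0, "audible alarm"),
      (6.0, "seat vibration"),
      (8.0, "disengage assist and slow to a controlled stop"),
  ]

  class DriverMonitor:
      def __init__(self):
          self.inattentive_since = None

      def update(self, now, eyes_on_road, hands_on_wheel):
          # Poll regularly; returns the strongest escalation step due, or None.
          if eyes_on_road and hands_on_wheel:
              self.inattentive_since = None
              return None
          if self.inattentive_since is None:
              self.inattentive_since = now
          elapsed = now - self.inattentive_since
          action = None
          for threshold, step in ESCALATION_STEPS:
              if elapsed >= threshold:
                  action = step
          return action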

Uber previously used two-person teams, to minimize the tedium and lapses in awareness that emerge when a single person has the job of staring out an uneventful window for hours at a time. This is a policy that takes into account normal human behavior. They stopped doing this.

Either or both of these could have prevented this death. Both of these decisions occur at a level above the proximal driver-pedestrian interaction.


Those are level 1 or level 2 systems, not level 4 ones. In the former systems, the driver must and will intervene often, in the latter intervention is really really rare. It would get painfully boring very quickly in a level 4 system to have the driver on the wheel at all time but not do anything. The same problem arises in airplanes with modern autopilots.


Asserting that the person who was run over broke the law is victim-blaming of the highest order.

The claim that the ""driver"" was texting suggests that they are at fault, but the whole "safety driver" concept is flawed; the less the driver has to intervene and take control, the less likely they are to remain alert and willing to jump in and take control. You almost need a "fault injection" system that gives back control once every 5-10 minutes just to keep them awake.


Because Uber and possibly most other self-driving car makers aren't doing it right.

First off, Waymo, Ford, and even Volvo (whose car was in the accident) have said that they want to skip Level 3 because expecting people to actually watch the road while they're "self-driven" is not reasonable. They saw from initial testing that the drivers were dozing off.

https://www.bloomberg.com/news/articles/2017-02-17/ford-s-do...

And I didn't know this until now, but it looks like Toyota is now thinking the same:

http://www.motortrend.com/news/toyota-might-skip-level-3-aut...

Second, the way to at least attempt to do it right (like Tesla is doing now, but didn't do in the beginning either) is to force drivers to keep their hands on the wheel at all times.

Also, Uber could have been watching its drivers with the cameras, and fired those that didn't stick with the program.

And all of this doesn't even mention how Uber's self-driving system seems to be terrible. Lidar wasn't working, the car didn't brake, and they've had one intervention per 13 miles while Waymo has one per 5600 miles.

So, my point is, Uber, the company, still seems to share most of the blame here, for all of those different reasons.


You are making a lot of assumptions.


They are also responsible for the test setup, the behavior and training of their employees.

If all their drivers, who ultimately are the last line of safety when everything else fails, are routinely not following the process, that is an issue.

Maybe you need to have 2 drivers, maybe you need to randomly monitor the behavior of the driver remotely, but you have to demonstrate some active measures you took to ensure the safety of the test. You cannot just say "not my fault".

I am not saying that Uber did not follow a rigorous procedure; I don't know. But saying whether it is their fault or not seems a bit premature.


Waymo mentions this in their safety report, which I think is interesting, and relevant:

> Advanced driver-assist technologies were one of the first technologies our teams explored. In 2012 we developed and tested a Level 3 system that would drive autonomously on the freeway in a single lane but would still require a driver to take over at a moment’s notice. During our internal testing, however, we found that human drivers over-trusted the technology and were not monitoring the roadway carefully enough to be able to safely take control when needed.

> As driver-assist features become more advanced, drivers are often asked to transition from passenger to driver in a matter of seconds, often in challenging or complex situations with little context of the scene ahead. The more tasks the vehicle is responsible for, the more complicated and vulnerable this moment of transition becomes.


>> And like 3-5 million other people a year, the homeless person walking across the street was killed in a car accident.

3-5 million people a year? Are you adding all road fatalities worldwide together here, including those in countries with fatality rates that are literally 100 times that of any developed country, and which are by causes completely unrelated to the kinds of problems autonomous cars try to solve?

The number of road fatalities in all of the US was close to 40K last year. Still a lot, but roughly 100 times less than the figure you quoted. The number of non-vehicle deaths (like pedestrians) is measured in the thousands [1]

[1] https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


Even if the driver and the woman crossing the street were both to blame, it doesn't mean that Uber was any less to blame. Accidents in critical situations are typically multi-system failures, and every failure needs to be addressed.

There are four entities who could have and should have relatively straightforwardly avoided this death.

1. The woman shouldn't have crossed the street there and then.

2. The safety driver shouldn't have been looking at her phone (if that's indeed what she was doing).

3. Uber's automation should have caused the vehicle to brake much sooner.

4. That street should have been designed much safer. The design of a lit crosswalk on the median encourages people to cross there, so much stronger discouragement is required. Furthermore, a 35mph limit in an area with pedestrians is going to regularly cause pedestrian fatalities. That's a trade-off most people seem willing to make, but if you make that trade-off you have to own it. If the speed limit was 20mph that woman would be alive today.
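
Back-of-envelope numbers behind that last point, with assumed (not measured) reaction time and braking deceleration:

  # Rough stopping-distance comparison for point 4. The 1.5 s reaction time and
  # 7 m/s^2 deceleration are assumptions, not figures from the incident.
  MPH_TO_MS = 0.44704

  def stopping_distance_m(speed_mph, reaction_s=1.5, decel_ms2=7.0):
      v = speed_mph * MPH_TO_MS
      return v * reaction_s + v ** 2 / (2 * decel_ms2)

  for mph in (20, 35):
      print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
  # ~19 m at 20 mph vs ~41 m at 35 mph, and the impact speed at any given
  # distance is far lower at 20 mph, which is the intuition behind the claim.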

As far as I can see it, all 4 entities are 100% responsible for the death of the pedestrian.

None of those 4 entities passed the "reasonable person" test with their actions, therefore all 4 are fully responsible.

Sure you can argue all you want on whether one entity's misbehaviour is more egregious than the others. It doesn't matter; all 4 engaged in behaviour that regularly kills people at a rate much higher than acceptable.


Sure, fine. But this is the world as it is. Badly designed roads everywhere - very, very occasionally with enough accidents one gets fixed. (Keep an eye on local news for that one). People crossing where they "shouldn't" - all the time. Wearing black clothes - hey try cycling at night, they're all over the bicycle paths like that - thank heaven for LED lights.

This is what automated cars have to be able to solve. The real world is not a special case.


What is the significance of the person being homeless?


It's a masterstroke of reporting ingenuity.

If you're the kind of person who thinks homeless people are useless garbage that deserves to be run over, you'll think the article is pointing this out to inform you that what happened was not a big deal and the person walking out in front of the car was at fault.

If you're the kind of person who considers homelessness an affront to the dignity of the human societies that tolerate it, you'll think that the article is pointing out the tragic angle of a woman already suffering from exclusion and poverty being run over by the malfunction of an extremely expensive piece of equipment that should be recalled at once.

If you're somewhere in between, you'll still gravitate towards one of the extremes. In any case, mentioning that Elaine Herzberg was homeless is only there as an appeal to emotion.

By the way, Elaine Herzberg's friends and family have said she was not homeless. I recall, I think, a stepdaughter saying that she had "issues with homelessness" but had recently turned her life around and was starting a new job.


It's a descriptor, the same way "man" or "woman" is a descriptor. Descriptors help you picture a situation more clearly.


Nothing about whether or not the person involved happened to have a permanent place of residence is a relevant descriptor to the situation at hand.


Great, and neither is their hair color, their # of kids, their name, their birthplace, or any of the other 100 details we normally include. Would you prefer all news articles were:

> Someone was walking and a car hit and killed them.

And that is the end of the story?


Fair enough. It felt like a wholly unnecessary descriptor to me, intended to lessen the value of that person's life.


Cruise control is different because you still have your hands on the wheel.


A fully autonomous car is different, too.

The person at the wheel when that woman was killed was responsible for stepping on the brakes if necessary. And when I have cruise control on, I'm also responsible for stepping on the brakes if necessary.


Yeah, but with cruise control, at least having your hands on the wheel forces you to pay attention.

Really, I think Uber should have used tech to make sure the driver is alert. And test whether the driver can stay alert in a simulator. It's too easy to blame the driver.


It's even worse than that, sadly. The tech is called a 'driver vigilance device', and such devices are common on the railways. Although they're better than nothing, it turns out that real drivers end up operating the device automatically, even if they're not alert. It may simply not be possible to engineer your way around this.


> From the video of the crash, it looks like the woman at the wheel was looking at their phone.

How do you know that? They could have been monitoring the car in another way.

Also, it’s pretty obvious that if a car is fully automated, the driver isn’t going to pay attention. If that was you, or any other human, the situation would have been the same.

The car screwed up, it should have seen the pedestrian. The safety mechanism of having a driver behind the wheel didn’t work either, and we all know it doesn’t work.


Yes Waymo is still active. I live in the testing zone and see an enormous amount of their vans. About 1 for every 5 minutes of driving.


I drove down a main road here in Chandler, Arizona yesterday and saw three in five minutes. Based on all my experiences on the road with them in the last few months I’d say the chances of them having an incident similar to Uber are much less.


I actually trust the waymo cars more than regular ones now when I'm on my bicycle or motorcycle. Granted they are absurdly "cautious drivers" and can be mildly annoying to be stuck behind in a car.


This strikes me as a risk factor that no one is really discussing. We talk about the dangers of Level 3/4 autonomy, where the driver is still responsible for the car but the autopilot is doing most of the work, leading to a bored/disengaged driver who's not ready to react when it's actually necessary. We don't hear as much about pedestrians taking it for granted that a car will stop for them, right of way or no, because of their experience with self-driving cars, leading to the inevitable accident when they step in front of a human-piloted car whose driver isn't paying attention / can't see them.


That habituation will work in the other direction too.

1: cars driving super cautiously will become the norm, so human driving behaviour that is normal now may come to be seen as abnormal and dangerous. I expect to see a lot more traffic tickets for driving 10mph above the speed limit, blowing through yellows, et cetera.

2: crashes will be increasingly blamed on the driver. Right now when a driver crashes into a pedestrian or cyclist, most of the time the driver is let off the hook relatively easily. But if it's an accident that a self-driving car would have avoided, the public will see it as much more avoidable and the driver will be blamed.

There's also the factor that a majority of accidents are caused by a minority of drivers; people who think they are bad drivers are more likely to adopt self-driving cars. Sure, there's a large Dunning-Kruger effect: 80% of drivers think they are above average, but the other 20% are probably really bad.

So I do expect the widespread presence of self-driving cars to make human drivers much better.

But as you said pedestrians and cyclists will likely become worse. I think it'll balance into many fewer fatalities, but we'll see...


Excellent points. Yes, I can see an increasing intolerance for reckless and inattentive driving as autonomous cars become more prevalent and set a good example.


Maybe that's what happened here. The LIDAR would be visible from afar and that lady just watched the car run into her without flinching.

In all seriousness, what you're describing happens on college campuses across the country today, where students are conditioned to drivers yielding right of way.


Part of that conditioning is that in many of those places, drivers are legally required to yield the right of way.


Something I learned in motorcycle class, very applicable here: Laws are for the living, or, yea you had right of way but you're still dead.


What's your suggestion then? Making the front window reflective so pedestrians have no way of knowing whether or not there is a driver?


My suggestion for pedestrians is to treat cars like guns. Just as you assume a gun is loaded, you should assume a car does not see you. I see people stepping into crosswalks staring at their phones after barely glancing up to see the light has changed. They have no idea if the oncoming traffic is slowing/stopping for them. It's a problem now, and I think self-driving cars have the potential to make it worse (as, at the same time, they are making it better in other ways).


This is the thing that makes me most excited about autonomous vehicles.


Amen. One day they'll write "Strangely, they killed <this many> per year with cars and did not even bat an eye."


I wonder if this is because Waymo has shown more details of how their lidar operates (and how it would have detected the pedestrian who was hit by the Uber car), but Uber is refusing to show the inner workings of their lidar's abilities to the regulators.

The manufacturer of Uber's lidar system recently came out and said that they do not believe it failed, but that the input from it failed somewhere inside Uber's software.


"always be hustling" is/was the Uber motto. " move fast and break things" is/was Facebook.


True, but the concept still applies here. Uber has gained a well deserved reputation for being careless as long as what they are being careless with belongs to other people.


Related talk by Bryan Cantrill (Principles of Technology Leadership). Watch the whole video, it's good!

https://youtu.be/9QMGAtxUlAc?t=43m46s


Well, do these comments sound like "hustling" to you or more like "move fast and kill things"?

https://www.theverge.com/2018/3/20/17144090/uber-car-acciden...


Which makes me think it would be prudent for the government or military to invest in this tech as well. Companies like Northrop Grumman, while being bureaucratic nightmares, take safety pretty seriously.


This whole thing was started by a DARPA grand challenge. https://en.wikipedia.org/wiki/DARPA_Grand_Challenge

Industry takes the lead after DARPA's investment which makes sense.


The urban challenge could very well be the best thing to ever come out of DARPA. Riding in the OshKosh truck going around some actual roads in rough conditions is when autonomous cars became "real" to me. In relatively little time they had it going down roads in the woods and driving at high speeds within a few feet of a fence. Now that truck seems clunky and primitive in comparison even just against what Tesla is doing on production cars just 10 years later. Autonomous cars in another 10 years time still feels like a pipedream but then again I would have thought the same thing about where we are today 10 years ago.


Started? The Wikipedia page you mention says ”Fully autonomous vehicles have been an international pursuit for many years, from endeavors in Japan (starting in 1977)”, and https://en.wikipedia.org/wiki/History_of_autonomous_cars lists attempts going back to the 1930s (1920s, if you take ‘radio controlled’ as a form of autonomy)

The first DARPA grand challenge was announced in 2002 and held in 2004.


The theory is that the chain of effect that led to where we are today with autonomous vehicle development began with the Big Bang, but it's generally agreed that the current development paradigm crystalized through the DARPA Grand Challenges in 2004, 2005 and the Darpa Urban challenge in 2007, though we also owe a great deal to deep learning, which Google didn't utilize in Autonomous vehicles until 2014.


I kind of assumed they already were. In fact I'd be surprised if they weren't.


They are.[1] Oshkosh offers a system which allows a convoy to consist mostly of unmanned vehicles. The manned vehicle is usually at the rear and armored. The others can be regular Army trucks. They're self-driving, but supervised by one operator for the whole convoy.

Oshkosh had an experimental version in the 2005 DARPA Grand Challenge, and they had something more or less usable by 2010. DoD never really went for it. That concept may reach deployment in some future form, but this version doesn't seem to be worth fielding.

[1] https://oshkoshdefense.com/components/terramax/#lit


>> Their self-driving technology seems far more advanced, I think it would be unfair to punish all the players for the bad actions of a few.

It's very unlikely that Uber and Waymo have radically different technology. The two companies probably differ only in business and testing practices, i.e. the environments and the conditions in which they are willing to test their cars.

Waymo simply seems to be more conscious about safety and therefore uses its systems well within the safety limits of the technology. But both companies have access to the state of the art (in the form of highly paid specialists) and you can't really expect one to be too far ahead of the other in terms of capabilities.

Edit: I'm guessing all this by comparison with results generally in machine learning and machine vision. With the exception of a big leap forward with CNNs in 2014, most improvements you ever see in the literature are on the order of a couple of percentage points at most, so most systems differ by a very small margin. The same should go for industry entities using this technology in self-driving cars.


>Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per “intervention”

Quite a difference.
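
Taking the two quoted figures at face value (with the caveat that the companies' "intervention" metrics may not be defined the same way), a naive comparison:

  # Illustrative only: the metrics below may not measure the same thing.
  miles_per_intervention = {"Uber (target)": 13, "Waymo (CA 2017 average)": 5600}

  trip_miles = 100
  for who, mpi in miles_per_intervention.items():
      print(f"{who}: ~{trip_miles / mpi:.2f} expected interventions per {trip_miles} miles")
  # Roughly 7.7 vs 0.02 over the same 100 miles, a factor of about 430.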


As many others have pointed out, it's very difficult to take these disengagement stats as a direct comparison between the two companies' systems. Basically, there's no information about what miles driven by Uber and miles driven by Waymo are covering. Perhaps Waymo is driving its cars in less complex environments, or maybe Uber is driving them in more diverse environments (whereas Waymo is sticking to the same few roads and neighbourhoods).


I know you're under the impression you are informed. These metrics, however, are not the same. You don't know what Uber's equivalent metric is at. That doesn't mean you should compare different metrics as if they were the same.


I was just quoting from the NYT. I don't have any special knowledge, but there does seem to be a difference there.


In the literature you see small margins because many of the things that really matter in practice (data quantity, data quality, engineering) aren't generally of academic interest.


I got the impression that Waymo has a ton of actual, human written, software on top of the ML core, "manually" handling a lot of edge cases.

I doubt Uber has this


You know who’s not applauding Arizona? The dead woman and her family.

Turns out when you let corporations do whatever they want without oversight for a weird PR “win” against SF people can die.


I think the public will probably feel the same way, but if this happens again with another company, I think there will be sustained backlash. Hopefully Waymo is as advanced as they’re advertising, and given what a terrible company Uber is by comparison, that doesn’t seem to be an unreasonable assumption.

You know what they say though, once is a mistake, twice is a habit.


Hopefully Waymo has test drivers that actually pay attention.


Unlike Uber, Waymo has always said they do not believe (based on data and experience) "safety drivers" are a realistic safety measure, because normal human beings do not have the ability to concentrate hard on a task for 8 hours a day that they have absolutely no control/feedback upon.

If this is not obvious to you, you have never tried doing such a task. Try it and you will change your mind immediately.


I believe the safety drivers are primarily responsible for getting the autonomous cars to resume driving again after they've stopped for some unknown reason, not for reacting in a fraction of a second to avert a disaster.


In practice, yes.

In theory, as is obvious from all these comment threads, people seem to think the "safety drivers" (think about that name...) are there to actually, like, you know, improve safety.


That's the stated goal, I believe. I also believe that the actual task is "scapegoat." I sure hope the people taking this job offer are at least aware of this.


Yep, we are lousy at monitoring automation: http://www.techtimes.com/articles/90644/20151003/study-askin...


It could be done; they could drive in shorter shifts, and they could have automation like some high end cars that detect eyes off the road or off the wheel. I guess the main issue is that it would be expensive.


I agree with that actually. The task /as defined now/ is impossible for a human, but yes, actually designing it for human psychology would make it better. A great idea I saw on another comment thread was to keep safety drivers busy by giving them a task like labelling objects on the road as pedestrians, cars, etc. Even if it is not that useful, it means their eyes will be on the road when it counts. I mean, we all do that when we do normal driving on boring roads, don't we: try to become more observant, count silly things, to fight boredom?

Pretty sure Uber doesn't give a damn and they see the safety drivers as window dressing. My Bayesian gut tells me it's more likely the safety driver program is systemically weak, not that this driver was a huge outlier.


> You know what they say though, once is a mistake, twice is a habit.

This line of thinking might end up with more people dead by delaying the advancement of SDC's.


All the more reason to get rid of companies like Uber, who endanger the entire self-driving endeavour with their habitual reckless behaviour.


Or maybe Uber got unlucky and they had to be suspended. I wonder how many miles they had driven, and how many miles human drivers go between accidents.


Uber did not get unlucky -- they are simply an evil company. Their cars need human intervention every 13 miles as opposed to Google's every 5,600 miles -- they were not fit for the road but Uber let them out anyways.

Uber should be shut down and the principals should get life without parole.


The numbers you are comparing do not measure the same thing.


Care to expand on that?


Fair judgment but where did you get those figures from?


"As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona, according to 100 pages of company documents obtained by The New York Times"

https://mobile.nytimes.com/2018/03/23/technology/uber-self-d...


It's been posted in related threads, but many, many more human miles are driven per fatality. The problem is that the number of self-driving car incidents is too low to reliably extrapolate from.


Watch the video of the accident. Even though it's dark (it looks to have been manipulated afterwards), you can still judge the actual situation in terms of the position of the lights, the car, and the pedestrian, and see that it was obviously the car's fault: it should have seen the pedestrian with the lidar, or, if the lidar was deactivated, it should have driven at a much lower velocity.


> bad actions of a few

What bad actions? If you mean an AI car killing a person, that's hardly "an action", that's an accident. Classic human-driven cars kill people every day yet are allowed to continue operating. Do people really expect self-driving cars to be not just safer than classic cars but actually 100% safe?

By the way, I myself don't drive at all and only use public transport (and a bicycle to ride the countryside). A major reason stopping me from driving is that I can't logically understand how I can make sure I am not going to kill anybody if I drive at a speed considered normal (rather than something ridiculous like 15 km/h) and somebody may jump out right in front of my car suddenly (which is very far from an impossible encounter).

I am totally car-phobic, yet I really doubt suspending self-driving car operation can make the roads safer. I trust AIs much more than humans when it comes to subjects like this: they can fail occasionally (especially in guaranteed-failure situations, like when a pedestrian emerges right in front of you all of a sudden) but are seemingly less prone to fail than humans are.


>What bad actions?

Uber disabled the safety systems included with the car that was involved in the incident. They relied solely on their own half-baked tech when these systems could have saved the life of the person who was killed:

https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


I remain interested in why the lidar didn't work in this case and I hope more details emerge so we can learn what happened.

But it seems logical that Uber would disable the onboard built-in Volvo crash detection feature, since it would be adding another variable for a car that is intended to test one thing at a time. It's hard to see this solely as the "Uber is being reckless" narrative instead of "maybe this is just how all self-driving cars are tested".

I am happy to be proven wrong certainly


If the Volvo crash avoidance system actually took action then things have obviously already gone very very wrong and it's not like braking when that happens would ever be a bad thing. That's like saying we should remove safety nets from underneath tightropes lest it get in the gymnast's way.


Why do you think the LIDAR did not work? The LIDAR might have worked just fine but what the system taking the output of the sensor did with the data is the question.

If you want to know what we would expect the LIDAR to have seen in such a situation, we did a simple simulation of such a scene here [1].

If the LIDAR was defective, the system processing the sensor output should detect that no data is coming in and change the car's behavior accordingly, not just drive on as if nothing is wrong.

[1] http://www.blensor.org/lidar_accident.html
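
For illustration, the kind of watchdog I mean would look roughly like this. It is purely a sketch, not Uber's (or anyone's) actual pipeline, and the timeout value is invented:

  class LidarWatchdog:
      # Treat the sensor as down if no point cloud has arrived recently.
      def __init__(self, max_silence_s=0.2):  # timeout is an invented figure
          self.max_silence_s = max_silence_s
          self.last_frame_time = None

      def on_frame(self, timestamp):
          self.last_frame_time = timestamp

      def healthy(self, now):
          return (self.last_frame_time is not None
                  and now - self.last_frame_time <= self.max_silence_s)

  def plan_speed(watchdog, now, nominal_speed_mps):
      # Degrade to a minimal-risk behavior (stop / pull over) when the feed is
      # missing or stale, rather than continuing at speed as if nothing is wrong.
      if not watchdog.healthy(now):
          return 0.0
      return nominal_speed_mps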


> Why do you think the LIDAR did not work? The LIDAR might have worked just fine but what the system taking the output of the sensor did with the data is the question.

Very true. I don't know either way.


Thanks. This makes sense then. Why did they do this?


One can only speculate at this point, but the only thing consistent about Uber is their recklessness and disregard for others.


Indeed. Good engineering does not consist of disabling proven safety mechanisms for the sake of ones under development.


One problem with these comparisons is the misinterpretation of statistical data. For general policy making statistics are useful, but for personal decisions you need to take them with a grain of salt and switch on your brain.

Human traffic accident statistics include the totality of reckless drivers, street racers, and drunken idiots. Would you readily be a passenger when they drive? I wouldn't. At the same time, the statistics also include the calmest, best, and most reasonable drivers you've ever met in your life.

I think it's reasonable to expect self-driving cars to always drive better than your average drunk college student with 1 year of driving experience and instead perform more like the best drivers you've ever met. So yes, self-driving cars should perform above the human average.


This incident brings to light the fact that this Uber car is blind; the person did not jump in front of the car from behind some obstacle.

I can't drive a car because of my bad eyes, so it makes sense that when we see blind cars (or cars with blind spots) driving on roads, we don't like it.


Several comments here are making comparisons to human drivers, as if autonomous cars are ok so long as they are at or below parity with humans. This may be statistically true, but it's a complete misunderstanding of human nature.

Human risk tolerance varies drastically depending on control/participation. What is the acceptable casualty rate of elevators?


On top of that, I think it is absolutely rational to expect autonomous cars to perform at the current state of the art. We tolerate certain kinds of bad human drivers, like beginners, because there is hardly an alternative. A self-driving car with the driving skills of a beginner would be completely unacceptable if the state of the art has skills comparable to, say, somebody with a few years of experience.


We don't really know yet what state of the art is for autonomous vehicles. Until we have gathered a bit more data from different companies we can't say anything.

This accident might have been a fluke, it might have been caused by bad engineering, it might have been caused by many things. We can only compare once we have a sizeable sample of incidents, or a long enough time of non-incidents that we're confident the system works well.

And every software update could change something about it.


> Several comments here making comparisons

On one hand, you are (reasonably) calling out comments for questionable comparisons, but on the other hand...

> What is the acceptable casualty rate of elevators?

...you've made one of your own.

I haven't yet formed an opinion on the underlying question (of what kind of risk aversion is appropriate in the face of automation), but I do notice that it evokes questionable comparisons from all sides.


His point was that machines should be held to much higher standards than humans.

This is a point I've been making since the beginning myself. If you want self-driving cars to be accepted, they need to be at least 10x better than the best human drivers. Comments such as "self-driving cars only need to be 50% better than humans on average" are absolute nonsense. Humans will never go for that.


I don't see why they need to be 10x better.

If every car could match the best that humans can do 24x7 that should prevent almost all casualties.

I imagine the best possible driver to be a world-class rally driver who:

1. never drinks

2. never speeds

3. never drives whilst tired

4. drives without emotion/adrenaline (test pilots talk about the importance of having no adrenaline as the best way to perform)

5. Always keeps a 3-4s distance from the car in front

But I believe that once self-driving cars can match humans, which is actually the hard part, they can then become 10x better: similar to airline pilots, they can be trained on extreme events and so can respond much faster in situations than even the best unprepared human could.


That's one of the most important points that most of the statistics miss.

Autonomous vehicles don't need to beat average human statistics. They need to beat statistics based on the best human drivers.


I don't think the risk of death from walking up the stairs is very high... so I'm assuming elevators are about equivalent, because people have definitely died in elevators.


This comment is precisely the point. You assume the risks are equivalent but they aren't even close. Roughly 50x as many people die on stairs vs elevators. Yet more people are afraid of elevators. Even subtly in your comment, notice how you minimize the risks of stairs and underscore the risks of elevators.

Similar thing with ladders. No one would ever get in an elevator if it had the same risk profile as a ladder. So why do we use ladders? Because there is a high degree of control. Humans are more comfortable with risk so long as there is a control/participation element.


Actually, death by elevator is extremely unlikely. Comparatively staircases are death traps.

Source by googling a bit (you can find many more): http://www.lifeinsurancequotes.org/additional-resources/dead... (1 in 10,440,000). That said I wouldn't trust an elevator in China or what you would consider underdeveloped countries (incorrectly called 3rd world) (I have heard various stories of cutting corners and lack of inspection).

Self-driving cars don't need to be 10x better, but they are going to be. Humans introduce a reaction delay of about 1 second in the best case, which is on the same order of magnitude as what it takes a car to go from 60 to 0. Again, as others mentioned above, self-driving cars don't experience fatigue, attention lapses, emotions, degradation of focal vision due to light conditions, etc.
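
For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that claim (assumed values: dry road, hard braking at about 7 m/s²):

  # rough stopping-distance arithmetic with assumed values (dry road)
  reaction_time_s = 1.0        # attentive human, best case
  speed_mps = 60 * 0.447       # 60 mph in m/s (~26.8)
  decel_mps2 = 7.0             # hard braking on dry asphalt

  reaction_distance = speed_mps * reaction_time_s       # ~27 m travelled before braking starts
  braking_distance = speed_mps ** 2 / (2 * decel_mps2)  # ~51 m to actually stop

  print(round(reaction_distance), round(braking_distance))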

Uber tried to cut corners, from what is apparent [to me and quite a few others, I guess]. I am really trying hard to imagine a scenario where the LiDAR would have missed the pedestrian. To my knowledge there are no stealth-material-dressed pedestrians out there, no matter how cool that idea would be for a movie. Check YouTube for videos of the Uber car's behaviour in San Francisco and you will see issues from a year ago. Driving conditions in Phoenix are much better (personal experience; I have witnessed self-driving cars in both cities). Uber needs to drastically improve their quality assurance and test scenarios. (I am not trying to reach any conclusions and pass judgment without having all the evidence at hand, even though my comments are extremely critical.)

I think it would be a good idea if Waymo, Uber, etc. proposed a standard for adoption with different levels, an independent committee argued over and improved it, and we reached a common list of milestones. Otherwise state governments are making decisions on a whim. Of course that path is a public relations nightmare, as they will have to accept existing issues. Nobody wants to write down that there is a 0.001% chance a pedestrian will die in a given scenario, even if with a human driver behind the wheel the chance would be higher; for instance 1%, 10%, or even 50%.


Why 10x better? Hypothetically, if you could replace every manually operated car today with a self-driving car and this reduced collisions by .01%, wouldn't you want to do it?


We should strive to be better as a society.

(not that it applies in this case. Uber's cars compare disfavourably to human drivers)


Yes. An extremely valid point, yet somehow overlooked in this discussion until now.

But can you explain how, if it is statistically true, it can have a different real-world outcome?


Dunno; whether it's someone or something driving my cab, I'd go with the one that was safer.

I note elevators used to be controlled by human operators - you don't get much of that these days https://en.wikipedia.org/wiki/Elevator#Manual_controls


> “Improving public safety has always been the emphasis of Arizona’s approach to autonomous vehicle testing, and my expectation is that public safety is also the top priority for all who operate this technology in the state of Arizona,” Mr. Ducey said in his letter. “The incident that took place on March 18 is an unquestionable failure to comply with this expectation.”

This after the police claimed no fault.[1] Not quite sure how those two statements square...

[1] http://fortune.com/2018/03/19/uber-self-driving-car-crash/


The investigation isn't complete, but Uber did disable the car's own safety features [0].

Both Volvo and Intel tested their software against the low-grade video that was released, and it was able to detect the impending accident and reduce the impact to where the pedestrian would likely have survived.

[0] https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


Uber did disable the car's own safety features

Well, of course they did. Can you imagine testing a self-driving system when another system is also controlling the car? What happens when they send contradictory commands? Race conditions are bad enough outside of actual cars!


They disabled a system we know works, for one that failed. No redundancy. If that is the case, then Uber can absolutely be held accountable for not taking enough safety precautions.

These aren't just test cars. Uber and Volvo's deal was around Uber selling 'enhanced' XC90s. Why not design around the product you purchased?

> Can you imagine testing a self-driving system when another system is also controlling the car?

You mean external forces such as a driver? Which is expected and required at this point?


I don't pretend to be an expert, but generally adding multiple layers of control on the same axis adds its own failure points. Having a human that can override the automatic system is not an argument for leaving that enabled - it's an argument against it!

Personally, I think Uber simply shouldn't have tested it on public roads at this point.


> generally adding multiple layers of control on the same axis

They wouldn't be. Subsumption architecture has been the usual method for coupling these things together since the '80s.

Individual reactive modules that may be 'dumb', like AutoBrake, arranged in a hierarchy of response.

Redundancy is essential for safety.
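
As a rough illustration only (not how Uber or Volvo actually structure their stacks), a subsumption-style arbiter can be sketched in a few lines of Python; all names and thresholds here are hypothetical:

  from dataclasses import dataclass
  from typing import Callable, Optional

  @dataclass
  class Behavior:
      name: str
      priority: int                               # lower number = more authority
      propose: Callable[[dict], Optional[dict]]   # returns a command, or None if it doesn't fire

  def arbitrate(behaviors, sensor_state):
      # subsumption-style: the highest-authority behavior that fires wins
      for b in sorted(behaviors, key=lambda b: b.priority):
          cmd = b.propose(sensor_state)
          if cmd is not None:
              return b.name, cmd
      return "idle", {"throttle": 0.0, "brake": 0.0}

  # hypothetical layers: a dumb emergency brake subsumes the high-level planner
  emergency_brake = Behavior(
      "emergency_brake", 0,
      lambda s: {"throttle": 0.0, "brake": 1.0} if s.get("obstacle_m", 999) < 15 else None)
  planner = Behavior(
      "planner", 10,
      lambda s: {"throttle": 0.3, "brake": 0.0})

  print(arbitrate([planner, emergency_brake], {"obstacle_m": 8}))
  # -> ('emergency_brake', {'throttle': 0.0, 'brake': 1.0})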


OK, let's say the Volvo system gets precedence. How will the Uber system ever learn how to behave when the Volvo system is no longer in place?

If we were talking about a production system, I would agree, but I don't think this is appropriate for the training phase.

Redundancy is only essential if the potential for failure is unacceptable. If you remove that (by not driving in public streets), it isn't.


Saving lives is more important than training the self-driving system. Disabling safety systems for the sake of training isn't acceptable.

That said, how does Uber improve the self-driving system in response to any failure? Perhaps by training on recorded data? They must have ways to train the system in addition to waiting and hoping the car encounters the same situation again.


It will learn how to behave by treating an activation of the redundancy systems as a failure.

One of the big lessons of the Therac fiasco - your interlocks need to log every time they are activated.
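
A minimal sketch of that lesson (hypothetical names): every interlock activation is logged and recorded as a failure case for the system under test, rather than silently absorbed:

  import logging

  logging.basicConfig(level=logging.WARNING)
  log = logging.getLogger("interlock")

  failure_cases = []  # scenarios to feed back into offline training and testing

  def on_interlock_triggered(name, snapshot):
      # log every activation (the Therac-25 lesson) and treat it as a failed test
      log.warning("interlock %s fired: %s", name, snapshot)
      failure_cases.append({"interlock": name, "snapshot": snapshot})

  on_interlock_triggered("auto_brake", {"speed_mps": 17.0, "obstacle_m": 9.5})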


> Redundancy is only essential if the potential for failure is unacceptable. If you remove that (by not driving in public streets), it isn't.

In which case we wouldn't be having this conversation, Uber wouldn't be under investigation, and someone wouldn't be dead.


They should ditch the human from the car too. What happens if the human gives a contradictory command? More seriously, as was already pointed out elsewhere: human > car safety features > AI. So it is pretty clear what to do when the car wants to engage emergency braking.


Car is likely to hit object. Car hits brakes.

Not to say on a closed course you might not disable it, but fundamentally new self driving tech on open roads? Redundancy is an important part of reducing risk.


Two different and unrelated agencies and levels of government with different incentives.

The state has strong precedence over local authorities though, so whatever some garbage local keystone kops have to say is kind of irrelevant here. Local coppers are known in general to have a bias in favor of motorists for obvious reasons...


Good riddance. But I think this is not enough to send a strong message to all those who want to put half baked tech out there in hopes of "disrupting" something..

Also, I really hope all self driving testing is suspended until there is sufficient legal and testing ground work before these things are allowed on roads...


It might surprise you that we were testing (selectively) autonomous cars on the autobahn in Germany in 1991 as part of the Prometheus project. Our tech was nowhere near as good as what's available now and so we compensated by having attentive drivers hovering over the E-stop button.

I don't think that stopping testing over a single crash is warranted. Maybe suspend it for a short time while you look into it, but if we stopped medicine every time a patient died in a trial, society would be worse off.


> if we stopped medicine every time a patient died in a trial, society would be worse off.

What on earth do you mean? Clinical trials are stopped all the time when people have adverse reactions & die. And people who die in clinical medical trials are volunteers, consciously aware of their participation, and having signed a waiver. In the U.S., medications may not be sold or administered to the public before being approved via trials. If Uber had to meet the same standards as the medical industry, Elaine Herzberg would be alive, and we wouldn't see autonomous vehicles on the roads for several more years.


Also- a clinical trial (usually) can't get out on the streets and run someone over if it doesn't work as expected.


Drug trials are done on volunteers with informed consent.

Testing on public roads is like kidnapping random people and forcing them to take the drug.


I am not asking to stop it for all time.

Just devise proper tests that deem a self-driving vehicle worthy/competent enough to be on the road, with a normal, average human being behind the wheel.

And then have every single self-driving vehicle pass them before it can be put on the road. Is that too much to ask?

I am just asking for a driving test for self-driving tech. Humans undergo driving tests. Why give machines a free pass? As we have seen here, having a human on standby just won't cut it..


So if the tech can pass a human driving test, is that ok? Because those are a very low bar in some countries (like the US).


>So if the tech can pass a human driving test, is that ok?

That would be dumb, right?

There should be separate tests designed for self-driving vehicles, and every single SDV should pass those tests consistently before being put on the road...

Until such laws and tests are devised, all testing of SDVs on real roads should be suspended..


Yes, that was my point. Driving tests for humans are awful, mostly because we expect humans to only become better with experience. But with a self driving system such an expectation would be misplaced.

You'd need a driving test that has sufficiently many difficult situations, hopefully somewhat randomized (so that the cars aren't just trained for this cycle), and in various weather/environmental/situational conditions (snow, ice, rain, desert, dark/light, partially broken street lighting, oncoming traffic with broken headlights, etc.)
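
As a sketch of what "somewhat randomized" could mean in practice (illustrative categories only, not a real certification suite), the test harness could draw a fresh subset of scenario combinations each cycle so vehicles cannot simply be tuned to a fixed, known exam:

  import itertools, random

  WEATHER = ["clear", "rain", "snow", "fog"]
  LIGHT = ["day", "dusk", "night", "broken_street_lights"]
  HAZARDS = ["jaywalking_pedestrian", "cyclist_no_lights",
             "oncoming_broken_headlights", "stalled_vehicle"]

  def sample_scenarios(n, seed=None):
      # every combination exists; each test cycle draws a different random subset
      rng = random.Random(seed)
      space = list(itertools.product(WEATHER, LIGHT, HAZARDS))
      return rng.sample(space, k=min(n, len(space)))

  for weather, light, hazard in sample_scenarios(5):
      print(f"run scenario: {hazard} in {weather}, {light}")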


Precisely. Disruption can be a wonderful force for change and innovation, but "minimum viable product" is an astronomically higher bar when injury & death are the default failure modes.


I don't know much about the logic used in self-driving cars - but I do wonder how they will handle roundabouts. We have a lot of them here (Australia) and you have to give way to a car that is approaching or entering the roundabout on your right - which often is opposite you. How will a self-driving car cope with detecting a vehicle behind the concrete barrier of a roundabout?


No, the roundabout rule is that a vehicle entering a roundabout must give way to any vehicle already on the roundabout, or a tram that is entering or approaching the roundabout.

See the Australian Road Rules, Part 9: https://www.pcc.gov.au/uniform/Australian-Road-Rules-19March...

If you are approaching the roundabout and someone enters it from your left before you get there, you have to give way to them.


Things like that may vary from country to country, though. In Austria, roundabouts do not have a special rule and therefore a car approaching the roundabout theoretically has the right of way. But pretty much any roundabout has a yield sign, I've never encountered one without a yield sign at the entrance. Doesn't mean that there can't be a roundabout that does not have one.


As you described, the default is to yield, and so the vehicle can always yield and signal its intention to do so by braking slowly. And I agree, there may not always be a sign (a storm or a driver could knock it over), so the default should always be to yield.


Isn't this what the original comment is saying though? Considering the cars are driving on the left-hand side of the road, cars in the roundabout are approaching from the right with respect to the vehicle, not the left. Or am I misunderstanding something unique for roundabouts in Australia? The way you describe it seems opposite of intuition that vehicles already within the roundabout do not have right of way.


The original comment says "you have to give way to a car that is approaching or entering the roundabout on your right", whereas the rule is that you have to give way to all cars already on the roundabout, without regard to whether they approached from your right or left. If they got into the roundabout before you got there, you have to give way.


This sounds really weird. Sweden also has roundabouts, but I do not have to yield to everyone on the roundabout, only the ones I will interfere with. Whether someone enters after or before me is really of no consequence. I am basically not allowed to enter if I will be obstructing someone's path of travel, with the additional caveat that both lanes of a multi-lane roundabout have to be free, since anyone in the roundabout is free to switch lanes.


>I am basically not allowed to enter if I will be obstructing someone's path of travel,

Yes, you just described how yielding works everywhere. You enter unless you will block someone. There is nowhere in the world where you yield to someone who isn't going to hit you; that's just called a stop sign.


In Australia you can not change lanes in a roundabout.


That's not true

> You can change lanes within a roundabout, but you must indicate and give way to other vehicles. http://www.roadrules.rsc.wa.gov.au/road-rules/roundabouts

edit: and NSW too, to make it clear this isn't just "oh WA"

> Be careful if changing lanes in a roundabout, particularly when leaving http://www.rms.nsw.gov.au/roads/safety-rules/road-rules/roun...


> you have to give way to a car that is approaching or entering the roundabout on your right - which often is opposite you. How will a self-driving car cope with detecting a vehicle behind the concrete barrier of a roundabout?

How does a human driver detect a vehicle behind a concrete barrier?

Either humans can see it and the intersection makes sense, or humans (and machines) can't see it and the whole setup does not make any sense.


In Mountain View, I once saw a Google/Waymo car trying to cross traffic and turn left onto Rengstorff Ave, which is a pretty busy street in the evenings. As a human driver, I wouldn't know how to do it unless there was a miraculous break in traffic; I would have just turned right, gone the opposite direction, and made a U-turn elsewhere. So like you, I'm curious how it handled that situation.


In Europe we have a lot of roundabouts that work the opposite way, priority is given to anyone inside/exiting the roundabout, which would seem easier to handle for automated systems.


Also in Europe it depends on what road signs are at the roundabout. Roundabouts have no special status in my country and can be seen as "T" intersections connected to a circle. In most cases there is a yield sign for the cars approaching the roundabout, so the cars already on it have the right of way, but that is not the case for all roundabouts. Also, some roundabouts allow exiting from the two rightmost lanes, others just from one; it also depends on signage.


I think that's what is meant, remember Australians drive on the left.


If they are still on the other side of the roundabout (and there are no other cars), can't you safely enter the roundabout anyway?

My understanding is that the goal behind a roundabout is to create an intersection where any vehicles that enter are traveling in a similar direction, improving the capability of people to avoid crashing and reducing the seriousness of crashes that do occur between vehicles.


I wonder how long the investigations are going to take; they usually take months to a year in these cases. Or they have for Tesla, at least.

The political side is interesting to look at as well. The governor directly issued the order, he is up for election this year [0], and he played up the positive PR of Uber and others coming to the state.

0 - https://en.m.wikipedia.org/wiki/Arizona_gubernatorial_electi...


Amen. Uber's fatal accident rate is now 50x that of sober human drivers. At their current rate of driving, they can't reach a _lower_ rate until 2028! An inexcusable failure of their technology to prevent a human death in the most basic collision-avoidance scenario.


Before the accident they were infinitely better than a sober human driver.

Understand statistics before employing them please. We don't have enough data and a single data point doesn't change that.


It is not 1 data point though. Say you shot at a target 100 times and hit only 1 time: is that only 1 data point from which I can't draw any conclusion?

Similarly, if you drive 1 million km and kill 1 person, and I drive 10 km and kill 1 person, is that still 1 data point from which I can draw no conclusion? I think it would have been 1 data point if this was the first km an Uber self-driving car had driven.


>> Similar if you drive 1 million km and kill 1 person, if I drive 10 km and kill 1 person is still 1 data point and I can make no conclusion?

Yep. Because I still have another 9 m km to go before I've driven as long as you have and there is no way to know whether I'm going to kill another 9 people, or 0 more, until I've actually driven them all.


You are wrong; there is a conclusion we can make. The conclusion is not absolute but fuzzy, so maybe fuzzy logic is not your thing.

Also, you have a mistake in your comment: I would still have to do 999,990 km of driving. If I killed a person in my first 10 km, what is the probability that I won't kill anyone in my next 999,990?

Your point is that I can't be 100% sure, and that is true, but we can compute the probability, and the probability that I simply had bad luck is very small. If the probability of killing 1 person in 1 million km is 1, or 100%, what is the probability of killing that person in my first 10 km? (You are correct that it is not 0.)


I misread the numbers in the original post. But what you say in your comment- well, that's not how it works.

To assess the risk posed by a driver, you wouldn't follow them around, constantly logging the miles they drive, counting the deaths they cause and continuously updating the probability they will kill someone, at least not in the real world (in a simulation, maybe). Instead, what you'd do is wait for enough people to have driven some reasonably significant (and completely arbitrarily chosen) distance, then count the fatal accidents per person per that distance and thereby calculate the probability of causing an accident per person per that distance. That's a far more convenient way to gather statistics, not least because if you take 1000 people who have caused an accident while driving, they'll each have driven a different distance before the accident.

So you might come up with a figure that says "Americans kill 1.18 people every million miles driven" (it's something like that, actually, if memory serves).

Given that sort of metric, you can't then use it for comparison with the performance of someone who has only driven, say, 1000 miles. Because if you did, you would be comparing apples and oranges: 1 accident per 1000 miles is not on the same scale as ~1 accident per million miles. There's still another 999k miles to go before you're in the same ballpark.

And on that scale, no, you can't know whether an accident in the first 1000 miles will be followed by another in the next 1000 miles. Your expectation is set for 1 million miles.

It's a question of granularity of the metric.


Do you have any math to back up the claim that what I said is wrong? I can try to explain my point better, but I see you are ignoring the math, so maybe I should not waste my time. (We can reduce the problem to balls in a jar and make things easy.)

But think about this: if I killed a person in my first 10 km of driving, what is the chance that I will kill 0 over the next 999,990? Would you bet that I will kill 0, 1, or more than 10?


I think what you mean by "maths" is "formulae" or "equations". Before you get to the formulae, you have to figure out what you're trying to do. If the formulae are irrelevant to the problem at hand you will not find a solution.

As to your question- I wouldn't bet at all. There is no way to know.

Here's a problem from me: I give you the number 345.

What are the chances that the next number I give you is going to be within 1000 numbers of 345?


Your problem is not equivalent to what we were discussing; you need to change it a bit, like this:

I draw random numbers from 0 to Max and I get 345. What is the probability P that the next number N is within 100 of 345?

P = 200/Max, under the assumption that Max > 445.

For self-driving cars, the probability that a car kills a person per 1 km of road driven is unknown, so you can call it X.

Now, my self-driving car killed a person in its first 10 km. The probability that a single random event lands in the first 10 km out of 10^9 km is 10^(-8).

Say the self-driving car has a probability of killing N people per 10^9 km; these are random, independent events, so the probability that a kill happens in the first 10 km is N*10^(-8).

I hope you notice my point: we can measure something; we do not need to wait for 10 or 100 people to be killed.

We are not sure, but we can say there is a very small chance that I will not kill another person in my next 999,990 km.

Let me know if my logic is not correct; in statistics it is easy to make mistakes.
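
To make that concrete, here is a small sketch of the same point under a Poisson assumption, using the hypothetical rate from this thread (1 fatality per million km):

  import math

  # assumption: fatalities follow a Poisson process at 1 per 1,000,000 km
  rate_per_km = 1 / 1_000_000

  def p_at_least_one(distance_km, rate=rate_per_km):
      return 1 - math.exp(-rate * distance_km)

  print(p_at_least_one(10))       # ~1e-5: a death in the first 10 km would be a huge fluke
  print(p_at_least_one(999_990))  # ~0.63: chance of at least one death over the remaining distance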


> Understand statistics before employing them please

Do you understand statistics?

https://news.ycombinator.com/item?id=16685929


I wonder if this type of event is modeled by Poisson processes and measured by MTB(A?)


reminds me of this xkcd https://xkcd.com/605/


A fatal accident at 50x the rate of a sober human driver with a study size where N = 1.


I'll have a go at seeing what we can conclude from the data. Others, check my thinking please. Now we have 1 death in 3m miles for Uber, versus 1.18 deaths in 100m miles for sober drivers.

The expected rate for 100m miles for Uber is 33.333...

But how confident can we be? To answer that let's compute a poisson confidence interval around that rate, as in https://stats.stackexchange.com/questions/10926/how-to-calcu....

Let's see what a 95% confidence interval for 1 death in 3m miles looks like:

  > poisson.test(1,conf.level = 0.95)$conf.int
  [1] 0.02531781 5.57164339
  attr(,"conf.level")
  [1] 0.95
Multiply that by 33.333 to convert to deaths per 100m miles:

  > 33.333333*0.02531781
  [1] 0.843927
  > 33.333333*5.57164339
  [1] 185.7214
  
So 95% confidence that the rate per 100m miles is from 0.84 to 185.72. That's pretty wide! And since the lower bound crosses 1.18, the difference is not significant at the .05 level (if we must make that particular comparison). However, let's look at 90% CI:

  > poisson.test(1,conf.level = 0.9)$conf.int
  [1] 0.05129329 4.74386452
  attr(,"conf.level")
  [1] 0.9
  
Which gives a CI of 1.71 to 158.13. So with 90% confidence we can say Uber is less safe than sober drivers. Ok.

Now let's look at 93% CI:

  > poisson.test(1,conf.level = 0.93)$conf.int
  [1] 0.03562718 5.17251332
  attr(,"conf.level")
  [1] 0.93
 
That gives a CI of 1.188 to 172.417. The lower bound being just a bit worse than sober drivers.

So we can conclude with 93% certainty from this data that Uber is less safe than sober drivers. Probably a LOT less safe. Although the CI is really wide, this is shocking data for Uber, in my opinion.


> 1.18 deaths in 100m miles for sober drivers

> Uber is less safe than sober drivers

But the 1.18 deaths in 100m miles is for all drivers, not just the subset of sober drivers. Not quite sure why you are claiming it is only sober drivers.


Erm... I don’t think statistics work like this. You can‘t go and pick a confidence level that „confirms“ your desired outcome.

People with more knowledge about statistics than me might be able to explain why.


Statistics works exactly like this. What doesn't work is saying "Okay, we have one death in 3 million miles, that extrapolates to 33 deaths in 100 million miles", because it implies a silent addition of "with nearly 100% certainty", which is the part that's wrong here.

But the poster did something different. He took it one level further and attempted to calculate this confidence number for different spans in which the actual "deaths per 100 million miles" number of Uber's current cars would fall into, given an ideal world (from a data perspective) in which they would have driven an infinite amount of miles. But he actually did it the other way round - he modified the confidence variable and calculated the spans, and then he adjusted the confidence until he arrived at a span that would put Uber's cars just on par with human driving in the best case.

The fact that a fatal incident happens that early (at 3 million, and not closer or past the 86 million that a statistical human drives on average until a fatal incident occurs) does not allow us to extrapolate a sound number per 100 million miles, but it tells us something about the probability by which the actual number of fatalities by 100 million miles that we'd get if Uber continued testing just like it did and racked up enough miles (and killed people) for a statistically sound calculation will fall into different margins. Sure, Uber could have been just very, very unlucky - but that's pretty unlikely, and the unlikeliness of Uber's bad luck (and conversely the likeliness of the fact that Uber's tech is just systematically deadly) is precisely what can be calculated with this single incident.


The statement "with 95% confidence" is a classic misinterpretation of what a CI is, the assumption of Poisson is dubious but there's no obvious plausible alternative. Overall seems reasonable.


Hello! I'd be interested to hear what you think the correct interpretation of these CIs is in this case. Failing that, can you explain what is wrong with saying something like "with xx% confidence we can conclude that the rate is within these bounds"?

The assumption of using Poisson seems pretty solid to me, given we are talking about x events in some continuum (miles traveled in this case), but always happy to hear any cogent objections.


The Poisson distribution assumes equal probability of events occurring. That seems to me to be an oversimplification, given that AV performance varies over time as changes are made, and also given that terrain / environment plays a huge factor here, whether looking at one particular vehicle or comparing to vehicles across companies (and drivers in general). Since AV performance will hopefully be improved when an accident occurs, we also cannot meet the assumption of independence between events. Although if AVs are simply temporarily stopped after an accident, that also breaks the independence assumption as we'd have a time period of zero accidents.

The bigger problem though is what you are doing with your confidence interval. A CI is a statement about replication. A 95% confidence level means that in 100 replications of the experiment using similar data, 5 of the generated CIs -- which will all have different endpoints -- will _not_ contain the population parameter, although IIRC this math is more complicated in practice, meaning that the error rate is actually higher. As such, if you generate a CI and multiply the endpoints by some constant, that's a complete violation of what is being expressed: there is vastly more data with 100m driving miles than 3m miles, which will cause the CI to shrink and the estimate of the parameter to become more accurate. There is absolutely no basis for multiplying the endpoints of a CI!

Ultimately, given that the size of the sample has an effect on CI width, you need to conduct an appropriate statistical test to compare the estimated parameters between the 1 in 3m deaths for Uber and whatever data generated the 1.18 in 100m deaths for sober drivers. There's a lot more that needs to be taken into account here than what a simple Poisson test can do.

For an analysis of how AVs with various safety levels perform in terms of lives saved over time, I recommend https://www.rand.org/blog/articles/2017/11/why-waiting-for-p...

Edit: Note the default values of the T and r parameters when you run poisson.test(1, conf.level = 0.95), and also that the p-value of the one-sample exact test you performed is 1. Also, since this is an exact test, the rate of rejecting true null hypotheses at 0.95 is 0.05, but given my reservations about the use of a Poisson distribution here, I don't think that using an exact Poisson test is appropriate.


To be more clear, when you run poisson.test(1, conf.level = 0.95) with the default values of T and r (which are both 1) you are performing the following two-sided hypothesis test:

Null hypothesis: The true rate of events is 1 (r) with a time base of 1 (T).

Alternative hypothesis: The true rate is not equal to 1.

The reason that you end up with a p-value of 1 is because you've said that you've observed 1 event in a time base of 1 with a hypothesized rate of 1. So given this data, of course the probability of observing a rate equal to or more extreme than 1 is 1! As such, you're not actually testing anything about the data that you claim you are testing.

I'm not trying to be harsh here, but please be careful when using statistics!


Ok I re-ran setting T properly for both cases. The results were similar:

  > poisson.test(c(1, 11800), c(3, 1000000), alternative = c("two.sided"), conf.level = .93)

  Comparison of Poisson rates

  data:  c(1, 11800) time base: c(3, 1e+06)
  count1 = 1, expected count1 = 0.035403, p-value = 0.03478
  alternative hypothesis: true rate ratio is not equal to 1
  93 percent confidence interval:
     1.006334 146.142032
  sample estimates:
  rate ratio 
    28.24859
The lower bound of the CI approaches a rate ratio = 1 for a 93% confidence interval.

Interestingly, if you multiply the CI I claimed before by the rate ratio instead of the expected rate, you get almost exactly the same CI as here.

  > ci <- c(0.03562718, 5.17251332)
  > 28.24859 * ci
  [1]   1.006418 146.116208
 
* Note 11800 is about two years of pedestrian deaths and time units are in millions of miles. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...


Fascinating, thank you. Particularly the part about multiplying the CI. I wonder if the analysis could be rescued to some extent? I feel there must be a way to use the information we have to draw some conclusions, at least relative to some explicit assumptions.


No. 3 million miles of observation. You can get a pretty exact and conservative estimate with a Bayesian Poisson process model. I don't have the time to run the numbers right now, but my guess is the posterior probability that Uber's fatal accident rate is higher than a human's is >90%, even if taking the human accident rate as a starting prior.
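
Not the parent's actual model, but a minimal conjugate Gamma-Poisson sketch of what such an estimate could look like (rates in fatalities per million miles; the 1 death / ~3M miles and 1.18 / 100M miles figures come from this thread, and both priors are assumptions):

  from scipy.stats import gamma

  HUMAN_RATE = 1.18 / 100           # fatalities per million miles
  UBER_DEATHS, UBER_MILES_M = 1, 3  # 1 death in roughly 3 million miles

  def p_worse_than_human(prior_shape, prior_rate):
      # Gamma prior on the rate, Poisson likelihood -> Gamma posterior
      post = gamma(a=prior_shape + UBER_DEATHS,
                   scale=1.0 / (prior_rate + UBER_MILES_M))
      return post.sf(HUMAN_RATE)    # P(uber_rate > human_rate | data)

  # weak Jeffreys-style prior: the data dominates, probability comes out ~0.99
  print(p_worse_than_human(0.5, 1e-6))
  # strong prior centred on the human rate (as if 100M human-equivalent miles
  # had already been observed): the probability drops to roughly 0.7
  print(p_worse_than_human(1.18, 100))

How strongly you weight the human-rate prior changes the answer a lot, which is where much of the disagreement in this subthread seems to come from.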


I thought Uber had to have a human take over every 13 miles.

It’s more like 10 miles of observation 300,000 times. Or rather an attentive human can be 50x better than average.


I'd be very interested in seeing the math if you have the time later.


95% - erring on assuming Uber has driven more miles than they probably have.

https://news.ycombinator.com/item?id=16621118


Hmm; if I understand correctly, in that link you show that if Uber’s AI has the same risk of killing people as a human driver, then the prior probability of an accident occurring when it did or earlier was 5%. That’s significant, but it’s not the same measure as the probability that the AI has a higher risk (which would require a prior distribution).


It's a reasonable gut feeling to not generalize from n=1, but the numerical evidence - with either a Bayesian or frequentist approach - is actually quite strong and statistically significant. Math here: https://news.ycombinator.com/item?id=16655081


That's not right. You're setting your expectation for N = 100m miles, then updating it for N = 3 million miles?

That's like saying: "I rolled this red d20 twenty times before I rolled a 1, whereas I rolled a 1 the first time on this blue d20, so the red d20 is obviously better and I'm rolling all my saves on it".

Or, I don't know- "I rolled three 1s on this d20 in twenty rolls so it's obviously not a fair d20".


Can you clarify? What do you believe to be wrong and why?

If you have a strong prior the dice are equivalent, then yes, the rolls shouldn't change your mind.

If you have a prior that the dice are weighted in an unknown way, then yes, the rolls really should change your mind.


What is it compared to human drivers in general? That seems to be the fair comparison as "computer will never be drunk/tired/distracted" is usually cited as one of the benefits of computer-driven vehicles.


If you want to be legally allowed on the road, I think your benchmark has to be other drivers that are legally allowed on the road.


Haha, as I joked in another article comment thread, their current crash rate ironically means a somewhat drunk but otherwise defensive driver is probably much safer! What a world.


For all of the theoretical moral questions that have been brought up around self driving cars (the proverbial lever that lets you kill one person instead of a handful, for example), it's interesting/inevitable that this one came up first.

I'm sure lots of people will argue that self-driving cars will, in the long run, save a lot of lives, and therefore we can justify some loss of life in the process. They are probably right about self-driving cars saving more lives in the long run, but I don't know that the conclusion follows.

From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing and would not have been at-fault in the collision. But self-driving cars have more sensors and, in theory, more capabilities to prevent something like this, and we absolutely should expect them to do better.

As someone with a little bit of experience with the machine learning side of things, I understand how this stuff is really hard, and I can imagine a handful of ways the system could have been confused, which makes it all the more important to understand what went wrong and how we could prevent it from happening in the future.

As self-driving cars become more prevalent, things like this will happen and it will be tough to walk the line between over-reacting to every incident and killing progress in this area and designing safe cars that don't make the problem worse. I think it is prudent to have Uber stop testing on public roads until they can explain how this happened and how they can keep it from happening again.


Based on what came out over the last few days, I don't think that's clear at all; I think a human would not have hit the woman.

The footage we saw was from a terrible dashcam and did not represent how dark the street actually was. Other people who have driven down the street with cameras show it to be perfectly reasonably lit, with streetlights and everything.

Second, the woman was crossing two lanes, so any driver watching would have already seen her cross the left-hand lane. It's not like she suddenly popped out from behind a tree. She wasn't even going that fast.

This seems to be a massive failure of Uber’s system as well as a complete failure of the ‘safety’ driver.

I don’t think anyone’s overreacting. I think they’re under reacting. I think it’s pretty clear over was heavily negligent. There’s no way they don’t get sued. What comes out at trial is going to be up amazing, I think.

I think if this had been a “real” accident that would’ve been almost impossible to avoid, around it WAS impossible to avoid, or one where the human stepped in and it wasn’t good enough… We would be having a discussion we’re like what you’re saying. Maybe if this was what emai I think if this had been a “real“ accident that would’ve been almost impossible to avoid, around it WAS impossible to avoid, or one where the human stepped in and it wasn’t good enough… We would be having a discussion we’re like what you’re saying. Maybe if this was Waymo or one of the other comapnies.

But it looks like the dubious honor of first death by a autonomous car was totally unnecessary and preventable.


I think both these things are true:

Something was very wrong with how the Uber car handled this.

A human driver might well have avoided hitting the pedestrian, but would not have had any legal trouble if they had hit her.


The pedestrian also has a significant contribution of negligence to this collision, IMO.


This is an example of the strong tendency to blame pedestrians for most to all pedestrian-vehicle collisions. In reality, pedestrians are generally at the bottom of the list for blame, behind even the engineers who designed the roads in the first place.

In this situation, there is no evidence that the pedestrian did anything dangerous. She was crossing in what amounts to an unmarked crosswalk. When struck, she was well advanced in her crossing motion, and there's no sign that she stopped or otherwise dallied in the middle of traffic. The dashcam footage indicates a car some distance ahead: it is not unreasonable that she waited for the car to pass, checked that the traffic was clear, and started crossing.

If you object to the idea that it's an unmarked crosswalk: the median has a paved surface that has no other apparent purpose than as pedestrian walkways, and is otherwise inaccessible to pedestrians except by crossing the street at that point. I would not be surprised to find out that the crossing time at the light is insufficient for anyone other than a fit adult walking briskly. For someone walking at a much slower gait, it is actually safer to cross at a place with a very ample pedestrian refuge. Indeed, the presence of the no pedestrian signs actually suggests that the crossing point is more useful than the "legal" route, and quite possibly safer too (given the tendency of many DOTs to consider road improvements only on the basis of vehicle efficiency, even to the point of active hostility against pedestrians). If that crossing point is fundamentally dangerous for a pedestrian, it's not the pedestrian that is to be blamed, it's the traffic engineer who built a deathtrap.


> If you object to the idea that it's an unmarked crosswalk

I do. I've never seen another crosswalk with a "no pedestrians" sign.


In California, that's a legal crosswalk and the pedestrian gets right of way. In Arizona, it is not. I'm not sure a pedestrian even gets the right of way in a marked cross walk at all times in Arizona. Different state, different culture, different laws.

We had local PD actually pull a very similar sting operation that had people complaining recently. They had a police officer with a baby stroller cross at an unmarked crosswalk and gave tickets to all drivers who did not stop for her to finish crossing the street.

And going back to my teenage years, that was the one ding I had on my drivers' test (again, in California). I had stopped for a pedestrian to cross, but I did not wait until that person was completely across the road before starting again.

Regardless, the video that was shared does not reflect the real world. There have been plenty of photographs, including the one in this article: https://www.bloomberg.com/news/articles/2018-03-26/uber-disa... and this one https://imgur.com/a/PM7uu that show the area as well lit. I have trouble believing that the best quality video that a self-driving car company has available in its vehicles is the one that they produced after the accident. Spending $40k on the car and $80 on a low-end consumer-level dash camera? That doesn't pass the sniff test.


Genuine question: what are the conditions to make a particular piece of pavement a legal crosswalk in California?


Basically - any (right-angled) corner of sidewalk produces a crosswalk going in the directions parallel to both its sides.

Source: https://leginfo.legislature.ca.gov/faces/codes_displaySectio...

"""

“Crosswalk” is either:

(a) That portion of a roadway included within the prolongation or connection of the boundary lines of sidewalks at intersections where the intersecting roadways meet at approximately right angles, except the prolongation of such lines from an alley across a street.

(b) Any portion of a roadway distinctly indicated for pedestrian crossing by lines or other markings on the surface.

Notwithstanding the foregoing provisions of this section, there shall not be a crosswalk where local authorities have placed signs indicating no crossing.

"""


From the driver's handbook:

> Most crosswalks are located at corners, but they can also be located in the middle of the block.

> Pedestrians have the right-of- way in marked or unmarked crosswalks.

> Crosswalks are often marked with white lines. Yellow crosswalk lines may be painted at school crossings. Most often, crosswalks in residential areas are not marked.

Interesting that the legal definition points out right angle intersections specifically. I wonder if that can be overridden by local rules.


So, that's not legally an unmarked crosswalk under California rules, then? (Streets are not meeting at approximately right angles and the local authorities have posted 'no pedestrians'.)


On the surface, no.

But...

The end of that walking path is perpendicular to the roadway. The other end of that walking path is perpendicular to the roadway on the other side. How it behaves in the middle (not perpendicular) is irrelevant. The walking path itself is legally considered a roadway.

The "no pedestrians" sign does not face pedestrians crossing as this pedestrian did, it faces the road.

So... in California drivers would be expected to give the right of way to anyone crossing the street.

Assuming that the argument is won that this was not an unmarked crosswalk - California also has a blanket vehicle code law that requires drivers to "exercise due care for the safety of any pedestrian upon a roadway".

Arizona actually does have a similar law:

1. Exercise due care to avoid colliding with any pedestrian on any roadway.

2. Give warning by sounding the horn when necessary.

3. Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.

--- Everything is arguable. Uber threw the first shot in the PR war by producing a sub-par video and pushing the social media perception that the pedestrian was in the wrong. In a civil courtroom, different lines will be drawn. You'll have statements from Volvo, Google, and the LIDAR manufacturer that say the accident was entirely preventable. You'll have video showing that the time to react by an alert driver at the same time of night was plentiful. You'll have a dead body that was thrown a significant distance. You'll have an undertrained driver, who was without a partner in the first month of that experience. You'll have a driver not paying attention to the road. And you'll have Uber staff testing the vehicle without the full use of its available safety systems.

The question isn't fault. It's how much damage was done to Uber's self-driving vehicle program and self-driving vehicle programs in general, and how much it's going to cost Uber both financially and in PR - both of which can affect future investment, which was already nearing a close.


Are you arguing that the second image here (the one marked 8:27 PM) https://twitter.com/EricPaulDennis/status/975891554538852352 is a perpendicular intersection of the end of a sidewalk with the roadway?

Is there another walkway/roadway intersection that's approximately perpendicular that I'm not seeing? (I have to assume that we're looking at different areas of the map, because I give you enough credit that you aren't arguing that these two characters are perpendicular right at the end: \| )

I agree with your closing notion that there is a question of contributory fault. What started this sub-branch was my argument that the pedestrian's contribution was not 0.00%. I also don't think that the Uber Volvo's share is 0.00%.


It is sufficiently close to perpendicular that I would call it an unmarked crosswalk. If it is not intended to be a crosswalk, it is not the pedestrian's fault to use it as a crosswalk... it is the traffic engineer's fault for designing an unsafe crosswalk.

The pedestrian's contribution may not be 0%, but it is definitely lower than both the traffic engineer's share and Uber's share.


The intersection of the pedestrian walkway (red line) is "close to perpendicular" with the roadway (yellow line)?

https://imgur.com/keA4Xsw

(From Eric's twitter stream, it seems like the collision may have even been at the pink line, which is even farther from perpendicular than the red line.)


That's one point of view, if you're a goodie two shoes who automatically follows and agrees with every law and rule you ever heard of, even if it violates your own basic ethical values.

Another point of view is that the state of Arizona intentionally hates pedestrians and gratuitously creates roads that put their lives at risk when all they're trying to do is get home from the grocery store or whatever, a perfectly reasonable thing we oughtta all have the right to do without having to walk an extra mile out of our way or whatever.

In NYC, which is actually a reasonable jurisdiction in the USA, that dumb fuck Giuliani tried to enforce anti-jaywalking laws one time in the '90s. You know what happened? NYPD cops went on the record telling the NY Times they planned to intentionally leave their ticket books in their precinct lockers so they had no way of writing such idiotic tickets. Police nullification, you could call it. Giuliani immediately backed down and no one in recent memory has tried to do that again in downstate NYS.


Another point of view is that the traffic department realized that this was perhaps an unreasonably dangerous place for pedestrians to cross and posted a sign to that effect, in hopes of protecting pedestrians who might incorrectly judge/perceive the danger.


Pretty much every serious attempt at protecting pedestrians requires building refuges (which already exists) and devising ways to get drivers to actually slow down (as opposed to slapping a lower speed limit sign and hoping they'll do so). If you want to truly dissuade pedestrians from crossing, you need to erect a median barrier.

Merely putting up a sign sounds a lot more like the traffic department dealing with drivers' complaints of annoying jaywalkers in the least effort way possible.


As an analogy, the road design here is similar to a highway built to Interstate standards... with a 25mph speed limit. That speed limit is not going to be honored.


Why? They crossed a 35 mph road (so in Tempe this is a "slow" road) at a corner (unmarked corners are also crosswalks, legally), on a well-lit street, and the car was many seconds away when she started crossing.

Would a human driver be deemed at fault in a court of law? Usually not (though in this case, maybe) unless they kept going after the collision, or were under the influence. But that’s because we give human drivers a lot of leeway to kill people. Not because it isn’t their fault.


I don't live there, but this article suggests that there is signage for "no pedestrians" at the location where she was struck: https://www.curbed.com/transportation/2018/3/20/17142090/ube...

This twitter stream suggests the same with more details: https://twitter.com/EricPaulDennis/status/975889922413551616...

It appears to me that they did not cross at a corner, and the corners "nearby" in the references above all have marked crosswalks.

If that's the case, I don't see how one could assign 0% negligence to the pedestrian.


Ah I missed the part where there was a sign saying “no pedestrians”. In that case I’d agree it’s not 0 responsibility on the walker, but I don’t think I’d assign the woman full responsibility, either. If the driver attempted to slow down and still hit her, the driver (in this case the car) probably wouldn’t really be at fault. But a collision at 40 in a 35 zone is really bad for a driver.


The road where the accident took place is posted at 45 MPH; the "35 MPH" figure came from confusing it with the road in the opposite direction (they are separate there).


As a pedestrian, it's still our responsibility to make sure approaching vehicles are actually stopped before we start crossing. She did not, or she would be alive.


It's certainly prudent to do so, and any amount of self-preservation says you should, but is not checking if you're about to be run over actually negligent?

Edit: Also, if you as a pedestrian, stand at a 35mph road, at night, and wait for all approaching cars to stop before you start crossing, you will be standing there a very long time indeed.


One legal definition of negligence is "a failure to behave with the level of care that someone of ordinary prudence would have exercised under the same circumstances."

If you've established that doing so is "certainly prudent", then I think it's reasonable that not doing so is negligent.


Accidents in complex systems usually happen because multiple safeguards have failed.

We can agree the pedestrian had some degree of negligence by walking across the road in a less than perfectly safe manner.

But the same assessment also applies for Uber, for not reacting to an obstacle that probably should have been detected.

The question right now is what degree of negligence does Uber bear...


> From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing

No, a human would have easily avoided this (unless they were not looking).

The video is much darker than what a person would see.

Even when you can see the bike on that dark video, the car still does not react!


"From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing and would not have been at-fault in the collision."

The camera footage is A LOT darker than what people have filmed there just a day or two after and is probably not a good representation of what it looked like to the human eye.

Even if it was fairly accurate, the car would be HEAVILY and EXTREMELY dangerously overdriving its headlights in a way that humans would not do.


> From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing and would not have been at-fault in the collision. But self-driving cars have more sensors and, in theory, more capabilities to prevent something like this, and we absolutely should expect them to do better.

The footage was artificially darkened, or it was an extremely crappy dash cam, or both. I suspect Uber just tampered with it (to get some sympathy). Someone posted footage from that place a few days later, and it is a very well-lit road. On the Uber footage you can see lamps in the sky, yet it is very dark.

Also note this dashcam is not what the car uses, and the car is also equipped with LIDAR, so it should be able to spot a pedestrian even in pitch black.

> As self-driving cars become more prevalent, things like this will happen and it will be tough to walk the line between over-reacting to every incident and killing progress in this area and designing safe cars that don't make the problem worse. I think it is prudent to have Uber stop testing on public roads until they can explain how this happened and how they can keep it from happening again.

You are missing the point here. Preventing accidents like this should be the thing that AI excels at, even today.

The fact that this failed means that there is something seriously wrong with how Uber is doing it.


How would the footage be tampered with when it was taken by the police directly from the dashcam itself at the scene?

Also, there was no moon that night, and all of the video comparisons that have been put out use low-light cameras. They are not very realistic.

Also, no one seems to think it's a possibility that the agencies responsible for the lighting in this area made improvements immediately after the incident, before the public speculation started.


> all of the video comparisons that have been put out are low-light cameras

If a self-driving car can't see 50 feet at night, maybe they should buy some low-light cameras?

> Also, noone seems to think it's a possibility that the agencies responsible for the lighting in this area to have made improvements immediately after the inicident before the public speculation started.

None of this matters. (I also find it pretty unlikely - the lights shown in Uber's video look plenty close together already to safely light the road. Even on an unlit road, the car's own headlights should've revealed the pedestrian far sooner.)

A self-driving car should be able to see (via the car's headlights, infrared, LIDAR, etc.) a safe distance ahead of the car at all times. The car should detect the unsafe condition and refuse to continue - not mow down pedestrians.

If it can't cope with a moonless, unlit road, it should not be on that road.
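
One way to phrase that requirement concretely (a sketch with assumed numbers, not any vendor's actual check): if the stopping distance at the current speed exceeds the distance the car can currently perceive, the car is overdriving its sensors and should slow down or stop.

  # assumed values: is the car "overdriving" what it can currently perceive?
  def overdriving(speed_mps, sensing_range_m, reaction_s=0.5, decel_mps2=7.0):
      stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
      return stopping_m > sensing_range_m

  # ~40 mph with only 25 m of usable view ahead -> True, so slow down
  print(overdriving(speed_mps=17.9, sensing_range_m=25.0))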


> From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing and would not have been at-fault in the collision. But self-driving cars have more sensors and, in theory, more capabilities to prevent something like this, and we absolutely should expect them to do better.

Intel and Volvo tested against the terrible dashcam footage. If Uber hadn't disabled the XC90's safety systems, this wouldn't have happened. [0]

The systems today can do better.

[0] https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


I'm going to take your post apart because I enjoyed it very much...

> For all of the theoretical moral questions that have been brought up around self driving cars (the proverbial lever that lets you kill one person instead of a handful, for example), it's interesting/inevitable that this one came up first.

You're talking about the "trolley problem", and it is an ethical question as opposed to a moral one. This is literally a fuckup: bad programming or poor design. We are going to have real ethical concerns, because at some point a car is going to have to make a choice between pedestrian and occupant - and I know that my elderly father would choose himself if he were in the car, but I might make a different choice... will the car's ethics ever reflect my own?

> I'm sure lots of people will argue that self-driving cars will, in the long run, save a lot of lives, and therefore we can justify some loss of life in the process. They are probably right about self-driving cars saving more lives in the long run, but I don't know that the conclusion follows.

It is a tough one, and if there is more loss of life along the way, does using a self-driving car become a bit of fruit from a poisoned tree?

https://en.wikipedia.org/wiki/Eduard_Pernkopf springs to mind when you bring this up.

> From what has been released so far, this feels like a case where a human would have had a high likelihood of doing the same thing

Except that when you look at footage of the same road recorded with a decent camera, rather than the crappy, low-quality and very much artificially dark Uber footage, this looks to have been very avoidable. If the guy who was supposed to be there to take over had been looking up right then, we might not be talking about this at all...

> As someone with a little bit of experience with the machine learning side of things, I understand how this stuff is really hard, and I can imagine a handful of ways the system could have been confused, which makes it all the more important to understand what went wrong and how we could prevent it from happening in the future.

We're talking about Uber here... it is also just as likely that they are in way over their heads.

> As self-driving cars become more prevalent, things like this will happen and it will be tough to walk the line between over-reacting to every incident and killing progress in this area and designing safe cars that don't make the problem worse.

When I read this bit I felt compelled to respond - the life of one victim is too many. In the early days of aviation people knew the risks... But this wasn't a test pilot who died; this was a woman walking down the road. I look forward to the NTSB report - they tend to be the adults in these situations.


> You're talking about the "trolley problem", and it is an ethical question as opposed to a moral one. This is literally a fuckup

Right, thanks for the correction. The ethical question I was getting at is "What is the tradeoff of making sure self-driving cars are 100% safe and predictable before we allow them on the road at all, versus the time delay that will cause in which tens of thousands of people will die via human drivers?"

My thoughts behind this post came from reading comments elsewhere stating that the Arizona government was ultimately costing more lives by suspending Uber's program than it was saving. I disagree with that assertion, although it seems like my post came across in not quite the way I intended it.

My knee-jerk reaction to those comments (supporting Uber) is to liken it to advocating for ignoring deaths in Phase 1 of a clinical trial because there is hope that the drug will save more lives - but even then the comparison isn't apt, because people in clinical trials at least consent to some notion of the risk involved.

To me, it is a really difficult, uncomfortable question. And it isn't so much targeted at this particular case - where, as you say, it was more of a fuckup by a company known for not really being careful - but rather at more marginal cases, and there will be plenty of those. Driving is a lot more complicated than people give credit for, especially in areas where you are sharing the road with cyclists and pedestrians, and the ML approaches to self-driving are probabilistic in nature and will be wrong sometimes.

You can look at this crash and say "Simple safety measures available today would have prevented, or at least mitigated, this collision", and you'd be right, but that won't always be the case. When you say "the life of one victim is too many", it feels impossible to disagree with you, but it does also mean that we are likely decades away from having self-driving cars, during which millions will die from cars (worldwide). If you accept the premise that self-driving cars will be substantially safer than human drivers at some point, what is the right thing to do?


Uber and responsibility doesn't seem like a great cultural match.


This seems like a lesson in why government regulation can be good.


Government regulation is what allowed the Uber on the road, so it may be a lesson in why government regulation doesn't work, or provides a false sense of safety.


I specifically mean that Arizona chose to allow self-driving cars on their roads with little regulation and that lack of regulation contributed to the death of a pedestrian.


Yea, I'm really splitting hairs at this point, but you can't use the Uber incident this month as evidence to support the claim that "regulation is good". You could use the incident as evidence that "Arizona's regulation is not good".

We _can_ support the claim "regulation is good" by pointing to other states that did regulation right and seeing how their fatality rates are better than Arizona's.

But yes, I agree with the spirit of the statement, Arizona needs better regulation, and it seems pretty clear that "regulation is good" in this case.


clearly the better alternative would have been no regulation, and uber continuing to operate (???)


I just meant to point out that "regulation" on its own isn't inherently helpful. We shouldn't applaud "regulation" for its own sake, but should consider what the regulation actually is and how it functions when evaluating its value.

Obviously our roads need some regulation, and it seems that the regulation in Arizona wasn't the right regulation.


Based on what I learned about Uber, their self-driving cars probably run on NodeJS.

I'm only half joking.


We detached this subthread from https://news.ycombinator.com/item?id=16684651 and marked it off-topic.


I don't actually know what a self-driving car should run on; I would want to build it in Erlang or Rust - but what do you think a self-driving car should be running on?


Some kind of realtime system. A GC pause of 200ms in a car could easily cause a crash.

Sadly, I fear it may actually be written in C++.


It takes around 100 - 400 msec to blink, so an occasional 200msec GC pause doesn't sound too bad. Though I'd sure hate to be in a car that gets stuck in a 15 second full GC freeze.


> occasional 200msec GC pause doesn't sound too bad

I guess you never used Windows.

But seriously, an occasional 200ms pause is the ideal scenario; if you have some kind of memory leak, those pauses might get longer, etc.

A hard RT system has a defined hard deadline for every operation: if it says it needs to react within 200ms, it will, or that is considered a failure of the system.

This might not be obvious to many people, but as cars become more computerized you no longer control them directly. For example, when you press the brake pedal, it is not connected to the car's brakes by a steel line; instead it just sends a signal to the braking mechanism. That system is hard real-time, and it is designed so that when you press the brake, there is a maximum time within which the car must react. There is no excuse for reacting later because the system happened to be doing something else at the time.

The same should go for autonomous driving. It absolutely HAS to react before a deadline.

Actually, let's assume the GC will always take at most 200ms and the component needs to react within 300ms; if that can always be satisfied, then you can call the system a real-time system.

The problem with GC is that typically you can't guarantee it will finish in 200ms, or even 400ms. You can see this quite often with our desktop computers and servers, which are not RT: many times you see an application that is slow or unresponsive.

The difference is that when GC is taking its time, your website might respond more slowly; it might annoy you, maybe it will even make you swear at the developer of the application/system. When this happens in a car, someone gets killed.
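
To make the deadline idea concrete, here is a minimal sketch (not Uber's or anyone's real architecture) of a periodic control step that treats a missed time budget as a fault rather than a best-effort delay. It assumes a POSIX environment; read_sensors(), compute_brake_command() and enter_failsafe() are hypothetical placeholders, and the 10 ms period / 5 ms budget are made-up numbers:

    /* Hypothetical hard-deadline control loop (POSIX C; link with -lrt on older glibc). */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define PERIOD_NS  10000000L   /* 10 ms control period    */
    #define BUDGET_NS   5000000L   /*  5 ms deadline per step */

    static int64_t elapsed_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
    }

    static void read_sensors(void)          { /* placeholder */ }
    static void compute_brake_command(void) { /* placeholder */ }
    static void enter_failsafe(void)        { puts("deadline missed -> failsafe"); }

    int main(void) {
        struct timespec next, start, end;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &start);
            read_sensors();
            compute_brake_command();
            clock_gettime(CLOCK_MONOTONIC, &end);

            /* In a hard RT system a missed deadline is a fault,
             * not a "we'll get to it on the next tick". */
            if (elapsed_ns(start, end) > BUDGET_NS)
                enter_failsafe();

            /* Wake up at the next absolute period boundary. */
            next.tv_nsec += PERIOD_NS;
            while (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }

On a real vehicle this would run as a fixed-priority task on an RTOS, but the point stands: a deadline miss is detected and handled, never silently absorbed.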


For real-time embedded there are basically 3 options, C, C++ and Rust. And Rust isn't that widely used, so my guess would be on C/C++. You can't have a GC.


What about Ada?


From what I know, Ada is basically a legacy language, mostly used by the military nowadays.


True, I forgot about Ada. While ugly, Ada2012 is a very modern language with a lot of benefits compared to C, especially for embedded control systems programming.

Ada got a bad rap in the past and didn't spread very wide outside US military circles. But it is a very good language. For adoption today it would need some big backing though.


A hard real-time system, meaning every operation has a hard deadline and the car absolutely needs to react before then. No unknown delays because of garbage collection, etc.

Sadly, I suspect that most of these self-driving cars don't do that, because RT is hard.


Batch


Php


VB Script


The blockchain


Excel spreadsheet


O.M.G.


I thought it was Java, and that it had a stop-the-world GC running when it killed her.


This is interesting, but can we avoid posting links to articles behind paywalls?

Here's one on the subject that isn't: http://www.chicagotribune.com/news/nationworld/ct-arizona-ub...


It's helpful to post better URLs, and we've changed the submission to that one from https://www.wsj.com/articles/arizona-governor-suspends-ubers.... Thanks!

Please don't post complaints about paywalls, though: https://news.ycombinator.com/item?id=10178989. They're off-topic, and as long as standard workarounds exist (which users routinely post to these threads), paywalled articles are ok. Not that they aren't annoying; they suck. But HN would be quite a bit worse if all such articles were cleared out of here.


FWIW, complaints about paywalls are generally off-topic, though I agree, a non-paywalled source can be useful.

https://news.ycombinator.com/newsfaq.html


I think courtesy should dictate that paywalled articles be accompanied by at least a brief description. Perhaps a change to the FAQ to recommend such?


Just click the "web" link then click the WSJ link. If you have no cookies from them, it gets you around the paywall most of the time.


Let’s suspend human driving too, since it kills over 100 people per day.


People that drive recklessly and kill people can, indeed, have their license pulled. This is not unreasonable and we'll see what the investigation finds. If Uber did not take needless risks and this was some kind of edge case where everything that could go wrong went wrong, then perhaps it would be acceptable to allow them to resume their testing.

Given Uber's track record, however, I believe we'll see this was more of their "don't give a fuck" brogrammer attitude. When you're only really risking people not being charged a correct fee or not being picked up for a ride it's one thing. Risking public safety is another.


Humans kill 1 person every ~100m miles.

Uber has killed 1 person after ~3m miles.

Yes, I would absolutely suspend a system that kills 3,000 people/day in favor of a system that kills 100 people/day.
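
(Presumably the 3,000/day figure comes from scaling: 1 death per ~100 million human-driven miles versus 1 per ~3 million Uber miles is a factor of roughly 33, and 100 deaths/day x 33 is about 3,300/day, loosely rounded to 3,000.)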


[flagged]


If you're going to claim it's a lie, citing evidence and why it's flawed would probably be a good start...


Out of the 95 billion or so[0] people that have ever breathed air before the year 1900, all are dead.[1] Every year, on average, 500 people die of carbon monoxide poisoning[2]. If that rate had been steady for the whole time humans have been alive, that's a total of 25,950,000 deaths ever from carbon monoxide poisoning.

"Carbon monoxide exposure kills 1 person every 18 hours (500/8766h).

Oxygen has killed 1944 people every 18 hours (108x18)[3].

Yes, I would absolutely replace a gas that kills 155,000 (108x60x24) people/day[3] in favor of a gas that kills 1 person/day."

This is lying with statistics. It's not the right way to use data.

[0]https://www.prb.org/howmanypeoplehaveeverlivedonearth/

[1]http://supercentenarian-research-foundation.org/TableE.aspx

[2]https://www.cdc.gov/mmwr/preview/mmwrhtml/mm6303a6.htm

[3]https://www.cia.gov/library/publications/the-world-factbook/...


Come on now, I was asking clearly about data on Uber's death rate...


And I was specifically referring to the absurdity of comparing Uber's death rate to humans'.


Not very convincing though - genuine neutral party here without a fully formed opinion and I see no reason that's absurd. If self-driving cars (or brand X of them) are killing people at higher rates per mile, that's not good. I don't see how the statistic is an absurd comparison.


Here is the issue with the OG:

1). He has no sources. [0]

2). Sample size for Uber deaths is 1

3). Miles driven is also sampled at 1

2+3). With only one pair, it is a rate, not "rates" of death by Uber. There has been 1 death by Uber, as opposed to the many deaths motor vehicle accidents have caused.

4). Per mile is an arbitrary metric, and in this case a false equivalence due to n=1. One Uber car killed one person, as opposed to millions of cars killing thousands of people. You cannot compare the cumulative results of the many to the single result of the one.

4.1). Per mile is an arbitrary metric. It tells us nothing but how many miles have to be driven, in aggregate, before 1 death happens. How do we measure total miles driven in a practical fashion? We can't. We estimate months later.

4.2) Per mile is an arbitrary metric. It doesn't let us know how quickly deaths happen. Do they happen every 1,000 hours? Every 10 hours? Every 100 years?

6). Comparison is unstandardized. 1 kill per ~100m miles is an aggregate and 1 kill per ~3m miles is an absolute. To normalize the data, you would take all the drivers who killed people, their total aggregate miles driven, and graph them on a standard distribution. Plop Uber in there to get your real likelihood of an Uber killing you compared to a regular human.

I might even do #6 if I can find the data.

[0]https://www-fars.nhtsa.dot.gov/Main/index.aspx


Okay... let’s try this way.

Assume we know a) the total number of miles driven in a year by sober humans and b) the number of traffic fatalities by sober humans over a year. (The year part isn’t actually important for this, it could be for all time as far as this is concerned — what is important is the miles). Your observations are the number of miles. Time doesn’t play into this, but you could do the same calculation by hours driven or number of trips, if you have that data. Hours driven and mileage driven are going to be pretty well related, so let’s just use that.

Let’s say this rate is 1 fatality every 100 million miles.

Your question is now: given that we’ve observed one fatality in 3 million miles driven for Uber — is the rate for Uber worse than the rate for humans? (Null hypothesis is that the rates are the same). Another way of saying this is - given the rate of one fatality per 100m miles, what is the likelihood that we’d see one fatality in 3m miles?

If you want to estimate the number of fatalities that will happen over the next X million miles driven, you'd use the Poisson distribution, because this is a rare event over a long time span (or mile-span in this case). Plug in the rate and the number of miles, and you can get a p-value for each fatality count: none, 1 fatality, 2 fatalities, etc.

Given this rate (1/100m), you can also calculate the likelihood that there would be 1 fatality in 3 million miles. Turns out, it’s not that likely — suggesting that the fatality rate for Uber is higher than humans [0]. It doesn’t say what the rate is exactly, just that it is likely to be higher. Now it’s possible that the Uber rate is the same as humans, just not all that likely.

[0] https://news.ycombinator.com/item?id=16684764
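
For anyone who wants to sanity-check that likelihood without opening R, here is a tiny sketch in C; it assumes nothing beyond the Poisson model described above. The chance of at least one fatality in 3 million miles, at the human rate of 1 per 100 million miles, is 1 - e^(-0.03):

    /* Back-of-the-envelope check under the Poisson assumption.
     * Compile with: cc poisson_check.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double rate_per_million_miles = 1.0 / 100.0; /* 1 per 100M miles */
        double exposure_million_miles = 3.0;         /* ~3M Uber miles   */
        double lambda = rate_per_million_miles * exposure_million_miles;

        /* P(X >= 1) = 1 - P(X = 0) = 1 - e^(-lambda) */
        printf("P(at least one fatality) = %.5f\n", 1.0 - exp(-lambda));
        return 0; /* prints 0.02955 */
    }

That ~0.03 is the same figure the exact Poisson test reports further down the thread.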


n=1 (To reduce the number of email notifications I'm getting I'd best put this message in parentheses right next to this statement. NOTE: I'm not a statistician and don't believe statistics are all that important here! All I meant to say here is that this is a single incident. That's it. Nothing more.)

Edit: I'm getting the impression some people think I am suggesting there aren't grounds to suspend Uber's driving based on my 3 character comment above, so I'll paste my comment to a child comment here. Also, I'm not a statistician and I don't really care about the statistics here all that much when there's video evidence showing this was poor driving.

"Same reason clinical trials stop early if someone dies. It might just be bad luck. Unfortunately we'll never know and that drug might never be tested again. It might have been an amazing drug. In this case, based on how it played out, I personally wouldn't want Uber's self-driving cars near me."


Using the above numbers as an example... from this sample size, we can be fairly confident that the current Uber fatality rate is greater than the sober human fatality rate (assuming that the Uber rate is 1 fatality every 3m miles and the human rate is 1 every 100m miles). Here, T is in "millions of miles".

    > poisson.test(1,3,1/100, alt='greater')
    
    	Exact Poisson test
    
    data:  1 time base: 3
    number of events = 1, time base = 3, p-value = 0.02955
    alternative hypothesis: true event rate is greater than 0.01
    95 percent confidence interval:
     0.01709776        Inf
    sample estimates:
    event rate 
     0.3333333


So if there isn't enough data to use statistics, they need to go with what they've got. And what they've got is a video that shows that it failed in a way that is inexcusable. It can't be explained as "this was a fluke accident, the car just happened to look away at the exact wrong time". Self-driving cars don't look away.

Mentioning the statistics only makes sense if someone were to claim that, statistically, Uber's self-driving fleet is safer. Well, it's not.

Any way you look at it, we don't have a reliable indicator to say they are safe enough. Maybe at some point we'll get more information, and realize that, indeed, this was a highly unlikely combination of things that is unlikely to happen again. But we don't have this now, so AZ was right to do this.


n =/= 1

If a baseball player has 10 at bats, and gets 0 hits, he's 0/10, with n=10

If same baseball player has 500 at bats, and gets 0 hits, he's 0/500, with n=500

You can be a lot more confident that he won't get his 1st hit anytime soon from the 2nd case compared with the 1st case. Because the sample size is significantly larger, even though the # of successes is equal between the two cases.
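
(One way to put numbers on that: by the rule of three, with zero successes in n trials the 95% upper confidence bound on the underlying rate is roughly 3/n - about .300 after 10 hitless at bats, but only about .006 after 500.)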


Agreed, but it's not like it's OK to say "Well sure, so far we have a kill rate of 50x a human driver, but why don't we wait until Uber kills at least 20 people first so we can determine proper statistical significance"


Why not?

Why in this case is it acceptable to make decisions on non-statistically significant data? You would exercise more rigour than that in a throwaway A/B test.


It is statistically significant, at a 0.03 level: https://news.ycombinator.com/item?id=16655081


Excellent


Same reason clinical trials stop early if someone dies. It might just be bad luck. Unfortunately we'll never know and that drug might never be tested again. It might have been an amazing drug.

In this case, based on how it played out, I personally wouldn't want Uber's self-driving cars near me.


Yes, but given that the probability of an auto death is continuous, there is an incredibly high confidence level that Uber is, in fact, less safe than human drivers.


No, bad statistician, don't pass go. N = 3,000,000. Miles are the observation here. We're talking # of mortalities per mile, so miles are the N. If you want to talk about the problem of only having a single mortality incident, you need to do the work of determining the likelihood of a 1-in-100,000,000 incident occurring in the first 3,000,000 observations. I leave that detail to the students.
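
(For the record, that likelihood works out to 1 - (1 - 1/100,000,000)^3,000,000, which is approximately 1 - e^(-0.03), or about 3% - the same figure the exact Poisson test above reports.)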


Then that number becomes arbitrary. If it were measured in deaths per light-year, your 'n' would become 0.

You know what I meant when I said "n=1".


No, I don't know what you mean. I know only that your comments do not portray a deep understanding of statistics and probability. Do your own homework to determine the likelihood of a low-probability event occurring within a small number of observations. That is your answer to the question of how meaningful it is that Uber experienced a fatality in only 3,000,000 miles.


You crossed into incivility in this subthread and provoked a pointless spatty flamewar. Please don't do that again. There's no reason not to be respectful, regardless of how much you know about statistics.

https://news.ycombinator.com/newsguidelines.html


You're being pompous. I never claimed to have those things.

Re-read my original comment with my edit and you will see why I believe your statistics in fact do not answer the question of how meaningful Uber's fatality is. I doubt your expert testimony of pedantic statistics will trump common sense and the video evidence.


I have pointed out an error in your statistical reasoning. This is not a matter of pomposity or opinion. No insult was intended, and I retract the snark. My criticism of your statistical reasoning, however, stands.


I have no problem with people pointing out errors and new ideas to me. It's how we learn. That's not what I was referring to.

It was more the calling me a "bad statistician", the "do not pass go", the "leave the details to the students", commenting on my understanding of statistics and probability, telling me to "do my homework" and telling me how to get the answer to a question I didn't pose.

I'm not upset and I don't hold it against you. I know the written word can sometimes be more difficult to interpret. I appreciate your retraction.


@bitumen, calling names is saying: "you are a pompous person". It is very different to saying "you are being pompous".


It's ironic but true that telling someone they're pompous is almost always, in itself, very pompous. In general I find that making a reasoned argument is inevitably more valuable and convincing than calling people names.

> Calling names is saying: "you are a pompous person". It is very different to saying "you are being pompous".

And you accused someone else of pedantry!


How is this company still in business...karma is chasing it all the way down to oblivion!


Outside of Hacker News, I have never encountered anyone who followed, was fully aware of, or cared about Uber scandals. From my conversations, it seems that people in my city hate taxi companies so much that it takes more than a couple of manslaughters to make them stop using Uber.


Seems like kind of a pointless action to make it look like they're responding. Uber already suspended their testing in Arizona.



