> “Improving public safety has always been the emphasis of Arizona’s approach to autonomous vehicle testing, and my expectation is that public safety is also the top priority for all who operate this technology in the state of Arizona,” Mr. Ducey said in his letter. “The incident that took place on March 18 is an unquestionable failure to comply with this expectation.”
This after the police claimed no fault.[1] Not quite sure how those two statements square...
The investigation isn't complete, but Uber did disable the car's own safety features [0].
Both Volvo and Intel tested their software against the low-grade video that was released, and both systems were able to detect the impending collision and reduce the impact to the point where the pedestrian would likely have survived.
Well, of course they did. Can you imagine testing a self-driving system when another system is also controlling the car? What happens when they send contradictory commands? Race conditions are bad enough outside of actual cars!
They disabled a system we know works, for one that failed. No redundancy. If that is the case, then Uber can absolutely be held accountable for not taking enough safety precautions.
These aren't just test cars. Uber and Volvo's deal was around Uber selling 'enhanced' XC90s. Why not design around the product you purchased?
> Can you imagine testing a self-driving system when another system is also controlling the car?
You mean external forces such as a driver? Which is expected and required at this point?
I don't pretend to be an expert, but generally adding multiple layers of control on the same axis adds its own failure points. Having a human that can override the automatic system is not an argument for leaving that enabled - it's an argument against it!
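To make the "contradictory commands on the same axis" worry concrete, here's a toy sketch in Python. It does not reflect Uber's or Volvo's actual architecture; the `BrakeActuator` class, the command sources, and the threading are all made up for illustration. The point is just that with two independent controllers and no arbiter, the final command depends on timing rather than on safety priority.

```python
import threading

# Toy illustration (made-up names, not any vendor's real design): two
# controllers write to the same brake actuator with no arbitration,
# so whichever thread happens to write last wins.

class BrakeActuator:
    def __init__(self):
        self.command = 0.0  # 0.0 = no braking, 1.0 = full emergency braking

    def write(self, value, source):
        self.command = value
        print(f"{source} set brake to {value:.1f}")

def factory_aeb(actuator):
    # Factory automatic emergency braking decides to brake hard.
    actuator.write(1.0, "factory AEB")

def self_driving_stack(actuator):
    # The experimental planner, unaware of the AEB decision, commands coasting.
    actuator.write(0.0, "self-driving stack")

actuator = BrakeActuator()
threads = [threading.Thread(target=factory_aeb, args=(actuator,)),
           threading.Thread(target=self_driving_stack, args=(actuator,))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The outcome depends on scheduling, not on safety priority.
print(f"final brake command: {actuator.command:.1f}")
```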
Personally, I think Uber simply shouldn't have tested it on public roads at this point.
Saving lives is more important than training the self-driving system. Disabling safety systems for the sake of training isn't acceptable.
That said, how does Uber improve the self-driving system in response to any failure? Perhaps by training on recorded data? They must have ways to train the system in addition to waiting and hoping the car encounters the same situation again.
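One plausible (and entirely hypothetical) way to do that is to replay logged drives through the perception stack offline and flag the frames where it disagrees with human labels; those become the cases to retrain or re-tune on, without waiting to encounter the same situation again on the road. The log format, detector, and labels below are synthetic stand-ins, not anything we know about Uber's actual pipeline.

```python
import random

# Rough sketch of offline evaluation on recorded data: replay logged frames
# through a detector and count how often it misses labelled pedestrians.
# Everything here is a synthetic stand-in for illustration only.

def load_logged_frames():
    """Stand-in for reading recorded sensor frames plus hand labels."""
    for _ in range(1000):
        has_pedestrian = random.random() < 0.1
        frame = {"brightness": random.random(), "pedestrian": has_pedestrian}
        yield frame, has_pedestrian

def detect_pedestrian(frame):
    """Toy detector: misses pedestrians in low-light frames."""
    return frame["pedestrian"] and frame["brightness"] > 0.3

def evaluate_offline():
    detected = missed = 0
    for frame, label in load_logged_frames():
        if not label:
            continue
        if detect_pedestrian(frame):
            detected += 1
        else:
            missed += 1  # the cases you would retrain or re-tune on
    print(f"detected {detected}, missed {missed} pedestrians in the replayed log")

if __name__ == "__main__":
    evaluate_offline()
```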
They should ditch the human from the car too. What happens if they give contradictory commands?
More seriously, as was already pointed out elsewhere: Human > car safety features > AI; a rough sketch of that ordering follows after this comment. So it is pretty clear what to do when the car wants to engage emergency braking.
That's not to say you might not disable it on a closed course, but for fundamentally new self-driving tech on open roads? Redundancy is an important part of reducing risk.
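For what that ordering could mean in practice, here is a minimal sketch of a priority arbiter, assuming a single brake axis, the source names below, and a "stronger braking wins ties" rule; none of this is any vendor's actual interface.

```python
from dataclasses import dataclass

# Minimal sketch of the "Human > car safety features > AI" ordering.
# Source names, the single brake axis, and the tie-break rule are
# assumptions made for illustration, not a real automotive interface.

PRIORITY = {"human": 3, "factory_safety": 2, "self_driving": 1}

@dataclass
class BrakeRequest:
    source: str    # "human", "factory_safety", or "self_driving"
    demand: float  # 0.0 = no braking, 1.0 = full emergency braking

def arbitrate(requests):
    """Honor the highest-priority requester; on equal priority, take the
    stronger (more conservative) brake demand."""
    if not requests:
        return 0.0
    best = max(requests, key=lambda r: (PRIORITY[r.source], r.demand))
    return best.demand

# Example: the AI wants to coast while the factory AEB wants to brake hard
# and there is no human input -> the AEB request wins.
print(arbitrate([BrakeRequest("self_driving", 0.0),
                 BrakeRequest("factory_safety", 1.0)]))  # -> 1.0
```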
Two different and unrelated agencies and levels of government with different incentives.
The state takes precedence over local authorities, though, so whatever some garbage local Keystone Kops have to say is kind of irrelevant here. Local coppers are generally known to have a bias in favor of motorists, for obvious reasons...
[1] http://fortune.com/2018/03/19/uber-self-driving-car-crash/