"There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased."
785 miles between anomalous events that require manual control is bad enough. 5300 is arguably worse, because the rarer the handover, the less reason the safety driver has to stay alert. To be market-acceptable that honestly needs to be 0 events (impossible as that might be), or only events for which advance notice can be provided.
It's the TSA screening problem all over again: 13-88 hours (roughly 785-5300 miles at highway speed) of mind-numbing normal operation, with a single incident of something potentially exploding that you have to catch.
You mean that 5300 is worse because people will pay less attention?
Well, would it be better if the car (artificially) demanded human intervention at random, on average every 785 miles? (Not exactly at random on a fixed schedule, because that would be too easy to predict, and a driver who can predict the drills might not be attentive when a real problem happens.)
That doesn't seem like it would be too hard to add, if it was better by keeping people on their toes.
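As a minimal sketch of that idea: draw the spacing between artificial handover drills from an exponential distribution. Because the exponential is memoryless, the driver can't anticipate when the next drill is coming, unlike a fixed "every 785 miles" schedule. The 785-mile mean comes from the figure in the thread; everything else here (names, the drill mechanism itself) is hypothetical.

```python
import random

MEAN_MILES_BETWEEN_DRILLS = 785  # target average spacing, from the thread

def next_drill_distance(rng=random):
    """Miles until the next artificial handover drill.

    Exponential inter-arrival times are memoryless: having just had a
    drill tells the driver nothing about when the next one will occur.
    """
    return rng.expovariate(1 / MEAN_MILES_BETWEEN_DRILLS)

# Simulate many drills; the average spacing should come out near 785 miles,
# while individual gaps vary widely and unpredictably.
rng = random.Random(42)
samples = [next_drill_distance(rng) for _ in range(100_000)]
print(sum(samples) / len(samples))
```

In practice you'd also want a floor on the spacing (no drill seconds after a real event) and to log drills separately from genuine anomalies, but the unpredictability is the part that matters for keeping attention up.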
But I doubt the problem gets monotonically worse as the average distance between events increases. I'd expect that at some point, increasing the distance no longer significantly reduces attention, so safety would improve a bit; maybe not above what some shorter distance gives you, but above some slightly different distance.