Completely self driving? Don't they go into a panic mode, stop the vehicle, then call back to a central location where a human driver can take remote control of the vehicle?
They've been seen doing this at crime scenes and in the middle of police traffic stops. That speaks volumes too.
Incorrect; humans never take over the controls. An operator is presented with a set of options and they choose one, which the car then performs. The human is never in direct control of the vehicle. If this process fails, they send a physical human to drive the car.
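Roughly, the flow being described is something like this (a minimal sketch of the idea only; every class and function name here is made up for illustration and is not Waymo's actual API):

```python
from enum import Enum, auto

class Escalation(Enum):
    ROADSIDE_ASSISTANCE = auto()  # dispatch a physical human to drive the car

def handle_stuck_vehicle(vehicle, fleet_response):
    """Hypothetical fleet-response flow: the vehicle proposes options,
    a remote operator picks one, and the vehicle itself executes it."""
    # The car, not the operator, generates the candidate maneuvers.
    options = vehicle.propose_options()  # e.g. ["use left lane", "reroute", "pull over"]

    # The operator only selects among them -- they never steer or brake directly.
    choice = fleet_response.request_selection(options)

    if choice is not None and vehicle.execute(choice):
        return None  # resolved remotely

    # If remote guidance fails, escalate to sending a driver in person.
    return Escalation.ROADSIDE_ASSISTANCE
```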
> presented with a set of options and they choose one
> they send a physical human to drive the car.
Those all sound like "controls" to me.
"Fleet response can influence the Waymo Driver's path, whether indirectly through indicating lane closures, explicitly requesting the AV use a particular lane, or, in the most complex scenarios, explicitly proposing a path for the vehicle to consider. "
So they built new controls that typical vehicles don't have. Then they use them. I fail to see how any of this is "incorrect." It is, in fact, _built in_ to the system from the ground up.
Semantic games aside, it is obviously more incorrect to call them "completely self driving," especially when they "ask for help." Do human drivers do this while driving?
I don't know what you're trying to prove here. Stopping safely and waiting for human input in edge cases is fine (Waymo). Crashing into things is not fine (Tesla).
https://waymo.com/safety/impact/