I can't find a description of how it will operate. Will it detect a situation where the lines aren't painted clearly enough, warn you, and disengage? Or will you need to notice when it starts behaving wonky because the lines aren't painted clearly enough, and take over yourself? I'm guessing it's the second. It's hands-off, not eyes-off.
Also, I would like to see a car company that is further down the road of full autonomy clearly describing all the long tail scenarios. It's just impossible.
The current Gen 1s will start beeping at you if they can’t see the lines. If you don’t take over quickly it will start slowing down and beeping very insistently.
I've been saying that it's semi-solved, in the sense that we have a decent idea of how to get there without requiring major breakthroughs. (By "we" I mean Waymo.)
I notice that Tesla seems to choose (highlight in blue) the left and/or right lane lines, or the car in front of you. But it still seems to work without lines.
The endless list of "exception" cases is why I'm continually skeptical of any full-self-driving claims.
Sure, you can cover everything they can think of. But there are so many cases you can't predict, or that don't have an obvious solution, and it often comes down to a human judgment call with no programmatically clear right answer.
That was specifically for their existing Gen2 highway assist expansion. Not the Gen3 custom silicon full autonomy that they were discussing for the rest of the presentation.
The problem UBI boosters have is not understanding how basic social welfare programs work, or somehow pretending their one weird trick replaces them (that’s why they’re always vague about the actual amount of the UBI).
A swim coach told me that in the 1950s people used to do the first lap of breaststroke underwater, but swimmers kept passing out. It wasn't safe for youth sports.
Trees are great, but Las Vegas is in a desert. It would be better to also build for shade, like old hilltop towns in Italy or Spain, or various urban designs in the Middle East.