
Humans drive without LIDAR. Why can’t robots?


Because human vision has very little in common with camera vision and is a far more advanced sensor, on a far more advanced platform (ability to scan and pivot etc), with a lot more compute available to it.


I don't think it's a sensor issue - if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).


yeah, try matching a human eye on dynamic range, then on angular speed, then on refocus. okay, forget that.

try matching a cat's eye on those metrics. and it is much simpler than the human one.


Who cares? They don't need that. The cameras can have continuous attention on a 360 degree field of vision. That's like saying a car can never match a human at bipedal running speed.


I'm curious, in what ways is a cat's vision simpler?


weaker long-distance vision, dichromatic color vision, over-optimized for low light.

a cursory glance did not find studies on cat peripheral vision, but I would assume it's worse than a human's, if only because cats rely more on hearing


The human sensor (eye) isn't more advanced in its ability to capture data -- and in fact cameras can capture a wider range of frequencies.

But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.

Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.

Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.


Why tie your hands behind your back?

LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.

Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.


And what driver wouldn't want extra senses, if they could actually meaningfully be used? The goal is to drive well on public roads, not some "Hands Tied Behind My Back" competition.


Because any active sensor is going to jam other such sensors once there are too many of them on the road. This is sad but true.


And birds fly without radar. Still, we equip planes with it.


The human processing unit understands semantics much better than the Tesla's processing unit. This helps avoid what humans would consider stupid mistakes, but which might be very tricky for Teslas to reliably avoid.


Even if they could: Why settle for a car that is only as good as a human when the competitors are making cars that are better than a human?


Cost, weight, and reliability. The best part is no part.

No part costs less, it also doesn't break, it also doesn't need to be installed, nor stocked on every dealership's shelf, nor can a supplier hold up production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also does not need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.
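To put a rough number on the "LIDAR is a lot of data" point, here is a back-of-envelope sketch; every figure in it is a ballpark assumption, not a vendor spec:

  # Back-of-envelope: an assumed mid-range spinning LIDAR vs. classic CAN.
  # All numbers here are rough assumptions, not taken from any datasheet.
  points_per_second = 1_300_000    # assumed point rate
  bytes_per_point = 16             # assumed x, y, z, intensity, timestamp
  lidar_bits_per_s = points_per_second * bytes_per_point * 8   # ~166 Mbit/s
  can_bits_per_s = 1_000_000       # classic CAN tops out at 1 Mbit/s
  print(f"LIDAR ~{lidar_bits_per_s / 1e6:.0f} Mbit/s, "
        f"about {lidar_bits_per_s / can_bits_per_s:.0f}x classic CAN capacity")

Even if the assumed numbers are off by a factor of a few, the raw stream is orders of magnitude beyond what a CAN bus can carry.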

That's all off the top of my head. I'm sure there's even better reasons out there.


These are all good points. But that just seems like it adds cost to the car. A manufacturer could have an entry-level offering with just cameras and a high-end offering with LIDAR that costs extra for those who want the safest car they can afford. High-end cars already have so many more components and sensors than entry-level ones. There is a price point at which the manufacturer can make them reliable, supply spare parts & training, and increase the battery/engine size to compensate for the weight and power draw.


We already have that. Tesla FSD is the cheap camera-only option and Waymo is the expensive LIDAR option that costs ~$150K (last time I heard). You can't buy a Waymo, though, because the price is not practical for an individually owned vehicle. But eventually I'm sure you will be able to.


LIDAR does not add $150K to the cost. Dramatically customizing a production car and adding everything it needs is what costs $150K. Lidar can be added for hundreds of dollars per car.


  > Lidar can be added for hundreds of dollars per car.

Surprisingly, many production vehicles have a manufacturer profit under one thousand dollars. So that LIDAR would eat a significant portion of the margin on the vehicle.


But that’s sort of the point of the business model. Getting safe fully-self driving vehicles appears to require a better platform, given today’s limitations. You can achieve that better platform financially in a fleet vehicle where the cost of the sensors can be amortized over many rides, and the “FSD” capability translates directly into revenue. You can’t put an adequate sensor platform into a consumer vehicle today, which is what Tesla tried to promise and failed to deliver. Maybe someday it will be possible, but the appropriate strategy is to wait until that’s possible before selling products to the consumer market.


Not with Teslas. There are almost no options on a Tesla - it's mostly just colours and wheels once you've selected a drivetrain.


Teslas use automotive Ethernet for sensor data, which has much more bandwidth than the CAN bus


But also higher latency. Teslas also use a CAN bus.

But LIDAR would probably be wired more directly to the computer rather than use a packet protocol.
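For a sense of scale, here is a minimal sketch putting an assumed raw LIDAR stream next to common in-vehicle links; the link rates are standard line rates, and the LIDAR figure is the same ballpark assumption as in the earlier sketch:

  # Sketch: how much of each in-vehicle link an assumed raw LIDAR stream would use.
  lidar_bits_per_s = 1_300_000 * 16 * 8   # same assumed ~166 Mbit/s stream as above
  links = {
      "classic CAN (1 Mbit/s)": 1_000_000,
      "CAN FD (~5 Mbit/s)": 5_000_000,
      "100BASE-T1 Ethernet": 100_000_000,
      "1000BASE-T1 Ethernet": 1_000_000_000,
  }
  for name, capacity in links.items():
      print(f"{name:24s} -> {lidar_bits_per_s / capacity:8.2f}x of link capacity")

On those assumptions, only the Ethernet-class links have the headroom, which is why a dedicated high-bandwidth link to the compute unit is the more plausible wiring.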


Because our eyes work better than the cheap cameras Tesla uses?


problem is, expensive cameras that Tesla doesn't use don't work either.


They cost $20-60 to make per camera depending on the vehicle year and model. They also charge $3000 per camera to replace them…


I think his point was even if you bought insanely expensive cameras for tens of thousands of dollars, they would still be worse than the human eye.


They charge $3000 for the hours of labor to take apart the car, pull the old camera out, put the new camera in, and put the car back together, not for the camera. You can argue that $3000 is excessive, but to compare it to the cost of the camera itself is dishonest.


Fender camera is like $50 and requires 0 skill to replace. Next.


Chimpanzees have binocular color vision with similar acuity to humans. Yet we don't let them drive taxis. Why?


Chimpanzees are better than humans given a reward structure they understand. The next battlefield evolution is chimpanzees hooked up with intravenous cocaine modules running around with .50 cals


There are laws about mistreating animals. Driving a taxi would surely count as inhumane torture.


they can't understand how to react to what they see the way humans do

it has to do with the processing of information and decision-making, not data capture


This is plainly untrue; see e.g. https://www.youtube.com/watch?v=sdXbf12AzIM


I drove into the setting sun the other day and needed to shift the window shade and move my head carefully to avoid having the sun directly in my field of vision. I also had to run the wipers to clean off a thin film of dust that made my windshield difficult to see through. And then I still drove slowly and moved my head a bit to make sure I could see every obstacle. My Tesla doesn’t necessarily have the means to do all of these things for each of its cameras. Maybe they’ll figure that out.


Here's a good demonstration of why LIDAR SHOULD be implemented instead of what Tesla tries to sell: https://www.youtube.com/watch?v=IQJL3htsDyQ


I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.


And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.


I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.


> I've never seen somebody leave their vehicle during active driving.

Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].

[0] https://en.wikipedia.org/wiki/Ghost_riding


They can. One day. But nobody can just will it to be today.


We crash a lot.


that's (usually) because our reflexes are slow (compared to a computer), or we are distracted by other things (talking, phone, tiredness, sights, etc. etc.), not because we misinterpret what we see


Well these robots can’t.



