
Deep-learning automated driving systems that are going to work have to have world models.

It's Automated Driving 101.

These models don't have to be as explicit as formulas; they can be approximations of reality via beam search (maintaining multiple steering hypotheses at once and then picking the most likely one, etc.), model ensembles, some Bayesian state exploration, or anything that isn't random search.
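The beam-search idea can be sketched as a toy: roll several steering hypotheses forward through a simple world model, keep the best few at each step, and commit to the first action of the best surviving rollout. This is purely illustrative; the kinematic bicycle model and all constants here are made-up stand-ins, not any real stack's API:

```python
import math

# Assumed toy constants, not from any real system.
WHEELBASE = 2.7   # meters
DT = 0.1          # seconds per step
SPEED = 10.0      # m/s, held constant for simplicity

def step(state, steer):
    """Advance (x, y, heading) one tick under a kinematic bicycle model."""
    x, y, th = state
    th += SPEED / WHEELBASE * math.tan(steer) * DT
    x += SPEED * math.cos(th) * DT
    y += SPEED * math.sin(th) * DT
    return (x, y, th)

def cost(state):
    """Penalty for drifting off the lane centerline y = 0."""
    return state[1] ** 2

def beam_search(start, steer_options, horizon=10, beam_width=3):
    # Each beam entry: (accumulated cost, first steering choice, state).
    beam = [(0.0, None, start)]
    for _ in range(horizon):
        candidates = []
        for c, first, s in beam:
            for steer in steer_options:
                s2 = step(s, steer)
                candidates.append(
                    (c + cost(s2), steer if first is None else first, s2)
                )
        # Keep only the most promising hypotheses.
        candidates.sort(key=lambda e: e[0])
        beam = candidates[:beam_width]
    # Commit to the first steering action of the best surviving rollout.
    return beam[0][1]

start = (0.0, 0.5, 0.0)  # half a meter off-center, heading straight
action = beam_search(start, steer_options=[-0.1, 0.0, 0.1])
print(action)  # picks the hypothesis that steers back toward the centerline
```

The point isn't the toy dynamics; it's that the planner carries several hypotheses in parallel and uses a model of the world, however crude, to rank them, rather than reacting with a single feedforward guess.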



If that model is inaccessible to the people who are trying to assure safety, then whether it exists or not isn't particularly material to safety.



I am specifically talking about trying to verify that a self-driving car has a reasonable model of the world, i.e. that it won't fail in a spectacular way in a situation a human would handle properly. Right now I don't know how to show this even without complicating matters with ZKPs, verifiable computing, or homomorphic encryption.

Really cool work; I just don't see how it has anything to do with the safety of the AI.


Are you saying that from experience or speculation?



