Hacker News

The problem with self-driving cars is the 1% failure rate of the software.

Cars travel at high speed, and a 1% error can easily be fatal: the car can hit a pedestrian or a cyclist, or fail to recognize an obstacle and crash head-on into it.

Because cars operate in a highly dynamic environment, that 1% error carries a very high fatality risk. So you need something that is 99.99999% safe, which is extremely challenging precisely because the environment is so dynamic.

Meanwhile, look at automated train and rail systems: because the environment is almost entirely controlled and far less complex, the impact of a 1% error is much smaller.

The high rate of speed, coupled with fatality risk, coupled with dynamic environments, is why this is such a challenging problem.
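To make the compounding concrete (the numbers below are mine, purely illustrative, not from the comment): even a tiny per-event error probability adds up when a car must handle many independent events per trip.

```python
def p_any_failure(p_per_event: float, n_events: int) -> float:
    """Probability of at least one failure across n independent events,
    assuming (simplistically) that events are independent."""
    return 1 - (1 - p_per_event) ** n_events

# A hypothetical 1-in-10,000 per-event error rate over 1,000 events per trip:
print(p_any_failure(1e-4, 1000))  # ~0.095, roughly a 1-in-10 chance per trip
```

This is why "small" error rates aren't small in a dynamic environment: the number of chances to fail is enormous.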



Did you just pull 1% out of the air?


I'm not claiming it's a measured statistic. It's shorthand for a small error rate whose consequences are compounded by the high fatality risk, and by the fact that a dynamic environment makes unlikely events more likely to occur.




