> You're talking about the "trolley problem", and it is an ethical question as opposed to a moral one. This is literally a fuckup
Right, thanks for the correction. The ethical question I was getting at is "What is the tradeoff between making sure self-driving cars are 100% safe and predictable before we allow them on the road at all, and the time delay that will cause, during which tens of thousands of people will die at the hands of human drivers?"
My thoughts behind this post came from reading comments elsewhere claiming that the Arizona government was ultimately costing more lives by suspending Uber's program than it was saving. I disagree with that assertion, although my post evidently didn't come across quite the way I intended.
My knee-jerk reaction to those pro-Uber comments is to liken them to advocating that we ignore deaths in Phase 1 of a clinical trial because there is hope the drug will save more lives - but even that comparison isn't apt, because participants in clinical trials at least consent to some notion of the risk involved.
To me, it is a really difficult, uncomfortable question. And it isn't so much aimed at this particular case - where, as you say, it was more of a fuckup by a company known for not being careful - but at more marginal cases, and there will be plenty of those. Driving is a lot more complicated than people give it credit for, especially where you share the road with cyclists and pedestrians, and the ML approaches to self-driving are probabilistic in nature and will sometimes be wrong.
You can look at this crash and say "simple safety measures available today would have prevented, or at least mitigated, this collision", and you'd be right, but that won't always be the case. When you say "the life of one victim is too many", it feels impossible to disagree, but it also means we are likely decades away from having self-driving cars, during which millions of people worldwide will die in car crashes. If you accept the premise that self-driving cars will at some point be substantially safer than human drivers, what is the right thing to do?