Imagine you’re riding in an autonomous or self-driving vehicle. The car is traveling 35 to 40 miles per hour when a pedestrian suddenly walks out in front of it. Complicating the issue, there is a concrete barrier in the oncoming lane. Hitting the pedestrian may kill them, while hitting the concrete barrier may kill the vehicle’s passenger. What will the car do? Whose insurance will be liable?
Similar scenarios should also be considered. What if we replace the concrete barrier with an oncoming vehicle containing one or more passengers? How should the vehicle respond if, instead of a pedestrian, a dog or a deer suddenly runs into the road in front of the self-driving car? What should the car do if we replace the pedestrian with a bicyclist who loses their balance and veers into the path of the AV?
The underlying question that needs to be addressed is this: whose life should the AV protect when there is the potential for loss of life? More pointedly, how will an AV decide who lives and who dies? As you may guess, very few automakers want to answer that question publicly. However, it’s a question that should be publicly addressed so owners of AVs know what to expect. It also raises several liability and insurance-related questions.
Google stated in 2014 that its vehicles would choose to hit the smaller of the two objects. In the original scenario, this rule dictates that the car hits the pedestrian rather than the concrete barrier. The same logic applies if the barrier is replaced with an oncoming car. Choosing to hit the smaller object protects the AV’s passenger, and that choice is itself an ethical decision.
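To see how stark that rule really is, here is a minimal sketch of what a “hit the smaller object” heuristic could look like in code. This is purely a hypothetical illustration, not Google’s actual implementation (which is not public): the `Obstacle` type, the `estimated_mass_kg` field, and the `choose_target` function are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    estimated_mass_kg: float  # hypothetical size estimate from the perception system

def choose_target(a: Obstacle, b: Obstacle) -> Obstacle:
    """Hypothetical 'hit the smaller object' rule: when a collision is
    unavoidable, steer toward whichever obstacle is estimated to be smaller."""
    return a if a.estimated_mass_kg < b.estimated_mass_kg else b

# The original scenario: a pedestrian versus a concrete barrier.
pedestrian = Obstacle("pedestrian", 80.0)
barrier = Obstacle("concrete barrier", 2000.0)
print(choose_target(pedestrian, barrier).label)  # prints "pedestrian"
```

Written out this way, even a one-line rule plainly encodes an ethical judgment, which is exactly why so few automakers want to discuss it publicly.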
The question of liability depends largely on the degree and expectation of human driver interaction with the vehicle. Some automakers want the human “driver” to touch the steering wheel every few minutes to assure the computer driver that the human behind the wheel is awake. This raises yet another question: does the automaker expect the AV to decide what to do in such a scenario, or does it expect the human “driver” to take over and make the ethical decision?
If the expectation is that the human driver takes over, then the driver’s liability insurance will be responsible for the outcome of that decision. If, however, the vehicle does not allow the human “driver” to take over, it will be up to the AV to decide who or what is struck, and the insurance liability will lie squarely with the automaker or the software company that wrote the program.
Further complicating these issues will be the transition period when autonomous vehicles share the road with vehicles operated by human drivers. That transition makes it even more important to understand what rules AVs will operate under, how ethical situations will be decided, and whose insurance will be liable. What do you want to see? Share your thoughts and questions with me on my Facebook, Google+, and LinkedIn pages. I’d love to hear from you!