Earlier this week, a pedestrian was struck by an Uber vehicle operating in autonomous mode and died from her injuries. This is the first recorded fatality involving a pedestrian and an autonomous vehicle, and it raises several interesting legal, liability, and even insurance-related questions worth noting.
On Sunday, March 18, Elaine Herzberg, the pedestrian, was walking her bike outside of the crosswalk when she was struck by a Volvo XC90 operating in autonomous mode. A human was at the wheel, ready to take over control of the vehicle; however, according to video, Ms. Herzberg moved unexpectedly into the vehicle’s path. Neither the system operating the car nor the human “driver” was able to avoid hitting her.
Liability: One of the first questions raised was, “Who is at fault?” Tempe Police Chief Sylvia Moir has stated it would have been difficult to avoid hitting Ms. Herzberg in either mode (human-driven or autonomous). Most states have laws in place requiring drivers to take “due care” to avoid pedestrians; however, who’s the driver? Is it Volvo, the car manufacturer; Uber, the operator of the vehicle; or the makers of the software and hardware allowing the XC90 to operate in autonomous mode? What part does the human “driver” play in avoiding a pedestrian? If a lawsuit is filed, I expect all parties to be named.
Legality: According to an Insurance Journal article, “car accident litigation usually turns on whether a driver acted negligently or failed to exercise a reasonable level of care.” Conversely, “litigation involving an autonomous vehicle could revolve around whether the self-driving system had a design defect.” What could be more impactful is how this accident affects future legislation involving the release of autonomous vehicles to the general public once testing is completed.
Insurance: Assuming this had been an owner-operated autonomous vehicle and not a test vehicle operated by Uber, what role would car insurance play in such an accident? I believe the line between what’s expected of the human “driver” and what’s expected of the autonomous system will be very fuzzy in the near term. For instance, if the driver is expected to be ready to take over command of the driving on a moment’s notice and doesn’t, could they be found negligent? In such a case, the vehicle owner’s car insurance could be forced to pay a claim whether the accident involves a pedestrian, another vehicle, or an object.
On the other hand, if the vehicle doesn’t allow for human intervention, or has difficulty relinquishing control, will this become a product liability case involving the vehicle manufacturer and the makers of the software and hardware used to drive the vehicle? If so, product liability insurance would be responsible for the claim. Further complicating the issue: are there instances we haven’t thought of where both the car insurance policy and the product liability policy could be called on to make restitution in an accident?
We are still in the early stages of vehicles being rolled out with more technology than ever before, and we could see partially to substantially autonomous vehicles released to early adopters as soon as 2019. These questions need to be addressed sooner rather than later.
This accident is a tragedy on so many levels. I’m curious to see what impact it has on future legislation, as well as the testing and future release of autonomous vehicles. What do you think? Share your thoughts and questions with me on my Facebook, Google+, and LinkedIn pages. I’d love to hear from you!