On September 20th, the U.S. Department of Transportation announced 15 safety guidelines for autonomous cars. The purpose of the guidelines is to provide guidance for traditional automakers and companies such as Uber, Google, and Tesla as they move toward releasing driverless vehicles. Here’s a synopsis of the 15 guidelines as written by Cecilia Kang for the New York Times.
Data Sharing: New cars and trucks already collect a vast amount of data, unbeknownst to their drivers, and share it with the auto manufacturer. The manufacturers should store the data and then share it with regulators to help reconstruct what went wrong in a crash or system breakdown.
Privacy: Most people don’t know their cars and trucks are “spying” on them, and automakers haven’t exactly been forthcoming about what data is being collected, who it’s shared with, and how it’s being used. Vehicle owners should be told what their vehicles are collecting and have the option to reject the collection of personal data.
System Safety: Vehicles must be engineered to respond safely to system malfunctions, near crashes, loss of traction, and other risks. Carmakers must obtain third-party validation to verify that the systems operate safely even when problems occur.
Digital Security: Vehicle systems should be engineered to prevent online attacks or hacking. All security-related programming decisions and testing should be documented, and this information should be shared with others in the industry.
Human–Machine Interface: Vehicles must be able to easily switch between autopilot and human control, and carmakers must be able to show how that happens. Human drivers need to be able to easily find basic information, such as when autonomous driving is not available. Carmakers should consider ways to communicate with pedestrians and other vehicles when the car is operating autonomously. Controls for fully autonomous vehicles should be designed for people with disabilities.
Crashworthiness: Autonomous vehicles must meet the regular standards for crashworthiness set forth by the National Highway Traffic Safety Administration, or prove they are built to best protect occupants. If a driverless car is involved in a crash, the vehicle damage should be no different from the damage incurred by a human-operated vehicle.
Consumer Education: Automakers should train sales representatives and other staff members on how autopilot works so that dealers and distributors are educated. In turn, they should be able to educate autonomous vehicle buyers on the limitations and capabilities of self-driving cars and on emergency fallback scenarios.
Certification: Software updates and new driverless features must be submitted to the NHTSA.
Post-Crash Behavior: Automakers must prove that driverless vehicles are safe to use again after a crash. Any damaged sensors or critical safety control systems must be repaired before the vehicle can be operated autonomously.
Laws and Practices: Autonomous vehicles should follow the state and local laws and practices that apply to drivers. Cars must be able to recognize different speed limits in different cities and states, and whether a state allows U-turns or right turns at red lights. Systems should also be able to respond flexibly, even breaking the law if necessary, to avoid a crash.
Ethical Considerations: Humans make ethical decisions whenever they get behind the wheel. The systems that guide these vehicles will need to do the same. For instance, if a crash is about to occur, should the car be programmed to better protect its own occupants or the occupants of another vehicle? Should it crash itself to avoid a pedestrian or animal? Should it violate the law by crossing a double line if needed? All such programming decisions should be clearly disclosed to the NHTSA.
Operational Design: Much like the manual you receive with a new car, drivers should clearly understand where, when, and under what conditions the autonomous system works. Carmakers must prove their vehicles have been tested and validated to fulfill these descriptions, which include how fast a car can travel and whether it’s capable of driving at night or on rocky dirt roads.
Detection and Response: Automakers must show the car has been programmed to respond to normal driving situations such as lane changes and traffic signals. They must also prove their cars can safely avoid unexpected hazards such as other cars, pedestrians, animals, and falling trees.
Fallback: The cars should be able to change modes safely when a technological malfunction occurs. The car should also be able to determine the driver’s condition, recognizing whether they are impaired in any way (alcohol, drowsiness, etc.), before changing driving modes.
Validation: Carmakers need to develop testing and validation methods that take into account the various technologies used in a driverless car. Testing should occur in simulators, on test tracks, and on actual roads.
The federal government has taken major strides in crafting these 15 guidelines to help foster the release of this technology without dampening innovation. They are clearly looking after the safety of all of us. It will be interesting to see how Texas and other states respond with their own regulations. What do you think? Share your thoughts, questions, and suggestions with me on my Facebook, Google+, and LinkedIn pages. I’d love to hear from you!