Self-driving cars were long the stuff of science fiction. In recent years, however, we have begun to see these vehicles on our streets and highways. While fully autonomous cars are still a long way from mass production, virtually every major automobile manufacturer is currently developing and testing driverless car systems.
Driverless cars actually date back to the 1920s and have been in development ever since. Though there have been tremendous technological breakthroughs, driverless car systems continue to struggle when confronted with inclement weather such as rain, snow, and fog.
Leaving aside the specific technological barriers for a moment, driverless car systems will also need to confront the ethical side of driving.
Consider the following example:
A driverless car, filled with passengers, is heading toward pedestrians using a crosswalk. As it approaches, the car detects that its brakes have failed. It now faces a choice: swerve into a nearby embankment or wall, endangering the lives of those in the vehicle, or proceed through the crosswalk, endangering the pedestrians.
While this example is certainly far-fetched, it gives rise to an important question about the ethical considerations of self-driving cars. Self-driving cars are controlled by an onboard computer system. Like any computer system, it can be programmed to react a certain way in a certain situation. While a human driver would make a decision based on instinct, a driverless car would make that decision based on algorithms and programming. So in the above scenario, how should a driverless car be programmed to respond?
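The point that these choices ultimately reduce to explicit program logic can be made concrete with a deliberately simplified sketch. Everything here, from the function name to the hard-coded preference for protecting pedestrians, is an illustrative assumption for the sake of the example, not any manufacturer's actual code:

```python
# A toy sketch of how an "ethical preference" becomes an explicit rule.
# The priority encoded below (avoid pedestrians, even at risk to the
# vehicle's occupants) is one hypothetical choice among several.

def choose_action(brakes_working: bool, pedestrians_ahead: bool) -> str:
    """Pick a maneuver for the failed-brakes crosswalk scenario."""
    if brakes_working:
        return "brake"     # normal case: simply stop
    if pedestrians_ahead:
        return "swerve"    # hard-coded preference: spare the pedestrians,
                           # accepting risk to the passengers
    return "continue"      # crosswalk is empty: stay the course
```

Reversing the priority, so that the car instead protects its occupants, would be a one-line change, which is precisely why the question of who decides that line matters.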
A paper published on October 24, 2018 by researchers at the Massachusetts Institute of Technology sheds interesting light on this question. The paper collated data from an online quiz launched in 2016 which asked users to make a series of ethical decisions about fictional car crash scenarios similar to the example above. The researchers assessed a range of factors, including test takers’ preference for crashing into men versus women, the young versus the elderly, pedestrians versus jaywalkers, and so on.
Users from 233 countries and territories took part in the study, and over 40 million questions were answered. While there were some variations across the globe, you should feel very comfortable if you’re young, female, and walking in a large group. An elderly, solitary man, on the other hand, may want to keep his head on a swivel.
The data from this specific study has not been used to program any driverless cars. However, manufacturers of autonomous vehicles are already programming in rough preferences for difficult situations. For example, in 2016, Google’s Chris Urmson said its cars would try to avoid hitting unprotected road users such as cyclists and pedestrians. That same year, a Mercedes-Benz manager reportedly said that the company’s self-driving cars would prioritize the lives of the passengers over those outside the vehicle.
While driverless cars are still prototypes rather than products, these programming decisions will need to be made at some point, and public consultation will be essential. Canada currently ranks 12th out of 25 countries on KPMG’s Autonomous Vehicles Readiness Index, and the ethics behind an autonomous vehicle’s decision-making remain well down the list of priorities.
While autonomous vehicles are still a long way from ubiquity, the MIT study highlights the difficult decisions that manufacturers will face when programming these vehicles for mass production.
Nick first joined Oatley Vigmond as a law student, and later an articling student, prior to joining the team as an associate lawyer. He has a Law Degree from the University of Kent, a Master of Laws...