Autonomous Vehicles Raise Ethical Questions with Life-or-Death Consequences


Without a doubt, autonomous vehicles are set to be life-changing. The market for autonomous vehicles, also known as self-driving cars, is projected to reach approximately $42 billion by 2025 and to grow at a compound annual growth rate (CAGR) of about 21% through 2030. This change will bring new types of businesses, services, and business models; but the introduction of autonomous vehicles will also raise ethical questions just as significant.
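As a rough check on what those figures imply, here is a back-of-the-envelope sketch only: it simply compounds the $42 billion base and the 21% growth rate cited above for five years, and the resulting 2030 number is an inference from those figures, not a reported forecast.

```python
# Back-of-the-envelope projection: compound a ~$42 billion market (2025)
# at a 21% annual growth rate through 2030. The inputs are the figures
# cited in the article; the output is only what the arithmetic implies.

base_value_2025 = 42e9   # ~$42 billion market size in 2025
cagr = 0.21              # 21% compound annual growth rate
years = 2030 - 2025      # five years of compounding

projected_2030 = base_value_2025 * (1 + cagr) ** years
print(f"Implied 2030 market size: ${projected_2030 / 1e9:.0f} billion")
# Implied 2030 market size: $109 billion
```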

According to the World Health Organization (WHO), about 1.35 million people around the globe die every year in traffic accidents, most of them caused by human error. Autonomous vehicles aim to eliminate as many as 90 percent of those accidents. Yet even though the benefits of self-driving cars are clear, there are still challenges that, if not addressed, could undermine the promise the technology offers.

One of the key challenges for manufacturers of autonomous vehicles is how human life is valued. Who decides, in a split second, who lives and who dies, or weighs the value of one human life against another? And, most importantly, how is that decision calculated? Being on the road is inherently dangerous, and with that danger come unavoidable trade-offs as autonomous cars confront life-or-death situations. Ethical guidelines are necessary for these situations, because a mistake by an autonomous vehicle can cost a person's life.

Researchers have wrestled with this idea for years, most famously through the trolley problem, proposed in 1967 by the philosopher Philippa Foot. The trolley problem has many variants, but it is mainly used to evaluate the decisions people make when faced with life-or-death choices. For example, if you had to choose, should you send a trolley down a track where it would kill one person, or down a track where it would kill ten?

This straightforward problem becomes trickier when applied to self-driving cars. Suppose a self-driving car suffers a mechanical failure: it keeps accelerating and cannot stop. Should it continue into a group of pedestrians to protect its passengers, or should it swerve into an obstacle, potentially killing the passengers inside? Who decides what the car should do? And who is responsible for the consequences of that decision?

One approach would be for autonomous vehicles to adopt utilitarian principles, choosing whichever outcome produces the greatest good for the greatest number of people. Still, at what cost? A car owner might ask, "Why is my life worth less than the lives of five strangers I don't know?" Another option would be for self-driving cars to follow principles of self-preservation, keeping the car on its course to protect the driver even if that brings harm to other people.
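To make the contrast between these two policies concrete, here is a toy sketch. It is purely illustrative: the outcome names, casualty estimates, and decision rules below are hypothetical and are not any manufacturer's actual algorithm.

```python
# Illustrative sketch only: a toy comparison of a "utilitarian" policy
# (minimize total expected casualties) and a "self-protective" policy
# (minimize harm to the vehicle's own occupants). All numbers and
# option names are made up for the example.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    occupant_casualties: float   # expected harm to people inside the car
    bystander_casualties: float  # expected harm to people outside the car

def utilitarian_choice(options):
    """Pick the outcome with the fewest total expected casualties."""
    return min(options, key=lambda o: o.occupant_casualties + o.bystander_casualties)

def self_protective_choice(options):
    """Pick the outcome that best protects the car's own occupants."""
    return min(options, key=lambda o: o.occupant_casualties)

options = [
    Outcome("stay on course into pedestrians", occupant_casualties=0, bystander_casualties=5),
    Outcome("swerve into barrier", occupant_casualties=1, bystander_casualties=0),
]

print(utilitarian_choice(options).name)      # -> swerve into barrier
print(self_protective_choice(options).name)  # -> stay on course into pedestrians
```

Even in this toy form, the two rules disagree over the very same options, which is exactly the disagreement that buyers, manufacturers, and regulators would have to settle.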

Would buyers prefer autonomous vehicles programmed to sacrifice the driver to protect strangers, or to sacrifice strangers to protect the driver? Who decides which ethical guidelines the artificial intelligence in self-driving cars will follow? Human drivers have had years to prepare for decisions like these, and still they do not always make the right ones.

Today, leading technology companies are developing this advanced technology, and to sell their vehicles they will need to build what consumers want. Adoption of autonomous vehicles depends on the trust and comfort of the people who buy or ride in them. In the early 1900s, when elevators first became automated, many people were deeply uncomfortable with them; but as manufacturers adjusted and added features, people gradually accepted the change, and today we barely think twice before stepping into an elevator.

There is also the idea that perhaps we do not have to decide how human life should be valued at all. Would it be right to let artificial intelligence determine its own outcomes, guided only by a base of globally accepted moral principles? The problem is that ethics, like values, are defined differently from place to place. What is acceptable in one country may not be in another.

In an article for Digitalist Magazine, Rudeon Snell said: “My hope is that we strive not to be surprised by unintended consequences that could derail the promise that breakthrough technology offers just because we didn’t take a moment to anticipate how we could deal with them. Right and wrong are, after all, influenced by factors other than just the pros and cons of a situation. If we ask the hard questions now, we can drive our world in the direction we want it, not just for us today, but for our children tomorrow.”

[Source: Digitalist Magazine]

I never thought about the ethical guidelines that come with artificial intelligence in self-driving cars. It's definitely something we should pay close attention to. Very well written article Moises! Keep up the hard work! – Cristian, Madison West High School (2020-03-25 22:29)
This turned out well, Moises. Congratulations! Jane – jane coleman, volunteer (2020-03-26 08:44)
Great article Moises! This is such an interesting subject to consider, and one that I had never thought of before! – Helen, New York (2020-03-26 18:51)
I really enjoyed your article and I thought it was very important and timely! Especially since self-driving cars are increasingly being pushed towards consumers, so knowing the benefits and dangers is really helpful to ensure that the public is getting the right information to make an informed decision. Another point that I wanted to make was that I really appreciated the fact that you brought in the question of ethics. And it was really interesting because it still remains a big problem that many manufacturers in this field have to figure out. But very good job! It was a very engaging read and I feel that it was very well written! – Yani, Madison East High School (2020-03-28 16:20)