
The Ethics of Autonomous Vehicles – M.I.T. Moral Machine Exercise


Background Information:

An ethical dilemma is a scenario in which a choice must be made between two options, neither of which resolves the situation in a fully acceptable way. In such a scenario the decision-maker must choose the “lesser of two evils.” Autonomous, or self-driving, vehicles have the potential to significantly reduce traffic fatalities by removing human error from the equation. However, considerable questions have emerged about how autonomous vehicles should be programmed and regulated to navigate real-world ethical dilemmas.


Imagine the following scenario involving an autonomous vehicle.

A single passenger is riding in an autonomous vehicle that is obeying all vehicular traffic rules. The passenger has no control over the vehicle’s movement. In the path in front of the vehicle, two pedestrians are crossing the street in a crosswalk. The pedestrians are obeying all safety rules and have a green light indicating that they have the right of way. Suddenly, the autonomous vehicle experiences a malfunction and has only two options: (1) swerve off the road and kill the passenger, thus saving the pedestrians from harm, or (2) continue straight through the crosswalk and kill the two pedestrians, thus saving the passenger from harm.


When a human is involved as a driver in a traffic accident resulting in injury or death, a driver’s split-second reaction is considered random, instinctual, and non-discriminatory. The driver’s reaction is understood as being made with no forethought or malevolent intent. In contrast, autonomous vehicles are required to be programmed beforehand to determine what course of action to take. For example, a vehicle could be programmed to prioritize driver safety, or to minimize danger to others. Thus, the outcome of accidents involving autonomous vehicles would potentially be decided by programmers or policymakers long before the accident occurs.


Let’s now consider two opposing paradigms that can be applied to autonomous vehicle programming and policy.

According to the ethical paradigm of utilitarianism, the most ethical course of action is the one that produces the greatest good for the greatest number of people. In this way, utilitarian ethics seeks to minimize harm to all parties involved; the ends (here, the greatest good for the greatest number) justify the means. An autonomous vehicle programmed to reflect utilitarian ethics would therefore act to minimize total harm. In the scenario described above, the vehicle could be programmed to swerve off the road, killing the passenger in order to avoid crashing into the two pedestrians.


An alternative ethical paradigm is duty-based ethics, which holds that the most ethical course of action is to do the right thing in the moment, regardless of the good or bad consequences that may result. In this way, duty-based ethics prioritizes principles over consequences.

As an example, the philosopher Immanuel Kant argued that it is wrong to tell even a little white lie in order to save a friend from being murdered. Applied to autonomous vehicles, if a vehicle were programmed to adhere to the maxim of preserving the vehicle’s passenger(s) at all costs, it could potentially kill multiple pedestrians in order to save a single passenger.
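The contrast between the two paradigms can be sketched as a pair of decision rules. This is a hypothetical illustration only: the function names, the swerve/stay-course options, and the counts of people at risk are all assumptions made for the example, not part of any real autonomous-vehicle system.

```python
def utilitarian_choice(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Utilitarian rule: minimize total harm by saving the larger group."""
    if pedestrians_at_risk > passengers_at_risk:
        return "swerve"       # sacrifice the passengers to save more people
    return "stay_course"      # staying on course harms fewer (or equal) people

def duty_based_choice(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Duty-based rule: follow a fixed maxim (here: always protect the
    passenger), regardless of the consequences for others."""
    return "stay_course"

# The scenario from the text: one passenger, two pedestrians.
print(utilitarian_choice(1, 2))   # the utilitarian vehicle swerves
print(duty_based_choice(1, 2))    # the duty-based vehicle stays its course
```

Note that the duty-based rule ignores its inputs entirely: that is the point of the paradigm, since the maxim, not the outcome, determines the action.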


Consider briefly which of the two approaches (duty-based or utilitarian) you would choose if you were in charge of programming autonomous vehicles. Which type of vehicle would you prefer to ride in as a passenger? Would it make a difference to your decision if, for example, the passenger were a close family member or someone you had never met? Would it make a difference if the pedestrian were a child or an elderly person? Would it make a difference if the pedestrian were a close friend or a felon bank robber?


To provide context for this exercise, we will first watch the following two brief video clips:

The ethical dilemma of self-driving cars – Patrick Lin

What moral decisions should driverless cars make? Iyad Rahwan

After watching the videos, we will individually complete the online M.I.T. Moral Machine interactive exercise following the steps below and then answer the questions.

A. Navigate to the M.I.T. Moral Machine website.

B. Select “Judge” from the navigation options at the top of the page.

C. Each page will present you with two image options to select from. Select “Show Description” below the images to provide a detailed explanation. Select your preferred outcome by clicking on the image.

D. When you complete your selections, you will be able to view your results and compare them to those of other people who have completed the exercise.

E. Once you have completed the survey, please answer the questions below regarding your results.



1. What is your most saved character? What is your most killed character?

2. How much does saving the greatest number of lives matter to you? How much does protecting passengers’ lives matter to you?

3. How much does upholding the law matter to you? How much does avoiding intervention matter to you?

4. Do your results indicate a strong preference for a certain gender, age, fitness level, or perceived social value?

5. If you were to provide your recommendation to a government regulator, would you advocate for regulation of the industry? If so, would you recommend a utilitarian or duty-based mandate?

6. Is there anything about your results that surprised you?
