Autonomous Vehicles: A Beauty in Disguise?

Author: Andrew Zhang, Graphics: Joshua Lee

The BRB Bottomline

With the unprecedented rate of technological development, it seems inevitable that autonomous vehicles (AVs) will eventually dominate our roads. In fact, as of 2020, there were over 40 major players in the self-driving vehicle industry, including Amazon, Google, and General Motors. In the AV debate, there are two distinct sides: the skeptics and the optimists. The one big question surrounding AVs is: “Are autonomous vehicles safe?” AV proponents argue that computers have reflexes far superior to those of even the most adept Formula One drivers. Critics counter that the algorithms inside AVs are prone to mistakes, contending that even one misstep can be fatal. But have you ever wondered how AVs make decisions in the first place? In this piece, I will propose an entirely different question: whose ethical values are these AVs programmed to adhere to?


I recently came across the concept of a reward function while taking Introduction to Artificial Intelligence (CS188) with Professor Anca Dragan. The concept fascinated me because it deals with the ethics behind robot decision-making: how would an AI discern right from wrong?

Before I explain what a reward function is, let me propose a situation. Imagine you are Pac-Man, meandering through an enclosed maze and trying to eat all the dots. Of course, there is a challenge: you must avoid the ghosts at all costs! For an average human playing this game, the notion of eating all the dots while avoiding all the ghosts may seem quite straightforward. For a machine, however, which must explicitly evaluate every possible move, the decision is laid out far more comprehensively. Whether to proceed up or down depends on many factors, including the positions of the dots, the distance to the ghosts, and the existence of an escape route, as the sketch below illustrates.
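To make this concrete, here is a minimal sketch in Python (the language CS188 itself uses) of how an agent might weigh those factors. The feature names, weights, and distances are hypothetical illustrations of the idea, not the course’s actual evaluation function:

```python
# A toy move-scoring function. Every weight and feature here is an
# invented illustration, not the real CS188 Pac-Man evaluation function.

def score_move(dist_to_nearest_dot, dist_to_nearest_ghost, has_escape_route):
    """Return a score for a candidate move; higher means more attractive."""
    score = 10.0 / (1 + dist_to_nearest_dot)     # closer dots are better
    score -= 50.0 / (1 + dist_to_nearest_ghost)  # closer ghosts are worse
    score += 5.0 if has_escape_route else -20.0  # heavily penalize dead ends
    return score

# "up" moves toward a dot but near a ghost; "down" is safer but farther from food.
moves = {
    "up": score_move(dist_to_nearest_dot=1, dist_to_nearest_ghost=2, has_escape_route=True),
    "down": score_move(dist_to_nearest_dot=4, dist_to_nearest_ghost=6, has_escape_route=True),
}
print(max(moves, key=moves.get))  # prints "down": safety outweighs the nearby dot
```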

The question becomes slightly more complicated when an additional factor is incorporated: survival cost. In the classic game, Pac-Man loses points the longer it stays alive. The question that naturally popped into my mind when designing my own algorithm was this: “Under what conditions should Pac-Man commit suicide by colliding into a wall in order to optimize the number of points attained?” The answer boils down to the simple concept of a reward function, sketched in code below.
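In code, a reward function is nothing more than a mapping from what just happened to a number of points. A toy version with a survival cost might look like the following; every value here is invented purely for illustration:

```python
# A toy reward function with a "survival cost". All values are made up.

DOT_REWARD = 10      # points gained for eating a dot
DEATH_PENALTY = -50  # points lost for hitting a ghost or falling into a pit
LIVING_REWARD = -1   # the survival cost: charged on every time step alive

def reward(ate_dot: bool, died: bool) -> int:
    """Map one step's events to the points it earns (or costs)."""
    r = LIVING_REWARD  # simply staying in the game costs a point
    if ate_dot:
        r += DOT_REWARD
    if died:
        r += DEATH_PENALTY
    return r
```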

Suppose Pac-Man were trapped in a corner with no escape! Directly behind Pac-Man is a fire pit, and a few spaces in front of it is a ghost. For a human playing this game, the natural response is one of self-preservation: survive for as long as possible. Yet when simulated under the game’s rules, the correct response (in the case where Pac-Man loses points for staying alive) is to voluntarily fall into the pit to prevent any unnecessary loss of points.
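The arithmetic bears this out. Under the toy values from the sketch above, with no dots left to reach, every extra step alive only digs the hole deeper:

```python
# Worked comparison for the trapped corner, using the toy values above.
# Pac-Man will die either way; the only question is when.

LIVING_REWARD = -1   # survival cost per step (hypothetical)
DEATH_PENALTY = -50  # penalty for dying (hypothetical)

def total_return(steps_survived):
    """Points accumulated by surviving some steps and then dying."""
    return steps_survived * LIVING_REWARD + DEATH_PENALTY

print(total_return(0))   # fall into the pit immediately: -50
print(total_return(10))  # flee for 10 steps, then die:   -60
# With nothing left to eat, dying now is the point-maximizing move.
```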

Consider instead what Pac-Man would do if, rather than losing points, it gained points for every millisecond it managed to stay alive. The program would then incentivize Pac-Man to keep surviving. By tuning one number, the actions taken by a calculating agent become drastically different. This observation triggers an ominous thought: in a simulation like Pac-Man, the repercussions of falling into a pit or colliding with a ghost are virtually nonexistent. But when we apply this simple theory to reality, where the outcomes are permanent, these ethical dilemmas become far more complex and daunting.
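Numerically, the reversal is striking. Reusing the same toy comparison with the living reward as a parameter, flipping the sign of one constant flips the optimal action:

```python
# Same comparison as before, but with the living reward as a parameter.
# All numbers remain hypothetical.

DEATH_PENALTY = -50

def total_return(steps_survived, living_reward):
    return steps_survived * living_reward + DEATH_PENALTY

# Survival costs points: dying immediately wins.
print(total_return(0, living_reward=-1))   # -50
print(total_return(10, living_reward=-1))  # -60

# Survival earns points: fleeing for as long as possible wins.
print(total_return(0, living_reward=+1))   # -50
print(total_return(10, living_reward=+1))  # -40
```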

Perspective of an autonomous vehicle. ZOOX

Now, let’s consider how an autonomous vehicle operating in real life would respond to two opposing threats. The answer can be narrowed down to the reward function: if colliding with a group of children incurs a greater penalty than the alternative, which may be driving the vehicle off a cliff, then the AI will drive off the cliff. If the penalties were reversed, and the engineers of the AV imposed a lesser penalty for colliding with a group of people than for sacrificing the vehicle, the optimal action would be to collide with the pedestrians.
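Stripped of every engineering detail, the uncomfortable logic reduces to comparing two numbers. The sketch below is a deliberate oversimplification with invented outcome labels and penalty values; no production AV is written this way, but it shows how whoever sets the penalties effectively sets the ethics:

```python
# A deliberately oversimplified sketch of the dilemma. The outcomes and
# penalty values are hypothetical; real AV planners are vastly more complex.

penalties = {
    "hit_pedestrians": -1_000_000,  # penalty assigned to this outcome
    "drive_off_cliff": -500_000,    # penalty for sacrificing the vehicle
}

def chosen_action(penalties):
    # The agent simply takes whichever outcome costs it the least.
    return max(penalties, key=penalties.get)

print(chosen_action(penalties))  # -> "drive_off_cliff"

# Reverse the relative magnitudes and the "optimal" action reverses too.
reversed_penalties = {"hit_pedestrians": -500_000, "drive_off_cliff": -1_000_000}
print(chosen_action(reversed_penalties))  # -> "hit_pedestrians"
```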

An ethical dilemma is sure to arise for the creators of these autonomous vehicles. On one hand, it is difficult to market a product that would decide to sacrifice the life of its buyer for the greater good. On the other, there is no justification, ethical or legal, for a company to decide that the lives of a few (or even one) are more valuable than the lives of others. These predicaments are modeled well by MIT’s Moral Machine platform, which allows participants to visualize the difficulty of machine-driven moral decisions. The question is: who will regulate this pressing issue?

An AV making a decision: crash into an obstacle, which may kill the passengers, or maneuver away and kill a bank robber instead, November 10, 2020. Moral Machine/MIT

This year, the U.S. Department of Transportation set out important, but extremely broad, recommendations for the autonomous vehicle industry in its report titled Ensuring American Leadership in Automated Vehicle Technologies. It discussed basic safety, data security, and research on self-driving cars. Nearly 30 states, including California, have enacted legislation around autonomous vehicles, but most of these laws focus on broad specifications, such as how closely autonomous vehicles are allowed to follow each other. Other regulations, issued by California’s DMV, require companies to report “disengagement” frequencies, detailing the rate at which backup drivers must retake control of the vehicle during testing. Clearly, the vague scope of laws and loose technical requirements surrounding AVs warrant skepticism from the public.

Take-Home Points

Unfortunately, private vehicle manufacturers are not required to disclose the algorithms their AVs adhere to. It is in society’s interest to understand what makes up the minds of these machines before they are released onto public roads, but this undertaking is inherently difficult due to the advanced technologies used in automated vehicle design. Hopefully, this technology will not be a black box that corporations label a “trade secret,” but rather one that is fully transparent.
