In grad school I wrote a paper about Asimov’s three laws of robotics and whether they were analogous to an ethical system called divine command theory. I thought I was being clever since robots are programmed to do certain things, and a “good” robot does what it is programmed to do. The paper was a dud. I probably should have written about principlism, since pretty much all of Asimov’s robot stories are about what happens when his three laws conflict with each other. Live and learn.

Even though the paper flopped, the research behind it gave me a useful look at the various ways people have tried to address the question of robot ethics. Robot ethics has become increasingly important as we automate complex activities that interface with humans.

An article in TechCrunch explores this topic with self-driving cars. It uses the trolley problem, a mainstay of ethics classes, as an example. In brief, the trolley problem presents hypothetical no-win situations in which you control a lever that can direct a trolley (or a train) onto one of two tracks. In one scenario, both tracks have people tied to them (by bandits from the Old West, I guess), but one track has five people tied down while the other has one. Who do you kill? (The majority of people will kill the one person to save the five.) Once you answer that question, the scenario changes so that the one person is a family member or someone you love and the five people are convicts. Does your answer change? In another scenario, you can throw a fat man onto the track to save the five people. Would you kill the fat man?

The trolley problem is one of those “gotcha” hypothetical ethical situations that is so contrived and convoluted (and equally improbable) that it serves little use other than to get someone to make a decision they would otherwise not make (“Ha ha, see, you don’t really think utilitarianism is bad.” Or “See, you’re prejudiced against fat people.”). It’s a cheap trick in ethics debates. But where the trolley problem is instructive is in raising bigger questions about more likely scenarios: What kind of calculus will self-driving cars use in no-win situations? Sometimes you have to hit the cat to avoid a rear-end collision. Sometimes you have to make harder split-second decisions, like choosing which car you are going to hit, because you’re going to hit one of them.

The obvious answer would be to program the self-driving car to make the decision that minimizes damage. But what if every available choice could very well involve someone dying or being severely injured? Unfortunately, that is not as improbable as the trolley situation. The TechCrunch article asks whether we really want to program our self-driving cars (or robots in general) to make a decision that kills a human. I may be willing to sacrifice myself to avoid hitting a car full of children, but should a self-driving car make that decision for me? Should the car have my best interests in mind, or those of people younger than me? Or older than me? Or the car carrying the most people (i.e., you are punished for not carpooling)? Usually I’d want it to have my best interests in mind, but there may be times and situations where I’m willing to bear the impact.
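To see why “minimize damage” is not a neutral instruction, here is a toy, purely hypothetical sketch (it does not reflect any real autonomous-driving code, and the occupant_weight knob is entirely made up): the weight decides whose harm counts more, and choosing it is the ethical question.

```python
# Hypothetical illustration only -- not based on any real self-driving system.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # estimated injury risk to the car's own passengers (0-1)
    external_harm: float   # estimated injury risk to pedestrians and other cars (0-1)

def choose_maneuver(options, occupant_weight=1.0):
    """Pick the option with the lowest weighted expected harm.

    occupant_weight > 1 favors the car's own passengers;
    occupant_weight < 1 favors everyone else.
    """
    def cost(m):
        return occupant_weight * m.occupant_harm + m.external_harm
    return min(options, key=cost)

# Swerving into a barrier hurts the passenger; braking late risks the car ahead.
options = [
    Maneuver("swerve into barrier", occupant_harm=0.6, external_harm=0.0),
    Maneuver("brake late", occupant_harm=0.1, external_harm=0.4),
]
print(choose_maneuver(options, occupant_weight=1.0).name)  # brake late
print(choose_maneuver(options, occupant_weight=0.5).name)  # swerve into barrier
```

Change one number and the car’s “decision” flips. Everything contentious hides in how the harm estimates and the weight are chosen, not in the code itself.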

As we automate complex activities that will inevitably involve machines interacting with humans, these questions will arise. In most of Asimov’s robot stories, as well as his Foundation series, things got complicated when people were involved. People are not machines. We do not always make the most logical decisions. We are not always on our best behavior. We don’t always pay attention. We are not always consistent in our decisions. And we are often unpredictable. These are all arguments for self-driving cars. Think of all the lives that would be saved if there were no more drinking and driving or texting and driving. It’s a fair point, but the cars do not operate in a vacuum. There are still people inside them, people walking around the streets, and people driving other cars.

One option is to remove the unpredictable “people” factor as much as possible: make all cars self-driving and completely automate transport. This would certainly save us from our texting, tired, tipsy selves. The other option is to keep the “people” factor in place by building in manual overrides and human monitoring of the machines.

However, there are still split-second decisions that don’t involve people making a mistake. Take, for example, an incident that happened on Valentine’s Day in California near Google’s headquarters. One of Google’s self-driving cars had a fender bender with a bus. The car was attempting to turn right onto a major street when it detected sandbags around a storm drain at an intersection. According to the article, “The right lane was wide enough to let some cars turn and others go straight, but the [self-driving] Lexus needed to slide to its left within the right lane to get around the obstruction. The Lexus was going 2 mph when it made the move and its left front struck the right side of the bus, which was going straight at 15 mph.”

The report does not assign fault, but Google is taking some responsibility for the accident. In a written statement, Google called it a “classic example of the negotiation that’s a normal part of driving—we’re all trying to predict each other’s movements.”

While not nearly as dramatic, or morally problematic, as our trolley scenario, this situation demonstrates just how poorly an AI adapts to the myriad situations that can occur on the road. What does this have to do with ethics? The easy ethical decisions are the ones where something is clearly right or clearly wrong (based on one’s moral foundation). Things get dicey when all of the options are morally bad or when you can’t know the outcome. At that point, you have to take all of the factors of a situation into consideration and make your best guess. A human driver may have hit the bus, too, but a human driver may very well have slammed on the brakes and missed it.

While some researchers are trying to teach robots ethics, no set of taught rules can ever account for every possible situation. Additionally, technology has not advanced to the point that robots can mimic the human brain in its odd ability to assess and contextualize new situations and then make decisions that require more than mathematically complex if-then statements.
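To put that last point in concrete terms, here is a toy, purely hypothetical sketch of the if-then approach: a hand-written rule table only covers the cases its author thought of, and everything else falls through to a default that may be exactly wrong.

```python
# Toy, hypothetical illustration of why hand-coded if-then rules run out of road.
def react(obstacle: str, speed_mph: float) -> str:
    if obstacle == "sandbags" and speed_mph < 5:
        return "ease left around the obstruction"
    if obstacle == "pedestrian":
        return "emergency brake"
    if obstacle == "cat":
        return "brake only if nothing is close behind"
    # Every situation the author never imagined lands here.
    return "default: slow down"

# A bus closing in while you ease around sandbags was never in the table.
print(react("bus approaching in the same lane", 2.0))  # "default: slow down"
```

Stacking more rules on top only postpones the problem; the road keeps producing situations nobody wrote a rule for.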