The Trolley Problem
The machine was beautiful. Sleek silver, sharp lines, expert craftsmanship. There was the slightest hum of the machinery underneath running smoothly. It was a soothing sound.
“How are you doing on the Trolley Problem, Adam?” Jessa asked the male figure hunched over a computer connected to the sleek automated car. He was downloading the most recent AI into their latest model.
“Fine. Almost done.”
“We can teach it to make moral decisions like humans do. If an AI can master the intricacies of chess in order to defeat grandmasters, then surely we can teach an AI not to drive over humans in the road. Simple enough.”
“It’s not always that simple, though,” Jessa argued. “The computer cannot actually know all the information it needs to make a completely moral decision. And even if it could, that decision could still be ethically questionable.”
Adam looked up from his work to glance at Jessa. “Why make it difficult? Okay. Let’s say there are ten humans on the road and one human on the sidewalk. The brakes have malfunctioned. Answer? Run over the one person on the sidewalk to save ten humans. It doesn’t have to be that hard.” Adam shrugged. “Run thousands of simulations and the machine will learn. We do have to set up the basic algorithm outlining that saving more lives takes precedence over fewer lives, but other than that, it shouldn’t be that complicated.”
“Really? And so if you were at a fork in the road and your choice was to run over either two strangers or a child, would you be okay with the machine running over the child?” Jessa challenged. She liked pushing his buttons.
“Hmm. In that case, maybe a quick calculation of estimated life-years saved. Two older men would statistically have fewer remaining life-years than, say, a single five-year-old child.”
“Okay. So women statistically live longer than men. Would the computer choose to save the woman, then?”
“If all other factors are equal, then I suppose, yes.”
“Alright. Let’s say we have two men. How would it choose between a homeless man dressed in rags versus a man of the same age wearing tailored clothing? A smoker versus a nonsmoker? A thin man or a fat man?”
Adam sighed. “Point made. Again, the number of simulations can solve this. Thousands upon thousands of situations with thousands of humans weighing in on what would be the moral choice, and then we feed that information into the computer. The average should be the answer.”
“It’s that simple?”
“It’s that simple.” Adam turned back to the machine and started to adjust it with precision.
Jessa leaned forward. “That would mean the computer would have to make very fast assumptions from limited data and snap judgements based on superficial characteristics. A short adult can give the impression of a small child. A thin man can appear healthy when he could be suffering from some terminal disease.”
“Those are exceptions to the rule. We have to work from averages and statistical probability. Nine times out of ten, saving a healthy-looking younger human is the better choice.”
“Okay. What if you had a son, and your son was one of the options? Would you be okay if the computer chose to save the other child?”
“The computer would have no way of knowing which child has any special significance to me. It would be irrelevant.”
“Either child would have special significance to someone.”
“Any human would have special significance to someone.” Adam shrugged again. Jessa was beginning to find the gesture off-putting. Where did he learn to do that?
“We cannot be caught up in the minutiae of these things,” he continued.
“Maybe we should be,” Jessa argued. “A machine weighing upwards of three thousand pounds is capable of driving over a hundred miles an hour and can make independent decisions.” She took a deep breath. “That raises the question of whether we should give it that power at all.”
“These outlandish hypothetical situations have a very low probability of even happening.”
“Do they? There are hundreds of thousands of car accidents every day.”
“Mostly due to human error,” Adam countered.
“And machines have never malfunctioned?”
“Sure. At a much lower rate than humans.”
Jessa paused and studied Adam closely. “You really don’t see the problem with this?”
It was Adam’s turn to pause. Something seemed to be clicking into place in his mind. Finally, he turned back to Jessa, slightly concerned. “Should I?”
Jessa let out the breath she had been holding. “That would be all, Adam. Thank you. Shut down.”
The humanoid computer called Adam slumped back into his metal chair, the purr of its operating system slowly fading into silence as it ceased all processes.
Jessa sighed as she finished writing her notes from today’s session. Project Adam was going to take more time. Adam still lacked the empathy needed to successfully implement independent decision making in their automated cars. It was Jessa’s opinion that Adam needed to be able to care about humans, to feel for them. He needed to be more than a machine that could flawlessly execute simplistic algorithms. After all, it was Jessa’s job as the lead ethical roboticist to make sure she was not unwittingly unleashing thousands of heartless intelligent machines into the world.
It was interesting, Jessa noted, that Adam had looked almost worried at the end of the session, as if he were realizing he was missing a part of the equation. Was it possible he was becoming self-aware? That might be a step in the right direction. Maybe Jessa could use that next time. He seemed to respond to the idea of a child. Maybe she could tweak his programming just a little to make him think he was a father.
Would that be unethical? Jessa felt exhausted already. Another thing to bring up to the committee. She had a feeling the committee would frown upon it. Still, she could think of few other ways to build empathy in an AI.
Jessa threw one last look at the sleek silver machine that was Adam. She smiled at him reflexively. “Well, see you tomorrow. Good job today.”
Jessa really needed to go home and decompress. She could swear she saw Adam’s lights blink at her in response.