I’ve read Kurzweil’s The Singularity Is Near and, like you, have seen numerous articles and television programs about the near future of AI – Artificial Intelligence. The focus is always on whether we need to fear AI, what jobs will be lost, or what it means to be human when AI exceeds human intelligence. What’s missing is the fact that AI, in the form of decision algorithms that affect how we live, is among us right now. An algorithm is a programmed instruction set computers use for automated decisions. And there are significant moral questions worth considering. This blog entry was prompted by those television shows and an Aeon Newsletter article I recently read titled ‘Automated Ethics’. I’m drawing a lot from that article for this post.
Before I jump into the AI decisions underway now, let’s go back to a relevant moral mind game that most people have encountered. It was conceived in 1967 by the English philosopher Philippa Foot. I’m going to tweak this mind game and apply a computer algorithm to it.
Let’s say you’re standing at a railway switch at a point where two tracks go into mile-long tunnels. The switch is simple: move it right and the speeding train goes into the right tunnel; move it left and the train goes into the left tunnel.
The mind game goes like this: you just watched one person, a stranger walking on the tracks, go into the left tunnel and five people, also strangers, go into the right tunnel. A train comes barreling down the tracks and the switch is currently set to direct it into the right tunnel where the five are walking on the tracks. What to do?
It appears the correct decision is to actively throw the switch so the train goes into the left tunnel and kills only one rather than five. If you hooked a computer up to the switch, you would probably program in exactly that rule for this contingency.
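To make that concrete, here is a minimal sketch of what such a programmed contingency might look like. It’s illustrative only: the function name, the inputs, and the head counts are all hypothetical, invented for this post.

```python
def set_switch(people_left, people_right):
    """Hypothetical contingency rule: route the train into
    the tunnel with fewer people walking on the tracks."""
    if people_left < people_right:
        return "left"
    return "right"

# One stranger in the left tunnel, five in the right:
print(set_switch(people_left=1, people_right=5))  # -> "left"
```

The rule is pure arithmetic: fewer deaths wins. Keep that in mind for what follows.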
There’s a lot of debate about how the moral calculus changes if the one is a genius and the five are criminals on death row, or if the one is a family member, and so on, but let’s keep all six as strangers for the moment.
Take the situation a step further. I have to give a nod to the Aeon article for this scenario; I tweaked it from one first posed in a 1985 article by the MIT philosopher Judith Jarvis Thomson. You are a skinny person standing on an overpass with a fat man. The five strangers walk into the tunnel directly below you. The only way to save them when the train comes is to clock the fat man over the head to render him unconscious and then fling him in front of the train. The scenario stipulates that only the larger man’s body will stop the train; you are too skinny.
Most balk at killing the fat man because it’s calculated murder. This argument says premeditated murder is wrong regardless of the results. A computer algorithm in this scenario would view the two situations (throw the switch versus knock out and fling onto the tracks) as exactly the same: an action is taken that kills one but spares five. A book was written about this scenario: Would You Kill the Fat Man? by David Edmonds. There are other directions to take this, such as when the person your action kills is thousands of miles away, as in a drone strike, or when you let distant people starve to death through inaction. For my purpose of describing AI’s effect on us, I’m going to stick to the up close and personal.
Now we’re deep into the realm of moral philosophy. There are two main approaches: deontology and consequentialism. Don’t get too wrapped around the words; philosophers like to use big words to explain the simple. Under deontology, a moral action is judged solely by the nature of the action, regardless of consequences. Under consequentialism, a moral action is judged solely by its consequences, regardless of the nature of the action.
Thomas Aquinas in the 13th century attempted to balance moral judgement between the nature of actions and their consequences: a person’s intent, and not just their action, matters. There is no way I’m going to attempt to capture the nuances of Aquinas, but his point stands: both actions and consequences matter.
Computer algorithms and today’s AI run on the second case: actions designed to deliver the desired consequences. What does that mean? Simply put, a computer algorithm would throw the switch to kill one and save five. Simply put, a computer algorithm would kill the fat man.
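A short sketch shows why the two scenarios collapse into one. In a purely consequentialist scoring function (again hypothetical, for illustration only), the nature of the action never enters the calculation; only the body count does:

```python
def consequentialist_choice(deaths_if_act, deaths_if_refrain):
    """Purely consequentialist rule: choose whichever option
    leaves fewer people dead. HOW the deaths happen (a switch
    thrown, a man flung off an overpass) never enters the math."""
    return "act" if deaths_if_act < deaths_if_refrain else "refrain"

# Railway switch: acting kills one, refraining kills five.
print(consequentialist_choice(1, 5))  # -> "act"
# Fat man: acting kills one, refraining kills five.
print(consequentialist_choice(1, 5))  # -> "act"
```

Identical inputs, identical output. A deontologist would treat the two cases very differently; the algorithm cannot tell them apart.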
Let’s focus on the algorithms in self-driving cars. As much as purists want to push against them, self-driving cars are already here. Teslas today have the capability to self-steer, and hundreds of thousands of AI-driven miles have already been logged. There is tentative pushback, but the proliferation of self-driving cars is inevitable.
Elon Musk, among many others, has called sitting in traffic a soul-sucking experience. I used to commute over an hour each way in the Bay Area, and Elon is right! Sitting in traffic is maddening. I would liken it to losing irreplaceable alert lifespan that I’d much rather spend on something else. Self-driving car technology is one of those steamrollers of change that’s going to happen. You can trust me on that. Self-driving cars are going to be lauded for saving lives and enabling a higher quality of life. In the not-too-distant future, self-driving cars, buses, and trucks are going to be held up as one of this generation’s great achievements. And it’s all going to run on computer AI. And all that AI will have algorithms programmed to make moral judgements.
In today’s polarized environment, there is a big fight over which side gets to be the elites, but there is little discussion of the point that, regardless of which side has the upper hand, the elites get to tell everyone what to do and what to believe. Modern society is simply too complex not to rely on the respective experts. But this is different.
I strongly oppose the idea of ‘let the experts figure it out’ with respect to the moral implications of self-driving cars and other automated algorithms. If ever there was a need for non-partisan transparency, this is it. Don’t believe me? Okay, let’s run the fat man scenario in a personal context.
A regular person is in a self-driving car heading down California’s scenic Highway 1 at sixty miles per hour. This person is simply going from Carmel to Big Sur’s Pfeiffer State Park Campground, a 26.2-mile distance – the picturesque route (in reverse) of the annual Big Sur Marathon. The car buzzes along while our lone occupant reads email on a smartphone and laps up the inimitable scenery.
Two-thirds of the way across the iconic Bixby Bridge, the car senses an overturned school bus with five children directly in its path. Its computer AI kicks in and it swerves hard right to miss the children. It’s an unfortunate spot, but the algorithm is clear. The car careens off the bridge and the occupant is killed. Let’s not quibble about the occupant’s potential survival. I’ve personally run across that iconic bridge twice while doing the Big Sur Marathon; it’s one of the tallest single-span concrete bridges in the world. The fall is roughly 280 feet – the occupant is killed.
Is everyone okay with what just happened? One human was killed to save five. It’s the exact same scenario as the railroad switch. Maybe more to the point, what algorithm does Tesla or Uber or Google or Apple use for this situation in their self-driving cars today? Do they use any? What if the algorithm says it will do its best to stop, and if it can’t, too bad? Is that okay? Shouldn’t we all know this?
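For what it’s worth, here is the kind of rule that may be buried in such a system. Every name and branch below is hypothetical – my guess at the shape of the logic, not anything Tesla, Uber, Google, or Apple has disclosed – which is precisely the problem:

```python
def collision_response(occupants, people_in_path, can_stop_in_time):
    """Hypothetical swerve policy for an unavoidable obstacle.
    This is exactly the sort of rule the public never gets to see."""
    if can_stop_in_time:
        return "brake"
    if people_in_path > occupants:
        return "swerve"  # sacrifice the occupant(s) to save more lives
    return "brake and stay the course"  # hit whatever is in the path

# On Bixby Bridge: one occupant, five children, no room to stop.
print(collision_response(occupants=1, people_in_path=5,
                         can_stop_in_time=False))  # -> "swerve"
```

Flip one comparison and the car protects its occupant instead. That single line of code is a moral judgement, and right now nobody outside the company knows how it’s written.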
It’s worth more water-cooler talk when you start changing it up. What if the occupant is a doctor about to present the cure for cancer at the scenic Pfeiffer State Park Campground, and the overturned bus puts prison inmates in front of the speeding car? Are we happy with the same algorithm? It’s easy to imagine how differently a software programmer who believed his father was falsely imprisoned would write it compared with a programmer whose father died of cancer. As to how the algorithm would know who was where: GPS transceivers and cell phones. Anonymity is going to be hard to come by.
There is a school of thought that says inflicting such ‘kill one to save five’ moral questions on people sets up a Machiavellian ‘ends justify the means’ rationalization that goes too far. Such rationalization enables acceptance of drone strikes, preemptive military strikes, and even the premeditated murder of someone known to abuse their children. That’s where our responsibility to Aquinas’s balanced approach comes in. We have to judge and accept responsibility for that judgement.
Push back against thinking about these quandaries all you want; everything you do has a morally right and wrong element. I disagree with the school of thought that says we shouldn’t involve regular people in these discussions. We have to face these moral choices. It is immoral to abdicate our personal responsibility. It would also be wrong to lose the ability to choose.
Thomas Frank, in his book Listen, Liberal, used the term ‘virtue quest’ to describe the desirable overarching goals – fighting climate change, advancing human rights, micro-lending in impoverished countries – that high-achieving American professionals in politics, business, and foundations support. It’s not a stretch to see that a reduction in traffic fatalities could become a virtue quest. According to the National Safety Council (NSC), 40,200 people died in motor vehicle accidents in the U.S. in 2016 – a 14% jump in a two-year period. Self-driving cars will reduce fatalities from the most common cause: inattentiveness.
When traffic fatalities dramatically decline in the Seattle area, the Bay Area, or Los Angeles due to self-driving cars, we will have the classic virtue quest. If using this technology demonstrably saves lives, isn’t government morally bound to dictate its use? Today a Tesla driver has the choice of whether to activate self-driving; what about tomorrow? If people can be saved by mandating that every car have this technology, then it’s a short step to commanding that self-driving be activated, removing the individual’s choice. That’s why it’s important to reckon with the moral implications of AI, in all its automated forms, right now.
The moral questions are tough to answer. That’s why philosophers have been debating them for millennia. But they are within your purview. And the worst thing we could do is let the high-achieving professionals write these algorithms without public scrutiny. These algorithms need to be transparent in design, execution, and governance. There should also be mechanisms of review.
In the future, when you or your son or daughter get into a self-driving car in Carmel and plan to relish the trip to Big Sur’s Pfeiffer State Park Campground, you should be totally aware of what the algorithm is going to do when it faces five children from an overturned bus on Bixby Bridge. The only way you will know is to ask about it – and weigh in – now!