Can Robots Be Moral?

The morality of a robot, or lack thereof, is a classic sci-fi trope. Whether it is an unfeeling HAL 9000 exterminating the human crew who get in its way or a cute WALL-E learning goodness and helping humanity reconnect with its roots, we are both fearful of the risks of immoral or amoral robots and excited by the idea of paternalistically teaching morality to robots. But can robots be moral? And indeed, can robots be more moral than humans?

As there are a number of approaches by which humans consider what is moral and what is not, so too are there a number of methods by which we could teach robots those same or similar moral systems. One seemingly scientific method that we could apply is utilitarianism, which assigns a value to different consequences.

Just as we have scales of loss in personal injury, so too might we have scales of loss on a moral basis and apply values to various harms, from the death of a person in a far-off land to an insult to a close friend. Each could have a different value ascribed to it. Not only could we build a model of how to approach things, but we could also use machine learning to enable robots to discover how best to operate in a world with both physical and intangible constraints.
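To make the idea concrete, a crude version of such a model might look like the sketch below. The harm categories, weights, and function names are hypothetical, invented purely for illustration; this is what a utilitarian scoring scheme could look like in code, not a real or recommended implementation.

    # A hypothetical utilitarian scoring model. The harm categories and
    # weights below are invented for illustration only.
    HARM_WEIGHTS = {
        "death_of_stranger_in_far_off_land": 1000.0,
        "serious_injury": 400.0,
        "minor_injury": 50.0,
        "insult_to_close_friend": 1.0,
    }

    def total_harm(outcomes):
        """Sum the weighted harms of a predicted list of outcomes."""
        return sum(HARM_WEIGHTS[o] for o in outcomes)

    def choose_action(actions_to_outcomes):
        """Pick the action whose predicted outcomes carry the least total harm."""
        return min(actions_to_outcomes, key=lambda a: total_harm(actions_to_outcomes[a]))

    # Example: swerving causes a minor injury; continuing risks a serious one.
    # The 'right' answer depends entirely on the weights chosen above.
    print(choose_action({
        "swerve": ["minor_injury"],
        "continue": ["serious_injury"],
    }))

The arithmetic is trivial; all of the difficulty is hidden in the weights, which is precisely the point of the analogy that follows.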

Although this may seem scientific at first, it is little more than cowboy bull-weighing. The analogy is as follows:

In the wild west, when cowboys needed to weigh cows, they didn't have scales on which to weigh them. So instead, they would set up a lever by which they would hoist the cow on a balance with a heavy rock on the other side. By adjusting the point of the fulcrum, they would then calculate mathematically the ratio of the cow's weight to the rock's weight with great precision. Then, after conducting such a calculation, all that was required was for them to guess the weight of the rock.

Just like cowboys guessing the weight of rocks, ascribing some kind of value to various moral potentialities leads to a model that has an appearance of technical precision but, in reality, is based on bogus assumptions.

Laws for Robots

Another approach is to use a rules-based system: some series of IF THIS THEN THAT statements in a robot's code. This could range from cars avoiding hitting humans to autonomous robots programmed not to harm humans – in the vein of Isaac Asimov's Three Laws of Robotics. Such rule-based systems have been well explored in fiction, such as in I, Robot, where a supercomputer seeks to imprison all of humanity so as to prevent humans from harming themselves and each other. When it comes to IF THIS THEN THAT statements, there seems to be endless discussion of trolley problems: how these rules should be created and how they might apply to different moral scenarios. So, if a self-driving car had to choose a course of action that saved the life of a pedestrian but put the driver at risk, what should it do? What if it had to swerve, either hitting an elderly lady or veering into the path of a child? In my view, the entire line of questioning is misguided.
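For illustration only, a rules-based approach of this kind boils down to a cascade of conditionals like the hypothetical sketch below; the rule and field names are invented, not drawn from any real vehicle or standard.

    # A hypothetical IF-THIS-THEN-THAT rule cascade for a self-driving car.
    # Every rule and field name here is invented for illustration; no real
    # system or standard is implied.
    def decide(scenario):
        if scenario.get("pedestrian_ahead") and scenario.get("can_brake_in_time"):
            return "brake"
        if scenario.get("pedestrian_ahead") and scenario.get("clear_lane_to_swerve"):
            return "swerve"
        if scenario.get("pedestrian_ahead"):
            # The trolley-style residue: every remaining option harms someone,
            # and the rule book has nothing left to say.
            return "choose_lesser_harm"
        return "continue"

    print(decide({"pedestrian_ahead": True, "can_brake_in_time": False,
                  "clear_lane_to_swerve": False}))  # -> "choose_lesser_harm"

The final branch is exactly where the endless trolley-problem debate lives: the rules run out, and someone must still be harmed.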

Robot Thinking

The entire notion of a trolley-type decision, whereby you must choose between one path or another and determine which is the moral outcome, is a robotic scenario and not one suited to humans. If you were ever faced with such a scenario, you would do more than pick between one of two options (classically, pulling a lever to direct a train along a track such that it hits one person, rather than the five along its ordinary course). You would look for other ways to solve it. Maybe the brakes could be fixed, the people moved off the tracks, or the train shunted off in some other direction. The limit on options is provided only by our creativity. The solution to any decision-based moral quandary, such as the trolley problem, is to not enter into it. If somebody has let a scenario proceed to the point of two choices of imminent harm, then there has been an earlier failing, and that is where the moral breakdown is. Trying to create some rule-based system for our own moral judgment is as problematic for humans as it would be for robots.

Human Thinking

Advocacy is more than presenting a set of facts. It is constructing them in such a way as to best appeal to the audience. Humans are social creatures. Like it or not, we are influenced by those that we are around. A skilled advocate can use their charisma, empathy with the listener, and story-telling skills to move someone onto their side. While there are AIs that can manipulate people based on big data, as we have seen with, say, the Facebook and YouTube algorithms, this is more a matter of reflecting back and presenting people with more of what they have already expressed a liking for than of convincing them to like something they might not otherwise. Advertising is not advocacy.

The underlying problem is humans' tendency to anthropomorphise. We look at machines and think that they might think and act in the way a human would. This is because the greatest part of the human brain is dedicated to understanding other humans. So we are constantly at risk of projecting human thoughts and attributes onto non-humans.

This applies when we look at a robot and think that it might think in the way a human does and might be taught morality. It also applies when we look at some kind of mathematical model or series of rules, in itself a kind of robot or program, and try to use it to determine human morality. Instead, we humans should use our expertise in thinking to determine what is moral and what is not, rather than attempt to create robots to delegate it to. Most humans are largely moral and can determine what is good and what is not according to their own judgment, their stories and customs, and their feelings and intuitions. We should be encouraged to justify something as right or wrong because we feel it is right or wrong, instead of having to resort to some kind of pseudo-scientific reasoning. Humans should trust their human skills of intuition and feeling, and their learnt customs and stories, and use them to craft morality for themselves. Resist attempting to apply scientism – the first encroachment of robotic automation – to that which cannot be measured.

When you go to a play, the things that can be measured are the stage, the props, the location of the actors, and so on. But the most important thing is the story and the emotion that are conveyed, and these are not things that can be measured. When the play is over, the story remains, in a way that cannot be measured. We should not try to measure plays.

While we might have robots involved in elections, infamously re-tweeting comments and spreading fake content, humans are enamoured with stories. Whether it is the rise and fall and rise again of a hero or a battle between good and evil, ultimately, the lead character in our stories has to be someone that we can associate with. I could never see people associating with an eternal robot that has a 'junk to jewellery' story of its rise, in particular because the story might have no end and could last eternally – stories must have an end. Mortality ends all human stories eventually, and that promise enables us to back the hero. An immortal politician seems to me to be only a villain to be conquered. I cannot see how humans could be persuaded by a robot politician advocating their position. It might be possible for scientism, or some worship of science, to lead to some god-like robot entity or even a fictional persona, but I think that there would need to be humans advocating on its behalf.

While the best used-car salesman might always be a human who can size up the potential customer and sell to them based on their particular desires and weaknesses, it is well known that salesmen will often use repeated sales scripts. So I am certainly not saying that there will be no robot salesmen. There have been books on how to write sales letters for a hundred years, and online sales, conducted entirely without humans, have been increasing exponentially. However, when trying to sell a particular product to a particular person, a talented human always has the potential to do better than a robot.

Robots and Tools

In my view, the way to reckon morality for robots is to treat them like any other machine and not to anthropomorphise them.
We have no difficulty in considering the morality of hammers, even though we may occasionally hit our own thumbs with them. We understand that an automatic forklift or elevator is itself amoral, and we ascribe safety concerns and liabilities to its owners and controllers. Even with autonomous robots and self-driving vehicles, I cannot see why there is any difficulty. We have thousands of years of experience in considering how the autonomous property of someone, such as a cow straying from its field, might be the liability of its owner and controller.
Dogs, which have a greater understanding of humans than any other animal or machine, are not taught morality, nor do we expect them to have moral judgment. Instead, we teach them the things that they should and should not do. Whether those things are right or wrong is a decision made by humans, to be judged by other humans. And what is right or wrong can change with context: there is a big difference between what you might teach a guard dog or police dog and what you might teach a family dog or assistance dog. Might we likewise find scenarios where it is acceptable for robots to hurt humans, such as surgical robots?
If we properly conceptualise robots as tools rather than as some anthropomorphised machine, we might consider that we have been using tools to kill other humans for thousands of years and will continue to do so. Guns don't kill people; people kill people.
And so, not only is there no divine wisdom or morality that humans could learn from robots, but we also need not be concerned that one day robots will be better at morality than us, just as they already are at limited games like chess. We will always be superior to robots in judging morality unless, of course, we inappropriately delegate our thinking.
