Can a robot act ethically?

Interaction between robots and humans is no longer science fiction. Automated vehicles, teaching and care assistants, and battlefield robots are already being piloted and tested. Software robots working on the internet are invisible, but every time you google something, an algorithm uses information such as your click history and geographical location to decide what to show you.

It will be a long time before robots possess the cognitive abilities required for independent action that would allow them to be assessed morally, or to count as ethical actors in the strict sense. For now, they are merely machines. But they do make choices, either pre-programmed or, increasingly often, acquired through learning algorithms. Ideally, the consequences of these choices would be in line with our ethical values. This raises three questions: What kind of rules should robots obey? How can these rules be implemented in practice? And who is responsible if a robot's action, or its consequences, are ethically undesirable? These questions should be considered in advance, not after robots have become commonplace.

Let's start with responsibility. Because a robot is not a subject of ethical action, it cannot be held ethically responsible. This leaves four options. First, responsibility can be placed with the programmer, who answers for the robot's operations and the choices it makes. Second, the user of the robot can be responsible for its operations, for instance if the robot is used in an environment for which it was not designed. Third, both can be held responsible. Fourth, it may be that no one is responsible for the action of an individual robot.

The minimum prerequisites of moral responsibility include intentional action and the possibility of anticipating its consequences. An act can be morally reprehensible even if its consequences were neither intended nor anticipated – the question is whether they should have been foreseen. It is always possible that no one is responsible for an individual unfortunate consequence. However, if the use of robots in certain situations (such as replacing bus drivers with robots) repeatedly leads to unethical consequences, the choice itself becomes morally reprehensible, and the decision-makers will be held responsible.

Responsibility issues always refer to a specific incident, but the moral-philosophical questions related to them should be considered separately before introducing robots, and individually for each context of use – new legislation may even be required to define moral responsibility juridically.

So, what rules should robots have? This depends on the purpose of the robots. Naturally, robots' actions must not lead to consequences that are unethical in light of universal moral norms. Robots are not actors in the same way humans are, so it is appropriate to consider a smaller set of situations and principles in the programming. On the other hand, robots can be burdened with responsibilities we would not impose on humans – for example, a robot could be required to sacrifice itself on much lighter grounds. Robots may also be programmed with specific ethical duties similar to those of various professionals (such as doctors). It may become problematic, however, if we one day have robots on a considerably higher cognitive level; they might then begin to acquire rights similar to animal rights, even if they are not likened to humans. This is a long way off, but there is also another side to the treatment of robots.

Humans have a tendency to anthropomorphise animals and even objects. Particularly if robots resemble human beings and appear to act in a human manner, people tend to perceive them as human actors – especially if the robot can engage in conversation. If robots are used in ways considered unethical for humans, people working with them may find their moral sensibilities offended; a "harsh" use of robots may also brutalise people. That is why moral responsibilities toward people place restrictions on the artificial responsibilities we equip robots with – and perhaps some kind of self-preservation should also be an ethical norm for robots. But the priority now lies in outlining the ethical rules restricting the actions of the robots that will be introduced in the near future and will become increasingly common. We are unlikely to find answers without close cooperation between those deploying robots, engineers, and experts in applied ethics.

The third question is how to apply these rules in practice. Even advanced robots could not simply adopt abstract moral rules unless they were able to interpret how the rules should be applied in each situation – or even to identify morally relevant situations and understand what is morally relevant in them. The easy case is a robot with a limited number of actions to choose from in a limited number of situations. Such a robot can simply be programmed to act correctly in each situation; the programmer makes the ethical choices. The robot only needs to pick up a signal in its operating environment that the programmer has identified as marking a morally relevant situation. Once the robot has detected the signal, it acts in the pre-programmed, appropriate manner.
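The pre-programmed case described above can be sketched as a simple lookup table. This is a minimal illustration, not any real robot API: every signal and action name here is invented, and the point is only that the programmer, not the robot, makes each moral choice in advance.

```python
# Hypothetical sketch of a pre-programmed "ethical" controller.
# The robot does no moral reasoning: it only maps a detected signal
# to the response the programmer chose for that situation.

# All signal and action names are invented for illustration.
ETHICAL_RESPONSES = {
    "human_in_path": "stop",
    "human_asks_for_help": "alert_operator",
    "obstacle_in_path": "steer_around",
}

DEFAULT_ACTION = "stop"  # fail-safe choice when no rule matches

def choose_action(detected_signal: str) -> str:
    """Return the pre-programmed action for a recognised signal."""
    return ETHICAL_RESPONSES.get(detected_signal, DEFAULT_ACTION)

print(choose_action("human_in_path"))       # stop
print(choose_action("unknown_situation"))   # stop (fail-safe default)
```

The defaulting choice matters: in a scheme like this, the ethically cautious design is to fall back to a safe action whenever the robot detects a signal the programmer never anticipated.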

The situation becomes more complex when robots are able to learn and their behaviour becomes more flexible. The ethical restrictions will then need to cover the whole range of new actions the robot is able to acquire. Things are complicated even further when the ethically significant characteristics themselves become more abstract. General-purpose robots cannot just be programmed not to poke people; they must be given a rule not to cause pain, which means they need to be able to identify situations that would cause pain.
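The shift from fixed responses to an abstract rule can be sketched as a filter over candidate actions. Everything below is hypothetical: the pain predictor is a toy stand-in, and building a reliable version of it is precisely the hard problem the text describes.

```python
# Hypothetical sketch: filtering learned actions through an abstract rule.
# Instead of a fixed signal-to-action table, the robot must predict
# whether a candidate action would cause pain in the current situation.

def predicted_to_cause_pain(action: str, situation: dict) -> bool:
    # Toy stand-in for a learned model of morally relevant consequences;
    # a real robot would need to generalise this to unseen actions.
    return action == "push" and situation.get("human_nearby", False)

def permissible_actions(candidates: list, situation: dict) -> list:
    """Keep only the actions the abstract rule does not rule out."""
    return [a for a in candidates
            if not predicted_to_cause_pain(a, situation)]

situation = {"human_nearby": True}
print(permissible_actions(["push", "wait", "speak"], situation))
# ['wait', 'speak']
```

Note that the rule itself ("do not cause pain") stays constant while the set of candidate actions grows with learning; the burden moves entirely onto the consequence predictor.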

We are unlikely to achieve ethically sound robots by making them continually run through a list of morally significant situations and compare it to the signals they pick up from their environment. This would consume too much processing capacity. The good news is that humans don't do that either – human moral cognition does not work by applying abstract moral principles to every new situation. We are able to identify morally relevant characteristics in, for example, social situations far earlier than we are able to process and categorise social observations. Reactions and decisions in these situations are intuitive and automatic, guided more by moral feelings than by reasoning – deliberate moral consideration is applied only in complex and novel situations and when adopting new principles. If we want to create robots with ethically sensitive cognition modelled on human moral cognition, it would be better to start with this unconscious, more intuitive segment.
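The dual-process structure described above can be sketched as fast, cached "intuitive" responses for familiar situations, with slow deliberation invoked only as a fallback for novel cases. All names here are illustrative, and the deliberation step is a placeholder for the costly explicit reasoning the text contrasts with intuition.

```python
# Hypothetical dual-process sketch: intuition as a fast lookup,
# deliberation as a slow fallback for situations with no cached response.

INTUITIVE_RESPONSES = {
    "someone_falls": "offer_help",
    "someone_cries": "comfort",
}

def deliberate(situation: str) -> str:
    # Stand-in for expensive explicit reasoning over abstract principles.
    return "consult_principles"

def moral_response(situation: str) -> str:
    """Try fast intuition first; deliberate only when no intuition fires."""
    if situation in INTUITIVE_RESPONSES:
        return INTUITIVE_RESPONSES[situation]
    return deliberate(situation)

print(moral_response("someone_falls"))   # offer_help
print(moral_response("novel_dilemma"))   # consult_principles
```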

However, this leads us to a serious challenge. Human moral cognition taps into abilities and aptitudes that are, in the evolutionary sense, older than morality. As with all evolutionary development, the development of morality builds on the old by varying it and adding to it. This is also evident in the fact that moral "building blocks" can be found in many animals that cannot themselves be considered moral actors.

Path dependencies in technological evolution resemble those in biological evolution: new solutions are achieved by building "on top" of existing ones and modifying them, rather than working out basic functional solutions from scratch. Human moral cognition and moral feelings are based on abilities and aptitudes related to social life, such as empathy, reciprocity, and the adoption of communal norms. If such aptitudes are not required of robots now, it may prove very difficult to turn them into moral actors once the need arises.

From the start, the foundation of robots' ethical capacities must be built into their operation, even before there is an actual need for it. Otherwise, it may be too late. How this can be achieved is a more difficult question. What I propose is cooperation between different groups: robotics engineers, moral psychologists, specialists in the proto-moral characteristics of animals, experts in moral evolution, moral philosophers studying the actions of robots, and – why not? – authors of science fiction, with philosophers of science as interpreters, portrayers, and critics.

Translation: Semantix


Tomi Kokkonen

Postdoctoral Researcher in Philosophy, University of Helsinki