Soul in the circuits?

When humans encounter new technology, they do so with their Stone Age brains, the same organs that were fine-tuned for cooking food and navigating forests. Like our digestive tract and our hands, our brain and its information-processing algorithms are products of evolution. For this reason, our cognition does peculiar things from time to time: it perceives minds, emotions and intentions in teddy bears and electric circuits. Once we understand the evolutionary background of our cognition, we understand why we end up in confusing situations with new technology.

According to a classical view, evolution does not produce organisms that perceive the world truthfully or accurately; our genes are only "interested" in copying themselves. One example: people judge sound sources located behind them to be closer than they actually are. Our sense organs are not very accurate, but they function "well enough" nonetheless. Evolutionary psychology suggests that all of the mechanisms and functions of our bodies (mind included) have been shaped by selection pressures in our evolutionary environments. Eyes, for instance, evolved to react to photons, and ears would not have evolved if there were no pressure waves.

Developmental psychologists have long noted that we have a built-in understanding of the world even before we are born. Babies have intuitive understandings of solidity, numbers and object permanence. Anthropologists and neuroscientists have concluded that people have a natural predisposition to categorize objects, for example as animals, tools or plants. The selection pressures behind the evolution of object permanence and tool perception seem fairly self-evident: we have lived in a world shaped by tools of our own making for 2.5 million years, during which time objects that vanish into thin air (i.e. impermanent objects) have been rare or non-existent.

Humanity is constantly facing new technological challenges, and we face them with Stone Age cognition.

With AI, the friction stems from the absence of autonomous technology in our evolutionary history, and hence of any selection pressures shaped by it. In other words, we do not have ready-made cognitive categories or concepts for robots or AI, which developmental psychologists have dubbed a "new ontological category". As far as we know, a previously unseen and unheard-of type of "being" is emerging in the form of robots. Because we lack pre-existing intuitive capacities for understanding artificial agents, we react to them with what we do have: Stone Age ontology. Sometimes we perceive robots as tools, sometimes as "cute animals", and occasionally even as our children. Perceiving robots merely as robots requires fairly extensive education and training. Perhaps (and hopefully) schools of the future will cover robotics and programming more thoroughly.

Our interactions with new technologies get an extra twist from our natural tendency to see the world dualistically; that is, we instinctively divide the world into matter and "soul-stuff". According to scholars of comparative religion, our intuitions about souls include assumptions about their capacities: souls can travel from one body to another, and they can move through space and time. Such soul beliefs are found across the world, and they are probably part of our species-typical cognition. It is not yet entirely clear why the evolutionary process made us dualists; however, contemporary moral psychology shows that people are capable of projecting minds, consciousness, feelings of pain and intentions onto robots. When we interact with robots, it is not entirely clear to what extent the interaction is an illusion: we might, for instance, project emotions onto robots even though their creators put no such things into them. Nor do we know at what point our intuitions about robots will stop being inaccurate. At what point can we reliably say that AIs or robots have become "ensouled"? In other words, when are they really doing something similar to what our brains do when we talk about "having feelings" or "having awareness"?

Nevertheless, there are companies that promise their clients eternal life in an indestructible body. They promise that this will be possible within the next 30 years, achieved by transferring your mind into a machine. It is interesting to observe people's intuitive reactions to such scenarios, and we can do so by giving people a story to read and asking them to answer standardized questions.

In a recent pilot study, I examined individual differences in the tendency to feel different moral emotions. Participants read a story in which a scientist successfully transfers his consciousness into a machine; after the transfer, he wakes up in the machine and his body falls to the ground. I asked participants to judge how acceptable they found the scientist's actions. The results showed that age, gender and income did not influence people's judgments. However, people well versed in science fiction were the most accepting of the idea of consciousness transfer. Interestingly, an individual's sensitivity to moral disgust influenced negative judgments the most, followed by sensitivity to perceiving harm. Some disapproval was also channelled through the individual tendency to respect authority and traditional social structures.

At the time of writing, I am still trying to make sense of the results at a deeper level. Even though I chose the variables to be measured myself, I am a bit astonished.

Why is moral disgust such a strong predictor of negative attitudes towards consciousness transfer?

Whatever the reasons, one thing seems relatively certain: these results are a good example of how, without extensive research, we cannot predict or understand our own reactions to emerging technologies. When we arrive at the edge of evolution, where situations and events are completely novel, our emotions function in the most unexpected ways. This highlights the need for more basic research. Innovative basic research deepens our understanding of the limits and scope of our previous knowledge. Old conundrums in philosophy and cognitive science are illuminated, while new problems concerning our relationship to new technologies and our environment are revealed. By studying our relationship with new technologies, we gain a better understanding of our emotions and our evolutionary history. Perhaps we can even guide the evolution of our own species by being slightly more self-aware of where we are going, rather than just following our unconscious and unexamined instincts.


Michael Laakasuo

Postdoctoral Researcher in Cognitive Science