At the Well blog

10.04.2017

Algorithms – a threat or opportunity?

Algorithms play a key role in the everyday lives of citizens and consumers, since they determine the content that people see at any given time. It can seem as though bots affect how people see the world and mould public opinion. In their project, ‘Algoritmien valta: neutraalius ja puolueellisuus koneellisessa päätöksenteossa’ (The power of algorithms: neutrality and partiality in algorithmic decision-making) Joni Salminen and his workgroup are exploring algorithmic decision-making and how to prevent the related abuses.


We interviewed the Director of the project, Joni Salminen (D.Sc. (Econ.)).

Should experts in the human sciences be involved in technology development?

Absolutely, because a wide range of disciplines have much to contribute to exploring the problematic nature of society’s encounter with technology. Systems are designed logically, but are exposed to an environment which behaves illogically. In a perfect world, there would be no need for the human sciences, but in the real world it makes sense to try to improve the way systems work, based on a broader understanding of humankind. It matters whether we view a person as a user, an individual or a strategic actor. Depending on the background of the expert in question, there can be a wide gap between the human sciences and computer science research. That is precisely why we need multidisciplinary, ‘humanist’ research on information systems.

What risks could be associated with algorithmic decision-making?

In theory, algorithmic decision-making is supremely efficient due to its superior computational capacity, its low number of logical errors and its freedom from the attitude-related problems typical of humans. Machines follow their instructions precisely and, in most cases, flawlessly, whereas people often make logical and statistical errors. But despite all of their potential, algorithms pose various risks. Since they were created by humans, they can end up functioning in undesirable ways. Because people are not perfectly logical, the machines and algorithms they create can be viewed as ‘inheriting’ their imperfections.

Both machines and people can be manipulated – sometimes in very similar ways. Both people (in the case of propaganda) and machines can be fed incorrect data, leading to flawed decision-making in both cases. However, there is a crucial difference between the two. It is natural for people to question and dissent from prevailing opinion, whereas machines believe everything they are told.

How can algorithms improve our lives?

They are already doing so in many ways. An algorithm is just a computer program which performs certain routines in a certain order. We have been enjoying the benefits of algorithms since the 1980s; the automation of manufacturing, efficient global logistics and the Internet were all made possible by algorithms. Our world of plenty is the work of algorithms. ‘Algorithm’ is now the buzzword for what was once known as technology. The word is used in a slightly misleading way in contemporary discussions; the decision-making capacity of algorithms is often exaggerated. When I refer to the power of algorithms, I mean indirect power. Algorithms used in search engines or on social media do not deliberately include or exclude data with certain positive or negative social consequences, but because Big Data must be filtered somehow, this is nevertheless the result. This means that machines wield indirect power, because the filtering of information is connected to the exercise of power, including power with major implications for the direction society takes. That is why I think that algorithms are of such topical interest.
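The point about routines performed in a certain order, and about filtering as indirect power, can be made concrete with a toy sketch. This is not any real platform’s ranking system; the posts, keywords and weights below are invented purely for illustration:

```python
# A toy feed-ranking "algorithm": nothing more than a fixed sequence
# of routines. All post data and scoring weights are invented.

def rank_feed(posts, keyword_weights):
    """Score each post by keyword matches, then return them most-relevant first."""
    def score(post):
        # Routine 1: sum the weights of keywords found in the post text.
        text = post["text"].lower()
        return sum(w for kw, w in keyword_weights.items() if kw in text)
    # Routine 2: sort by score, descending. Posts matching the chosen
    # keywords rise to the top; everything else sinks out of sight.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "text": "Local weather update"},
    {"id": 2, "text": "Election debate highlights"},
    {"id": 3, "text": "Cat pictures of the week"},
]
# The operator's choice of weights, not any "opinion" of the machine,
# decides which post is shown first -- the indirect power described above.
ranked = rank_feed(posts, {"election": 2.0, "weather": 1.0})
print([p["id"] for p in ranked])  # → [2, 1, 3]
```

The machine here holds no view on elections or weather; whoever sets the weights decides what gets filtered in or out.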

What is the current and future role of algorithms in society?

That is the 64,000 dollar question. Artificial intelligence has reached the stage at which algorithms and machine learning models can be used to automate certain activities (such as categorising texts and pictures). These ‘thousands of little helpers’ are being rapidly integrated into professional work routines in various sectors. For example, marketing experts once handled the targeting of ads, but this is now done by machines. However, people are still in charge of the creative side. See https://www.linkedin.com/hp/update/6253987066133209088

Almost every machine-learning algorithm that specialises in a certain sub-area of information work can assist us in some way. For lawyers, they can search all Finnish laws for similar cases; for doctors, they can search through thousands of illnesses for candidates that fit the symptoms. They can automatically find the three best candidates from among thousands of job applicants for employers, or automatically create a soundtrack from a text for grant applicants. The best aspect of this is perhaps the fact that a machine can amplify a person’s natural abilities. When teaching students, I use the example of marketers. To advertise in a place such as India, marketers once had to undergo an elaborate process, such as negotiating with local partners, which could take weeks or months. With modern platforms, a single person can, from their sofa, market something in a hundred countries, with the campaign translated and up and running within minutes. A machine can amplify our capabilities and enable processes that were previously very slow or outright impossible.
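The "candidates that fit the symptoms" idea can be sketched in a few lines. This is a deliberately minimal illustration using plain set overlap, not a real diagnostic model, and the condition database and symptoms below are invented:

```python
# A minimal sketch of "search thousands of illnesses for candidates
# that fit the symptoms": plain set intersection, ranked by overlap.
# The condition database and symptom lists are invented for illustration.

def top_candidates(observed, database, k=3):
    """Return up to k conditions whose symptom sets overlap most
    with the observed symptoms (conditions with no overlap are dropped)."""
    scored = [
        (len(observed & symptoms), name)
        for name, symptoms in database.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [name for hits, name in scored[:k] if hits > 0]

database = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "influenza":   {"fever", "cough", "muscle ache", "fatigue"},
    "allergy":     {"runny nose", "sneezing", "itchy eyes"},
}
print(top_candidates({"fever", "cough", "fatigue"}, database))
# → ['influenza', 'common cold']
```

A real system would weigh symptom rarity and base rates rather than count matches, but the shape of the assistance is the same: the machine narrows thousands of options to a shortlist, and the professional makes the judgment.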

The downside is that automation will undoubtedly replace entire occupations that do not involve knowledge work and which machines can fully perform. As technology develops, there will be more and more of such occupations – where this used to apply to low-paid occupations, we can now envisage a world in which even lawyers are replaced by law bots. Personally, I doubt that this will happen, because the law has to be interpreted and people are needed for that. In addition, laws are based on values which a machine cannot understand.

At a minimum, knowledge work will take less time and need fewer working hours, i.e. fewer employees. That is the downside of efficiency. At the level of society, we need to consider how to restructure society in order to create more work for humans (the other alternative would be a utopia in which no one needs to work – for various reasons I don’t believe in this scenario, which would be more likely to lead to chaos). The risk posed by the people displaced by machines is far greater than the so-called ‘machines are taking over’ scenario. Technology is nowhere near that stage of artificial intelligence at the moment, but a situation in which displaced human labour is at risk of radicalisation and supporting populists is already here. As we know from history, this is a genuine risk which machines are unintentionally causing (unemployment -> marginalisation -> radicalisation).

What should everyone know about algorithms?

Most of all, they should know what algorithms are not. They are not a magical solution to the problems of humankind. They will not ‘solve humanity’s problems’ for us. Algorithms, particularly information algorithms, ultimately reflect the state of humanity. We should not blame machines for the existence of digital echo chambers, or the increase in uncivilised behaviour. Instead, people should take responsibility for their own deeds. A machine is the ideal scapegoat for many current phenomena, such as unpopular election results (Facebook, we are told, got Trump elected) or social media echo chambers (which are caused by the wish to filter out dissenting opinions). Machines are neutral actors in terms of social improvement; they still only do what they are told. The rest is about projecting human characteristics onto machines and fleeing responsibility.


D.Sc. (Econ.) Joni Salminen and his workgroup were awarded a grant of EUR 97,000 in the Kone Foundation’s annual application round for 2016, for the project ‘Algoritmien valta: neutraalius ja puolueellisuus koneellisessa päätöksenteossa’ (The power of algorithms: neutrality and partiality in algorithmic decision-making).

https://algoritmitutkimus.fi/