Decision-making by artificial intelligence (AI) is no longer science fiction. Staggering examples of AI development can be found around the world. In Finland, too, AI utilisation is an everyday practice in a vast number of companies and organisations. Researchers at the University of Turku are at the forefront of research on responsible AI and AI governance.
Today, artificial intelligence applications are all around us. Media streaming services, online shopping, social media, and search engines all provide us with content tailored precisely for us. AI systems are also being used for reviewing tasks such as job applications or contract documents. AI may assist a doctor in coming up with a diagnosis, or even help an urban planner in designing a whole new residential area.
AI-driven decision-making is the main topic of interest for the researchers of the Artificial Intelligence Governance and Auditing (AIGA) research project that is coordinated by the University of Turku. The aim of the researchers in the project is to create actionable means for companies and public organisations to utilise artificial intelligence in a responsible manner.
– The current trend in the daily life of many businesses is towards increasingly automated decision-making. For instance, AI in the form of algorithms is already being utilised in fields such as banking, insurance, and recruitment. In the European context, the algorithms are typically used for preliminary processing and for providing the human decision-maker with recommendations. Nevertheless, a human is still in charge of making the final decision. It would indeed be intriguing to establish just how often a decision made by a human differs from the recommendation the algorithm has produced, says the Principal Investigator of the AIGA project Matti Mäntymäki, Associate Professor at Turku School of Economics.
According to Matti Minkkinen (left), Mika Viljanen, and Matti Mäntymäki, artificial intelligence is likely to change our lives in the next few decades in much the same way as the Internet did 30 years ago.
In Europe, the General Data Protection Regulation (GDPR) limits automated decision-making concerning individuals. In addition, the European Union is preparing a regulation on the use of AI technologies, the Artificial Intelligence Act. With this regulation, the EU seeks to ensure the socially responsible use of AI as the technology continues to advance.
The European perspective on responsible AI is not universally shared. For instance, in China, citizens are monitored through a system called “Police Cloud”. The system tracks their criminal records and data gathered from surveillance cameras, but transport tickets, medical prescriptions, and hotel stays are also monitored and recorded. Based on this information, the system creates a precise and detailed profile of each person.
”In Europe, the development and utilisation of AI systems is strongly based on European fundamental values such as respect for privacy.”
– Matti Mäntymäki
– In Europe, the current situation is vastly different from, for instance, the United States or China. In the United States, the interests of the big technology companies play a significant role, whereas in China, the AI systems have long been developed under state control with no regard for privacy. In Europe, the development and utilisation of AI systems is strongly based on European fundamental values such as human dignity, freedom, democracy, equality, the rule of law, and human rights, both in the EU and on a national level. In the EU regulation, social responsibility is a central dimension in the development and utilisation of technologies such as artificial intelligence, explains Mäntymäki.
One example of the use of artificial intelligence in Finland that received a fair amount of attention was an experiment carried out by the City of Espoo and the software company Tieto in 2017, which examined whether artificial intelligence can be used pre-emptively in allocating family and social services. The AI identified nearly 300 risk factors which, when several applied to the same child, predicted a heightened risk of the child later becoming a client of child welfare services. The experiment allowed the staff to discover that low-threshold child welfare services did not sufficiently reach residents whose mother tongue was other than Finnish or Swedish, which consequently led to further development of these services. After the new measures were in place, the percentage of foreign-language children in child welfare centres decreased from 45% to 24%.
– The European vision of the AI future relies on the idea that the aim of AI development is the welfare of the whole community and improving everyone's everyday life. My own vision is that AI could, for instance, give us different kinds of recommendations in all sorts of life situations. If you have recently moved to a whole new city or locality, this could be registered as a change, and the AI could then recommend, for instance, hobbies that interest you, explains Senior Researcher Matti Minkkinen from Turku School of Economics.
According to Matti Mäntymäki, Finnish companies want to implement their AI decision-making processes as responsibly as possible, and the researchers are there to support them.
Intelligibility Imposes Limitations on AI
Future technologies and digital society is one of the six strategic research and education profiles of the University of Turku, which conduct research and offer education across disciplines. The objective of the researchers in the multidisciplinary AIGA research project is to support companies and organisations in making the decisions generated by AI systems more reliable, transparent, and comprehensible. Responsibility is especially vital whenever AI is utilised in so-called high-risk fields such as traffic, health care, and the financial sector.
– In general, Finnish organisations have a strong desire to handle matters in a socially responsible manner, and we offer tools for actually putting these good intentions and principles into practice. There is a great demand for this type of competence in different organisations, so we are here to meet that need.
”If the AI system the organisation is using all of a sudden turns out discriminatory, the organisation could be faced with considerable hardship.”
– Mika Viljanen
Together with their partners, the researchers have developed instructions on how the activities of AI systems should be monitored. They have also created a description of the practical measures that the companies should observe when developing their own AI systems. The researchers have already tested these tools together with e.g. OP Financial Group and the Finnish Tax Administration.
– If the AI system an organisation is using suddenly turns out to be discriminatory, the organisation could face considerable hardship. The media may pick up the matter, and customers can easily vote with their feet. It is worthwhile for organisations to be interested in how their own systems function, says Associate Professor Mika Viljanen.
The utilisation of AI systems is appealing to companies and organisations, as they can be used to improve the quality of processes as well as to decrease the need for human labour. Health care is one field where the opportunities for utilising AI systems are many.
Researchers at the Institute of Biomedicine at the University of Turku have been involved in a broad international research project whose recently published results clearly show the ability of AI systems to diagnose cases of prostate cancer precisely. The purpose of developing artificial intelligence is certainly not to replace a human doctor in decision-making, but rather to function as an aid in detecting cases of cancer more efficiently and in standardising various diagnostic processes. In their previous studies, the same group of researchers has shown the power of AI in detecting cancer in tissue samples, evaluating the amount of cancer tissue in biopsies, and grading the severity of prostate cancer with the accuracy of an international panel of experts.
According to Mika Viljanen, comprehensibility is not a feature that can entirely be required of the AI.
As AI systems are being utilised in the field of health care, it is essential that both the health care professional and the patient understand why the AI interprets that the patient is suffering from a specific illness.
– An AI may process information in a completely different manner from the human mind, and therefore the decisions it makes can be difficult for us to understand, notes Minkkinen.
– Therefore, a growing number of voices argue that we lose many beneficial qualities of different technologies if we require intelligibility from them. If a system that is more intelligible to us humans detects, for instance, 90 percent of all cases, whereas an unintelligible one detects 98 percent, that is a clear instance of sacrificing functionality in the name of comprehensibility, says Viljanen.
According to Viljanen, comprehensibility is not a feature that can be fully required of AI; however, to achieve sufficient reliability, it is important that the operating principles can be understood at least to some level. Otherwise, a distortion or a glitch in the system could go unnoticed. A tragicomic example of this was an AI system developed in the United States to analyse the resumes of job seekers, which concluded that the most qualified applicants were called Jared and played lacrosse.
– For an expert, it may be sufficient to gain a general understanding of the reasoning behind the decisions made by an artificial intelligence, even without knowing the exact details. On the other hand, it may be significant for a patient or a job seeker to receive a valid justification for their diagnosis or for not being selected for a job. Do these decisions really need to be justified? This is an ethical and, in some cases, a legal question. Therefore, many organisations do not wish to make decisions that cannot be fully explained, which in turn sets limitations on the matters the AI is able to decide on, says Viljanen.
Pursuit of an Ecosystem of Responsible Artificial Intelligence
A non-discriminatory system is one of the main principles of an ethically sound AI. Here, the challenge again is understanding the artificial intelligence: if we do not understand the principles behind the AI's decision-making, we might by mistake create a system that makes decisions on discriminatory grounds.
In Finland, the National Non-Discrimination and Equality Tribunal in 2018 banned a credit company from using an AI-based method that discriminated against applicants on the basis of e.g. their native language and gender. The complaint was lodged with the Tribunal by a man who had applied for credit in 2015 but had not been granted any. Had the man, whose first language was Finnish, been a woman or a native Swedish speaker instead, the credit would have been granted.
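The kind of disparity at issue in such a case can be detected with a very simple audit: compare a model's approval rates across applicant groups. The sketch below is a minimal, hypothetical illustration of such a check (the group labels and decisions are invented, and this is not the Tribunal's or the AIGA project's actual method); it computes per-group approval rates and the ratio of the lowest to the highest rate, a common rule of thumb being that a ratio below 0.8 warrants closer scrutiny.

```python
# Hypothetical sketch of a fairness audit for an automated credit model.
# Group names and decisions below are invented for illustration only.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Invented sample: 1 of 4 approvals in one group, 3 of 4 in the other.
decisions = [
    ("group_a", False), ("group_a", True),
    ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True),
    ("group_b", True), ("group_b", False),
]
rates = approval_rates(decisions)
print(rates)                    # {'group_a': 0.25, 'group_b': 0.75}
print(disparate_impact(rates))  # ~0.33, well below the 0.8 rule of thumb
```

A real audit would of course need far larger samples and statistical testing, but even this toy check shows why understanding what a system decides per group matters, regardless of whether its internals are intelligible.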
– Information increases the risk of discrimination, as well. More and more data are being collected on the health of individuals and, for example, on their genetic predisposition to various diseases, through which we gain a better understanding of the risks. The observations constantly move towards a more precise understanding of the situation of an individual. In a sense, we already know too much to avoid discrimination, Viljanen says.
”We are on the lookout for an answer to how responsible artificial intelligence can be included in companies' business models.”
– Matti Minkkinen
One risk factor that goes to the heart of responsibility is scale, as artificial intelligence can make an enormous number of decisions in an instant.
– Artificial intelligence systems are designed to be scalable, which means that they do not have to ponder each decision one by one. For example, a system can process a million decisions practically as fast as it processes one. When people make decisions, the process is much less scalable, so the systemic risks are also less significant, Mäntymäki says.
The researchers have, together with their corporate partners, studied how to combine corporate business models with responsible artificial intelligence. The introduction of AI and considerations of responsibility are not only a challenge for companies, but also an opportunity for new initiatives and international growth.
– There are many values associated with the responsible use of artificial intelligence beyond complying with laws. We have been seeking such values and value propositions for responsible artificial intelligence together with companies. Fairness and intelligibility should not be viewed as opposites of making a profit. We are on the lookout for an answer to how responsible artificial intelligence can be included in companies' business models, Minkkinen explains.
According to Matti Minkkinen, the goal of the researchers is to build an ecosystem of responsible artificial intelligence in society.
Researchers are currently developing a roadmap for promoting an ecosystem of responsible artificial intelligence. The aim is that in about five years' time, our society will have a business and innovation ecosystem around responsible artificial intelligence, with room for a variety of commercial operators. In this ecosystem, for instance, consulting firms could conduct audits and ensure that companies utilise artificial intelligence responsibly.
– Overall, the importance of responsibility is currently growing, especially from the perspective of ESG investment. Many institutional investors are very interested in employees' working conditions as well as in how a company handles environmental issues. It can be assumed that as the understanding of artificial intelligence systems increases, investors and consumers will also become more interested in how a company, for example, ensures that customer information is securely protected or that candidates are treated fairly and equally in the recruitment process, Mäntymäki states.
The cooperation partners of the AIGA project are DAIN Studios, University of Helsinki, Loihde Advisory, OP Financial Group, Siili Solutions, Solita, Finnish Tax Administration, and Zefort.
Text: Jenni Valta
Photos: Hanna Oksanen