Insights from REPAIR’s workshop on trust and AI

Kirsimarja Blomqvist & Tuuli Toivikko, LUT University

Mirko Schäfer presenting at the workshop.

REPAIR organized a workshop on trust and AI alongside the FINT2023 conference, held at LUT University in Finland. The workshop topics ranged from AI's effects on trust to philosophical accounts of trust, AI in software engineering, and the regulation of AI.

The workshop was motivated by Finland's unique position among OECD countries: trust in institutions and in generalized others has been highest in Finland (OECD, 2021). Together with the other Nordic countries, Finland is an exception in a world where most citizens distrust institutions such as the media and politicians (Edelman, 2022). Finland thus has much to lose if AI-enabled deep fakes, misinformation, and manipulation erode society's fabric of trust. At the same time, Finland stands to gain many potential benefits from AI. The aging population and the related rise in social and healthcare costs endanger the Nordic welfare society, and despite investments in digitalization, productivity remains lower than in Sweden and Germany. Thus, despite the high risks, there is also high interest in how AI could be adopted for increased efficiency and effectiveness.

Kirsimarja Blomqvist from LUT Business School opened the workshop by calling for both social and institutional mechanisms for building trust in AI. Trust involves vulnerability and risk-taking, which can become paramount in the context of AI when the trustor lacks sufficient agency and understanding of the trustee, i.e., the AI and the various parties developing, implementing, and providing AI-based services. Building trust in ethical and socially acceptable AI is also a complex task, as trust is a multi-level phenomenon in which the different levels of trust affect each other.

Raul Hakli from the Department of Practical Philosophy and Pekka Mäkelä from the Helsinki Institute for Social Sciences and Humanities, University of Helsinki, provided an overview of a variety of philosophical accounts of trust. They contrasted philosopher Annette Baier's notion of entrusting with Russell Hardin's notion of encapsulated interest. For Baier, entrusting means that the trustor entrusts something valuable to a trustee, believing in the trustee's ability and goodwill. For Hardin, the trustee's encapsulated interest means that the trustee shares the same interest as the trustor, and it is therefore rational for the trustee to behave in a trustworthy manner. After comparing different moral notions of trust, Hakli and Mäkelä concluded that this normative sense of trust is critical for maintaining and building digital democracies, and that robots and AI agents do not qualify as trustors or trustees.

Dominik Siemon from LUT University discussed AI in software engineering, highlighting concerns about the control of AI by large companies and about potential biases. He mentioned algorithms that can modify themselves and large language models that can even create other LLMs. Siemon suggested thinking of algorithms as collaborators rather than tools, citing the benefits of tools such as GitHub Copilot for software engineers. However, his research showed that software developers and students tend to rely heavily on AI, potentially diminishing their own skills.

Simon Schafheitle from the University of Twente brought the organizational and employee perspective to the conversation. Based on recent research with his co-authors, he discussed vulnerability as a key element in cultivating human trust amidst workplace technology deployment, and how introducing new technologies resonates with employees' various experiences of vulnerability. According to Schafheitle, employees' trust in the employer usually operates "automatically" and with low effort, and the advent of new technology puts these modes of trust processing to the test. In this situation, employee vulnerabilities become particularly salient, and employees start to reevaluate the "trust deal" – a process that is effortful and costly. The research indicates that it is precisely these vulnerabilities that allow the organization to derive active trust management strategies, preventing the reevaluation from settling into a default state of low trust or distrust to the detriment of the organization.

Mirko Schäfer from the Utrecht University Data School presented the Data Ethics Decision Aid (DEDA), a tool that has been used in dozens of organizations in the Netherlands and abroad. It is a practical tool for introducing AI and is especially helpful in bringing together the different stakeholders involved in AI development, implementation, and use. At its best, it enables a dialogue in which different roles, risks, and tensions can be surfaced for open discussion during AI projects. The REPAIR consortium has translated the DEDA tool into Finnish and organized training to make it available to Finnish private and public sector organizations.

Anna-Mari Rusanen from the Ministry of Finance and the University of Helsinki discussed national and international collaboration as a tool in regulating AI. Due to its complex and rapidly developing nature, AI regulation requires broad and deep knowledge that is often not available within governmental or EU structures. Here Finland piloted a unique model: an interdisciplinary academic committee for artificial intelligence and digitalization that met regularly to discuss and provide insights for governmental work on regulating and introducing AI-based services. The experiences were very positive and have also raised interest among other EU countries.

The workshop was organized in an informal and collegial spirit, giving room for various comments and viewpoints. Dominik Siemon's stance on AI as a teammate sparked a lively debate, with many preferring to see AI strictly as a tool and cautioning against anthropomorphizing algorithms. Altogether, it was an inspiring afternoon discussing not only dystopian views but also how individuals, organizations, and societies can maintain their agency and build collaborative practices and trust to benefit from technological development.

References:
Edelman (2022). Edelman Trust Barometer 2022.

OECD (2021). Drivers of Trust in Public Institutions in Finland. OECD Publishing, Paris.
