Building Appropriate Trust in Human-AI Interactions

  • Artificial intelligence (AI) systems are increasingly used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. An appropriate level of trust is therefore required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust, addressing the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey a calibrated level of trust in interactions with AI? and (3) How can we develop a standardized framework to address new challenges?


Document Type: Article
Authors: Fatemeh Alizadeh, Gunnar Stevens, Oleksandra Vereschak, Gilles Bailly, Baptiste Caramiaux, Dominik Pins
Parent Title (English): Reports of the European Society for Socially Embedded Technologies
Publisher: European Society for Socially Embedded Technologies (EUSSET)
Date of first publication: 2022/06/22
Copyright: Copyright 2022 held by Authors.
Proceedings of the 20th European Conference on Computer-Supported Cooperative Work
Departments, institutes and facilities: Fachbereich Wirtschaftswissenschaften
Institut für Verbraucherinformatik (IVI)
Dewey Decimal Classification (DDC): 0 Computer science, information, general works / 00 Computer science, knowledge, systems / 005 Computer programming, programs, data
Entry in this database: 2022/06/30