Digitale Verantwortung
(2024)
The spread of digital systems influences decisions, laws, behavior, and values in our society. This affects consumption habits, market relationships, the distribution of power, privacy, and IT security. The accompanying changes have a direct impact on our lives, which is discussed in the fields of technology assessment and applied computer science under the heading ELSI (ethical, legal, and social implications). This chapter focuses on such questions with regard to ethical impacts. In particular, it examines fairness in the context of automated decisions, to which consumers are increasingly exposed. In addition, in light of growing concern about ecological impacts, the topic of sustainability is explored in more depth using the examples of the "sharing economy" and "shared mobility".
AI systems pose unknown challenges for designers, policymakers, and users, which complicates the assessment of potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from legal assessments and explanations of AI hazards. To address this issue, we conducted three focus groups with 18 participants in total and discussed the European proposal for a legal framework for AI. Based on this, we aim to build a conceptual model that guides policymakers, designers, and researchers in understanding users' risk perception of AI systems. In this paper, we provide selected examples based on our preliminary results and argue for the benefits of such a perspective.
AI-powered systems pose unknown challenges for designers, policymakers, and users, making it more difficult to assess potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from risk assessments and explanations in policy and design. To address this issue, we conducted three workshops with 18 participants and discussed the EU AI Act, the European proposal for a legal framework for AI regulation. Based on the results of these workshops, we propose a user-centered conceptual model with five risk dimensions (Design and Development, Operational, Distributive, Individual, and Societal) comprising 17 key risks. We further identify six criteria for categorizing use cases. Our conceptual model (1) contributes to responsible design discourses by connecting risk assessment theories with user-centered approaches, and (2) supports designers and policymakers in more strongly considering a user perspective that complements their own expert views.