For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof falls on users: this time they must prove their very humanity, and there is no court or jury to judge them, only an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing and argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for users to question whether the process of questioning is even fair or ethical. We challenge this practice by juxtaposing it with the science fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
When dialogues with voice assistants (VAs) break down, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature that allows Alexa users to ask why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. The project used interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users’ needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogues that account for the VA’s unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
As voice assistants (VAs) become more advanced, leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet the long-term situational effectiveness of VAs’ accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that, over the long term, lead users to adopt alternative repair strategies, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users’ emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "Where is the AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users’ awareness of AI-enabled systems and of a mutual understanding between designers and users, users’ informal theories about how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of users’ current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users." We then present our preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach toward a better understanding of PAI and users’ folk theories.
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI on appropriation, our work pinpoints an area that needs more attention: "outdated technologies." In other words, we assert that design practices can profit as much from imaginaries of the future as they can from critically reassessing artefacts from the past. In a two-week field study with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental uses of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than of normative uses. We refer to this phenomenon as nostalgic reminiscence. In future work, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?
Voice assistants such as Alexa or Google Assistant have become an indispensable part of many consumers' everyday lives. They appeal in particular through their voice-based, hands-free operation and, at times, their entertaining character. The most common placement locations are the living room and the kitchen, the centers of domestic life where household members spend most of their time and where everyday life takes place. However, this also means that a great deal of data not intended for the voice assistant can potentially be captured and collected in these places. Consequently, it cannot be ruled out that the voice assistant is activated, even if accidentally, by conversations or noises and stores recordings, even when the activation is triggered unknowingly by people present or by other devices (e.g., a television), or originates from other rooms. As part of a research project, we interviewed users about their usage and placement practices for voice assistants and also tested a prototype that makes the stored interactions with the voice assistant visible. Based on the findings from the interviews and the guidelines derived from the subsequent usage tests of the prototype, this paper presents an application for requesting and visualizing interaction data from the voice assistant. It makes it possible to display interactions and the situations surrounding them by showing the time, the device used, and the command for each interaction, and by making unexpected behaviors such as accidental or incorrect activations visible. In this way, we aim to sensitize consumers to the error-proneness of these devices and to enable more self-determined and safer use.
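To illustrate how such an application could surface accidental activations, the following minimal sketch assumes a hypothetical export of interaction records with invented field names such as `wake_word_score`; it is not the data format of any vendor or of the prototype described above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical record for one stored voice-assistant interaction.
# Field names are illustrative assumptions, not a vendor's export schema.
@dataclass
class Interaction:
    timestamp: datetime      # when the assistant was activated
    device: str              # e.g., "smart speaker in the kitchen"
    command: str             # transcribed utterance; empty if nothing was understood
    wake_word_score: float   # assumed wake-word confidence between 0 and 1

def flag_unexpected(interactions: List[Interaction],
                    confidence_threshold: float = 0.5) -> List[Interaction]:
    """Return interactions that look like accidental or false activations:
    no command was recognized, or the assumed wake-word confidence is low."""
    return [
        i for i in interactions
        if not i.command.strip() or i.wake_word_score < confidence_threshold
    ]
```

Records flagged this way could then be highlighted on the timeline alongside the time, device, and command shown for every interaction.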
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than it is for dispensable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes toward and use cases of connected car features and, most importantly, a list of the perceived risks themselves. Taking the perspective of consumers, we argue that these risks can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on the risks of data use.
Voice assistants (VAs) collect data about users’ daily lives, including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of the data collected by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and to help users make better-informed decisions about their use of VAs. In this paper, we present the design of an interactive prototype that visualizes conversations with VAs on a timeline and provides end users with basic means to engage with the data, for instance by allowing filtering and categorization. Based on an evaluation with eleven households, our paper provides insights into how users reflect on their data trails and presents design guidelines for supporting consumers’ data literacy in the context of VAs.
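As a rough illustration of the kind of filtering and categorization offered to end users, the sketch below groups transcribed commands with a simple keyword heuristic; the category names and keywords are assumptions for illustration only and do not reflect the prototype's actual implementation.

```python
from collections import defaultdict
from typing import Dict, Iterable, List

# Illustrative keyword-based categories; the prototype's real taxonomy may differ.
CATEGORIES = {
    "music": ("play", "song", "volume"),
    "smart home": ("light", "plug", "thermostat", "turn on", "turn off"),
    "timers": ("timer", "alarm", "remind"),
}

def categorize(commands: Iterable[str]) -> Dict[str, List[str]]:
    """Assign each command to the first matching category, or to 'other'."""
    buckets: Dict[str, List[str]] = defaultdict(list)
    for command in commands:
        lowered = command.lower()
        for category, keywords in CATEGORIES.items():
            if any(keyword in lowered for keyword in keywords):
                buckets[category].append(command)
                break
        else:
            buckets["other"].append(command)
    return dict(buckets)

# Example:
# categorize(["Play some jazz", "Turn off the kitchen light", "What's the weather?"])
# -> {"music": ["Play some jazz"], "smart home": ["Turn off the kitchen light"], "other": ["What's the weather?"]}
```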