This study investigates the appropriation and use of voice assistants such as Google Assistant or Amazon Alexa in private households. Our research is based on ten in-depth interviews with users of voice assistants as well as on the evaluation of selected interactions from the assistants' interaction histories. Our results illustrate the occasions on which voice assistants are used in the home, the strategies users have appropriated for interacting with them, how the interaction unfolds, and the difficulties that arose during setup and use. A particular focus of the study lies on failed interactions, i.e., situations in which the interaction breaks down or threatens to break down. Our study shows that the assistants' potential often goes unexploited because interaction frequently fails in more complex use cases. Users therefore tend to restrict the voice assistant to simple use cases, and new apps and use cases are never even tried. An analysis of appropriation strategies, for example a self-compiled list of commands, yields insights for the design of support tools as well as for the further development and optimization of voice-based human-machine interfaces.
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While much current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year's ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and the emerging challenges and opportunities of existing platforms and approaches.
3D Printers as Sociable Technologies: Taking Appropriation Infrastructures to the Internet of Things
(2017)
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
AI systems pose unknown challenges for designers, policymakers, and users, which complicates the assessment of potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from legal assessments and explanations of AI hazards. To address this issue, we conducted three focus groups with 18 participants in total and discussed the European proposal for a legal framework for AI. Based on this, we aim to build a (conceptual) model that guides policymakers, designers, and researchers in understanding users' risk perception of AI systems. In this paper, we provide selected examples based on our preliminary results. Moreover, we argue for the benefits of such a perspective.
Background
Consumers rely heavily on online user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, but automated detectors are far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study, we survey the research examining how consumers detect fake reviews online.
Methods
We conducted a systematic literature review of the research on fake review detection from the consumer perspective, including academic literature that provides new empirical data. We offer a narrative synthesis comparing the theories, methods, and outcomes used across studies to identify how consumers detect fake reviews online.
Results
We found only 15 articles that met our inclusion criteria. We classified the most frequently identified cues into five categories: (1) review characteristics, (2) textual characteristics, (3) reviewer characteristics, (4) seller characteristics, and (5) characteristics of the platform where the review is displayed.
Discussion
We find that theory is applied inconsistently across studies and that cues to deception are often identified in isolation, without any unifying theoretical framework. Consequently, we discuss how such a theoretical framework could be developed.
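Purely as an illustrative sketch (not drawn from the 15 reviewed articles), the five cue categories above could be represented as a simple lookup that maps observed cues back to their category; the example cues themselves are hypothetical placeholders.

```python
# Hypothetical sketch of the five-category cue taxonomy for fake-review
# detection. The example cues are invented for illustration only.
CUE_CATEGORIES = {
    "review": ["extreme rating", "unverified purchase"],
    "textual": ["excessive superlatives", "very short text"],
    "reviewer": ["no profile photo", "single review in history"],
    "seller": ["new seller account"],
    "platform": ["no visible moderation policy"],
}

def categorize_cues(observed_cues):
    """Group a list of observed cues by the taxonomy category they fall into."""
    hits = {}
    for category, cues in CUE_CATEGORIES.items():
        matched = [c for c in observed_cues if c in cues]
        if matched:
            hits[category] = matched
    return hits
```

For instance, `categorize_cues(["extreme rating", "no profile photo"])` would group the cues under the "review" and "reviewer" categories, mirroring how a unifying framework might organize isolated cues.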
Autonomous driving enables new mobility concepts such as shared autonomous services. Although significant research has been done on passenger-car interaction, work on passenger interaction with robo-taxis is still rare. In this paper, we tackle the question of how passengers experience robo-taxis as a service in real-life settings in order to inform the interaction design. We conducted a Wizard of Oz study with an electric vehicle in which the driver was hidden from the passenger to simulate the service experience of a robo-taxi. Ten participants had the opportunity to use the simulated shared autonomous service in real-life situations for one week. By the week's end, 33 rides had been completed and recorded on video. We also conducted interviews with all participants before and after the study. The findings yielded four design themes that could inform the service design of robo-taxis across the stages of hailing, pick-up, travel, and drop-off.
This article concerns the accessibility of business process modelling tools (BPMo tools) and business process modelling languages (BPMo languages). First, the reader is introduced to business process management and the authors' motivation behind this inquiry. The paper then reflects on the problems that arise when inaccessible BPMo tools are applied. To illustrate these problems, the authors distinguish between two categories of issues and provide practical examples. Finally, the article presents three approaches to improving the accessibility of BPMo tools and BPMo languages.