New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for dispensable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, we report on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of the perceived risks themselves. Taking the perspective of consumers, we argue that this list can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts, adding a stronger social perspective on the risks of data use.
Background
Consumers rely heavily on user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, but automated detectors are far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study, we survey the research examining how consumers detect fake reviews online.
Methods
We conducted a systematic literature review of the research on fake review detection from the consumer perspective, including academic literature that presents new empirical data. We provide a narrative synthesis comparing the theories, methods, and outcomes across studies to identify how consumers detect fake reviews online.
Results
We found only 15 articles that met our inclusion criteria. We classified the most frequently identified cues into five categories: (1) review characteristics, (2) textual characteristics, (3) reviewer characteristics, (4) seller characteristics, and (5) characteristics of the platform where the review is displayed.
Discussion
We find that theory is applied inconsistently across studies and that cues to deception are often identified in isolation without any unifying theoretical framework. Consequently, we discuss how such a theoretical framework could be developed.
Autonomous driving enables new mobility concepts such as shared autonomous services. Although significant research has been done on passenger-car interaction, work on passenger interaction with robo-taxis is still rare. In this paper, we tackle the question of how passengers experience robo-taxis as a service in real-life settings in order to inform interaction design. We conducted a Wizard of Oz study with an electric vehicle in which the driver was hidden from the passenger to simulate the service experience of a robo-taxi. Ten participants had the opportunity to use the simulated shared autonomous service in real-life situations for one week. By the week's end, 33 rides had been completed and recorded on video. We also flanked the study with interviews conducted with all participants before and after the trial. The findings provide insights into four design themes that can inform the service design of robo-taxis along the different stages of a ride, including hailing, pick-up, travel, and drop-off.
In software development, the always-beta principle is used to successfully develop innovation based on early and continuous user feedback. In this paper, we discuss how this principle can be adapted to the special needs of designing for the Smart Home, where we do not just take care of the software but also release hardware components. In particular, because of the 'materiality' of the Smart Home, one cannot just make a beta version available on the web; an essential part of the development process is also to visit the 'beta' users in their homes, to build trust, to face real-world issues, and to provide assistance in making the Smart Home work for them. After presenting our case study, we discuss the challenges we faced and how we dealt with them.
There has been a growing interest in taste research in the HCI and CSCW communities. However, the focus has been more on stimulating the senses, while the socio-cultural aspects have received less attention. Yet individual taste perception is mediated through social interaction and collective negotiation and does not depend on physical stimulation alone. We therefore study the digital mediation of taste by drawing on ethnographic research on four online wine tastings and one self-organized event. We investigated the materials, associated meanings, competences, procedures, and engagements that shaped the performative character of tasting practices. We illustrate how the tastings are built around the taste-making process and how online contexts differ in providing a more diverse and distributed environment. We then explore the implications of our findings for the further mediation of taste as a social and democratized phenomenon through online interaction.
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey a calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer solutions to reduce such complexity, current designs are not appropriate for niche practices and rely on non-personalized, opaque ethical criteria. To support critical consumption, we conducted a design case study on a personalized food recommender system. We first conducted an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
The technological development of the digital computer and new options to collect, store, and transfer mass data have changed the world over the last 40 years. Moreover, the ongoing progress of computing power, the establishment of the Internet as critical infrastructure, and the options of ubiquitous sensor systems will have a dramatic impact on economies and societies in the future. We give a brief overview of the technological basics, especially with regard to the exponential growth of big data and the current turn towards sensor-based data collection. From this stance, we reconsider the various dimensions of personal data and the market mechanisms that have an impact on data usage and protection.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users, only this time they have to prove their very humanity, and there is no court or jury to judge them, only an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We develop this thesis by offering a juxtaposition with the science fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Eco-InfoVis at Work
(2020)
While the recent discussion on Art. 25 GDPR often considers the approach of data protection by design as an innovative idea, the notion of making data protection law more effective through requiring the data controller to implement the legal norms into the processing design is almost as old as the data protection debate. However, there is another, more recent shift in establishing the data protection by design approach through law, which is not yet understood to its fullest extent in the debate. Art. 25 GDPR requires the controller to not only implement the legal norms into the processing design but to do so in an effective manner. By explicitly declaring the effectiveness of the protection measures to be the legally required result, the legislator inevitably raises the question of which methods can be used to test and assure such efficacy. In our opinion, extending the legal compatibility assessment to the real effects of the required measures opens this approach to interdisciplinary methodologies. In this paper, we first summarise the current state of research on the methodology established in Art. 25 sect. 1 GDPR, and pinpoint some of the challenges of incorporating interdisciplinary research methodologies. On this premise, we present an empirical research methodology and first findings which offer one approach to answering the question on how to specify processing purposes effectively. Lastly, we discuss the implications of these findings for the legal interpretation of Art. 25 GDPR and related provisions, especially with respect to a more effective implementation of transparency and consent, and provide an outlook on possible next research steps.
Reducing energy consumption is one of the most pursued economic and ecological challenges, concerning societies as a whole, individuals, and organizations alike. While politics has started taking measures for the energy turnaround and smart home energy monitors are becoming popular, few studies have touched on sustainability in office environments so far, even though they account for almost every second workplace in modern economies. In this paper, we present findings of two parallel studies in an organizational context that used behavioral-change-oriented strategies to raise energy awareness. Next to demonstrating potentials, our work shows that energy feedback must fit the local organizational context to succeed and should consider typical work patterns to foster accountability for consumption.
The smart home of the future is typically researched in lab settings or in apartments that have been built from scratch. However, comparing the lifecycles of buildings and information technology, it is evident that modernization strategies and technologies are needed to empower residents to modify and extend their homes to make them smarter. In this paper, we describe a case study about the deployment, adaptation, and adoption of tailorable home energy management systems in 7 private households. Based on this experience, we discuss how hardware and software technologies should be designed so that people can build their own smart home with high usability and a good user experience.
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of "where is AI" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness of AI-enabled systems and of a mutual understanding between designers and users, users' informal theories of how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present preliminary results from in-depth interviews with 50 AI technology users, which provide a framework for our future research towards a better understanding of PAI and users' folk theories.
Due to expected positive impacts on business, the application of artificial intelligence has increased widely. The decision-making procedures of these models are often complex and not easily understandable to a company's stakeholders, i.e. the people who have to follow up on recommendations or try to understand the automated decisions of a system. This opaqueness and black-box nature can hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining models to AI experts for the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products, while in the meantime customers can cancel their orders if fulfilment takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Our research therefore investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model's priorities in decision-making. Besides that, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related work with an AUC of 0.9489.
Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
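The Shapley additive explanations mentioned above can be illustrated with a small, self-contained sketch. This is not the authors' CNN pipeline: the scoring function, feature names, and values below are hypothetical, and the code computes exact Shapley values by enumerating feature coalitions, which is feasible only for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a black-box scorer f over len(x) features.

    Features absent from a coalition are set to their baseline value.
    """
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical "backorder risk" scorer: weighted sum with one interaction term.
def risk(z):
    lead_time, stock, demand = z
    return 0.4 * lead_time - 0.3 * stock + 0.5 * demand + 0.1 * lead_time * demand

x = [5.0, 2.0, 8.0]         # instance to explain
baseline = [0.0, 0.0, 0.0]  # reference point

phi = shapley_values(risk, x, baseline)
# Efficiency property: the contributions sum to f(x) - f(baseline)
print(phi, sum(phi), risk(x) - risk(baseline))
```

The efficiency property is what makes the attributions "additive": each feature's score is its fair share of the gap between the prediction for this instance and the baseline prediction.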
Exploring Future Work - Co-Designing a Human-robot Collaboration Environment for Service Domains
(2020)
There has been increasing interest in the application of humanoid robots in service domains like retail or care homes in recent years. Most use cases focus on serving customer needs autonomously. Frequently, however, human intervention becomes necessary to support the robot in exceptional situations. Direct intervention by service operators is often not possible and requires specialized personnel. In a co-design process with 13 service operators from a pharmacy, we designed a remote working environment for human-robot collaboration that enables first-time experiences of and collaboration with robots. Five participants took part in an assessment study and reported on their experiences regarding utility, usability, and user experience. Results show that participants were able to control and train the robot through the remote control environment. We discuss the implications of our results for future work in service domains and emphasize a shift of focus from full robot automation to forms of human-robot collaboration.
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable for their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite to adjusting the indoor temperature for providing a comfortable environment. In this research, we focus on identifying relevant features for predicting PTC preferences. We propose a machine learning-based predictive framework by employing supervised feature selection techniques. We apply two feature selection techniques to select the optimal sets of features to improve the thermal preference prediction performance. The experimental results on a public PTC dataset demonstrated the efficiency of the feature selection techniques that we have applied. In turn, our PTC prediction framework with feature selection techniques achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
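As a rough illustration of the filter-style feature selection this abstract alludes to (the paper's actual techniques are not named here, and the sensor readings below are made up), features can be ranked by the absolute Pearson correlation of each feature column with the thermal-preference label, keeping only the top k for the downstream predictor:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(X, y, names, k):
    """Rank feature columns by |correlation| with the label; keep the best k."""
    cols = list(zip(*X))  # transpose samples -> feature columns
    scored = sorted(
        ((abs(pearson(col, y)), name) for col, name in zip(cols, names)),
        reverse=True,
    )
    return [name for _, name in scored[:k]]

# Hypothetical data: air temperature tracks the comfort label closely,
# humidity does too, and a noise channel carries no signal.
names = ["air_temp", "humidity", "noise"]
X = [[20, 40, 7], [22, 45, 1], [24, 50, 9], [26, 55, 2], [28, 60, 5]]
y = [0, 0, 1, 1, 1]  # 1 = "prefer cooler"

print(select_top_k(X, y, names, 2))
```

Filter methods like this score each feature independently of the model; wrapper methods, by contrast, search over feature subsets using the predictor's own validation performance, which is costlier but can capture feature interactions.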
In qualitative interviews, we examine attitudes towards driverless cars in order to investigate new mobility services and explore the impact of such services on everyday mobility. We identified three main issues that we would like to discuss in the workshop: (I) designing beyond a driver-centric approach; (II) developing mobility services for cars which drive themselves; and (III) exploring self-driving practices.