Refine
Departments, institutes and facilities
Document Type
- Conference Object (55)
- Article (23)
- Part of a Book (4)
- Report (1)
Year of publication
Language
- English (83)
Keywords
- Sustainability (5)
- Eco-Feedback (4)
- GDPR (4)
- Shared autonomous vehicles (4)
- Mobility (3)
- Modal Shift (3)
- Public Transport (3)
- Smart Home (3)
- Sustainable Interaction Design (3)
- User Experience (3)
So far, sustainable HCI has mainly focused on the domestic context, but there is a growing body of work looking at the organizational context. Like the domestic work, these studies still rest on the psychological theories of behaviour change developed for the domestic context. We supplement this view with an organizational-theory-informed approach that adopts organizational roles as a key element. We show how a role-based analysis can be applied to uncover information needs and to give employees eco-feedback that is linked to their tasks at hand. We illustrate the approach with a qualitative case study that was part of broader, ongoing action research conducted at a German production company.
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable for their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite to adjusting the indoor temperature to provide a comfortable environment. In this research, we focus on identifying relevant features for predicting PTC preferences. We propose a machine-learning-based predictive framework employing supervised feature selection techniques, applying two such techniques to select the optimal sets of features and improve thermal preference prediction performance. The experimental results on a public PTC dataset demonstrate the effectiveness of the feature selection techniques we applied. In turn, our PTC prediction framework with feature selection achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
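The abstract does not specify which supervised feature selection techniques were used, so as a generic illustration only, the following sketch shows one common filter-style approach: ranking candidate features by the strength of their correlation with a binary thermal preference label and keeping the top-scoring ones. The synthetic dataset, feature names, and threshold are all hypothetical and are not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical stand-in for a PTC dataset: two informative features
# (air temperature, skin temperature) and two pure-noise features.
# Label: 1 = "prefers it cooler", 0 = "comfortable".
names = ["air_temp", "skin_temp", "noise_a", "noise_b"]
X, y = [], []
for _ in range(200):
    temp = random.uniform(18, 30)
    skin = temp + random.gauss(0, 1)          # tracks air temperature
    X.append([temp, skin, random.uniform(0, 1), random.gauss(0, 1)])
    y.append(1 if temp > 24 else 0)           # illustrative comfort threshold

def pearson(a, b):
    """Plain Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((z - mb) ** 2 for z in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

# Filter-style selection: score each feature against the label,
# then keep the top-k features for the downstream predictor.
scores = {name: abs(pearson([row[i] for row in X], y))
          for i, name in enumerate(names)}
selected = sorted(scores, key=scores.get, reverse=True)[:2]
print(selected)
```

In this toy setup the two temperature features correlate strongly with the preference label while the noise features do not, so they are the ones selected; the paper's actual techniques and dataset may differ.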
In software development, the "always beta" principle is used to successfully develop innovations based on early and continuous user feedback. In this paper we discuss how this principle can be adapted to the special needs of designing for the Smart Home, where we release not just software but also hardware components. In particular, because of the "materiality" of the Smart Home, one cannot simply make a beta version available on the web; an essential part of the development process is also to visit the "beta" users in their homes, to build trust, face real-world issues, and provide assistance to make the Smart Home work for them. After presenting our case study, we discuss the challenges we faced and how we dealt with them.
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer ways to reduce such complexity, current designs are not suited to niche practices and rely on non-personalized, opaque ethical framings. To support critical consumption, we conducted a design case study on a personalized food recommender system. We first ran an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing design implications for supporting critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
In 1991, researchers at the Center for the Learning Sciences at Carnegie Mellon University were confronted with the confusing question "where is the AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness of AI-enabled systems, and of mutual understanding between designers and users, users' informal theories of how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To help users form appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of how users currently perceive AI is still missing. In this study, we introduce the term "Perceived AI" (PAI), meaning "AI as defined from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research towards a better understanding of PAI and users' folk theories.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof falls on users, only this time they must prove their very humanity, and there is no court or jury to judge, but an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification; as a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We develop this argument through a juxtaposition with the science-fiction imagining of the Turing test in Blade Runner, reevaluating the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Smart home systems are becoming an integral feature of the emerging home IT market. Under this general term, products mainly address issues of security, energy savings, and comfort. Comprehensive systems that cover several use cases are typically operated and managed via a unified dashboard. Unfortunately, research targeting user experience (UX) design for smart home interaction that spans several use cases or covers the entire system is scarce. Furthermore, existing comprehensive, user-centered long-term studies on challenges and needs throughout the phases of information collection, installation, and operation of smart home systems are technologically outdated. Our 18-month Living Lab study of 14 households equipped with smart home technology provides insights into how to design for improved smart home appropriation. These include a stronger sensibility for household practices during setup and configuration, flexible visualizations for evolving demands, and an extension of the smart home beyond its physical location.
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?
Autonomous driving enables new mobility concepts such as shared autonomous services. Although significant research has been done on passenger-car interaction, work on passenger interaction with robo-taxis is still rare. In this paper, we tackle the question of how passengers experience robo-taxis as a service in real-life settings in order to inform interaction design. We conducted a Wizard of Oz study with an electric vehicle in which the driver was hidden from the passenger to simulate the service experience of a robo-taxi. Ten participants had the opportunity to use the simulated shared autonomous service in real-life situations for one week. By the week's end, 33 rides had been completed and recorded on video. We also complemented the study with interviews with all participants before and afterwards. The findings yielded four design themes that can inform the service design of robo-taxis across its different stages: hailing, pick-up, travel, and drop-off.