005 Computer programming, programs, data
Validation of a Web Application for Remote Monitoring of Load and Recovery Parameters
(2020)
In parallel with the agile development of a web application that records parameters for managing training load and strain, the implemented load and recovery parameters were tested in practice with volunteer testers. To assess the external validity of both the application and the partly self-developed metrics, these are evaluated by means of regression analysis.
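The external-validity check described in the abstract can be sketched as a simple regression of a self-developed parameter against an external criterion. This is an illustrative sketch only; the function name and the use of plain ordinary least squares with R² are assumptions, not details taken from the study.

```python
# Illustrative sketch: fit y = a + b*x by ordinary least squares and
# report R^2 as a rough measure of how well a self-developed load
# parameter (x) tracks an external criterion measure (y).

def linear_fit(x, y):
    """Return (intercept a, slope b, r_squared) for y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = mean_y - b * mean_x
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot
```

An R² close to 1 would indicate that the implemented parameter explains most of the variance in the criterion; in practice one would also inspect residuals rather than rely on R² alone.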
Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and security perceptions of RBA have not been studied well.
We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered more usable than the studied 2FA variant, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA across a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of users' perception of RBA and helps to improve RBA implementations for broader user acceptance.
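The RBA mechanism described above (compare login features against previously seen values and escalate to an additional factor when too many differ) can be sketched as follows. The feature names, the unseen-feature threshold, and the function signatures are illustrative assumptions, not the implementation studied in the paper.

```python
# Minimal sketch of a risk-based authentication decision.

def record(history, username, features):
    """Store observed feature values (e.g. IP, browser) for a user."""
    seen = history.setdefault(username, {})
    for name, value in features.items():
        seen.setdefault(name, set()).add(value)

def rba_login(history, username, password, features, verify_password,
              max_unseen=1):
    """Return 'denied', 'granted', or 'challenge' (ask for a 2nd factor)."""
    if not verify_password(username, password):
        return "denied"
    seen = history.get(username, {})
    # Count login features whose current value was never seen for this user.
    unseen = sum(1 for name, value in features.items()
                 if value not in seen.get(name, set()))
    if unseen > max_unseen:
        # High risk: request e.g. a verification code; on success the
        # caller should record the new feature values via record().
        return "challenge"
    record(history, username, features)
    return "granted"
```

A real deployment would weight features by how strongly they predict account takeover instead of counting them equally; this sketch only shows the control flow.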
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. Given users' lack of awareness and the absence of a mutual understanding of AI-enabled systems between designers and users, informal theories held by users about how a system works ("folk theories") become inevitable, but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users' folk theories.
At the sixth edition of the scientific workshop "Usable Security und Privacy" at Mensch und Computer 2020, current research and practice contributions will, as in previous years, be presented and then discussed with all participants. This year, three contributions address privacy and one addresses security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can exchange ideas across disciplines.
Trust is the lubricant of the sharing economy. This is especially true in peer-to-peer carsharing, in which one leaves a highly valuable good to a stranger in the hope of getting it back unscathed. Nowadays, ratings by other users are the major mechanism for establishing trust. To foster the uptake of peer-to-peer carsharing, connected car technology opens new possibilities to support trust-building, e.g., by adding driving behavior statistics to users' profiles. However, collecting such data intrudes into rentees' privacy. To explore the tension between the need for trust and privacy demands, we conducted three focus groups and eight individual interviews. Our results show that connected car technologies can increase trust for car owners and rentees not only before but also during and after rentals. The design of such systems must allow information disclosure to be differentiated by type of information, context, and negotiability.
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
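The motion representation named above, a dynamic movement primitive (DMP), can be sketched in one dimension as a spring-damper system pulled toward a goal plus a learnable forcing term. The gains, basis-function placement, and integration scheme below are illustrative assumptions, not the paper's implementation; with zero weights the rollout simply converges to the goal, and learned weights (e.g. from C-REPS) would shape the path.

```python
import math

# Minimal one-dimensional discrete DMP rollout (illustrative sketch).

def dmp_rollout(y0, goal, weights=None, n_basis=10, alpha=25.0, beta=6.25,
                tau=1.0, dt=0.01, steps=100):
    """Integrate a DMP from y0 toward goal; returns the position trajectory."""
    weights = weights or [0.0] * n_basis
    # Gaussian basis functions spaced in the decaying phase variable x.
    centers = [math.exp(-3.0 * i / n_basis) for i in range(n_basis)]
    widths = [n_basis ** 1.5 / c for c in centers]
    y, dy, x = y0, 0.0, 1.0  # position, velocity, phase
    traj = [y]
    for _ in range(steps):
        psi = [math.exp(-widths[i] * (x - centers[i]) ** 2)
               for i in range(n_basis)]
        # Forcing term: weighted basis activations, gated by the phase.
        f = (sum(w * p for w, p in zip(weights, psi)) / (sum(psi) + 1e-10)
             * x * (goal - y0))
        ddy = alpha * (beta * (goal - y) - dy) + f  # spring-damper + forcing
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -3.0 * x * dt / tau  # canonical system: phase decays to zero
        traj.append(y)
    return traj
```

Because the forcing term is gated by the phase and scaled by (goal - y0), the same weights generalize to new start and goal positions, which is what makes DMPs convenient for contextual variants such as different hand-over positions.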