005 Computer programming, programs, data
This study examines the appropriation and use of voice assistants such as Google Assistant or Amazon Alexa in private households. Our research is based on ten in-depth interviews with users of voice assistants as well as an evaluation of selected interactions from the interaction history. Our results illustrate the occasions on which voice assistants are used in the home, which strategies users have appropriated when interacting with them, how the interaction unfolds, and which difficulties arose during setup and use. A particular focus of the study lies on failed interactions, i.e., situations in which the interaction fails or threatens to fail. Our study shows that the potential of the assistants often goes untapped because interaction in more complex use cases frequently fails. Users therefore employ the voice assistant mainly for simple use cases, and new apps and use cases are never tried out in the first place. An analysis of appropriation strategies, for example a self-compiled list of commands, yields insights for the design of support tools as well as for the further development and optimization of voice-based human-machine interfaces.
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While a lot of current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year’s ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
Continuing the three successful “Usable Security und Privacy” workshops of the past three years, a fourth full-day scientific workshop at this year’s Mensch und Computer will present and discuss six to eight contributions in the field of usable security and privacy. Contributions from research and practice are planned that address new user-centered approaches as well as practice-relevant solutions for the user-centered development and design of digital protection mechanisms. The workshop is intended to further develop the established forum in which experts from different domains, e.g. usability engineering and security engineering, can engage in transdisciplinary exchange. The organizers run the workshop as a classical scientific workshop: a program committee evaluates the submissions and selects the contributions accepted for presentation. These are also published in the poster and workshop proceedings of Mensch und Computer 2018.
At the sixth edition of the scientific workshop “Usable Security und Privacy” at Mensch und Computer 2020, current research and practice contributions will be presented, as in previous years, and then discussed with all participants. This year, three contributions deal with the topic of privacy and one with the topic of security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can engage in transdisciplinary exchange.
The by now seventh edition of the scientific workshop “Usable Security und Privacy” at Mensch und Computer 2021 will again present current research and practice contributions and then discuss them with all participants. This year, two contributions deal with the topic of privacy and two with the topic of security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can engage in transdisciplinary exchange.
The goal of the eighth edition of the scientific workshop “Usable Security and Privacy” at Mensch und Computer 2022 is to present current research and practice contributions and then discuss them with the participants. The workshop is intended to continue and further develop an established forum in which experts from various fields, e.g. usability and security engineering, can engage in transdisciplinary exchange.
Recent years have seen extensive adoption of domain generation algorithms (DGA) by modern botnets. The main goal is to generate a large number of domain names and then use a small subset for actual C&C communication. This makes DGAs very compelling for botmasters to harden the infrastructure of their botnets and make it resilient to blacklisting and attacks such as takedown efforts. While early DGAs were used as a backup communication mechanism, several new botnets use them as their primary communication method, making it extremely important to study DGAs in detail.
In this paper, we perform a comprehensive measurement study of the DGA landscape by analyzing 43 DGA-based malware families and variants. We also present a taxonomy for DGAs and use it to characterize and compare the properties of the studied families. By reimplementing the algorithms, we pre-compute all possible domains they generate, covering the majority of known and active DGAs. Then, we study the registration status of over 18 million DGA domains and show that corresponding malware families and related campaigns can be reliably identified by pre-computing future DGA domains. We also give insights into botmasters’ strategies regarding domain registration and identify several pitfalls in previous takedown efforts of DGA-based botnets. We will share the dataset for future research and will also provide a web service to check domains for potential DGA identity.
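The pre-computation idea rests on DGAs being deterministic functions of a seed and (typically) the date. The following toy algorithm is purely illustrative, not one of the 43 families studied; the seed string, the MD5-based construction, and the fixed .com TLD are all assumptions made for the sketch:

```python
# Toy, hypothetical DGA: derives a deterministic list of domains from a
# hard-coded seed and a date. Determinism is exactly what allows defenders
# who recovered the algorithm to pre-compute all future domains.
import hashlib
from datetime import date

def toy_dga(seed, day, count=5):
    domains = []
    for i in range(count):
        # Hash seed, date, and index; use a hex prefix as the domain label.
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        label = hashlib.md5(data).hexdigest()[:12]
        domains.append(label + ".com")
    return domains

# Botmaster and defender alike can run this for any future date:
upcoming = toy_dga("examplebotnet", date(2016, 1, 1))
```

Real families differ in seed sources (dates, trending topics, exchange rates), character distributions, and TLD sets, which is what the paper's taxonomy captures.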
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
In this paper, we present a solution for testing cultural influences on e-learning in a global context. Based on a metadata approach, we show how cultural influence factors specifically can be determined in order to transfer and adapt learning environments. We present a method for validating those influence factors, both to improve the dynamic metadata specification and for use in the development of (international) e-learning scenarios.
Online media consumption is the main driving force behind the recent growth of the Web. As real-time media in particular is becoming accessible from an ever wider range of devices, with contrasting screen resolutions, processing resources, and network connectivity, a necessary requirement is providing users with a seamless multimedia experience at the best possible quality, and hence being able to adapt to the specific device and network conditions. This paper introduces a novel approach for adaptive media streaming in the Web. In contrast to the pervasive pull-based designs built on HTTP, this paper builds upon a Web-native push-based approach in which both the communication and processing overheads are reduced significantly compared to the pull-based counterparts. In order to maintain these properties when extending the scheme with adaptation features, server-side monitoring and control need to be developed as a consequence. Such an adaptive push-based media streaming approach is introduced as the main contribution of this work. Moreover, the obtained evaluation results provide evidence that with adaptive push-based media delivery, on the one hand, an equivalent quality of experience can be provided at lower cost than by adopting pull-based media streaming; on the other hand, improved responsiveness in switching between quality levels can be obtained at no extra cost.
Digitalization is profoundly transforming the mobility sector: in the future, self-driving cars will likely be among the available transport options. This study extends transport and user acceptance research by analyzing in depth, taking relative partial utilities into account, how the new transport modes of autonomous private cars, autonomous carsharing, and autonomous taxis fit into the existing transport mix from today's perspective. To this end, an online survey (n=172) on the relative added values of the new autonomous transport modes was conducted on the basis of user preference theory. It shows that, compared to the conventional car, users see improvements in driving comfort and use of time for the autonomous modes, while in many other areas (especially driving pleasure and control) they see no advantages or even relative disadvantages. Compared to public transport, the autonomous modes offer added value in almost all attributes. This analysis at the level of partial utilities provides a more precise explanation of user acceptance of automated driving.
Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and to help users make better-informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with the data, for instance by allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.
Appropriating Digital Fabrication Technologies — A comparative study of two 3D Printing Communities
(2015)
Digital fabrication technologies have a great potential for empowering consumers to produce their own creations. However, despite the growing availability of digital fabrication technologies in shared machine shops such as FabLabs or university labs, they are often perceived as difficult to use, especially by users with limited technological aptitude. Hence, it is not yet clear whether the potentials of the technology can be made accessible to a broader public, or whether they will remain limited to some form of “maker elite”. In this paper, we study the appropriation of digital fabrication using the example of 3D printers in two different communities. In doing so, we analyze how users conceptualize their use of the 3D printers, what kind of contextual understanding is necessary to work with the machines, and how users document and share their knowledge. Based on our empirical findings, we identify the potentials that the machines offer to the communities and the kinds of challenges that have to be overcome in their appropriation of the technology.
Cancer is one of the leading causes of death worldwide [183], with lung tumors being the most frequent cause of cancer deaths in men as well as one of the most common cancers diagnosed in women [40]. As symptoms often arise only in advanced stages, an early diagnosis is especially important to ensure the best and earliest possible treatment. To achieve this, Computed Tomography (CT) scans are frequently used for tumor detection and diagnosis. We present examples of publicly available CT image data of lung cancer patients and discuss possible methods for realizing a system for automated cancer diagnosis. We also look at the recent SPIE-AAPM Lung CT Challenge [10] data set in detail and describe possible methods and challenges for image segmentation and classification based on this data set.
Beyond HCI and CSCW: Challenges and Useful Practices Towards a Human-Centred Vision of AI and IA
(2019)
The Web has become an indispensable prerequisite of everyday life, and the Web browser is the most used application on a variety of distinct devices. The content delivered by the Web has changed drastically from static pages to media-rich and interactive Web applications offering nearly the same functionality as native applications, a trend further pushed by the Cloud and more specifically the Cloud’s SaaS layer. In light of this development, security and performance of Web browsing have become crucial issues.
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer solutions to reduce such complexity, current designs are not appropriate for niche practices and rely on non-personalized, intransparent ethics. To support critical consumption, we conducted a design case study on a personalized food recommender system. We first carried out an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
Components and Architecture for the Implementation of Technology-Driven Employee Data Protection
(2021)
Consolidating Principles and Patterns for Human-centred Usable Security Research and Development
(2018)
We present an evaluation of usable security principles and patterns to facilitate the transfer of existing knowledge to researchers and practitioners. Based on a literature review, we extracted 23 common usable security principles and 47 usable security patterns and identified their interconnections. The results indicate that current research tends to focus on only a subset of important principles. The fact that some principles are not yet addressed by any design pattern suggests that further work on refining these patterns is needed. We developed an online repository that stores the harmonized principles and patterns. The tool enables users to search for relevant patterns and explore them in an interactive and programmatic manner. We argue that both the insights presented in this paper and the repository will be highly valuable: for students seeking a good overview, for practitioners implementing usable security, and for researchers identifying areas of future research.
Most people use disaster apps infrequently, primarily in situations of turmoil, when they are physically or emotionally vulnerable. Personal data may be necessary to help them, and data protections may be waived. In some circumstances, free movement and liberties may be curtailed for public protection, as was seen in the recent COVID pandemic. Consuming and producing disaster data can deepen problems arising at the confluence of surveillance and disaster capitalism, where data has become a tool for solutionist instrumentarian power (Zuboff 2019, Klein 2008) and part of a destructive mode of one-world worlding (Law 2015, Escobar 2020). The special use of disaster apps prompts us to ask what role consumer protection could play in safeguarding democratic liberties. Within this work, a set of current approaches is briefly reviewed and two case studies are presented of what we call appropriation or design against datafication. These combine document analysis and literature research with several months of online and field ethnographic observation. The first case study examines disaster app use in response to the 2010 Haiti earthquake; the second explores COVID contact tracing in Taiwan in 2020/21. Against this backdrop we ask: how could and how should consumer protection respond to problems of surveillance disaster capitalism? Drawing on our work with the is IT ethical? Exchange, a co-designed community platform and knowledge exchange for disaster information sharing, and a Societal Readiness Assessment Framework that we are developing alongside it, we explore how co-design methodologies could help define answers.
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
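The dynamic movement primitives mentioned above are, in their standard formulation from the DMP literature (symbols follow that literature, not necessarily this paper's notation), a spring-damper system driven by a learned forcing term:

```latex
\tau \dot{z} = \alpha_z\left(\beta_z\,(g - y) - z\right) + f(s), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{s} = -\alpha_s\, s,
```

where \(y\) is the motion variable, \(g\) the goal, and \(s\) the phase of the canonical system; the forcing term \(f(s) = \frac{\sum_i w_i \psi_i(s)}{\sum_i \psi_i(s)}\, s\,(g - y_0)\) is a normalized weighted mixture of basis functions \(\psi_i\) whose weights \(w_i\) are fitted from the demonstration. In a setting like the paper's, a context-conditioned policy can then, for instance, set the goal \(g\) to the suggested hand-over position.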
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues on legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
Data has emerged as a central success factor for companies seeking to benefit from digitization. However, the skills needed to successfully create value from data, especially at the management level, are not always well developed. To address this problem, several canvas models have already been designed. Canvas models are usually created to write down an idea in a structured way in order to promote transparency and traceability. However, some existing data science canvas models mainly address developers and are thus unsuitable for decision-makers and for communication within interdisciplinary teams. Based on a literature review, we identified influencing factors that are essential for the success of data science projects. With the information gained, the Data Science Canvas was developed in an expert workshop and finally evaluated by practitioners to find out whether such an instrument can support data-driven value creation.
Artificial intelligence in autonomous vehicles processes enormous amounts of data. When such a vehicle operates, every movement is based on data-driven, automated, and adaptive decision-making. But considerable amounts of data from vehicles in real traffic (for example, video sequences from camera recordings) are also required in order to develop rules for recognition and decision-making in complex situations such as highly individual traffic scenarios (AI training). From the perspective of vehicle development, it is attractive for AI training to tap into the wealth of data that the entirety of vehicles can generate in real application contexts. As users and occupants, consumers thus become part of a large-scale collection of test data by vehicle manufacturers and providers. This raises data protection questions. The aim of this article is to work out what implications this has for the rights and freedoms of consumers, and which mechanisms current law and ongoing legislative developments provide to reconcile and balance the “data hunger” of AI with the interests in data sovereignty and informational self-determination. A particular focus is on how requirements can be taken into account as early as the product design stage and can thus strengthen consumers’ rights and trust.
Data protection and informational self-determination are components of current visions of digital education in schools. In the context of school closures and the predominant use of digital media, however, it became apparent that data protection is systematically anchored neither as a topic nor as a design principle of digital learning environments in administrative and pedagogical school practice. The discrepancy between current visions of digital education and the visibly problematic practice of emergency remote teaching marks the starting point of this article, which addresses the overarching question of what challenges arise in realizing data protection in school and classroom reality in a digitally shaped world. In the form of a problem-field analysis, prototypical problems of school practice are identified. The focus is on exemplary challenges and requirements for technologies and for actors of internal and external school development at the levels of teaching development, personnel development, technology development, and organizational development.
In practice, achieving adequate data sovereignty proves extremely difficult for consumers. The European General Data Protection Regulation guarantees comprehensive data subject rights, which must be implemented by controllers through technical and organizational measures. Traditional approaches, such as providing lengthy privacy statements or offering downloads of raw personal data without further assistance, do not do justice to the goal of informational self-determination. The new technical approaches outlined below, in particular AI-based transparency and access modalities, demonstrate the practicability of effective and versatile mechanisms. To this end, the relevant transparency information is extracted in a semi-automated manner, represented in machine-readable form, and then delivered through various channels such as virtual assistants or the enrichment of search results. These are complemented by automated and easily accessible methods for subject access requests and their processing in accordance with Art. 15 GDPR. Finally, concrete regulatory implications are discussed.
Due to ongoing digitalization, more and more cloud services are finding their way into companies. In this context, data integration across the various software solutions, which are provided both on-premise (software used or licensed for local operation) and as a service, is of great importance. Integration Platform as a Service (IPaaS) models aim to support companies as well as software providers with data integration by providing connectors that enable data flow between different applications and systems, along with other integration services. Since previous research has mostly focused on technical or legal aspects of IPaaS, this article focuses on deriving integration practices as well as design-related barriers and drivers regarding the adoption of IPaaS. To this end, we conducted 10 interviews with experts from different software-as-a-service vendors. Our results show that the main factors in the adoption of IPaaS are the standardization of data models, the usability and variety of the connectors provided, and issues regarding data privacy, security, and transparency.
Cryptographic API misuse is responsible for a large number of software vulnerabilities. In many cases developers are overburdened by the complex set of programming choices and their security implications. Past studies have identified significant challenges when using cryptographic APIs that lack a certain set of usability features (e.g. easy-to-use documentation or meaningful warning and error messages) leading to an especially high likelihood of writing functionally correct but insecure code.
To support software developers in writing more secure code, this work investigates a novel approach aimed at these hard-to-use cryptographic APIs. In a controlled online experiment with 53 participants, we study the effectiveness of API-integrated security advice which informs about an API misuse and places secure programming hints as guidance close to the developer. This allows us to address insecure cryptographic choices, including encryption algorithms, key sizes, modes of operation, and hashing algorithms, with helpful documentation in the form of warnings. Whenever possible, the security advice proposes code changes to fix the responsible security issues. We find that our approach significantly improves code security: 73% of the participants who received the security advice fixed their insecure code.
We evaluate the opportunities and challenges of adopting API-integrated security advice and illustrate the potential to reduce the negative implications of cryptographic API misuse and help developers write more secure code.
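As a minimal sketch of the kind of misuse/fix pair such advice targets (our own illustration, not material from the study itself): password hashing with a fast, unsalted hash versus a salted key-derivation function. All function names are hypothetical.

```python
# Illustrative misuse/fix pair for cryptographic hashing (names hypothetical).
import hashlib
import os

def hash_password_insecure(password):
    # MISUSE: MD5 is fast and unsalted -- exactly the kind of call an
    # API-integrated warning would flag.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_secure(password, salt=None):
    # Suggested fix: a salted, deliberately slow key-derivation function
    # (PBKDF2 here). The iteration count is illustrative; follow current
    # guidance in practice.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest
```

Advice placed next to the flagged call, proposing the second variant as a concrete code change, is the pattern the experiment evaluates.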
In the project EILD.nrw, Open Educational Resources (OER) have been developed for teaching databases. Lecturers can use the tools and courses in a variety of learning scenarios. Students of computer science and application subjects can learn the complete life cycle of databases. For this purpose, quizzes, interactive tools, instructional videos, and courses for learning management systems are developed and published under a Creative Commons license. We give an overview of the developed OERs by subject, description, teaching form, and format. We then describe how licensing, sustainability, accessibility, contextualization, content description, and technical adaptability are implemented. Feedback from students in ongoing classes is evaluated.
Our interdisciplinary research project “Designing Effective Privacy Icons through an Interdisciplinary Research Methodology” builds on the “Data Protection by Design” approach (Art. 25(1) GDPR) and addresses the following research questions: How must the transparency principle (Art. 5(1)(a) GDPR) and the information obligations (Art. 12-14 GDPR) be implemented, particularly with regard to the specification of processing purposes (Art. 5(1)(b) GDPR), so that they effectively protect users from the risks of data processing? Which methods can be used to determine the effectiveness of the implementation and to enforce it? In this project, we complement legal methods with methods from HCI (human-computer interaction) research and visual design. In a first phase, we used empirical HCI methods to investigate which types of data use users perceive as relevant across technologies. These findings can serve as a starting point for a new purpose specification that more clearly includes or excludes certain types of data use. We tested initial reformulations of purpose specifications in two practical workshops with data controllers. In a subsequent qualitative study, we then examined the attitudes and expectations of Internet users regarding the personalization of Internet content, in order to reformulate the corresponding purposes using a concrete example, in our case personalized advertising. On this basis, we have now begun the second research phase, in which we develop designs for privacy notices and control options with particular attention to the processing purpose.
Since the use of cookies plays an important role in the personalization of advertising, one central task is the redesign of the so-called “cookie banner”.
The User-Friendly Formulation of Purposes of Data Processing by Voice Assistants
(2020)
In 2019, it became known that several providers of voice assistants had systematically analyzed voice recordings of their users. Since the privacy statements indicated that data would also be used to improve the service, this use was legal. For the users, however, this analysis represented a clear break with their expectations of privacy. The purpose limitation principle of the GDPR, with its purpose specification component, demands transparency for consumers as well as flexibility for processors. Against the background of this conflict of interests, the question arises for HCI of how the processing purposes of voice assistants should be designed in order to meet both requirements. To capture a user perspective, this study first analyzes purpose statements in the privacy notices of the dominant voice assistants. Building on this, we present results from focus groups dealing with the perceived processing of voice assistant data from the users’ point of view. It becomes clear that existing purpose formulations offer consumers hardly any transparency about the consequences of data processing and have no restrictive effect with regard to legal data use. Our findings on user-perceived risks allow conclusions about the user-friendly design of processing purposes as a design resource.
Blockchain technology has been one of the major drivers of innovation in recent years. On top of an underlying blockchain, the operation of distributed applications, so-called decentralized applications (DApps), is already technically feasible. This contribution examines design options for digital consumer participation in blockchain applications. To this end, it introduces digital consumer participation as well as the technical foundations and properties of blockchain technology, including DApps built on it. Finally, it addresses technical, ethical-organizational, legal, and other requirement areas for implementing digital consumer participation in blockchain applications.
This paper gives an overview of how container technology can benefit academic work. It aims to be a starting point for fellow researchers who are also considering these technologies. Hence, we focus on describing our own experiences and motivations instead of proving hard scientific facts.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users, only this time they have to prove their very humanity, and there is no court or jury to judge, only an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for users to question whether the process of questioning is even fair or ethical. We examine this thesis by offering a juxtaposition with the science-fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Social network analysis seeks to place human interaction in an analytical, evaluable framework. Over the past decades it has spread as a method beyond the social sciences into history, archaeology, and religious studies. Along the way, several paradigm shifts took place, for example from static networks with a focus on quantitative-structural analysis towards heterogeneous networks of action, as in Actor Network Theory (ANT). The current focus lies more on questions of information exchange and the dynamics of non-static networks.
In recent years a new category of digital signature algorithms based on Elliptic Curve Cryptography (ECC) has emerged alongside well-known schemes such as RSA or DSA. So far, however, it is still not obvious how ECC-based signature schemes can be integrated into X.509-based Public Key Infrastructures (PKI). This paper briefly introduces the cryptographic basics of signature schemes based on elliptic curves and points out the cryptographic parameters that are important in this context. Afterwards, the structure and encoding of X.509 certificates and Certificate Revocation Lists (CRLs) are discussed with regard to the integration of ECC public keys and ECC signatures, respectively. The paper closes with exemplary implementations of ECC-based security systems.
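To make the encoding question concrete, the following minimal sketch shows how an ECDSA signature value is DER-encoded as a SEQUENCE of the two INTEGERs r and s (the ECDSA-Sig-Value structure of RFC 3279) before it is placed in the signatureValue field of an X.509 certificate or CRL. Short-form lengths are assumed, which holds for common curve sizes such as P-256; this is an illustrative sketch, not the paper's implementation.

```python
def der_encode_integer(value: int) -> bytes:
    """DER-encode a non-negative INTEGER (tag 0x02)."""
    body = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        # Prepend 0x00 so the value is not misread as negative.
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body

def der_encode_ecdsa_signature(r: int, s: int) -> bytes:
    """ECDSA-Sig-Value ::= SEQUENCE { r INTEGER, s INTEGER } (RFC 3279).
    Assumes short-form lengths (total body < 128 bytes), which covers
    typical curves like P-256, where r and s are at most 32 bytes each."""
    body = der_encode_integer(r) + der_encode_integer(s)
    return bytes([0x30, len(body)]) + body
```

For P-256, the resulting structure is around 70 bytes, well within the short-form length limit assumed above.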
Threats to passwords are still very relevant due to attacks like phishing or credential stuffing. One way to solve this problem is to remove passwords completely. User studies on passwordless FIDO2 authentication using security tokens have demonstrated the potential to replace passwords. However, widespread acceptance of FIDO2 depends, among other things, on how user accounts can be recovered when the security token becomes permanently unavailable. For this reason, we provide a heuristic evaluation of 12 account recovery mechanisms regarding their properties for FIDO2 passwordless authentication. Our results show that the currently used methods have many drawbacks. Some even rely on passwords, defeating the very purpose of passwordless authentication. Still, our evaluation identifies promising account recovery solutions and provides recommendations for further studies.
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "Where is AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we still face the same issue with users of AI technology. In the absence of users' awareness, and without a mutual understanding of AI-enabled systems between designers and users, informal theories of how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research towards a better understanding of PAI and users' folk theories.
In qualitative interviews we examine attitudes towards driverless cars in order to investigate new mobility services and explore the impact of such services on everyday mobility. We identified three main issues that we would like to discuss in the workshop: (I) designing beyond a driver-centric approach; (II) developing mobility services for cars which drive themselves; and (III) exploring self-driving practices.
With the debates on climate change and sustainability, reducing the share of cars in the modal split has become increasingly prevalent in both public and academic discourse. Beyond some motivational approaches, there is a lack of ICT artifacts that successfully raise consumers' ability to adopt sustainable mobility patterns. To further understand the requirements and design of such artifacts within everyday mobility, we adopted a practice lens. This lens helps to gain a broader perspective on the use of ICT artifacts along consumers' transformational journey towards sustainable mobility practices. Based on 12 retrospective interviews with car-free mobility consumers, we argue that artifacts should not be viewed as 'magic-bullet' solutions but should accompany the complex transformation of practices in multifaceted ways. Moreover, we highlight in particular the difficulties of appropriating shared infrastructures and aligning one's own practices with them. This opens up a design space to better support these kinds of material interactions, to provide access to consumption infrastructures and make them usable, rather than leaving consumers alone with increased motivation.
Helping Johnny to Analyze Malware: A Usability-Optimized Decompiler and Malware Analysis User Study
(2016)
Usable security puts users at the center of cyber security developments. Software developers are a very specific user group in this respect, since their points of contact with security are application programming interfaces (APIs). In contrast to APIs from other domains, security APIs are not approachable by habitual means: learning by doing through exploration exercises is not well supported. Reasons for this range from missing documentation, tutorials, and examples to lacking tools and impenetrable APIs that fail to make this complex matter accessible. In this paper we study which abstraction level of security APIs is more suitable to meet common developers' needs and expectations. For this purpose, we first define the term security API. Following this definition, we introduce a classification of security APIs according to their abstraction level. We then adopted this classification in two studies. In one, we gathered the current coverage of the distinct classes by the standard set of security functionality provided by popular software development kits. The other was an online questionnaire in which we asked 55 software developers about their experiences and opinions with respect to integrating security mechanisms into their coding projects. Our findings emphasize that the right abstraction level of a security API is one important aspect of usable security API design that has not been addressed much so far.
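As an illustration of the abstraction levels at stake, the following Python sketch (our own hypothetical example, not one of the APIs surveyed in the paper) contrasts a low-level password-hashing call, where the developer must choose the salt, iteration count, and digest and compare results safely, with a high-level wrapper that hides all of these decisions behind a single call pair:

```python
import hashlib
import hmac
import os

# Low-level API: every parameter is the developer's responsibility,
# and each choice (salt length, iterations, digest, comparison) is a
# chance to get security wrong.
def hash_password_lowlevel(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_lowlevel(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# High-level API (a hypothetical wrapper): one opaque token in, one
# boolean out; all parameters are fixed inside the library.
def hash_password(password: str) -> bytes:
    salt, digest = hash_password_lowlevel(password)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    return verify_lowlevel(password, stored[:16], stored[16:])
```

The wrapper trades flexibility for safety by construction, which is exactly the kind of trade-off a classification by abstraction level makes visible.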
This work introduces Grid computing, shows its use in eHealth environments, and elicits trends towards the integration of custodians in eHealth Grids. It considers security and privacy requirements for the use of Grid computing in eHealth scenarios and discusses the possible integration of different types of data custodians. Finally, the paper concludes and gives an outlook on the development and deployment of eHealth Grids in the near future.
Is It Really You Who Forgot the Password? When Account Recovery Meets Risk-Based Authentication
(2024)
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 pp short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
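The ranking step described above can be illustrated with a toy sketch: each learned pattern retrieves candidate entities, a regressor predicts the pattern's expected F1 for the question, and each candidate inherits the best score among the patterns that retrieved it. The pattern identifiers, the `execute` function, and the regressor stub below are illustrative assumptions, not FRED's actual implementation.

```python
from typing import Callable

def rank_candidates(
    question: str,
    patterns: list[str],
    execute: Callable[[str], set],          # runs a pattern, returns entities
    predict_f1: Callable[[str, str], float] # regressor: (question, pattern) -> score
) -> list[tuple[str, float, str]]:
    """Return candidate entities ranked by the predicted F1 of the best
    pattern that retrieved them, keeping that pattern for interpretability."""
    best: dict = {}
    for pattern in patterns:
        score = predict_f1(question, pattern)
        for entity in execute(pattern):
            if entity not in best or score > best[entity][0]:
                best[entity] = (score, pattern)
    ranked = [(e, s, p) for e, (s, p) in best.items()]
    ranked.sort(key=lambda t: t[1], reverse=True)
    return ranked
```

Because every ranked entity carries the pattern that produced it, a result can be traced back to a concrete retrieval rule rather than an opaque embedding, which is the interpretability property the abstract emphasizes.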
Application developers constitute an important part of a digital platform's ecosystem, yet knowledge about the psychological processes that drive developer behavior in platform ecosystems is scarce. We build on the lead userness construct, which comprises two dimensions, trend leadership and high expected benefits from a solution, to explain how developers' innovative work behavior (IWB) is stimulated. We employ an efficiency-oriented and a social-political perspective to investigate the relationship between lead userness and IWB. The efficiency-oriented view resonates well with the expected-benefit dimension of lead userness, while the social-political view might be interpreted as a reflection of trend leadership. Using structural equation modeling, we test our model with a sample of over 400 developers from three platform ecosystems. We find that lead userness is indirectly associated with IWB and that the performance-enhancing view is the stronger predictor of IWB. Finally, we unravel differences between paid and unpaid app developers in platform ecosystems.
Less is Often More: Header Whitelisting as Semantic Gap Mitigation in HTTP-Based Software Systems
(2021)
The web is the most widespread digital system in the world and is used for many crucial applications. This makes web application security extremely important and, although there are already many security measures, new vulnerabilities are constantly being discovered. One reason for some of the recent discoveries lies in the presence of intermediate systems, e.g. caches, message routers, and load balancers, on the way between a client and a web application server. The implementations of such intermediaries may interpret HTTP messages differently, which leads to a semantically different understanding of the same message. This so-called semantic gap can cause weaknesses in the entire HTTP message processing chain.
In this paper we introduce the header whitelisting (HWL) approach to address the semantic gap in HTTP message processing pipelines. The basic idea is to normalize and reduce an HTTP request header to the minimum required fields using a whitelist before processing it in an intermediary or on the server, and then restore the original request for the next hop. Our results show that HWL can avoid misinterpretations of HTTP messages in the different components and thus prevent many attacks rooted in a semantic gap including request smuggling, cache poisoning, and authentication bypass.
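A minimal sketch of the HWL idea, assuming a dictionary-based header representation and an illustrative (non-normative) whitelist: field names are normalized, non-whitelisted fields are set aside before the message enters an intermediary, and the original set is recombined for the next hop.

```python
# Illustrative whitelist of known-good header fields; a real deployment
# would derive this set from what the processing chain actually requires.
WHITELIST = {"host", "content-length", "content-type", "authorization"}

def apply_hwl(headers: dict) -> tuple[dict, dict]:
    """Split a request header into the normalized whitelisted view and
    the set-aside remainder, so the original can be restored later."""
    kept, removed = {}, {}
    for name, value in headers.items():
        if name.strip().lower() in WHITELIST:
            # Normalize the field name so every component parses it alike.
            kept[name.strip().lower()] = value.strip()
        else:
            removed[name] = value
    return kept, removed

def restore(kept: dict, removed: dict) -> dict:
    """Recombine the whitelisted view with the set-aside fields
    before forwarding the request to the next hop."""
    return {**kept, **removed}
```

In this sketch an ambiguous field such as Transfer-Encoding never reaches the intermediary's parser, which is the mechanism by which HWL removes the room for divergent interpretations that request smuggling exploits.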