005 Computer programming, programs, data
Helping Johnny to Analyze Malware: A Usability-Optimized Decompiler and Malware Analysis User Study
(2016)
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
Risk-based authentication (RBA) extends authentication mechanisms to make them more robust against account takeover attacks, such as those using stolen passwords. RBA is recommended by NIST and NCSC to strengthen password-based authentication, and is already used by major online services. Also, users consider RBA to be more usable than two-factor authentication and just as secure. However, users currently obtain RBA's high security and usability benefits at the cost of exposing potentially sensitive personal data (e.g., IP address or browser information). This conflicts with user privacy and requires considering user rights regarding the processing of personal data. We outline potential privacy challenges regarding different attacker models and propose improvements to balance privacy in RBA systems. To estimate the properties of the privacy-preserving RBA enhancements in practical environments, we evaluated a subset of them with long-term data from 780 users of a real-world online service. Our results show the potential to increase privacy in RBA solutions. However, it is limited to certain parameters that should guide RBA design to protect privacy. We outline research directions that need to be considered to achieve widespread adoption of privacy-preserving RBA with high user acceptance.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
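The login-time feature comparison described in these abstracts can be sketched in a few lines. The following is a deliberately minimal illustration with invented feature names, weights, and threshold, not the algorithm evaluated in the studies:

```python
# Minimal sketch of risk-based authentication (RBA) scoring.
# Feature names, weights, and the threshold are illustrative assumptions only.

from typing import Dict, List

FEATURE_WEIGHTS = {"ip": 0.6, "user_agent": 0.3, "language": 0.1}

def risk_score(login: Dict[str, str], history: List[Dict[str, str]]) -> float:
    """Return a risk in [0, 1]: weighted share of features never seen before."""
    if not history:
        return 1.0  # first login: no history, treat as maximally risky
    score = 0.0
    for feature, weight in FEATURE_WEIGHTS.items():
        seen = {h.get(feature) for h in history}
        if login.get(feature) not in seen:
            score += weight
    return score

def requires_second_factor(login, history, threshold=0.5):
    """Ask for an additional proof of identity when the risk is too high."""
    return risk_score(login, history) >= threshold

history = [{"ip": "203.0.113.7", "user_agent": "Firefox", "language": "de"}]
same = {"ip": "203.0.113.7", "user_agent": "Firefox", "language": "de"}
new_device = {"ip": "198.51.100.9", "user_agent": "Chrome", "language": "de"}

assert not requires_second_factor(same, history)    # known context: password only
assert requires_second_factor(new_device, history)  # new IP + browser: ask for more
```

Real deployments weight features by how discriminative they are per user population, which is exactly why the abstract above stresses that feature selection and weighting must be tailored to each online service.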
Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not well studied.
We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered more usable than the studied 2FA variant, while it is perceived as more secure than password-only authentication in general and as comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of users' perception of RBA and helps to improve RBA implementations for broader user acceptance.
Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example
(2023)
Online services have difficulty replacing passwords with more secure user authentication mechanisms, such as Two-Factor Authentication (2FA). This is partly because users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing.
Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper contributes to the hypothesis that the low adoption of RBA in practice may be due to the complexity of implementing it. We provide an RBA implementation for the open source cloud management software OpenStack, which is the first fully functional open source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
The documentation requirements for data published in long-term archives have grown significantly over the last decade. At WDCC, the data publishing process is assisted by “Atarrabi”, a web-based workflow system for reviewing and editing metadata by the data authors and the publication agent. The system ensures high metadata quality for long-term use of the data with persistent identifiers (DOI/URN). These well-defined references (DOIs) allow credit to be properly given to the data producers in any publication.
Within qualitative interviews we examine attitudes towards driverless cars in order to investigate new mobility services and explore the impact of such services on everyday mobility. We identified three main issues that we would like to discuss in the workshop: (I) Designing beyond a driver-centric approach; (II) Developing mobility services for cars which drive themselves; and (III) Exploring self-driving practices.
Trust is the lubricant of the shareconomy. A central mechanism for building it are crowd-based reputation systems, in which information and ratings from other users serve to establish trust. Networking the shared objects offers new potential for assessing the reputation of a provider or requester. In this paper, we therefore examine the potential of an IoT-based reputation system in the context of peer-to-peer car sharing, in which information and ratings are collected and evaluated via sensors during the use of the vehicle. For this purpose, two focus groups with a total of 12 participants were conducted. The results indicate that data-based reputation systems can increase trust for both lenders and borrowers not only before, but also during the rental and in the subsequent inspection. However, the design of such systems should respect the principles of multilateral security, such as data minimization, proportionality, transparency, and reciprocity.
This paper presents methods for the reduction and compression of meteorological data for web-based wind flow visualizations, which are tailored to the flow visualization technique. Flow data sets represent a large amount of data and are therefore not well suited for mobile networks with low data throughput rates and high latency. Using the mechanisms introduced in this paper, an efficient transfer of thinned out and compressed data can be achieved, while keeping the accuracy of the visualized information almost at the same quality level as for the original data.
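The kind of reduction and compression described above can be illustrated by grid thinning plus 8-bit quantization of a wind component. This is a hedged sketch under assumed value ranges and byte layout; the paper's actual encoding is likely more elaborate:

```python
# Sketch: thin out and quantize a wind component grid for web transfer.
# Grid spacing, quantization range, and byte layout are illustrative assumptions.

def thin(grid, step):
    """Keep every `step`-th sample in both grid dimensions."""
    return [row[::step] for row in grid[::step]]

def quantize(values, lo, hi):
    """Map floats in [lo, hi] to bytes 0..255 (lossy, ~ (hi-lo)/255 resolution)."""
    scale = 255.0 / (hi - lo)
    return bytes(max(0, min(255, round((v - lo) * scale))) for row in values for v in row)

def dequantize(data, lo, hi, width):
    """Client-side inverse: bytes back to floats, reshaped into rows of `width`."""
    scale = (hi - lo) / 255.0
    flat = [lo + b * scale for b in data]
    return [flat[i:i + width] for i in range(0, len(flat), width)]

u = [[-3.2, 1.5, 7.8, 2.0], [0.0, -1.1, 4.4, 6.6]]  # eastward wind in m/s
thinned = thin(u, 2)                                 # keep a quarter of the samples
packed = quantize(thinned, -40.0, 40.0)              # 1 byte instead of 4 per value
restored = dequantize(packed, -40.0, 40.0, width=2)
assert all(abs(a - b) < 0.2 for a, b in zip(restored[0], thinned[0]))
```

With a ±40 m/s range, the quantization error stays below ~0.16 m/s, which is in the spirit of the paper's claim that accuracy remains almost at the level of the original data while the transfer size shrinks drastically.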
The working group Usable Security & Privacy offers a forum for the exchange of ideas and for interdisciplinary collaboration on user-friendly information security and privacy-enhancing technologies. Although security is one of the central selection criteria when purchasing software and technology products, the available security functions and mechanisms are often operated incorrectly, or not at all, by users due to a lack of usability. In everyday use, this creates security risks in handling ICT systems and products and the sensitive data they contain. In the workshop, examples are discussed with the participants, and a joint assessment is compiled of the understanding, the importance, and the current degree of implementation of usable security & privacy. The result of the workshop is a position paper describing the current problem areas and the most important challenges from the perspective of usability and UX professionals.
Application developers constitute an important part of a digital platform's ecosystem. Knowledge about the psychological processes that drive developer behavior in platform ecosystems is scarce. We build on the lead userness construct, which comprises two dimensions, trend leadership and high expected benefits from a solution, to explain how developers' innovative work behavior (IWB) is stimulated. We employ an efficiency-oriented and a social-political perspective to investigate the relationship between lead userness and IWB. The efficiency-oriented view resonates well with the expected-benefit dimension of lead userness, while the social-political view can be interpreted as a reflection of trend leadership. Using structural equation modeling, we test our model with a sample of over 400 developers from three platform ecosystems. We find that lead userness is indirectly associated with IWB and that the performance-enhancing view is the stronger predictor of IWB. Finally, we unravel differences between paid and unpaid app developers in platform ecosystems.
This paper addresses the urgent need for international standardization of context metadata for e-learning environments. E-learning distributed over the Internet can reach a huge number of learners synchronously and asynchronously, but it also has to deal with a variety of cultures and societies and the related complications. Many of these differences strongly demand adaptation processes in which, in particular, the contents are modified to fit the needs of the targeted contexts. In our approach to solving this task, we determined a list of around 160 significant possible differences and defined them as context metadata. In this paper, we show the results of our research regarding the determination of context-related influence factors as well as approaches to dealing with them, and we present a first specification of the representing context metadata.
Data protection and informational self-determination are part of current visions of digital education in schools. In the context of school closures and the predominant use of digital media, however, it became apparent that data protection is not systematically anchored in administrative or pedagogical school practice, neither as a topic nor as a design principle of digital learning environments. The discrepancy between current visions of digital education and the visibly problematic practice of emergency remote teaching marks the starting point of this paper, which addresses the overarching question of which challenges arise in realizing data protection in school and classroom practice in a digitally shaped world. In the sense of a problem-field analysis, prototypical problems of action faced by schools are identified. The focus is on exemplary challenges and requirements for technologies and for the actors of internal and external school development at the levels of teaching development, staff development, technology development, and organizational development.
In the project EILD.nrw, Open Educational Resources (OER) have been developed for teaching databases. Lecturers can use the tools and courses in a variety of learning scenarios. Students of computer science and application subjects can learn the complete life cycle of databases. For this purpose, quizzes, interactive tools, instructional videos, and courses for learning management systems are developed and published under a Creative Commons license. We give an overview of the developed OERs by subject, description, teaching form, and format. We then describe how licensing, sustainability, accessibility, contextualization, content description, and technical adaptability are implemented. Finally, the feedback of students in ongoing classes is evaluated.
Recent years have seen extensive adoption of domain generation algorithms (DGAs) by modern botnets. The main goal is to generate a large number of domain names and then use a small subset for actual C&C communication. This makes DGAs very compelling for botmasters seeking to harden their botnets' infrastructure and make it resilient to blacklisting and to attacks such as takedown efforts. While early DGAs were used as a backup communication mechanism, several new botnets use them as their primary communication method, making it extremely important to study DGAs in detail.
In this paper, we perform a comprehensive measurement study of the DGA landscape by analyzing 43 DGA-based malware families and variants. We also present a taxonomy for DGAs and use it to characterize and compare the properties of the studied families. By reimplementing the algorithms, we pre-compute all possible domains they generate, covering the majority of known and active DGAs. Then, we study the registration status of over 18 million DGA domains and show that corresponding malware families and related campaigns can be reliably identified by pre-computing future DGA domains. We also give insights into botmasters' strategies regarding domain registration and identify several pitfalls in previous takedown efforts of DGA-based botnets. We will share the dataset for future research and will also provide a web service to check domains for potential DGA identity.
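As background to the pre-computation approach described above: a DGA can be as simple as a seeded, date-dependent pseudo-random label generator that defenders can run ahead of time just as bots do. The toy below is illustrative only and does not correspond to any of the 43 studied families:

```python
# Toy domain generation algorithm (DGA): deterministic domains per day and seed.
# The hash construction and TLD list are invented, not a real malware family.

import hashlib
from datetime import date

TLDS = ["com", "net", "org"]

def dga(seed: str, day: date, count: int = 5) -> list:
    """Derive `count` pseudo-random domains from (seed, day).

    Because the output depends only on the seed and the date, bots, the
    botmaster, and defenders can all compute the same list independently.
    """
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        label = digest[:12]  # 12 hex characters as the domain label
        domains.append(f"{label}.{TLDS[i % len(TLDS)]}")
    return domains

today = dga("botnet-v1", date(2016, 3, 1))
assert today == dga("botnet-v1", date(2016, 3, 1))  # deterministic: precomputable
assert today != dga("botnet-v1", date(2016, 3, 2))  # rotates with the date
```

This determinism is exactly what makes the paper's strategy viable: once an algorithm and seed are reimplemented, all future domains can be generated and checked against registration data before the botmaster uses them.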
Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users in making better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with data, for instance allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.
This study examines the appropriation and use of voice assistants such as Google Assistant or Amazon Alexa in private households. Our research is based on ten in-depth interviews with users of voice assistants and on the evaluation of selected interactions in the interaction history. Our results illustrate the occasions on which voice assistants are used in the home, which strategies users have appropriated in interacting with them, how the interaction unfolds, and which difficulties arose during setup and use. A particular focus of the study is on failed interactions, i.e., situations in which the interaction breaks down or threatens to break down. Our study shows that the potential of the assistants is often not fully exploited, since interaction frequently fails in more complex use cases. Users therefore employ the voice assistant mainly for simple use cases, and new apps and use cases are never even tried out. An analysis of appropriation strategies, for example a self-compiled list of commands, provides insights for the design of support tools and for the further development and optimization of voice-based human-machine interfaces.
Voice assistants such as Alexa or Google Assistant have become an integral part of many consumers' everyday lives. They are appreciated in particular for their voice-based, hands-free control and at times also for their entertaining character. Their most common placement is in the living room or the kitchen, the domestic centers where household members spend most of their time and everyday life takes place. This also means, however, that a lot of data not intended for the voice assistant can potentially be captured and collected in these locations. Consequently, it cannot be ruled out that the voice assistant is activated, even inadvertently, by conversations or sounds and stores recordings, even when the activation is triggered unconsciously by those present or by other devices (e.g., the television), or comes from other rooms. As part of a research project, we interviewed users about their usage and placement practices for voice assistants and also tested a prototype that makes the stored interactions with the voice assistant visible. Based on the insights from the interviews and the guidelines derived from the subsequent user tests of the prototype, this paper presents an application for requesting and visualizing the interaction data of the voice assistant. It can represent interactions and the situation surrounding them by showing the time, the device used, and the command for each interaction, and it makes unexpected behavior such as accidental or incorrect activation visible. In this way, we aim to sensitize consumers to the error-proneness of these devices and to enable more self-determined and secure use.
Most people use disaster apps infrequently, primarily in situations of turmoil, when they are physically or emotionally vulnerable. Personal data may be necessary to help them, and data protections may be waived. In some circumstances, free movement and liberties may be curtailed for public protection, as was seen in the current COVID pandemic. Consuming and producing disaster data can deepen problems arising at the confluence of surveillance and disaster capitalism, where data has become a tool for solutionist instrumentarian power (Zuboff 2019, Klein 2008) and part of a destructive mode of one-world worlding (Law 2015, Escobar 2020). The special use of disaster apps prompts us to ask what role consumer protection could play in safeguarding democratic liberties. Within this work, a set of current approaches is briefly reviewed and two case studies are presented of what we call appropriation or design against datafication. These combine document analysis and literature research with several months of online and field ethnographic observation. The first case study examines disaster app use in response to the 2010 Haiti earthquake; the second explores COVID contact tracing in Taiwan in 2020/21. Against this backdrop we ask, ‘how could and how should consumer protection respond to problems of surveillance disaster capitalism?’ Drawing on our work with the is IT ethical? Exchange, a co-designed community platform and knowledge exchange for disaster information sharing, and a Societal Readiness Assessment Framework that we are developing alongside it, we explore how co-design methodologies could help define answers.
In this paper, we present a solution for testing cultural influences on e-learning in a global context. Based on a metadata approach, we show how cultural influence factors in particular can be determined in order to transfer and adapt learning environments. We present a method for validating those influence factors, both to improve the dynamic metadata specification and for use in the development of (international) e-learning scenarios.
Digitalization is profoundly transforming the mobility sector. In the future, self-driving cars will likely be among the available transport options. This study extends transport and user-acceptance research by analyzing in depth, taking relative partial utilities into account, how the new transport modes of autonomous private car, autonomous car sharing, and autonomous taxi fit into the existing modal mix from today's perspective. To this end, an online survey (n=172) on the relative added value of the new autonomous transport modes was conducted on the basis of user preference theory. It shows that, compared to the private car, users see improvements in driving comfort and use of time for the autonomous modes, but see no advantages or even relative disadvantages in many other areas, in particular driving pleasure and control. Compared to public transport, the autonomous modes offer added value in almost all attributes. This analysis at the level of partial utilities provides a more precise explanation of user acceptance of automated driving.
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While a lot of current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year's ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
Beyond HCI and CSCW: Challenges and Useful Practices Towards a Human-Centred Vision of AI and IA
(2019)
Data has emerged as a central success factor for companies seeking to benefit from digitization. However, the skills for successfully creating value from data, especially at the management level, are not always profound. To address this problem, several canvas models have already been designed. Canvas models are usually created to write down an idea in a structured way to promote transparency and traceability. However, some existing data science canvas models mainly address developers and are thus unsuitable for decision-makers and for communication within interdisciplinary teams. Based on a literature review, we identified influencing factors that are essential for the success of data science projects. With the information gained, the Data Science Canvas was developed in an expert workshop and finally evaluated by practitioners to find out whether such an instrument could support data-driven value creation.
Due to ongoing digitalization, more and more cloud services are finding their way into companies. In this context, data integration across the various software solutions, which are provided both on-premise (local use or licensing for local use of software) and as a service, is of great importance. In this regard, Integration Platform as a Service (IPaaS) models aim to support companies as well as software providers with data integration by providing connectors that enable data flow between different applications and systems, along with other integration services. Since previous research has mostly focused on technical or legal aspects of IPaaS, this article focuses on deriving integration practices as well as design-related barriers and drivers regarding the adoption of IPaaS. To this end, we conducted 10 interviews with experts from different software-as-a-service vendors. Our results show that the main factors in the adoption of IPaaS are the standardization of data models, the usability and variety of the connectors provided, and issues regarding data privacy, security, and transparency.
The corporate landscape is experiencing increasing change in business models due to digitization. The growing availability of data along business processes enhances the opportunities for process automation. Technologies such as Robotic Process Automation (RPA) are widely used for business process optimization, but as a side effect, an increase in stand-alone solutions and a lack of holistic approaches can be observed. Intelligent Process Automation (IPA) is said to support more complex processes and enable automated decision-making, but a lack of connectors makes its implementation difficult. RPA marketplaces can be a bridging technology that helps companies implement Intelligent Process Automation. This paper explores the drivers and challenges for the adoption of RPA marketplaces to realize IPA. For this purpose, we conducted ten expert interviews with decision makers and IT staff from the process automation sector.
Experience gained with free and open source software (FOSS) in public research is shared with the community. The motivation for using and publishing FOSS is to increase visibility, transparency, and feedback quality while at the same time lowering software licensing costs. The idea of giving back and returning value also plays a role. The most frequently given counterarguments are discussed. In the end, it is important to embed FOSS publishing into the organization's strategy for the exploitation of scientific research results. To help with this, a checklist of criteria indicating FOSS publishing is suggested. Against the background of wireless sensor networks, some case studies of FOSS contribution are detailed. The emphasis is on checking the original motivation and the spirit of FOSS against reality. Finally, further potential for publishing FOSS in the context of scientific research is identified.
This paper gives an overview of how we can benefit from using container technology in our academic work. It aims to be a starting point for fellow researchers who are also thinking about applying these technologies. Hence, we focus on describing our own experiences and motivations instead of proving hard scientific facts.
XML Encryption and XML Signature are fundamental security standards forming the core of many applications that process XML-based data. Due to the increased usage of XML in distributed systems and platforms such as SOA and Cloud settings, the demand for robust and effective security mechanisms has increased as well. Recent research, however, discovered substantial vulnerabilities in these standards as well as in the vast majority of the available implementations. Among them, the so-called XML Signature Wrapping attack is one of the most relevant. With the many possible instances of this attack type, it is feasible to annul security systems relying on XML Signature and to gain access to protected resources, as has been successfully demonstrated lately for various Cloud infrastructures and services. This paper contributes a comprehensive approach to robust and effective XML Signatures for SOAP-based Web Services. An architecture is proposed which integrates the required enhancements to ensure fail-safe and robust signature generation and verification. Following this architecture, a hardened XML Signature library has been implemented. The evaluation results show that the developed concept and library provide the targeted robustness against all known kinds of XML Signature Wrapping attacks. Furthermore, the empirical results underline that these security merits come at low efficiency and performance costs and remain compliant with the underlying standards.
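The wrapping attack addressed above exploits a mismatch between the element a signature reference dereferences (e.g., by Id attribute) and the element the application actually processes. A stripped-down sketch of this ambiguity, with invented element names and Id values rather than real SOAP/WS-Security markup:

```python
# Sketch of the XML Signature Wrapping ambiguity: the verifier resolves the
# signed element by Id, while naive application code takes "the first Body".
# Element names and the Id value are illustrative, not a real SOAP message.

import xml.etree.ElementTree as ET

wrapped = """
<Envelope>
  <Header>
    <Wrapper>
      <Body Id="signed">legitimate, signed content</Body>
    </Wrapper>
  </Header>
  <Body>attacker-injected content</Body>
</Envelope>
"""

root = ET.fromstring(wrapped)

# What a verifier checks: the element referenced by Id="signed" (still intact,
# so the signature over it validates even after the message was restructured).
verified = root.find(".//Body[@Id='signed']")

# What naive application logic processes: the first Body child of the Envelope.
processed = root.find("Body")

assert verified.text == "legitimate, signed content"
assert processed.text == "attacker-injected content"
assert verified is not processed  # verification and processing diverge
```

Hardened verification, as pursued by the library described above, closes this gap by ensuring that the element whose signature was verified is exactly the element handed to the application logic.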
Appropriating Digital Fabrication Technologies — A comparative study of two 3D Printing Communities
(2015)
Digital fabrication technologies have great potential for empowering consumers to produce their own creations. However, despite the growing availability of digital fabrication technologies in shared machine shops such as FabLabs or university labs, they are often perceived as difficult to use, especially by users with limited technological aptitude. Hence, it is not yet clear whether the potentials of the technology can be made accessible to a broader public, or whether they will remain limited to some form of “maker elite”. In this paper, we study the appropriation of digital fabrication using the example of 3D printer use in two different communities. In doing so, we analyze how users conceptualize their use of the 3D printers, what kind of contextual understanding is necessary to work with the machines, and how users document and share their knowledge. Based on our empirical findings, we identify the potentials that the machines offer to the communities and the challenges that have to be overcome in their appropriation of the technology.
Consolidating Principles and Patterns for Human-centred Usable Security Research and Development
(2018)
We present an evaluation of usable security principles and patterns to facilitate the transfer of existing knowledge to researchers and practitioners. Based on a literature review, we extracted 23 common usable security principles and 47 usable security patterns and identified their interconnections. The results indicate that current research tends to focus on only a subset of important principles. The fact that some principles are not yet addressed by any design patterns suggests that further work on refining these patterns is needed. We developed an online repository that stores the harmonized principles and patterns. The tool enables users to search for relevant patterns and explore them in an interactive and programmatic manner. We argue that both the insights presented in this paper and the repository will be highly valuable: for students seeking a good overview, for practitioners implementing usable security, and for researchers identifying areas of future research.
The aim of the eighth edition of the scientific workshop “Usable Security and Privacy” at Mensch und Computer 2022 is to present current research and practice contributions and then discuss them with the participants. The workshop is intended to continue and develop an established forum in which experts from different fields, e.g., usability and security engineering, can exchange ideas across disciplines.
At the seventh edition of the scientific workshop "Usable Security and Privacy" at Mensch und Computer 2021, current research and practice contributions will again be presented and then discussed with all participants. This year, two contributions address privacy and two address security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can engage in transdisciplinary exchange.
At the sixth edition of the scientific workshop "Usable Security and Privacy" at Mensch und Computer 2020, current research and practice contributions will, as in previous years, be presented and then discussed with all participants. This year, three contributions address privacy and one addresses security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can engage in transdisciplinary exchange.
Continuing the three successful "Usable Security and Privacy" workshops of the past three years, a fourth full-day scientific workshop at this year's Mensch und Computer will present and discuss six to eight papers in the field of usable security and privacy. Planned are contributions from research and practice that address new user-centred approaches as well as practice-relevant solutions for the user-centred development and design of digital protection mechanisms. The workshop further develops the established forum in which experts from different domains, e.g. usability engineering and security engineering, can engage in transdisciplinary exchange. The organisers run the workshop as a classical scientific workshop: a programme committee reviews the submissions and selects the contributions accepted for presentation, which are also published in the poster and workshop proceedings of Mensch und Computer 2018.
Following the successful opening workshop "Usable Security and Privacy: Nutzerzentrierte Lösungsansätze zum Schutz sensibler Daten" at Mensch und Computer 2015, a second scientific workshop at this year's Mensch und Computer presents and discusses four papers in the field of usable security and privacy. The programme consists of contributions from research and practice that address new user-centred approaches as well as practice-relevant solutions for the user-centred development and design of digital protection mechanisms. The workshop further develops the established forum in which experts from different domains, e.g. usability engineering and security engineering, can engage in transdisciplinary exchange. The organisers run the workshop as a classical scientific workshop: a programme committee reviewed the submissions and selected the contributions accepted for presentation.
Realtime-oriented multimedia communication over the Internet opens up a variety of new applications. This innovative communication platform is of particular interest to globally operating companies: VoIP solutions or groupware applications, for example, can cut costs while improving employee collaboration. The same applies to video conferencing systems. Instead of regular meetings, which usually require business trips for most participants, conferences can be held virtually by transmitting voice and video data over the Internet. The acceptance of such communication applications depends heavily on quality of service and security. The realtime-oriented media data must be transmitted as continuously as possible so that both speech and moving images can be played back without stuttering. Since conferences are company-internal and confidential, they are held behind closed doors; their counterpart in the electronic world must offer an equivalent. Security mechanisms, however, affect quality-of-service parameters, which must be taken into account and balanced when developing techniques to protect multimedia communication. Using the example of an Internet video conferencing system, this paper shows how security mechanisms can be integrated into realtime-oriented multimedia communication applications while taking quality of service (QoS) into account.
This paper presents the security architecture of the @neurIST medical information system. @neurIST aims to provide a research and decision support system for treating diseases that unites multiple medical institutions and service providers offering technical solutions based on the Service Oriented Architecture (SOA) paradigm. The security architecture provides secure access to federated medical data spread across multiple sites and protects patient privacy by pseudonymising the medical data required for the study.
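The pseudonymisation step described above can be illustrated with a minimal sketch. The field names, the HMAC construction and the key handling are assumptions for illustration, not the actual @neurIST scheme: the direct patient identifier is replaced by a keyed hash, so records remain linkable across sites without exposing identity.

```python
import hmac
import hashlib

def pseudonymise(record: dict, secret_key: bytes) -> dict:
    """Replace the direct patient identifier with a keyed hash
    (HMAC-SHA256) so federated records stay linkable across sites
    without revealing identity. Field names are illustrative."""
    pseudonym = hmac.new(secret_key,
                         record["patient_id"].encode(),
                         hashlib.sha256).hexdigest()
    out = {k: v for k, v in record.items() if k != "patient_id"}
    out["pseudonym"] = pseudonym
    return out

record = {"patient_id": "P-1234", "diagnosis": "aneurysm", "site": "A"}
p = pseudonymise(record, b"shared-study-key")
assert "patient_id" not in p and len(p["pseudonym"]) == 64
```

A keyed hash (rather than a plain hash) is used here so that a party without the key cannot reverse the mapping by brute-forcing the typically small identifier space.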
Despite the lack of standardisation for building RESTful HTTP applications, the deployment of REST-based Web Services has attracted increased interest. This gap, however, leads to ambiguous interpretations of REST and induces the design and implementation of REST-based systems following proprietary approaches instead of clear, agreed-upon definitions. Issues arising from these shortcomings affect service properties such as the loose coupling of REST-based services via a unitary service contract and the automatic generation of code. Overcoming such limitations requires at least two prerequisites: specifications for implementing REST-based services, and auxiliaries for auditing the compliance of those services with such specifications. This paper introduces an approach for conformance testing of REST-based Web Services. At first glance this appears contradictory, since there are no specifications for implementing REST with, e.g., the prevalent technology set HTTP/URI to test against. Still, by providing a conformance test tool grounded in current practice, the exploration of service properties is enabled, and the real demand for standardisation becomes explorable. First investigations conducted with the developed conformance test system, targeting major Cloud-based storage services, expose inconsistencies in many respects, which emphasises the necessity for further research and standardisation.
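The idea of expressing REST conventions as executable conformance rules can be sketched as follows. The rule set and the service interface are hypothetical simplifications for illustration; the paper's test system targets real HTTP services, whereas this sketch runs against an in-memory stand-in.

```python
# Two conformance rules in the spirit of HTTP semantics (RFC 7231):
# GET must be "safe", PUT must be idempotent. The service is modelled
# as a callable (method, path, body) -> response for illustration.
def check_safe_get(service):
    """GET must not change resource state (the 'safe' property)."""
    before = service("GET", "/item/1")
    service("GET", "/item/1")          # extra GET must have no effect
    after = service("GET", "/item/1")
    return before == after

def check_idempotent_put(service):
    """Repeating the same PUT must yield the same resulting state."""
    service("PUT", "/item/1", "v1")
    first = service("GET", "/item/1")
    service("PUT", "/item/1", "v1")
    second = service("GET", "/item/1")
    return first == second

def make_toy_service():
    """In-memory stand-in for an HTTP service under test."""
    store = {"/item/1": "v0"}
    def service(method, path, body=None):
        if method == "GET":
            return store.get(path)
        if method == "PUT":
            store[path] = body
            return body
    return service

svc = make_toy_service()
assert check_safe_get(svc)
assert check_idempotent_put(svc)
```

A real conformance test system would issue these requests over HTTP against a deployed service and report every violated rule rather than a single boolean.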
Online media consumption is the main driving force behind the recent growth of the Web. As realtime media in particular becomes accessible from an ever wider range of devices, with contrasting screen resolutions, processing resources and network connectivity, a key requirement is providing users with a seamless multimedia experience at the best possible quality, and hence being able to adapt to the specific device and network conditions. This paper introduces a novel approach for adaptive media streaming in the Web. In contrast to the pervasive pull-based designs based on HTTP, it builds upon a Web-native push-based approach in which both the communication and processing overheads are reduced significantly compared to pull-based counterparts. To maintain these properties when adding adaptation features, server-side monitoring and control need to be developed as a consequence. Such an adaptive push-based media streaming approach is introduced as the main contribution of this work. Moreover, the evaluation results provide evidence that adaptive push-based media delivery can, on the one hand, provide an equivalent quality of experience at lower cost than pull-based media streaming and, on the other hand, achieve improved responsiveness when switching between quality levels at no extra cost.
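The server-side adaptation decision can be sketched as follows. The quality ladder, the safety margin and the selection rule are illustrative assumptions, not the paper's actual algorithm: the server monitors per-client throughput and pushes the highest rendition that the connection can sustain.

```python
# Hypothetical server-side adaptation: pick the highest quality level
# whose bitrate fits within a safety margin of the measured throughput.
QUALITY_LEVELS = [(240, 400), (480, 1200), (720, 2500), (1080, 5000)]  # (height, kbit/s)

def select_quality(throughput_kbps: float, margin: float = 0.8):
    """Return the highest rendition whose bitrate stays below
    margin * throughput; fall back to the lowest level otherwise.
    The margin absorbs short-term throughput fluctuations."""
    best = QUALITY_LEVELS[0]
    for level in QUALITY_LEVELS:
        if level[1] <= margin * throughput_kbps:
            best = level
    return best

assert select_quality(3500) == (720, 2500)   # 2800 kbit/s budget
assert select_quality(300) == (240, 400)     # below lowest rendition
```

Because the decision sits on the server, a quality switch takes effect with the next pushed segment, without waiting for the client's next request as in pull-based designs.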
Usable security puts users at the center of cyber security developments. Software developers are a very specific user group in this respect, since their points of contact with security are application programming interfaces (APIs). In contrast to APIs from other domains, security APIs are not approachable by habitual means: learning by doing through exploration exercises is not well supported, for reasons ranging from missing documentation, tutorials and examples to lacking tools and impenetrable APIs that fail to make this complex matter accessible. In this paper we study which abstraction level of security APIs is most suitable to meet common developers' needs and expectations. For this purpose, we first define the term security API. Following this definition, we introduce a classification of security APIs according to their abstraction level. We then applied this classification in two studies. In the first, we surveyed how far the distinct classes are covered by the standard security functionality of popular software development kits. The second was an online questionnaire in which we asked 55 software developers about their experiences and opinions regarding the integration of security mechanisms into their coding projects. Our findings emphasize that the right abstraction level of a security API is an important aspect of usable security API design that has not been addressed much so far.
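The abstraction-level distinction can be illustrated with a contrived standard-library example. Both helper functions below are hypothetical and not taken from any SDK studied in the paper: the low-level API exposes every security-critical parameter to the developer, while the high-level API bakes in defaults so there is less to get wrong.

```python
import os
import hashlib

# Low-level style: the developer must choose the hash, salt and
# iteration count themselves -- each choice is an opportunity for error.
def derive_key_low_level(password: bytes, salt: bytes, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# High-level style: defaults are baked in; only the password is needed.
# (The iteration count here is an illustrative default, not a mandate.)
def derive_key_high_level(password: bytes):
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

salt, key = derive_key_high_level(b"correct horse")
assert key == derive_key_low_level(b"correct horse", salt, 600_000)
```

In the paper's terms, both functions sit on the same primitive, but they occupy different abstraction classes: the first demands cryptographic knowledge, the second only the intent "derive a key from a password".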
Critical consumerism is complex, as ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer ways to reduce such complexity, current designs are not appropriate for niche practices and use non-personalized, intransparent ethics. To support critical consumption, we conducted a design case study on a personalized food recommender system. To this end, we first conducted an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
Recent publications propose concepts of systems that integrate the various services and data sources of everyday food practices. However, this research does not go beyond the conceptualization of such systems. There is therefore a deficit in understanding how to combine different services and data sources and which design challenges arise from building integrated Household Information Systems. In this paper, we probed the design of an Integrated Household Information System with 13 participants. The results point towards more personalization, automation of storage administration, and enabling flexible artifact ecologies. Our paper contributes to understanding the design and usage of Integrated Household Information Systems as a new class of information systems for HCI research.
With the debates on climate change and sustainability, reducing the share of cars in the modal split has become increasingly prevalent in both public and academic discourse. Besides some motivational approaches, there is a lack of ICT artifacts that successfully raise consumers' ability to adopt sustainable mobility patterns. To further understand the requirements and the design of these artifacts within everyday mobility, we adopted a practice lens. This lens is helpful for gaining a broader perspective on the use of ICT artifacts along consumers' transformational journey towards sustainable mobility practices. Based on 12 retrospective interviews with car-free mobility consumers, we argue that artifacts should not be viewed as "magic-bullet" solutions but should accompany the complex transformation of practices in multifaceted ways. Moreover, we highlight in particular the difficulties of appropriating shared infrastructures and aligning one's own practices with them. This opens up a design space to provide more support for these kinds of material interactions and to make consumption infrastructures accessible and usable, rather than leaving consumers alone with increased motivation.
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 pp short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
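The ranking step can be sketched as follows. The pattern strings, the toy knowledge base and the scoring function are placeholders; in FRED, the patterns are learned evolutionarily and scored by a transformer-based regressor.

```python
# Sketch of scoring learned SPARQL patterns per question and ranking
# the retrieved candidate entities by their best pattern's score.
def rank_candidates(question, patterns, retrieve, predict_f1):
    """Each entity inherits the highest predicted F1 among the
    patterns that retrieved it; entities are ranked by that score."""
    scores = {}
    for pattern in patterns:
        f1 = predict_f1(question, pattern)   # regressor estimate
        for entity in retrieve(pattern):     # knowledge-base lookup
            scores[entity] = max(scores.get(entity, 0.0), f1)
    return sorted(scores, key=scores.get, reverse=True)

# Toy stand-ins for the knowledge base and the regressor:
patterns = ["?x dbo:author ?e", "?x dbo:director ?e"]
kb = {"?x dbo:author ?e": ["Tolkien"], "?x dbo:director ?e": ["Jackson"]}
ranked = rank_candidates("Who wrote it?", patterns, kb.get,
                         lambda q, p: 0.9 if "author" in p else 0.2)
assert ranked == ["Tolkien", "Jackson"]
```

Because each returned entity is tied to the pattern that produced it, the system can show *which* SPARQL pattern justified an answer, which is the source of the interpretability claimed in the abstract.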
Threats to passwords are still very relevant due to attacks like phishing or credential stuffing. One way to solve this problem is to remove passwords completely. User studies on passwordless FIDO2 authentication using security tokens demonstrated its potential to replace passwords. However, widespread acceptance of FIDO2 depends, among other things, on how user accounts can be recovered when the security token becomes permanently unavailable. For this reason, we provide a heuristic evaluation of 12 account recovery mechanisms regarding their suitability for FIDO2 passwordless authentication. Our results show that the currently used methods have many drawbacks. Some even rely on passwords, which defeats the purpose of passwordless authentication. Still, our evaluation identifies promising account recovery solutions and provides recommendations for further studies.
Artificial intelligence in autonomous vehicles processes enormous amounts of data. During the operation of such a vehicle, every movement is based on data-driven, automated and adaptive decision-making. But considerable amounts of data from vehicles in real traffic, such as video sequences from camera recordings, are also required to develop rules for recognition and decision-making in complex situations like highly individual traffic scenarios (AI training). From the perspective of vehicle development, it is attractive for AI training to tap the data trove that the entirety of vehicles can generate in their real application context. As users and occupants, consumers thus become part of a large-scale collection of test data by vehicle manufacturers and providers. This raises data protection questions. The aim of this paper is to work out what implications this has for the rights and freedoms of consumers, and which mechanisms current law and ongoing legislative developments provide to reconcile and balance the AI's "hunger for data" with the interests in data sovereignty and informational self-determination. A particular focus is on how such requirements can be considered from the product design stage onwards and can thereby strengthen both legal compliance and consumer trust.
Data transfer and staging services are common components in Grid-based or, more generally, service-oriented applications. Security mechanisms play a central role in such services, especially when they are deployed in sensitive application fields like e-health. The adoption of WS-Security and related standards for SOAP-based transfer services is, however, problematic, as a straightforward combination of SOAP with MTOM introduces considerable inefficiencies in the signature generation process when large data sets are involved. This paper proposes a non-blocking signature generation approach that enables stream-like processing with considerable performance enhancements.
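The benefit of stream-like processing can be illustrated with an incremental digest sketch. This is a stand-in for the actual XML-signature pipeline, which involves WS-Security specifics not shown here: the hash is fed chunk by chunk, so signing a large attachment never requires buffering the whole payload in memory.

```python
import hashlib

def streaming_digest(chunks, algorithm="sha256"):
    """Feed data chunk by chunk into the hash so that computing the
    digest of a large MTOM attachment needs only one chunk's worth
    of memory instead of the full payload."""
    h = hashlib.new(algorithm)
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# The chunked, incremental digest equals the one-shot digest:
data = b"x" * 1_000_000
chunked = (data[i:i + 65536] for i in range(0, len(data), 65536))
assert streaming_digest(chunked) == hashlib.sha256(data).hexdigest()
```

In a non-blocking design, this digest computation can overlap with the network transfer of the attachment, so signature generation no longer stalls the transfer of large data sets.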