Refine
H-BRS Bibliography
- yes (87)
Departments, institutes and facilities
- Fachbereich Informatik (38)
- Fachbereich Ingenieurwissenschaften und Kommunikation (21)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (17)
- Fachbereich Wirtschaftswissenschaften (15)
- Institut für Verbraucherinformatik (IVI) (14)
- Institut für Cyber Security & Privacy (ICSP) (13)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (8)
- Fachbereich Sozialpolitik und Soziale Sicherung (5)
- Institute of Visual Computing (IVC) (4)
- Zentrum für Innovation und Entwicklung in der Lehre (ZIEL) (3)
Document Type
- Conference Object (87)
Year of publication
- 2021 (87)
Keywords
- Augmented Reality (3)
- Big Data Analysis (2)
- Cognitive robot control (2)
- Explainable robotics (2)
- Learning from experience (2)
- Usable Privacy (2)
- Usable Security (2)
- AES (1)
- AR design (1)
- AR development (1)
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant for the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants of this process from the experimentally measured polarization curve is presented. The approach combines statistical analysis, principal component analysis and the determination of a trust region for a linearized model. It is shown that this experiment does not allow one to accurately determine the reaction constants, but only some of their linear combinations. Possibilities for extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentrations of the main reagents, are discussed.
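The identifiability argument above can be illustrated with a small numerical sketch: a singular value decomposition of a hypothetical sensitivity matrix of the polarization curve with respect to the reaction constants shows which linear combinations of the constants the data actually determine. The matrix values are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical sensitivity (Jacobian) matrix of the measured polarization
# curve with respect to three reaction constants, evaluated at a reference
# parameter set: rows = measurement points, columns = reaction constants.
J = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 4.0, 6.0],   # proportional to row 1 -> poor identifiability
    [1.0, 2.1, 3.0],
])

# Singular value decomposition: the right singular vectors give linear
# combinations of the constants; the singular values measure how well
# each combination is constrained by the data.
U, s, Vt = np.linalg.svd(J)

# Combinations whose singular values are far below the largest one are
# effectively undetermined by this experiment.
determined = s / s[0] > 1e-3
print("relative singular values:", s / s[0])
print("well-determined combinations:", int(determined.sum()), "of", len(s))
```

With this synthetic matrix, only two of the three relative singular values are non-negligible, so only two linear combinations of the constants are recoverable, mirroring the situation described in the abstract.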
A concept for dealing with exam stress and learning blocks among students in the introductory study phase
(2021)
The seventh edition of the scientific workshop "Usable Security und Privacy" at Mensch und Computer 2021 will again present current contributions from research and practice and then discuss them with all participants. This year, two contributions address privacy and two address security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can exchange ideas across disciplines.
In Robot-Assisted Therapy for children with Autism Spectrum Disorder, the therapists' workload is increased by the necessity of controlling the robot manually. The solution to this problem is to increase the level of autonomy of the system: the robot should interpret and adapt to the behaviour of the child under therapy. The problem that we are addressing is to develop a behaviour model for the robot's decision-making process that learns how to react adequately to certain child reactions. We propose the use of reinforcement learning for this task, where the feedback for learning is obtained from the therapist's evaluation of the robot's behaviour.
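As a rough illustration of the proposed approach, the following sketch uses tabular, bandit-style Q-learning with a stand-in reward function in place of the therapist's evaluation. All states, actions, and ratings are hypothetical toy values, not from the study.

```python
import random

# Minimal tabular Q-learning sketch: states are (hypothetical) child
# engagement levels, actions are robot behaviours, and the reward stands
# in for the therapist's rating of the robot's reaction.
STATES = ["disengaged", "neutral", "engaged"]
ACTIONS = ["encourage", "wait", "change_activity"]

def therapist_reward(state, action):
    # Hypothetical feedback: encouraging a disengaged child and waiting
    # for an engaged child are rated positively, everything else mildly
    # negatively.
    good = {("disengaged", "encourage"), ("engaged", "wait")}
    return 1.0 if (state, action) in good else -0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.5, 0.2
random.seed(0)

for _ in range(2000):
    s = random.choice(STATES)
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    r = therapist_reward(s, a)
    # single-step (bandit-style) update, ignoring state transitions
    Q[(s, a)] += alpha * (r - Q[(s, a)])

best = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
print(best)
```

After training, the learned policy reproduces the toy feedback rule: it encourages a disengaged child and waits for an engaged one.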
The rapid increase in solar photovoltaic (PV) installations worldwide has resulted in the electricity grid becoming increasingly dependent on atmospheric conditions, thus requiring more accurate forecasts of incoming solar irradiance. In this context, measured data from PV systems are a valuable source of information about the optical properties of the atmosphere, in particular the cloud optical depth (COD). This work reports first results from an inversion algorithm developed to infer global, direct and diffuse irradiance as well as atmospheric optical properties from PV power measurements, with the goal of assimilating this information into numerical weather prediction (NWP) models.
In the atmospheric sciences, the Earth's radiation budget plays an important role in our understanding of the climate system. Mature satellite products deliver decadal climate time series with such high accuracy that, for example, changes related to climate change can be detected. This applies in particular to solar radiative fluxes at the Earth's surface. When comparing these satellite products with instantaneous ground-based radiation observations, however, substantial deviations are often found, caused mainly by small-scale variability in the spatial structure of clouds and their radiative effect. It should also be kept in mind that ground observations correspond almost to a point measurement, whereas satellite pixels sample an area on the order of square kilometres.
West Africa has great potential for the application of solar energy systems, as it combines high levels of solar irradiance with a lack of energy production. Southern West Africa is a region with a very high aerosol load. Urbanization, uncontrolled fires and traffic as well as power plants and oil rigs lead to increasing anthropogenic emissions. Naturally circulating north winds bring mineral dust from the Sahel and Sahara, while monsoon flows bring sea salt and other oceanic compounds from the south. The EU-funded Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa (DACCIWA) project (2014–2018) delivered the most complete dataset of the atmosphere over the region to date. In our study, we use in-situ measured optical properties of aerosols from the airborne campaign over the Gulf of Guinea and inland, and from ground measurements in coastal cities.
The first consumer forum on consumer informatics (Verbraucherinformatik) took place at Hochschule Bonn-Rhein-Sieg on Thursday, 23 September 2021. During the one-day online event, more than 30 participants discussed topics and ideas around consumer data protection. Contributions came from computer science, the consumer and social sciences, as well as from a regulatory perspective. This article presents the background of the event and reports on the contents of the talks as well as starting points for the further establishment of consumer informatics. The event was organised by the Institut für Verbraucherinformatik at H-BRS in cooperation with the Chair of IT Security at the University of Siegen and the Kompetenzzentrum Verbraucherforschung NRW of the Verbraucherzentrale NRW e. V., funded by the Federal Ministry of Justice and Consumer Protection.
Blockchain technology has been one of the major drivers of innovation in recent years. With an underlying blockchain, the operation of distributed applications, so-called Decentralized Applications (DApps), is already technically feasible. This article aims to examine design options for digital consumer participation in blockchain applications. To this end, it provides an introduction to digital consumer participation and to the technical foundations and properties of blockchain technology, including DApps built on it. Finally, technical, ethical-organisational, legal and other requirement areas for implementing digital consumer participation in blockchain applications are addressed.
Frequently, the main purpose of domestic artifacts equipped with smart sensors is to hide the technology, as previous examples of Smart Mirrors show. However, current Smart Homes often fail to provide meaningful IoT applications for all residents' needs. To design beyond efficiency and productivity, we propose to realize the potential of the traditional artifact for calm and engaging experiences. We therefore followed a design case study approach with 22 participants in total. After an initial focus group, we conducted a diary study to examine home routines and developed a conceptual design. The evaluation of our mid-fidelity prototype shows that we need to study the residents' practices carefully in order to leverage the physical material of the artifact to fit their routines. Our Smart Mirror, enhanced by digital qualities, supports meaningful activities and makes the bathroom more appealing. On this basis, we discuss domestic technology design beyond automation.
Recent publications propose concepts of systems that integrate the various services and data sources of everyday food practices. However, this research does not go beyond the conceptualization of such systems. Therefore, there is a deficit in understanding how to combine different services and data sources and which design challenges arise from building integrated Household Information Systems. In this paper, we probed the design of an Integrated Household Information System with 13 participants. The results point towards more personalization, automatization of storage administration and enabling flexible artifact ecologies. Our paper contributes to understanding the design and usage of Integrated Household Information Systems, as a new class of information systems for HCI research.
With the debates on climate change and sustainability, reducing the share of cars in the modal split has become increasingly prevalent in both public and academic discourse. Apart from some motivational approaches, there is a lack of ICT artifacts that successfully raise consumers' ability to adopt sustainable mobility patterns. To further understand the requirements and the design of such artifacts within everyday mobility, we adopted a practice lens. This lens is helpful for gaining a broader perspective on the use of ICT artifacts along consumers' transformational journey towards sustainable mobility practices. Based on 12 retrospective interviews with car-free mobility consumers, we argue that artifacts should not be viewed as 'magic-bullet' solutions but should accompany the complex transformation of practices in multifaceted ways. Moreover, we highlight in particular the difficulties of appropriating shared infrastructures and aligning one's own practices with them. This opens up a design space for providing more support for these kinds of material interactions, in order to provide access to consumption infrastructures and make them usable, rather than leaving consumers alone with increased motivation.
The digital transformation is profoundly changing international cooperation between universities. Beyond the possibilities of virtual mobility, new topic areas are emerging that change, complement or newly enable international learning and teaching experiences with digital support. In the area of internationalisation funding (DAAD, Erasmus+, BMBF and others), projects and funding formats have emerged that combine digitalisation and internationalisation and address these new topics, e.g. didactic formats, administrative processes (including in the context of the OZG and the GDPR), virtual and hybrid mobility, international project and team formats, and finally content that combines international, intercultural and interdisciplinary competencies with digital competencies. The proposed workshop aims to bring together such projects and structure the topics in order to create an overview of developments and thus contribute to defining the field of "digitalisation & internationalisation".
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
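To make the idea of feature-based login risk concrete, here is a minimal sketch of one common style of RBA scoring: the rarer a login attempt's feature values are in the user's history, the higher the risk score. The features, history values, and smoothing scheme are simplified assumptions for illustration, not the implementations analysed in the paper.

```python
from collections import Counter

# Hypothetical login history of one user: each entry records the feature
# values observed at a successful login.
history = [
    {"ip_country": "DE", "browser": "Firefox"},
    {"ip_country": "DE", "browser": "Firefox"},
    {"ip_country": "DE", "browser": "Chrome"},
]

def risk_score(attempt, history, smoothing=1.0):
    """Product over features of 1 / p(value | history), Laplace-smoothed.
    Higher scores indicate a more unusual login attempt."""
    score = 1.0
    for feature, value in attempt.items():
        counts = Counter(h[feature] for h in history)
        # reserve probability mass for one unseen value via smoothing
        p = (counts[value] + smoothing) / (len(history) + smoothing * (len(counts) + 1))
        score *= 1.0 / p
    return score

familiar = risk_score({"ip_country": "DE", "browser": "Firefox"}, history)
unusual = risk_score({"ip_country": "CN", "browser": "Firefox"}, history)
print(f"familiar={familiar:.1f} unusual={unusual:.1f}")
```

A login from an unseen country scores markedly higher than one matching the history, which is the signal an RBA system would use to request additional proof of identity.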
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. In a literature review, we analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and the documentation of six industry practitioners. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.
Start-ups as Employers
(2021)
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While the use of desktop systems is feasible to support the designer, it is challenging for a non-domain expert to get the right impression of the appearances of surfaces due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important. Advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts. Here, the complex interaction of light sources and the camera position must be controlled using software controls. As a result, only small parts of the data set can be properly inspected at a time. Also, real-world lighting is not considered here. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects like real-world reflections, live image-based lighting, and the interaction with the design using virtual light sources employing natural interaction techniques that allow for a more accurate inspection even for non-domain experts.
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
Risk-based authentication (RBA) extends authentication mechanisms to make them more robust against account takeover attacks, such as those using stolen passwords. RBA is recommended by NIST and the NCSC to strengthen password-based authentication, and is already used by major online services. Moreover, users consider RBA to be more usable than two-factor authentication and just as secure. However, users currently obtain RBA's high security and usability benefits at the cost of exposing potentially sensitive personal data (e.g., IP address or browser information). This conflicts with user privacy and requires consideration of user rights regarding the processing of personal data. We outline potential privacy challenges regarding different attacker models and propose improvements to balance privacy in RBA systems. To estimate the properties of the privacy-preserving RBA enhancements in practical environments, we evaluated a subset of them with long-term data from 780 users of a real-world online service. Our results show the potential to increase privacy in RBA solutions. However, this is limited to certain parameters, which should guide RBA design to protect privacy. We outline research directions that need to be considered to achieve widespread adoption of privacy-preserving RBA with high user acceptance.
Components and Architecture for the Implementation of Technology-Driven Employee Data Protection
(2021)
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on specific faults that arise from the interaction of a robot with its real-world environment due to insufficient knowledge for action execution. In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not previously propose a suitable representation for the symbolic model. In this work, we investigate such a symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation and the most suitable learning paradigm are identified.
In contrast to the German power supply, the energy supply in many West African countries is very unstable. Frequent power outages are not uncommon. Especially for critical infrastructures, such as hospitals, a stable power supply is vital. To compensate for the power outages, diesel generators are often used. In the future, these systems will increasingly be supplemented by PV systems and storage, so that the generator will have to be used less or not at all when needed. For the design and operation of such systems, it is necessary to better understand the atmospheric variability of PV power generation. For example, there are large variations between rainy and dry seasons, between days with high and low dust levels - caused by sandstorms (harmattan) or urban air pollution.
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular so that self-sufficiency and battery lifetime are maximised. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models are employed. The latter are however often inaccurate, both due to deficiencies in model physics as well as an insufficient description of irradiance variability.
New communication technologies are changing the way we work and communicate with people around the world. Given this reality, students in Higher Education (HE) worldwide need to develop knowledge in their area of study as well as attitudes and values that will enable them to be responsible and ethical global citizens in the workforce they will soon enter, regardless of their degree. Different institutional and country-specific requirements are important factors when developing an international Virtual Exchange (VE) program. Digital learning environments such as ProGlobe – Promoting the Global Exchange of Ideas on Sustainable Goals, Practices, and Cultural Diversity – offer a platform for collaborating with diverse students around the world to share and reflect on ideas on sustainable practices. Students work together virtually on a joint interdisciplinary project that aims to create knowledge and foster cultural diversity. This project was successfully integrated into each country's course syllabus through a common global theme: sustainability. The focus of this paper is to present multi-disciplinary perspectives on the opportunities and challenges in implementing a VE project in HE. Furthermore, it presents the challenges that country coordinators dealt with when planning and implementing their project. Given the disparity found in each course syllabus, project coordinators handled the project goal, approach, and assessment individually for their specific course and program. Not only did the students and faculty gain valuable insight into different aspects of collaboration when working in interdisciplinary HE projects, they also reflected on their own impact on the environment and learned to listen to how people in different countries deal with environmental issues.
This approach provided students with meaningful intercultural experiences that helped them link ideas and concepts about a global issue through the lens of their own discipline as well as other disciplines worldwide.
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
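The decoding constraint can be illustrated with a toy example: at each step, tokens that the grammar cannot accept are masked out before taking the argmax. Here a hand-written acceptor for a trivial SQL-like grammar stands in for the LR parser, and a fixed score table stands in for the decoder logits; both are invented for illustration and are not the paper's model.

```python
# Valid sequences in the toy grammar: "select" <field> "from" <table> <eos>
FIELDS, TABLES = {"name", "age"}, {"users", "cities"}
VOCAB = ["select", "from", "name", "age", "users", "cities", "<eos>"]

def allowed_next(prefix):
    """Stand-in for querying the parser state: which tokens may follow."""
    if len(prefix) == 0:
        return {"select"}
    if len(prefix) == 1:
        return FIELDS
    if len(prefix) == 2:
        return {"from"}
    if len(prefix) == 3:
        return TABLES
    return {"<eos>"}

def constrained_decode(score_fn, max_len=5):
    seq = []
    while len(seq) < max_len:
        step_scores = score_fn(seq)
        mask = allowed_next(seq)
        # pick the highest-scoring token that the grammar allows
        tok = max((t for t in VOCAB if t in mask), key=lambda t: step_scores[t])
        if tok == "<eos>":
            break
        seq.append(tok)
    return seq

# Fixed stand-in "logits": an unconstrained argmax would emit "from"
# immediately; the mask forces a grammatical sequence instead.
scores = {t: 1.0 if t == "from" else 0.5 for t in VOCAB}
scores["age"], scores["users"] = 0.9, 0.8
decoded = constrained_decode(lambda seq: scores)
print(decoded)
```

Despite "from" having the highest raw score at every step, the decoded sequence is syntactically valid under the toy grammar.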
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
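A minimal sketch of the scoring idea, with synthetic flow fields standing in for the observed and predicted optical flow: the anomaly score is the mean motion-prediction error per frame, and the area under the ROC curve is computed directly from the scores of nominal and anomalous frames. All data here are invented stand-ins, not the book-placing dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly_score(observed_flow, predicted_flow):
    # mean magnitude of the per-pixel error between the two flow fields
    return float(np.mean(np.linalg.norm(observed_flow - predicted_flow, axis=-1)))

def auroc(scores, labels):
    # probability that a random anomalous frame scores higher than a
    # random nominal one (equal to the area under the ROC curve)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# nominal frames: prediction close to observation; anomalous frames: far off
nominal = [anomaly_score(f := rng.normal(size=(8, 8, 2)),
                         f + rng.normal(scale=0.1, size=(8, 8, 2)))
           for _ in range(20)]
anomalous = [anomaly_score(f := rng.normal(size=(8, 8, 2)),
                           f + rng.normal(scale=1.0, size=(8, 8, 2)))
             for _ in range(20)]
score = auroc(nominal + anomalous, [0] * 20 + [1] * 20)
print(f"AUROC = {score:.2f}")
```

With clearly separated error magnitudes, the AUROC is close to 1; in practice the separation is much weaker, which is why the paper's motion modelling improvements matter.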
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations, composed of a piecewise linear 'free' model for the control logic and a non-linear 'advanced' model for the calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of the system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a convergence rate of nearly 100% is achieved with this approach.
While stationary self-checkout has been widely introduced and is well understood, previous research has barely examined newer generations of smartphone-based Scan&Go. Especially from a design perspective, we know little about the factors contributing to the adoption of Scan&Go solutions and how design enables consumers to take full advantage of this development rather than being burdened with complex and unenjoyable systems. To understand the influencing factors and the design from a consumer perspective, we conducted a mixed-methods study in which we triangulated data from an online survey with 103 participants and a qualitative study with 20 participants. Based on the results, our study presents a refined and nuanced understanding of the technology- as well as infrastructure-related factors that influence adoption. Moreover, we present several implications for designing and implementing Scan&Go in retail environments.
Due to ongoing digitalization, more and more cloud services are finding their way into companies. In this context, data integration across the various software solutions, which are provided both on-premise (local use or licensing for local use of software) and as a service, is of great importance. In this regard, Integration Platform as a Service (IPaaS) models aim to support companies as well as software providers with data integration by providing connectors that enable data flow between different applications and systems, along with other integration services. Since previous research has mostly focused on technical or legal aspects of IPaaS, this article focuses on deriving integration practices as well as design-related barriers and drivers regarding the adoption of IPaaS. To this end, we conducted 10 interviews with experts from different Software-as-a-Service vendors. Our results show that the main factors regarding the adoption of IPaaS are the standardization of data models, the usability and variety of the connectors provided, and issues regarding data privacy, security, and transparency.
With the growth of online retailing, recommendation systems that derive recommendations from customers' purchase histories have become established. Recommending suitable food products can represent a lucrative added value for food retailers, but at the same time challenges them to make good predictions for repeated food purchases. Repeat-purchase recommendations, which predict when a product will be purchased again by a customer, have been little explored in the literature. They are especially important for food recommendations, since it is not the frequency of an item in the shopping basket that is relevant for determining repeat-purchase intervals, but rather the time differences between its purchases. In this paper, in addition to critically reflecting on classical recommendation systems in the repeat-purchase context, two models for online product recommendations are derived from the literature, validated, and discussed for the food context using real transaction data from a German brick-and-mortar food retailer.
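The core of repeat-purchase prediction can be sketched in a few lines: estimate a customer's typical inter-purchase interval for a product from the gaps between past purchase dates, then project the next purchase. The transaction dates below are invented, and real models would of course handle noise, seasonality, and cold-start cases.

```python
from datetime import date, timedelta

# Hypothetical purchase dates of one product by one customer
purchases = [date(2021, 1, 4), date(2021, 1, 18), date(2021, 2, 1), date(2021, 2, 16)]

# gaps between successive purchases in days
gaps = [(b - a).days for a, b in zip(purchases, purchases[1:])]
mean_gap = sum(gaps) / len(gaps)

# naive prediction: last purchase plus the average interval
next_purchase = purchases[-1] + timedelta(days=round(mean_gap))
print(gaps, round(mean_gap, 1), next_purchase)
```

This is exactly the point made in the abstract: the prediction is driven by the differences between purchase dates, not by how often the item appears in the basket overall.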
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot’s correct behavior when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim towards evaluating the correctness of robot behaviors in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
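The flavour of the approach can be conveyed with a small property-based check in plain Python: randomised tabletop scenarios are generated, a toy pick action is simulated, and a post-condition ("property") of the action is verified in every scenario. The scene model, action, and property are hypothetical stand-ins, not the paper's ontology-driven framework.

```python
import random

random.seed(1)

def simulate_pick(scene):
    """Toy action model: picking succeeds only if the object is on the
    table and within the arm's reach."""
    reachable = abs(scene["x"]) <= 0.5 and abs(scene["y"]) <= 0.5
    if scene["on_table"] and reachable:
        return {"holding": True, "object_pose": None}
    return {"holding": False, "object_pose": (scene["x"], scene["y"])}

def property_holds(scene, result):
    # property: after a successful pick the object no longer has a pose
    # on the table; after a failed pick it still does
    if result["holding"]:
        return result["object_pose"] is None
    return result["object_pose"] is not None

# generate randomised tabletop scenarios
scenarios = [
    {"x": random.uniform(-1, 1), "y": random.uniform(-1, 1),
     "on_table": random.random() < 0.8}
    for _ in range(100)
]
failures = [s for s in scenarios if not property_holds(s, simulate_pick(s))]
print(f"{len(failures)} property violations in {len(scenarios)} scenarios")
```

Libraries such as Hypothesis automate the scenario generation and shrinking; the sketch above only shows the generate-simulate-verify loop that property-based testing builds on.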
At H-BRS, a university of applied sciences with around 9,000 students, an OER culture was deliberately established in three steps as part of the strategy for digitalising teaching: (1) joint strategy building as part of a participatory university development plan: anchoring OER in the digitalisation strategy; (2) building on the networking of experts, the successful acquisition of OER projects, which are presented as examples; (3) permanent strategic anchoring, based on continuous internal and external networking, the establishment of digital exchange platforms for teachers, the transfer of the OER idea (cooperation, exchange, multiple use) to higher-education didactics, and regular calls for funding measures.
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects and in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows using ontological knowledge as prior information that is then refined by the robot's own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – and show that the robot can deduce cases in which an existing policy can generalise to other objects and cases in which additional execution knowledge has to be acquired.
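A minimal sketch of the transfer rule described above: a known execution model is reused for ontologically related object classes unless there is evidence against it, and otherwise the robot falls back to additional learning. The ontology, models, and evidence set are invented toy data, not the paper's representation.

```python
# Toy ontology: child class -> parent class
ontology = {
    "cup": "container", "bowl": "container", "bottle": "container",
    "container": "object", "book": "object",
}

known_models = {"cup": "grasp_top"}           # learned execution models
evidence_against = {("grasp_top", "bottle")}  # e.g. observed failed attempts

def model_for(obj):
    """Return a transferable execution model for obj, or None if new
    execution knowledge has to be acquired."""
    if obj in known_models:
        return known_models[obj]
    parent = ontology.get(obj)
    for other, model in known_models.items():
        # sibling class (same parent) or the parent class itself
        related = ontology.get(other) == parent or other == parent
        if related and (model, obj) not in evidence_against:
            return model
    return None  # additional learning needed

print(model_for("bowl"), model_for("bottle"), model_for("book"))
```

The bowl inherits the cup's grasping model, the bottle is blocked by contrary evidence, and the book, being unrelated, triggers additional learning.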
Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users make better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with the data, for instance through filtering and categorization. Based on an evaluation with eleven households, our paper provides insights into how users reflect upon their data trails and presents design guidelines for supporting consumers’ data literacy in the context of VAs.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
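The quality-diversity search over an explicit parametric encoding can be sketched with a minimal MAP-Elites-style loop. All names, the fitness function, and the behaviour descriptor below are illustrative stand-ins, not the paper's setup; the point is how an archive of behaviourally distinct elites is filled from a parameter space.

```python
import random

def fitness(x):
    """Illustrative quality measure of an artifact parameterised by x."""
    return -(x[0] ** 2 + x[1] ** 2)

def descriptor(x):
    """Behaviour descriptor: bin the parameters into 0.1-wide cells."""
    return (round(x[0], 1), round(x[1], 1))

def map_elites(iterations=2000, seed=0):
    """Fill an archive of elites: one best solution per descriptor cell."""
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, parameters)
    for _ in range(iterations):
        if archive and rng.random() < 0.8:
            # Mutate a random elite from the archive.
            _, parent = rng.choice(list(archive.values()))
            x = [max(-1.0, min(1.0, p + rng.gauss(0, 0.1))) for p in parent]
        else:
            # Occasionally sample the parameter space uniformly.
            x = [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
        cell, f = descriptor(x), fitness(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive
```

The number of filled cells is a simple diversity measure; the paper's comparison amounts to running such a loop once over a parametric encoding and once over a learned latent space and comparing the resulting archive coverage.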
Characterization of Urban Radio Channels and Base Station Antenna Correlation in the 3.75 GHz Band
(2021)
Less is Often More: Header Whitelisting as Semantic Gap Mitigation in HTTP-Based Software Systems
(2021)
The web is the most widespread digital system in the world and is used for many crucial applications. This makes web application security extremely important, and although there are already many security measures, new vulnerabilities are constantly being discovered. One reason for some of the recent discoveries lies in the presence of intermediate systems—e.g. caches, message routers, and load balancers—on the path between a client and a web application server. The implementations of such intermediaries may interpret HTTP messages differently, which leads to semantically different understandings of the same message. This so-called semantic gap can cause weaknesses in the entire HTTP message processing chain.
In this paper, we introduce the header whitelisting (HWL) approach to address the semantic gap in HTTP message processing pipelines. The basic idea is to normalize and reduce an HTTP request's header fields to the minimum required set using a whitelist before processing the request in an intermediary or on the server, and then to restore the original request for the next hop. Our results show that HWL can avoid misinterpretations of HTTP messages in the different components and thus prevent many attacks rooted in a semantic gap, including request smuggling, cache poisoning, and authentication bypass.
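The reduce-then-restore idea can be illustrated with a short sketch. The whitelist contents and function names are our own illustrative choices, not the paper's implementation: header fields outside the whitelist are stashed before a hop processes the request and merged back afterwards, so e.g. a smuggling-prone `Transfer-Encoding` field never reaches an intermediary that only needs `Content-Length`.

```python
# Illustrative whitelist; a real deployment would derive this per component.
WHITELIST = {"host", "content-length", "content-type", "authorization"}

def apply_hwl(headers):
    """Split headers into the whitelisted view (processed by this hop)
    and the stashed remainder (kept for restoring the original request)."""
    kept, stashed = {}, {}
    for name, value in headers.items():
        (kept if name.lower() in WHITELIST else stashed)[name] = value
    return kept, stashed

def restore(kept, stashed):
    """Rebuild the original header set before forwarding to the next hop."""
    merged = dict(kept)
    merged.update(stashed)
    return merged
```

A request carrying both `Content-Length` and `Transfer-Encoding` (a classic request-smuggling vector) is thus seen by the intermediary with only the whitelisted framing field, while the original request survives the round trip unchanged.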
XML Signature Wrapping (XSW) has remained a relevant threat to web services for 15 years. Using the Personal Health Record (PHR), which is currently under development in Germany, we investigate a contemporary SOAP-based web services system as a case study. In doing so, we highlight several deficiencies in defending against XSW. Motivated by this real-world example, we introduce a guideline for more secure XML signature processing that provides practitioners with easier access to the effective countermeasures identified in the current state of research.
Threats to passwords are still very relevant due to attacks like phishing or credential stuffing. One way to solve this problem is to remove passwords completely. User studies on passwordless FIDO2 authentication using security tokens demonstrated the potential to replace passwords. However, widespread acceptance of FIDO2 depends, among other things, on how user accounts can be recovered when the security token becomes permanently unavailable. For this reason, we provide a heuristic evaluation of 12 account recovery mechanisms regarding their properties for FIDO2 passwordless authentication. Our results show that the currently used methods have many drawbacks. Some even rely on passwords, taking passwordless authentication ad absurdum. Still, our evaluation identifies promising account recovery solutions and provides recommendations for further studies.
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer ways to reduce such complexity, current designs are not appropriate for niche practices and rely on non-personalized, intransparent ethics. To support critical consumption, we conducted a design case study on a personalized food recommender system: we first carried out an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
Atomic oxygen in the mesosphere and lower thermosphere measured by terahertz heterodyne spectroscopy
(2021)
Atomic oxygen is a main component of the mesosphere and lower thermosphere (MLT). The photochemistry and the energy balance of the MLT are governed by atomic oxygen, and it also serves as a tracer for dynamical motions in the MLT. However, it is difficult to measure with remote-sensing techniques. Concentrations can be inferred indirectly from the oxygen airglow or from observations of OH, which is involved in photochemical processes related to atomic oxygen. Such measurements have been performed with several satellite instruments such as SCIAMACHY, SABER, WINDII and OSIRIS. However, the methods are indirect and rely on photochemical models and assumptions about, for example, quenching rates, radiative lifetimes, and reaction coefficients. The results are not always in agreement, particularly when obtained with different instruments.
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
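The inverse step at the heart of such an algorithm can be illustrated with a deliberately simplified PV performance model. This is our own sketch with illustrative default coefficients, not the algorithm reported in the work: power is modelled as irradiance times array area, efficiency, and a linear temperature derate, and the model is solved for irradiance.

```python
def irradiance_from_power(p_ac_w, area_m2=10.0, efficiency=0.18,
                          temp_coeff=0.004, cell_temp_c=25.0):
    """Estimate global irradiance G (W/m^2) from measured PV power.

    Simplified forward model: P = G * A * eta * (1 - gamma * (T_cell - 25 C)),
    inverted here for G. All defaults are illustrative, not fitted values,
    and real algorithms must additionally handle clouds, soiling, inverter
    clipping, and panel orientation.
    """
    derate = 1.0 - temp_coeff * (cell_temp_c - 25.0)
    return p_ac_w / (area_m2 * efficiency * derate)
```

For example, a 10 m² array at 18 % efficiency producing 1800 W at 25 °C implies roughly 1000 W/m²; at higher cell temperatures the same power implies a higher irradiance because of the thermal derate.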
Sharing economies enabled by technical platforms have been studied with regard to their economic, legal, and social effects, as well as their possible influence on CSCW topics such as work, collaboration, and trust. While a lot of current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year’s ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process for generating synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated on the problem of handle grasping; we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can contribute to improving a robot's execution success.
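The diagnose-then-correct pattern can be sketched as follows. The precondition model here is a hand-coded stand-in with invented parameter names, not the learned model from the paper: diagnosis finds which precondition components a failed parameterisation violates, and correction resamples exactly those parameters to synthesise an experience that satisfies the model.

```python
import random

def precondition_model(params):
    """Stand-in for a learned precondition model: each component flags
    whether one aspect of the parameterisation admits successful execution."""
    return {"height_ok": 0.02 <= params["grasp_height"] <= 0.08,
            "force_ok": params["force"] <= 20.0}

def diagnose(params):
    """Return the names of the violated precondition components."""
    return [name for name, ok in precondition_model(params).items() if not ok]

def correct_experience(params, seed=0):
    """Resample only the violated parameters until the model is satisfied,
    yielding a synthetic experience more likely to execute successfully."""
    rng = random.Random(seed)
    fixed = dict(params)
    while True:
        violations = diagnose(fixed)
        if not violations:
            return fixed
        if "height_ok" in violations:
            fixed["grasp_height"] = rng.uniform(0.02, 0.08)
        if "force_ok" in violations:
            fixed["force"] = rng.uniform(5.0, 20.0)
```

A failed grasp with an out-of-range height and excessive force is diagnosed with both violations, and the corrected parameterisation satisfies the precondition model while keeping all non-violating parameters untouched.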
Data has emerged as a central success factor for companies seeking to benefit from digitization. However, the skills required to successfully create value from data – especially at the management level – are not always well developed. To address this problem, several canvas models have already been designed. Canvas models are usually created to write down an idea in a structured way in order to promote transparency and traceability. However, some existing data science canvas models mainly address developers and are thus unsuitable for decision-makers and for communication within interdisciplinary teams. Based on a literature review, we identified influencing factors that are essential for the success of data science projects. With the insights gained, the Data Science Canvas was developed in an expert workshop and finally evaluated by practitioners to find out whether such an instrument could support data-driven value creation.
Augmented/Virtual Reality (AR/VR) is still a fragmented space to design for due to the rapidly evolving hardware, the interdisciplinarity of teams, and a lack of standards and best practices. We interviewed 26 professional AR/VR designers and developers to shed light on their tasks, approaches, tools, and challenges. Based on their work and the artifacts they generated, we found that AR/VR application creators fulfill four roles: concept developers, interaction designers, content authors, and technical developers. One person often incorporates multiple roles and faces a variety of challenges during the design process from the initial contextual analysis to the deployment. From analysis of their tool sets, methods, and artifacts, we describe critical key challenges. Finally, we discuss the importance of prototyping for the communication in AR/VR development teams and highlight design implications for future tools to create a more usable AR/VR tool chain.
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
(2021)
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is only useful if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions – grasping handles and pulling an object on a table – and the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.