Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
Scientific or statistical research has long been the domain of dedicated programming languages and environments such as R, SPSS, or SAS. A few years ago, new competitors entered the arena, among them Python with its powerful SciPy package. The following article introduces SciPy by applying a small subset of its functionality to a well-known dataset.
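A minimal sketch of the kind of analysis the article describes, using SciPy's stats module on a small sample. The values here are invented for illustration; they are not the dataset used in the article:

```python
import numpy as np
from scipy import stats

# Illustrative sample, not the article's dataset.
sample = np.array([5.1, 4.9, 4.7, 5.4, 5.0, 5.2, 4.8, 5.3])

mean = sample.mean()
# One-sample t-test: is the mean consistent with a hypothesized value of 5.0?
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

print(f"mean={mean:.3f}, t={t_stat:.3f}, p={p_value:.3f}")
```

A few lines like these replace what would be a separate procedure call in SPSS or SAS, which is the article's core point.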
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
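The streaming idea behind smooth time-series playback can be sketched as a double-buffered prefetch loop: while the current time step is being rendered, the next one is loaded in the background. This is a schematic illustration only, not the paper's GPU implementation:

```python
import threading
import queue

NUM_STEPS = 8  # the Exajet series has 423 steps; kept small here


def load_step(t):
    """Stand-in for reading one time step's scalar field from disk."""
    return f"field-data-{t}"


def render(data):
    """Stand-in for the GPU render call."""
    return f"frame({data})"


# A prefetch thread fills a small bounded queue ahead of the render loop,
# so disk I/O overlaps with rendering instead of stalling it.
prefetched = queue.Queue(maxsize=2)


def prefetcher():
    for t in range(NUM_STEPS):
        prefetched.put(load_step(t))


threading.Thread(target=prefetcher, daemon=True).start()

frames = []
for t in range(NUM_STEPS):
    data = prefetched.get()  # ready (or nearly ready) when needed
    frames.append(render(data))

print(frames[0], frames[-1])
```

The bounded queue caps memory use, which matters when each step is gigabytes in size.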
We present GEM-NI, a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives, such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, both of which allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback for the system features, showing that GEM-NI supports creative design work well.
We present a new interface for interactive comparisons of more than two alternative documents in the context of a generative design system that uses generative data-flow networks defined via directed acyclic graphs. To better show differences between such networks, we emphasize added, deleted, and (un)changed nodes and edges. We emphasize differences in the output as well as parameters using highlighting, and enable post-hoc merging of the state of a parameter across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability and keep visual clutter low. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that the number of similarities outweighs the differences, which motivates the use of subtractive encoding. We ran a user study to evaluate the two main proposed difference visualization encodings and found that they are equally effective.
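The node-level part of such a comparison — classifying nodes as added, deleted, or unchanged between two versions of a network — reduces to set operations. A schematic sketch with invented node names; the actual tool also diffs edges, parameters, and outputs:

```python
def diff_nodes(old, new):
    """Classify node ids between two versions of a data-flow network."""
    old, new = set(old), set(new)
    return {
        "added": sorted(new - old),
        "deleted": sorted(old - new),
        "unchanged": sorted(old & new),
    }


# Two hypothetical network versions.
a = ["load", "extrude", "rotate", "export"]
b = ["load", "extrude", "mirror", "export"]
print(diff_nodes(a, b))
```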
The increasing complexity of tasks that robots are required to execute demands higher reliability of robotic platforms. For this, it is crucial for robot developers to consider fault diagnosis. In this study, a general non-intrusive fault diagnosis system for robotic platforms is proposed. A mini-PC is non-intrusively attached to a robot and used to detect and diagnose faults. The health data and diagnoses produced by the mini-PC are then standardized and transmitted to a remote PC. A storage device is also attached to the mini-PC for logging health data in case communication with the remote PC is lost. In this study, a hybrid fault diagnosis method is compared to consistency-based diagnosis (CBD), and CBD is selected for deployment on the system. The proposed system is modular and can be deployed on different robotic platforms with minimal setup.
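Consistency-based diagnosis compares observed behaviour against a model of nominal behaviour and flags signals whose observations are inconsistent with it. A toy sketch of the idea; signal names and nominal ranges are invented for illustration:

```python
# Nominal-behaviour model: expected value range per monitored signal.
NOMINAL = {
    "wheel_rpm": (0.0, 300.0),
    "cpu_temp_c": (20.0, 85.0),
    "battery_v": (22.0, 29.4),
}


def diagnose(observations):
    """Return the signals whose observed values conflict with the model."""
    faults = []
    for signal, value in observations.items():
        low, high = NOMINAL[signal]
        if not (low <= value <= high):
            faults.append(signal)
    return faults


obs = {"wheel_rpm": 120.0, "cpu_temp_c": 92.5, "battery_v": 24.1}
print(diagnose(obs))  # the overheated CPU is flagged
```

A real CBD system models component interactions rather than independent ranges, but the consistency check at its core has this shape.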
This work presents preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems, using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. This is even more true for distributed robotic systems, where the task of fault detection and diagnosis becomes exponentially more difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed, and with distributed robotic systems generating large amounts of data, machine learning becomes an attractive modelling method, especially because of its high performance. However, with current methods such as artificial neural networks (ANNs), the issue of explainability arises: learnt models lack the ability to give explainable reasons for their decisions.
This paper presents current trends in data collection methods for distributed systems, in inductive logic programming (ILP), an explainable machine learning method, and in fault detection and diagnosis.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected; this poster presents an overview of their availability.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018, according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
Microwave Kinetic Inductance Detectors (MKIDs) have great potential for large, very sensitive detector arrays for use in, for example, ground- and space-based sub-mm imaging. Being intrinsically read out in the frequency domain, they are particularly suited for frequency-domain multiplexing, allowing thousands of devices to be read out over one pair of coaxial cables. However, this moves the complexity of the detector from the cryogenics to the warm electronics. We present the use of a readout based on a Fast Fourier Transform spectrometer, showing no deterioration of the noise performance compared to low-noise analog mixing while allowing high multiplexing ratios (>100). We present the use of this technique to multiplex 44 MKIDs; this and similar setups are now regularly used in our array development. This development will help the realization of large cameras, particularly in the short term for ground-based astronomy.
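The core of an FFT-based multiplexed readout is channelization: one digitized line carrying many carrier tones is separated into per-detector frequency channels by a Fourier transform. A toy numpy sketch with three tones at arbitrarily chosen frequencies (real readouts digitize an analog signal and track phase, not just magnitude):

```python
import numpy as np

fs = 1024.0            # sample rate (arbitrary units)
n = 1024               # FFT length -> 1 Hz frequency bins
t = np.arange(n) / fs

# Three "detector" carrier tones sharing one line, as in FDM readout.
tones_hz = [50, 130, 310]
signal = sum(np.sin(2 * np.pi * f * t) for f in tones_hz)

spectrum = np.abs(np.fft.rfft(signal))
# Each tone lands in its own bin; the strongest bins are the channels.
channels = sorted(int(i) for i in np.argsort(spectrum)[-3:])
print(channels)
```

Because each tone occupies its own FFT bin, adding more detectors only means adding more tones, which is what makes multiplexing ratios above 100 feasible.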
Helping Johnny to Analyze Malware: A Usability-Optimized Decompiler and Malware Analysis User Study
(2016)
This article reports on whether the credibility of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several credibility levels for avatars that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. In a study, the work demonstrates a significant influence of the credibility levels on presence, copresence, and realism.
The dawn of the 21st century has witnessed a tremendous increase in trade pacts among nations, resulting in renewed hopes for sustainable enterprise development in emerging economies worldwide. Ghana and other sub-Saharan African (SSA) countries have signed onto several North-South and South-South free trade agreements in the hope of strengthening their presence in the international trade arena and promoting economic growth in SSA. For over two decades, however, very little has changed, and many have dashed their high hopes as enterprises continue to struggle in SSA. Not even the African Continental Free Trade Agreement (AfCFTA) could renew the hopes of sceptics. Several studies have argued that enterprises in SSA could improve their domestic and international competitiveness by establishing mutually beneficial partnerships with their counterparts from the Global North and South. This study delved into the issues that affect North-South and South-South business collaborations and recommends key success factors that could help promote mutually beneficial cross-border business partnerships. The research includes both literature and empirical information on the key success factors of business partnerships between African enterprises as well as between African enterprises and firms from the Global North. We approached the study qualitatively using a phenomenological research design. Research participants included important stakeholders in Africa's and Europe's international trade and sustainable enterprise development ecosystems. The study identified several challenges with current business collaborations and recommended new ways of making such partnerships more beneficial.
Recent approaches in scaffold engineering for bone defects feature hybrid hydrogels made of a polymeric network (which retains water and provides light, porous structures) and inorganic ceramics (which add mechanical strength and improve cell adhesion). Innovative scaffold materials should also induce bone tissue formation and support the incorporation of stem cells (osteogenic differentiation) and/or growth factors (inducing/supporting differentiation). Recently, purinergic P2X and P2Y receptors have been found to significantly influence the osteogenic differentiation process of human mesenchymal stem cells (hMSC) (1). The aim of this work is to develop polysaccharide (PS) composites to be used as scaffolds containing complementary receptor ligands to enable guided stem cell differentiation towards bone formation.
Temperature Dependency of Morphological Structure of Thermoplastic Polyurethane using WAXS and SAXS
(2016)
Polyurethanes have achieved an exceptional position among the most important organic polymers due to their highly specific technological application areas. Polyurethanes are polyaddition products of isocyanates and diols. Owing to their enormous industrial importance, the chemistry of isocyanates has been extensively studied.
Possibilities and Limits of Building Material Analysis and Application-Technology Testing on Objects
(2018)
Investigations into the Influence of Chemical Activators and Templates on Cement Hydration
(2018)
Multimedia projectors require sophisticated image processing realized on limited board space. An architecture is presented that combines available components with a dedicated display controller for flexible, compact, and cost-efficient display electronics. A basic version of the display controller is available as an ASIC; an advanced version has been prototyped on an FPGA.
Optical distortions, resulting from lens characteristics, non-aligned projection, and variations in the light source, decrease the quality of projection displays. Knowledge of the sources and characteristics of these distortions allows their electronic correction. Integrating electronic image correction into the display controller IC enables high-quality projection without additional components.
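Electronic distortion correction typically resamples the source image through an inverse mapping: for each output pixel, the controller looks up which source pixel the optics would project there. A toy one-dimensional nearest-neighbour sketch with an invented linear distortion model; real controllers use 2-D warps and interpolation:

```python
def correct_row(row, distortion):
    """Pre-distort one scanline so the optics cancel the distortion.

    `distortion(x)` maps an output position to the source position the
    lens would project there (invented linear model in the example).
    """
    n = len(row)
    out = []
    for x in range(n):
        # Nearest-neighbour lookup, clamped to the valid pixel range.
        src = min(n - 1, max(0, round(distortion(x))))
        out.append(row[src])
    return out


row = [10, 20, 30, 40, 50, 60, 70, 80]
# Toy model: a mild horizontal stretch.
stretched = correct_row(row, lambda x: 0.9 * x)
print(stretched)
```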
Since 2012, the study entry phase at Hochschule Bonn-Rhein-Sieg has been funded through the Qualitätspakt Lehre. A key concern of the project "Pro-MINT-us" is to involve the entire university, so that the teaching ideas developed in the project are anchored sustainably rather than offered as isolated measures.
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (FPGA) synchronizes and combines several graphics inputs. The application area is computer graphics, especially the rendering of large 3D models, which is a computation-intensive task. Complex scenes are therefore generated on parallel systems and merged to produce the requested output image. So far, the transport of intermediate results has often been handled by a local area network. As this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals with an FPGA.
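Merging partial renders from parallel nodes is classically done with a per-pixel depth compare (sort-last compositing): for each pixel, keep the fragment nearest to the viewer. A sketch of that compare, which hardware would apply per pixel per clock; this illustrates the principle, not the actual FPGA design:

```python
def composite(frame_a, frame_b):
    """Per-pixel z-compare of two (color, depth) partial renders."""
    out = []
    for (col_a, z_a), (col_b, z_b) in zip(frame_a, frame_b):
        # Smaller depth = nearer to the viewer; ties keep frame_a.
        out.append((col_a, z_a) if z_a <= z_b else (col_b, z_b))
    return out


# Two 4-pixel partial renders from two hypothetical render nodes.
a = [("red", 0.2), ("red", 0.9), ("red", 0.5), ("red", 0.1)]
b = [("blue", 0.6), ("blue", 0.3), ("blue", 0.5), ("blue", 0.4)]
print(composite(a, b))
```

Because the compare is independent per pixel, it maps naturally onto a streaming FPGA pipeline and avoids shipping full frames over a LAN.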
Qualification for Good Teaching
(2010)
One of six working groups at the 2009 annual conference of the HRK Bologna Centre addressed the topic "Qualification for Good Teaching". After two keynote talks, the participants discussed how members of higher education institutions can be qualified for teaching even more strongly than before.
Influence of Statistical Properties of Video Signals on the Power Dissipation of CMOS Circuits
(1994)
Organizations want to build products with a good user experience. By evaluating their organizational UX design competence, organizations can determine how strongly this competence is currently developed and how it can be improved in a targeted way. To capture the current competence, a questionnaire on theoretical competence is combined with a questionnaire for product evaluation. This combination examines the organization's competence from both the action perspective and the results perspective. To derive fields of action for improving the competence, qualitative interviews are conducted and linked with the results of the quantitative surveys. In a concluding results workshop, the members of the organization work out an efficient path towards increasing their organizational UX competence.
Usability and user experience (UX) have become increasingly important design aspects in product development. It therefore makes sense to strengthen an organization's competence in developing products with a positive UX. However, change in organizations involves considerable effort, so organizations must decide which activities to undertake to change their competence and which not to. Previous research has largely focused on the applicability of specific methods in the project and product context. To identify suitable activities for improving organizational UX competence, 17 UX professionals were interviewed. These UX professionals have gathered at least ten years of experience by working in several companies and by taking on a leadership role in UX. From these interviews, 13 possible measures for increasing the UX competence of organizations were derived, including, for example, increasing the competence of individual employees, sharing UX success stories, and enabling user research.
UX professionals face the task of continuously expanding their skills and knowledge. One way to do this is through communities of practice: groups of people with similar tasks and focal areas and a shared interest in solutions. They are largely self-organized and serve mutual exchange and support, creating a shared body of knowledge and a network among everyone interested in UX. The establishment of a community of practice for UX professionals was accompanied and evaluated in a medium-sized company over 18 months. The results led to recommendations for reducing obstacles during the establishment and for creating value for everyone involved.
Overcoming Language and Cultural Barriers in IT Offshore Projects with the Dual Shore Delivery Model
(2005)
The detection of explosives is a central field of civil security research. A particular challenge lies in detecting packaged substances, as is often the case with improvised explosive devices (IEDs). Currently deployed procedures mostly rely on imaging techniques that yield an initial suspicion; the actual chemical content of the IED, however, cannot be determined precisely. Yet an accurate assessment of the hazard posed by such substances is of great importance, especially when the device must be defused in an inhabited area. This work presents a method that can be used specifically for verification once an initial suspicion exists. Laser drilling first penetrates the outer shell of the object under investigation. Laser-based sampling of the contents then takes place, followed by detection using suitable analysis techniques. The drilling and sampling progress is monitored by accompanying spectroscopic and sensor-based methods. In the future, the system is to be deployed remotely on bomb-disposal robots.
This work presents a novel method for real-time monitoring of laser drilling processes. The investigations are carried out on different materials using a passively Q-switched Nd:YAG laser. The acoustic emissions are recorded during the process and subsequently analyzed by fast Fourier transform. This makes it possible to detect the breakthrough when drilling through a material as well as the material transition in multilayer systems. The acoustic measurements are supported by evaluating the laser's pulse train with a photodiode. The dominant frequency in the acoustic spectrum agrees well with the pulse frequency occurring in the respective laser burst. The presented method enables real-time monitoring of laser drilling with inexpensive and simple hardware. In contrast to existing methods, it is also highly robust against external interference, since the evaluation is frequency-based.
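The monitoring principle — find the dominant frequency in the acoustic emission spectrum and compare it with the laser pulse repetition rate — can be sketched as follows. The signal is synthetic and the frequencies are invented for illustration:

```python
import numpy as np

fs = 48000          # audio sample rate in Hz
n = 4800            # 100 ms analysis window -> 10 Hz bins
t = np.arange(n) / fs

pulse_rate_hz = 1500  # laser pulse repetition rate (illustrative)
# Synthetic microphone signal: a tone at the pulse rate plus noise.
rng = np.random.default_rng(0)
mic = np.sin(2 * np.pi * pulse_rate_hz * t) + 0.1 * rng.standard_normal(n)

spectrum = np.abs(np.fft.rfft(mic))
freqs = np.fft.rfftfreq(n, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Agreement between the acoustic peak and the pulse rate indicates
# active drilling; its disappearance would indicate breakthrough.
print(f"dominant={dominant:.0f} Hz")
```

The frequency-based evaluation is what gives the method its robustness: broadband background noise spreads across many bins, while the pulse-rate tone stays concentrated in one.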
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
Cosynthesis in CASTLE
(1995)
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues concerning legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
Risk-based authentication (RBA) extends authentication mechanisms to make them more robust against account takeover attacks, such as those using stolen passwords. RBA is recommended by NIST and NCSC to strengthen password-based authentication, and is already used by major online services. Also, users consider RBA to be more usable than two-factor authentication and just as secure. However, users currently obtain RBA's high security and usability benefits at the cost of exposing potentially sensitive personal data (e.g., IP address or browser information). This conflicts with user privacy and requires consideration of user rights regarding the processing of personal data. We outline potential privacy challenges regarding different attacker models and propose improvements to balance privacy in RBA systems. To estimate the properties of the privacy-preserving RBA enhancements in practical environments, we evaluated a subset of them with long-term data from 780 users of a real-world online service. Our results show the potential to increase privacy in RBA solutions. However, this potential is limited to certain parameters, which should guide RBA design to protect privacy. We outline research directions that need to be considered to achieve widespread adoption of privacy-preserving RBA with high user acceptance.
Risk-based Authentication (RBA) is an adaptive security measure that improves the security of password-based authentication by protecting against credential stuffing, password guessing, and phishing attacks. RBA monitors extra features during login and requests an additional authentication step if the observed feature values deviate from the usual ones in the login history. In state-of-the-art RBA re-authentication deployments, users receive an email with a numerical code in its body, which must be entered on the online service. Although this procedure has a major impact on RBA's time exposure and usability, these aspects have not been studied so far.
We introduce two RBA re-authentication variants supplementing the de facto standard: a link-based and another code-based approach. Then, we present the results of a between-group study (N=592) to evaluate these three approaches. Our observations show, with significant results, that there is potential to speed up the RBA re-authentication process without reducing either its security properties or its security perception. The link-based re-authentication via "magic links", however, makes users significantly more anxious than the code-based approaches when encountered for the first time. Our evaluations underline the fact that RBA re-authentication is not a uniform procedure. We summarize our findings and provide recommendations.
Risk-based authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional implicit features during password entry such as device or geolocation information, and requests additional authentication factors if a certain risk level is detected. RBA is recommended by the NIST digital identity guidelines, is used by several large online services, and offers protection against security risks such as password database leaks, credential stuffing, insecure passwords and large-scale guessing attacks. Despite its relevance, the procedures used by RBA-instrumented online services are currently not disclosed. Consequently, there is little scientific research about RBA, slowing down progress and deeper understanding, making it harder for end users to understand the security provided by the services they use and trust, and hindering the widespread adoption of RBA.
In this paper, with a series of studies on eight popular online services, we (i) analyze which features and combinations/classifiers are used and are useful in practical instances, (ii) develop a framework and a methodology to measure RBA in the wild, and (iii) survey and discuss the differences in the user interface for RBA. Following this, our work provides a first deeper understanding of practical RBA deployments and helps foster further research in this direction.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
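A minimal illustration of the RBA principle described in these abstracts: compare the features of the current login against the user's login history and derive a risk score. Feature names, values, and weights are invented; the paper benchmarks real features and far richer risk models:

```python
# Per-user login history: feature -> set of previously seen values.
history = {
    "ip_net": {"203.0.113", "198.51.100"},
    "browser": {"Firefox"},
    "country": {"DE"},
}

# Invented weights for illustration.
WEIGHTS = {"ip_net": 0.6, "browser": 0.2, "country": 0.2}


def risk_score(login):
    """Sum the weights of all features with previously unseen values."""
    return sum(w for f, w in WEIGHTS.items() if login[f] not in history[f])


familiar = {"ip_net": "203.0.113", "browser": "Firefox", "country": "DE"}
suspicious = {"ip_net": "192.0.2", "browser": "Firefox", "country": "FR"}

print(risk_score(familiar), risk_score(suspicious))
# A threshold on the score triggers the extra authentication step:
needs_reauth = risk_score(suspicious) > 0.5
```

The finding that small configuration adjustments greatly impact security and usability corresponds here to the choice of features, weights, and threshold.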
Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA have not been studied well.
We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users' perception of RBA and helps to improve RBA implementations for a broader user acceptance.
Risk-based authentication (RBA) is an adaptive security measure for strengthening password-based authentication. It records features during login and requests additional authentication when the observed values of these features differ significantly from those previously known. RBA offers the potential for more usable security. So far, however, RBA has not been sufficiently studied with regard to usability, security, and privacy. This extended abstract outlines the planned dissertation project on researching RBA. Within the project, a foundational study and a subsequent lab study have already been conducted. We present first results of these studies and give an outlook on further steps.