Fachbereich Informatik
Departments, institutes and facilities
- Fachbereich Informatik (1219)
- Institute of Visual Computing (IVC) (282)
- Institut für Cyber Security & Privacy (ICSP) (144)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (95)
- Institut für funktionale Gen-Analytik (IFGA) (67)
- Fachbereich Ingenieurwissenschaften und Kommunikation (47)
- Institut für Sicherheitsforschung (ISF) (39)
- Institut für KI und Autonome Systeme (A2S) (22)
- Graduierteninstitut (21)
- Fachbereich Angewandte Naturwissenschaften (11)
Document Type
- Conference Object (642)
- Article (286)
- Report (77)
- Part of a Book (53)
- Preprint (53)
- Book (monograph, edited volume) (33)
- Doctoral Thesis (22)
- Dataset (18)
- Conference Proceedings (17)
- Master's Thesis (7)
Keywords
- Virtual Reality (18)
- Robotics (12)
- virtual reality (12)
- Machine Learning (11)
- Usable Security (11)
- 3D user interface (7)
- Augmented Reality (7)
- Quality diversity (7)
- Lehrbuch (6)
- Navigation (6)
This article introduces a model-based design, implementation, deployment, and execution methodology, with tools supporting the systematic composition of algorithms from generic and domain-specific computational building blocks that prevent code duplication and enable robots to adapt their software themselves. The envisaged algorithms are numerical solvers based on graph structures. In this article, we focus on kinematics and dynamics algorithms, but examples such as message passing on probabilistic networks and factor graphs or cascade control diagrams fall under the same pattern. The tools rely on mature standards from the Semantic Web. They first synthesize algorithms symbolically, from which they then generate efficient code. The use case is an overactuated mobile robot with two redundant arms.
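As a rough illustration of the building-block idea (not the authors' toolchain), the sketch below composes a forward-position solver from a single generic per-joint block applied over a serial kinematic "graph"; the joint names, the planar 2D kinematics, and the composition function are illustrative assumptions.

```python
import math

# Minimal sketch: a serial kinematic chain as a list of segments, each
# carrying a fixed link length and a revolute joint (illustrative only).
CHAIN = [
    {"name": "joint1", "link_length": 0.4},
    {"name": "joint2", "link_length": 0.3},
    {"name": "joint3", "link_length": 0.2},
]

def compose_position_solver(chain):
    """Compose a forward-position solver from one generic per-joint block."""
    def block(state, segment, angle):
        # Generic building block: rotate by the joint angle, then translate
        # along the link. The same block is reused for every node in the chain.
        x, y, theta = state
        theta += angle
        x += segment["link_length"] * math.cos(theta)
        y += segment["link_length"] * math.sin(theta)
        return (x, y, theta)

    def solver(joint_angles):
        state = (0.0, 0.0, 0.0)  # base frame
        for segment, angle in zip(chain, joint_angles):
            state = block(state, segment, angle)
        return state  # end-effector pose (x, y, heading)

    return solver

fk = compose_position_solver(CHAIN)
print(fk([0.1, -0.2, 0.3]))
```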
This specification defines a method for computing a hash value over a CBOR Object Signing and Encryption (COSE) Key. It specifies which fields within the COSE Key structure are included in the cryptographic hash computation, the process for creating a canonical representation of these fields, and how to hash the resulting byte sequence. The resulting hash value, referred to as a "thumbprint", can be used to identify or select the corresponding key.
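A minimal sketch of the general idea, assuming an EC2 key and the standard COSE labels kty (1), crv (-1), x (-2), y (-3); the normative field selection and deterministic-encoding rules are those of the specification itself, and the `cbor2` canonical mode used here only approximates them.

```python
import hashlib
import cbor2  # third-party CBOR library, assumed available

def cose_key_thumbprint(cose_key: dict) -> bytes:
    """Sketch: hash a canonical CBOR encoding of the required key fields."""
    required_labels = (1, -1, -2, -3)  # kty, crv, x, y (assumed for EC2)
    subset = {label: cose_key[label] for label in required_labels}
    encoded = cbor2.dumps(subset, canonical=True)  # deterministic encoding
    return hashlib.sha256(encoded).digest()

example_key = {
    1: 2,              # kty: EC2
    -1: 1,             # crv: P-256
    -2: b"\x01" * 32,  # x-coordinate (placeholder bytes)
    -3: b"\x02" * 32,  # y-coordinate (placeholder bytes)
}
print(cose_key_thumbprint(example_key).hex())
```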
This dataset contains questions and answers from an introductory computer science bachelor course on statistics and probability theory at Hochschule Bonn-Rhein-Sieg. The dataset includes three questions and a total of 90 answers, each evaluated using binary rubrics (yes/no) associated with specific scores.
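As a hypothetical illustration of how such binary rubrics might be aggregated into a grade (the field names below are assumptions, not the dataset's actual schema):

```python
# Hypothetical record for one graded answer: each rubric item has a yes/no
# judgement and the score it contributes when satisfied.
answer = {
    "question_id": "Q1",
    "rubrics": [
        {"criterion": "states the correct distribution", "satisfied": True,  "score": 2.0},
        {"criterion": "computes the probability",        "satisfied": False, "score": 3.0},
    ],
}

def total_score(record: dict) -> float:
    """Sum the scores of all satisfied rubric items for one answer."""
    return sum(r["score"] for r in record["rubrics"] if r["satisfied"])

print(total_score(answer))  # 2.0
```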
This dataset contains multimodal data recorded on robots performing robot-to-human and human-to-robot handovers. The intended application of the dataset is to develop and benchmark methods for failure detection during the handovers. Thus, the trials in the dataset contain both successful and failed handover actions. For a more detailed description of the dataset, please see the included Datasheet.
RoboCup @work 2023 dataset
(2023)
RoboCup 2023 dataset
(2023)
Ventricular Pressure-Volume Loops Obtained by 3D Real-Time Echocardiography and Mini-Pressure Wire
(2013)
Neuromorphic computing mimics computational principles of the brain in silico and motivates research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) exclusively capture local intensity changes and offer superior power consumption, response latencies, and dynamic ranges. SNNs replicate biological neuronal dynamics and have demonstrated potential as alternatives to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Nevertheless, these novel paradigms remain scarcely explored outside the domain of aerial robots. To investigate the utility of brain-inspired sensing and data processing, we developed a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans using a dynamic motion primitive. We conducted experiments with a Kinova Gen3 arm performing simple reaching tasks that involve obstacles in sets of distinct task scenarios and in comparison to a non-adaptive baseline. Our neuromorphic approach facilitated reliable avoidance of imminent collisions in simulated and real-world experiments, where the baseline consistently failed. Trajectory adaptations had low impacts on safety and predictability criteria. Among the notable SNN properties were the correlation of computations with the magnitude of perceived motions and a robustness to different event emulation methods. Tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation. Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
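A toy sketch of the kind of processing described above: leaky integrate-and-fire neurons driven by emulated event counts, with spike activity decoded into an avoidance offset. All parameters, the input statistics, and the decoding rule are illustrative assumptions, not the authors' network.

```python
import numpy as np

def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire layer (illustrative)."""
    v = v + dt / tau * (-v + input_current)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)
    return v, spikes.astype(float)

rng = np.random.default_rng(0)
n_neurons = 64
v = np.zeros(n_neurons)
avoidance = np.zeros(2)  # decoded (x, y) avoidance offset

# Each neuron votes for a 2D direction; spike counts weight the votes.
preferred_dirs = rng.standard_normal((n_neurons, 2))

for t in range(100):
    event_counts = rng.poisson(0.3, n_neurons)  # stand-in for emulated event data
    v, spikes = lif_step(v, event_counts)
    avoidance += spikes @ preferred_dirs  # population decoding of the motion

print("decoded avoidance offset:", avoidance)
```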
Grading student answers and providing feedback are essential yet time-consuming tasks for educators. Recent advancements in Large Language Models (LLMs), including ChatGPT, Llama, and Mistral, have paved the way for automated support in this domain. This paper investigates the efficacy of instruction-following LLMs in adhering to predefined rubrics for evaluating student answers and delivering meaningful feedback. Leveraging the Mohler dataset and a custom German dataset, we evaluate various models, from commercial ones like ChatGPT to smaller open-source options like Llama, Mistral, and Command R. Additionally, we explore the impact of temperature parameters and techniques such as few-shot prompting. Surprisingly, while few-shot prompting brings grading accuracy closer to the ground truth, it introduces model inconsistency. Furthermore, some models exhibit non-deterministic behavior even at near-zero temperature settings. Our findings highlight the importance of rubrics in enhancing the interpretability of model outputs and fostering consistency in grading practices.
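A minimal sketch of rubric-conditioned grading with an instruction-following LLM; `call_llm` is a placeholder for whichever model client is used, and the prompt wording, rubric format, and temperature handling are assumptions rather than the paper's exact setup.

```python
def build_grading_prompt(question, rubric_items, student_answer):
    """Assemble a prompt that asks for a yes/no verdict per rubric item."""
    rubric_text = "\n".join(f"- {item}" for item in rubric_items)
    return (
        "You are grading a student answer.\n"
        f"Question: {question}\n"
        f"Rubric (answer yes/no for each item):\n{rubric_text}\n"
        f"Student answer: {student_answer}\n"
        "For each rubric item, reply with 'yes' or 'no' and one sentence of feedback."
    )

def call_llm(prompt, temperature=0.0):
    """Placeholder for a model call (ChatGPT, Llama, Mistral, ...)."""
    raise NotImplementedError("plug in the model client of your choice")

prompt = build_grading_prompt(
    question="What is the expected value of a fair six-sided die?",
    rubric_items=["States the definition of expected value", "Computes 3.5"],
    student_answer="E[X] = (1+2+3+4+5+6)/6 = 3.5",
)
# grading = call_llm(prompt, temperature=0.0)
print(prompt)
```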
Transport Layer Security (TLS) is a widely used protocol for secure channel establishment. However, TLS lacks any inherent mechanism for validating the security state of the endpoint software and its platform. To overcome this limitation, recent works have combined remote attestation (RA) and TLS, named attested TLS. The most popular attested TLS protocol for confidential computing is Intel’s RA-TLS, which is used in multiple open-source industrial projects. However, there is no formal reasoning for security of attested TLS for confidential computing in general and RA-TLS in particular. Using the state-of-the-art symbolic security analysis tool ProVerif, we found vulnerabilities in RA-TLS at both RA and TLS layers, which have been acknowledged by Intel. We also propose mitigations for the vulnerabilities. During the process of formalization, we found that despite several formal verification efforts for TLS to ensure its security, the validation of corresponding formal models has been largely overlooked. This work demonstrates that a simple validation framework could discover crucial issues in state-of-the-art formalization of TLS 1.3 key schedule. These issues have been acknowledged and fixed by the authors. Finally, we provide recommendations for protocol designers and the formal verification community based on the lessons learned in the formal verification and validation.
In computer vision, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, its quadratic complexity limits its applicability to tasks that benefit from high-resolution input. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to bidirectional data and two-dimensional image space. We scale Hyena’s convolution kernels beyond the feature map size, up to 191×191, to maximize ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 84.9% and 85.2%, respectively, with no additional training data, while outperforming other convolutional and large-kernel networks. Combining HyenaPixel with attention further improves accuracy. We attribute the success of bidirectional Hyena to learning the data-dependent geometric arrangement of pixels without a fixed neighborhood definition. Experimental results on downstream tasks suggest that HyenaPixel with large filters and a fixed neighborhood leads to better localization performance.
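A small sketch of the mechanism that keeps very large convolution kernels sub-quadratic in the number of pixels: evaluating the convolution in the Fourier domain. The random kernel and feature map below are placeholders, not HyenaPixel's learned filters.

```python
import numpy as np

def fft_conv2d(feature_map, kernel):
    """Circular 2D convolution via FFT: O(N log N) in the number of pixels N."""
    H, W = feature_map.shape
    # Zero-pad the kernel to the feature-map size, then multiply the spectra.
    k = np.zeros((H, W))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    out = np.fft.ifft2(np.fft.fft2(feature_map) * np.fft.fft2(k))
    return out.real

feature_map = np.random.default_rng(0).standard_normal((224, 224))
kernel = np.random.default_rng(1).standard_normal((191, 191)) / 191.0
print(fft_conv2d(feature_map, kernel).shape)  # (224, 224)
```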
The workshop XAI for U aims to address the critical need for transparency in Artificial Intelligence (AI) systems that integrate into our daily lives through mobile systems, wearables, and smart environments. Despite advances in AI, many of these systems remain opaque, making it difficult for users, developers, and stakeholders to verify their reliability and correctness. This workshop addresses the pressing need for enabling Explainable AI (XAI) tools within Ubiquitous and Wearable Computing and highlights the unique challenges that come with it, such as XAI that deals with time-series and multimodal data, XAI that explains interconnected machine learning (ML) components, and XAI that provides user-centered explanations. The workshop aims to foster collaboration among researchers in related domains, share recent advancements, address open challenges, and propose future research directions to improve the applicability and development of XAI in Ubiquitous, Pervasive, and Wearable Computing, and thereby seeks to enhance user trust, understanding, interaction, and adoption, ensuring that AI-driven solutions are not only more explainable but also more aligned with ethical standards and user expectations.
Time for an Explanation: A Mini-Review of Explainable Physio-Behavioural Time-Series Classification
(2024)
Time-series classification is seeing growing importance as device proliferation has led to the collection of an abundance of sensor data. Although black-box models, whose internal workings are difficult to understand, are a common choice for this task, their use in safety-critical domains has raised calls for greater transparency. In response, researchers have begun employing explainable artificial intelligence together with physio-behavioural signals in the context of real-world problems. Hence, this paper examines the current literature in this area and contributes principles for future research to overcome the limitations of the reviewed works.
Altering posture relative to the direction of gravity, or exposure to microgravity has been shown to affect many aspects of perception, including size perception. Our aims in this study were to investigate whether changes in posture and long-term exposure to microgravity bias the visual perception of object height and to test whether any such biases are accompanied by changes in precision. We also explored the possibility of sex/gender differences. Two cohorts of participants (12 astronauts and 20 controls, 50% women) varied the size of a virtual square in a simulated corridor until it was perceived to match a reference stick held in their hands. Astronauts performed the task before, twice during, and twice after an extended stay onboard the International Space Station. On Earth, they performed the task while sitting upright and while lying supine. Earth-bound controls also completed the task five times with test sessions spaced similarly to the astronauts; to simulate the microgravity sessions on the ISS, they lay supine. In contrast to earlier studies, we found no immediate effect of microgravity exposure on perceived object height. However, astronauts robustly underestimated the height of the square relative to the haptic reference and these estimates were significantly smaller 60 days or more after their return to Earth. No differences were found in the precision of the astronauts’ judgments. Controls underestimated the height of the square when supine relative to sitting in their first test session (simulating Pre-Flight) but not in later sessions. While these results are largely inconsistent with previous results in the literature, a posture-dependent effect of simulated eye height might provide a unifying explanation. We were unable to make any firm statements related to sex/gender differences. We conclude that no countermeasures are required to mitigate the acute effects of microgravity exposure on object height perception. However, space travelers should be warned about late-emerging and potentially long-lasting changes in this perceptual skill.
The goal of the tenth edition of the scientific workshop "Usable Security und Privacy" at Mensch und Computer 2024 is to present current research and practice contributions in this field and to discuss them with the participants. True to the conference motto "Hybrid Worlds", the workshop continues and further develops an established forum in which experts, researchers, and practitioners from different domains can exchange ideas on usable security and privacy across disciplines. Beyond usability and security engineering, the topic touches on a range of research fields and professions, e.g. computer science, engineering, media design, and psychology. The workshop is aimed at interested researchers from all of these areas, but also explicitly at representatives from business, industry, and public administration.
Virtual Reality (VR) sickness remains a significant challenge in the widespread adoption of VR technologies. The absence of a standardized benchmark system hinders progress in understanding and effectively countering VR sickness. This paper proposes an initial step towards a benchmark system, utilizing a novel methodological framework to serve as a common platform for evaluating contributing VR sickness factors and mitigation strategies. Our benchmark, grounded in established theories and leveraging existing research, features both small and large environments. In two research studies, we validated our system by demonstrating its capability to (1) quickly, reliably, and controllably induce VR sickness in both environments, followed by a rapid decline post-stimulus, facilitating cost and time-effective within-subject studies and increased statistical power, (2) integrate and evaluate established VR sickness mitigation methods — static and dynamic field of view reduction, blur, and virtual nose — demonstrating their effectiveness in reducing symptoms in the benchmark and their direct comparison within a standardized setting. Our proposed benchmark also enables broader, more comparative research into different technical, setup, and participant variables influencing VR sickness and overall user experience, ultimately paving the way for building a comprehensive database to identify the most effective strategies for specific VR applications.
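As a hypothetical illustration of one of the mitigation methods mentioned above, dynamic field-of-view reduction, the mapping below narrows the rendered FOV as virtual angular velocity increases; the thresholds and the linear mapping are made-up values, not the benchmark's configuration.

```python
def vignette_fov(angular_velocity_deg_s,
                 full_fov_deg=110.0,
                 min_fov_deg=60.0,
                 onset_deg_s=10.0,
                 max_deg_s=90.0):
    """Map virtual angular velocity to a restricted field of view (degrees).

    Below `onset_deg_s` the full FOV is kept; above `max_deg_s` the FOV is
    clamped to `min_fov_deg`; in between it shrinks linearly.
    """
    if angular_velocity_deg_s <= onset_deg_s:
        return full_fov_deg
    if angular_velocity_deg_s >= max_deg_s:
        return min_fov_deg
    t = (angular_velocity_deg_s - onset_deg_s) / (max_deg_s - onset_deg_s)
    return full_fov_deg + t * (min_fov_deg - full_fov_deg)

for w in (0.0, 30.0, 120.0):
    print(w, "->", vignette_fov(w))
```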
Push notifications are widely used in Android apps to show users timely and potentially sensitive information outside the apps’ regular user interface. Google’s default service for sending push notifications, Firebase Cloud Messaging (FCM), provides only transport layer security and does not offer app developers message protection schemes to prevent access or detect modifications by the push notification service provider or other intermediate systems. We present and discuss an in-depth mixed-methods study of push notification message security and privacy in Android apps. We statically analyze a representative set of 100,000 up-to-date and popular Android apps from Google Play to get an overview of push notification usage in the wild. In an in-depth follow-up analysis of 60 apps, we gain detailed insights into the leaked content and what some developers do to protect the messages. We find that (a) about half of the analyzed apps use push notifications, (b) about half of the in-depth analyzed messaging apps do not protect their push notifications, allowing access to sensitive data that jeopardizes users’ security and privacy, and (c) the means of protection lack a standardized approach, manifesting in various developer-defined encryption schemes, custom protocols, or out-of-band communication methods. Our research highlights gaps in developer-centric security regarding appropriate technologies and supporting measures that researchers and platform providers should address.
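A minimal sketch of one developer-side protection the study looks for: end-to-end encrypting the notification payload before handing it to the push service, so FCM only transports ciphertext. It uses the `cryptography` package's AES-GCM primitive; key distribution and the envelope format are simplified assumptions.

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In a real app the key would be agreed per device (e.g. during registration)
# and stored securely on the client; here it is generated locally for the demo.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_payload(payload: dict) -> dict:
    """Encrypt the sensitive payload; only this envelope is sent through FCM."""
    nonce = os.urandom(12)
    plaintext = json.dumps(payload).encode()
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def decrypt_payload(envelope: dict) -> dict:
    """Client-side decryption after the push message is delivered."""
    plaintext = aesgcm.decrypt(
        bytes.fromhex(envelope["nonce"]),
        bytes.fromhex(envelope["ciphertext"]),
        None,
    )
    return json.loads(plaintext)

envelope = encrypt_payload({"chat_message": "see you at 9"})
print(decrypt_payload(envelope))
```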
A few mobile robot developers already test their software on simulated robots in virtual environments or sceneries. However, the majority still shy away from simulation-based test campaigns because it remains challenging to specify and execute suitable testing scenarios, that is, models of the environment and the robots’ tasks. Through developer interviews, we identified that managing the enormous variability of testing scenarios is a major barrier to the application of simulation-based testing in robotics. Furthermore, traditional CAD or 3D-modelling tools such as SolidWorks, 3ds Max, or Blender are not suitable for specifying sceneries that vary significantly and serve different testing objectives. For some testing campaigns, it is required that the scenery replicates the dynamic (e.g., opening doors) and static features of real-world environments, whereas for others, simplified scenery is sufficient. Similarly, the task and mission specifications used for simulation-based testing range from simple point-to-point navigation tasks to more elaborate tasks that require advanced deliberation and decision-making. We propose the concept of composable and executable scenarios and associated tooling to support developers in specifying, reusing, and executing scenarios for the simulation-based testing of robotic systems. Our approach differs from traditional approaches in that it offers a means of creating scenarios that allow the addition of new semantics (e.g., dynamic elements such as doors or varying task specifications) to existing models without altering them. Thus, we can systematically construct richer scenarios that remain manageable. We evaluated our approach in a small simulation-based testing campaign, with scenarios defined around the navigation stack of a mobile robot. The scenarios gradually increased in complexity, composing new features into the scenery of previous scenarios. Our evaluation demonstrated how our approach can facilitate the reuse of models and revealed the presence of errors in the configuration of the publicly available navigation stack of our SUT, which had gone unnoticed despite its frequent use.
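A toy sketch of the composition idea: a base scenario model is extended with new semantics (a dynamic door, a richer task) by layering separate models on top of it instead of editing it. The dictionary-based representation is an illustrative stand-in for the article's modelling approach.

```python
import copy

# Base scenario: static scenery plus a simple point-to-point navigation task.
base_scenario = {
    "scenery": {"rooms": ["lab", "corridor"], "static_obstacles": ["shelf"]},
    "task": {"type": "navigate", "from": "lab", "to": "corridor"},
}

def compose(scenario, extension):
    """Return a new scenario with the extension merged in; the inputs stay untouched."""
    result = copy.deepcopy(scenario)
    for key, value in extension.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = compose(result[key], value)
        elif isinstance(value, list) and isinstance(result.get(key), list):
            result[key] = result[key] + value
        else:
            result[key] = value
    return result

# Add dynamic elements and a more demanding task without altering the base model.
with_door = compose(base_scenario, {"scenery": {"dynamic_elements": ["door_lab_corridor"]}})
with_delivery = compose(with_door, {"task": {"type": "deliver", "object": "cup"}})
print(with_delivery)
```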