006 Special computer methods
Departments, institutes and facilities
- Fachbereich Informatik (97)
- Institute of Visual Computing (IVC) (34)
- Fachbereich Wirtschaftswissenschaften (22)
- Institut für Verbraucherinformatik (IVI) (22)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (16)
- Institut für KI und Autonome Systeme (A2S) (14)
- Institut für Sicherheitsforschung (ISF) (11)
- Fachbereich Ingenieurwissenschaften und Kommunikation (7)
- Graduierteninstitut (3)
- Institut für Cyber Security & Privacy (ICSP) (3)
Document Type
- Conference Object (63)
- Article (53)
- Part of a Book (9)
- Dataset (6)
- Preprint (6)
- Report (6)
- Contribution to a Periodical (4)
- Doctoral Thesis (4)
- Book (monograph, edited volume) (3)
- Other (1)
Keywords
- Augmented Reality (6)
- Virtual Reality (6)
- deep learning (5)
- virtual reality (5)
- Machine Learning (4)
- Machine learning (4)
- Knowledge Graphs (3)
- facial expression analysis (3)
- haptics (3)
- 3D user interface (2)
This article introduces a model-based design, implementation, deployment, and execution methodology, with tools supporting the systematic composition of algorithms from generic and domain-specific computational building blocks that prevent code duplication and enable robots to adapt their software themselves. The envisaged algorithms are numerical solvers based on graph structures. In this article, we focus on kinematics and dynamics algorithms, but examples such as message passing on probabilistic networks and factor graphs or cascade control diagrams fall under the same pattern. The tools rely on mature standards from the Semantic Web. They first synthesize algorithms symbolically, from which they then generate efficient code. The use case is an overactuated mobile robot with two redundant arms.
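To make the composition pattern concrete, the following minimal Python sketch shows how a graph-based solver could be assembled from reusable building blocks. It is an illustration of the general idea only, not the authors' tooling; the names `Block`, `SolverGraph`, and `outward_sweep` and the toy forward-kinematics block are all invented for this example.

```python
# Minimal sketch (not the authors' tooling): composing a graph-based solver
# from generic building blocks attached to the edges of a kinematic graph.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Block:
    """A generic computational building block applied per graph edge."""
    name: str
    compute: Callable[[dict, str, str], None]  # (state, parent, child)

@dataclass
class SolverGraph:
    edges: list = field(default_factory=list)   # (parent, child) pairs
    blocks: list = field(default_factory=list)

    def outward_sweep(self, state: dict) -> dict:
        """Traverse the graph root-to-leaves, applying each block per edge."""
        for parent, child in self.edges:
            for block in self.blocks:
                block.compute(state, parent, child)
        return state

# Compose a toy forward-kinematics pass from one reusable block.
def propagate_pose(state, parent, child):
    # Compose the child's pose from the parent's pose and the joint offset.
    state[child] = state[parent] + state.get(("offset", parent, child), 0.0)

chain = SolverGraph(edges=[("base", "link1"), ("link1", "link2")],
                    blocks=[Block("fk", propagate_pose)])
print(chain.outward_sweep({"base": 0.0,
                           ("offset", "base", "link1"): 0.3,
                           ("offset", "link1", "link2"): 0.5}))
```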
This specification defines a method for computing a hash value over a CBOR Object Signing and Encryption (COSE) Key. It specifies which fields within the COSE Key structure are included in the cryptographic hash computation, the process for creating a canonical representation of these fields, and how to hash the resulting byte sequence. The resulting hash value, referred to as a "thumbprint", can be used to identify or select the corresponding key.
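As a rough illustration of the described procedure, the Python sketch below hashes a deterministic CBOR encoding of selected COSE Key fields. The field labels follow the EC2 (elliptic curve) case, where only kty (1), crv (-1), x (-2), and y (-3) are covered; the use of the third-party cbor2 package and its canonical=True option is an assumption of this sketch, and the specification itself should be consulted for the exact canonicalization rules and other key types.

```python
# Hedged sketch of the thumbprint idea: hash a deterministic CBOR encoding
# of the required COSE Key fields. Field selection follows the EC2 case.
import hashlib
import cbor2  # third-party package: pip install cbor2

def cose_key_thumbprint(cose_key: dict, hash_alg=hashlib.sha256) -> bytes:
    # For an EC2 key, only kty (1), crv (-1), x (-2) and y (-3) are hashed;
    # optional fields such as key ids or algorithm hints are excluded.
    required = {label: cose_key[label] for label in (1, -1, -2, -3)}
    # canonical=True yields a deterministic map ordering for the hash input.
    return hash_alg(cbor2.dumps(required, canonical=True)).digest()
```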
This dataset contains multimodal data recorded on robots performing robot-to-human and human-to-robot handovers. The intended application of the dataset is to develop and benchmark methods for failure detection during the handovers. Thus the trials in the dataset contain both successful and failed handover actions. For a more detailed description of the dataset, please see the included Datasheet.
RoboCup @work 2023 dataset
(2023)
RoboCup 2023 dataset
(2023)
The relevance of introducing digital twin technology in robotics is substantiated: it allows the capabilities of robots, such as manipulation and grasping, to be tested and modelled using virtual robot prototypes that are identical copies of physical robot prototypes. An overview of the key components of a digital twin framework for robotics is provided, including the physical element, the virtual element, middleware, and the service and transport components. A technology for designing a robot using digital twins is proposed, covering the design of a computer model of the robot with a computer-aided design system or three-dimensional graphics packages, the use of robot simulation systems, data management, data analysis, and human-machine interaction. As further development of this research, the digital twin technology will be implemented for a rescue robot according to the proposed stages: building a computer model, programming the robot's behaviour in a simulation system, developing a mathematical and digital model of the robot, and implementing human-machine interaction between the physical robot and its digital replica. This will allow testing the interaction of the main components of the digital twin, performing data exchange between the physical robot and its digital replica, and building a digital data model to verify the main operations.
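The component interplay described above can be pictured with a small, purely illustrative Python sketch in which a middleware layer mirrors the state of a physical robot into its virtual replica; all class names and the simplistic one-way synchronization are invented here, not part of the proposed framework.

```python
# Illustrative sketch of the digital twin loop: a physical robot and its
# virtual replica exchange state through a middleware/transport layer.
from dataclasses import dataclass

@dataclass
class PhysicalRobot:
    joint_angle: float = 0.0
    def sense(self) -> dict:
        return {"joint_angle": self.joint_angle}

@dataclass
class VirtualReplica:
    joint_angle: float = 0.0
    def apply(self, state: dict) -> None:
        self.joint_angle = state["joint_angle"]   # mirror the physical state

class Middleware:
    """Transport layer: forwards state between physical and virtual elements."""
    def sync(self, robot: PhysicalRobot, twin: VirtualReplica) -> None:
        twin.apply(robot.sense())

robot, twin = PhysicalRobot(joint_angle=0.7), VirtualReplica()
Middleware().sync(robot, twin)
print(twin.joint_angle)   # 0.7, the replica mirrors the physical robot
```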
Ventricular Pressure-Volume Loops Obtained by 3D Real-Time Echocardiography and Mini-Pressure Wire
(2013)
Neuromorphic computing mimics computational principles of the brain in silico and motivates research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) exclusively capture local intensity changes and offer superior power consumption, response latencies, and dynamic ranges. SNNs replicate biological neuronal dynamics and have demonstrated potential as alternatives to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Nevertheless, these novel paradigms remain scarcely explored outside the domain of aerial robots. To investigate the utility of brain-inspired sensing and data processing, we developed a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans using a dynamic motion primitive. We conducted experiments with a Kinova Gen3 arm performing simple reaching tasks that involve obstacles in sets of distinct task scenarios and in comparison to a non-adaptive baseline. Our neuromorphic approach facilitated reliable avoidance of imminent collisions in simulated and real-world experiments, where the baseline consistently failed. Trajectory adaptations had low impacts on safety and predictability criteria. Among the notable SNN properties were the correlation of computations with the magnitude of perceived motions and a robustness to different event emulation methods. Tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation. Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
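For readers unfamiliar with SNNs, the following self-contained sketch shows the discrete leaky integrate-and-fire (LIF) dynamics that networks like the one described above build on; the parameter values are illustrative and unrelated to the paper's model.

```python
# Minimal sketch of leaky integrate-and-fire (LIF) neuron dynamics, the
# building block of spiking neural networks; parameters are illustrative.
import numpy as np

def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete LIF update: leak, integrate, spike, reset."""
    v = beta * v + input_current          # leaky integration of input
    spikes = (v >= threshold).astype(float)
    v = np.where(spikes > 0, 0.0, v)      # reset membrane after a spike
    return v, spikes

v = np.zeros(4)                            # membrane potentials of 4 neurons
for t in range(5):
    v, s = lif_step(v, input_current=np.array([0.3, 0.5, 0.0, 1.2]))
    print(f"t={t} spikes={s}")
```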
Grading student answers and providing feedback are essential yet time-consuming tasks for educators. Recent advancements in Large Language Models (LLMs), including ChatGPT, Llama, and Mistral, have paved the way for automated support in this domain. This paper investigates the efficacy of instruction-following LLMs in adhering to predefined rubrics for evaluating student answers and delivering meaningful feedback. Leveraging the Mohler dataset and a custom German dataset, we evaluate various models, from commercial ones like ChatGPT to smaller open-source options like Llama, Mistral, and Command R. Additionally, we explore the impact of temperature parameters and techniques such as few-shot prompting. Surprisingly, while few-shot prompting brings grading accuracy closer to the ground truth, it introduces model inconsistency. Furthermore, some models exhibit non-deterministic behavior even at near-zero temperature settings. Our findings highlight the importance of rubrics in enhancing the interpretability of model outputs and fostering consistency in grading practices.
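To illustrate the kind of setup evaluated here, the sketch below assembles a rubric-based grading prompt with optional few-shot examples. The prompt wording, field names, and example structure are placeholders invented for illustration, not the paper's actual prompts.

```python
# Hedged sketch of rubric-based grading prompts of the kind evaluated above.
# The rubric text and few-shot example fields are placeholders.
def build_grading_prompt(question, reference, student_answer, rubric,
                         few_shot_examples=()):
    parts = [
        "You are a grader. Score the student answer against the rubric.",
        f"Rubric:\n{rubric}",
    ]
    # Few-shot prompting: prepend worked examples; the study observes this
    # can improve accuracy at the cost of consistency across runs.
    for ex in few_shot_examples:
        parts.append(f"Example question: {ex['question']}\n"
                     f"Example answer: {ex['answer']}\n"
                     f"Example score: {ex['score']}")
    parts.append(f"Question: {question}\nReference answer: {reference}\n"
                 f"Student answer: {student_answer}\nScore:")
    return "\n\n".join(parts)
```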
In computer vision, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, its quadratic complexity limits its applicability to tasks that benefit from high-resolution input. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to bidirectional data and two-dimensional image space. We scale Hyena’s convolution kernels beyond the feature map size, up to 191×191, to maximize ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 84.9% and 85.2%, respectively, with no additional training data, while outperforming other convolutional and large-kernel networks. Combining HyenaPixel with attention further improves accuracy. We attribute the success of bidirectional Hyena to learning the data-dependent geometric arrangement of pixels without a fixed neighborhood definition. Experimental results on downstream tasks suggest that HyenaPixel with large filters and a fixed neighborhood leads to better localization performance.
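The key efficiency argument, convolution with near-feature-map-sized kernels at sub-quadratic cost, can be sketched with FFT-based convolution, which runs in O(N log N) in the number of pixels. The snippet below is a generic illustration of this trick in PyTorch, not the HyenaPixel operator itself; the shapes and the 191×191 kernel size are chosen to echo the abstract.

```python
# Sketch of the trick that makes very large kernels tractable: FFT-based
# convolution is O(N log N) in the number of pixels, unlike attention's O(N^2).
import torch

def fft_conv2d(x, kernel):
    """Circular 2D convolution of x (B, C, H, W) with kernel (C, h, w)."""
    H, W = x.shape[-2:]
    k = torch.fft.rfft2(kernel, s=(H, W))       # zero-pad kernel to map size
    y = torch.fft.rfft2(x) * k                  # pointwise product in frequency
    return torch.fft.irfft2(y, s=(H, W))

x = torch.randn(1, 8, 224, 224)
kernel = torch.randn(8, 191, 191)               # kernel near feature-map size
print(fft_conv2d(x, kernel).shape)              # torch.Size([1, 8, 224, 224])
```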
Altering posture relative to the direction of gravity, or exposure to microgravity, has been shown to affect many aspects of perception, including size perception. Our aims in this study were to investigate whether changes in posture and long-term exposure to microgravity bias the visual perception of object height, and to test whether any such biases are accompanied by changes in precision. We also explored the possibility of sex/gender differences. Two cohorts of participants (12 astronauts and 20 controls, 50% women) varied the size of a virtual square in a simulated corridor until it was perceived to match a reference stick held in their hands. Astronauts performed the task before, twice during, and twice after an extended stay onboard the International Space Station. On Earth, they performed the task while sitting upright and while lying supine. Earth-bound controls also completed the task five times, with test sessions spaced similarly to the astronauts'; to simulate the microgravity sessions on the ISS, they lay supine. In contrast to earlier studies, we found no immediate effect of microgravity exposure on perceived object height. However, astronauts robustly underestimated the height of the square relative to the haptic reference, and these estimates were significantly smaller 60 days or more after their return to Earth. No differences were found in the precision of the astronauts' judgments. Controls underestimated the height of the square when supine relative to sitting in their first test session (simulating Pre-Flight) but not in later sessions. While these results are largely inconsistent with previous results in the literature, a posture-dependent effect of simulated eye height might provide a unifying explanation. We were unable to make any firm statements related to sex/gender differences. We conclude that no countermeasures are required to mitigate the acute effects of microgravity exposure on object height perception. However, space travelers should be warned about late-emerging and potentially long-lasting changes in this perceptual skill.
This thesis examines development practices in the context of professional software development for Augmented and Virtual Reality (XR) applications. Combining a design science lens with praxeological approaches provides comprehensive insight into existing and emerging development processes in the nascent XR software industry. Given the current lack of design guidelines, development and technology standards, and supporting development tools, the thesis offers a holistic overview and develops possible solution approaches as well as design proposals for the (software-based) support of professional XR developers in interdisciplinary teams.
Virtual Reality (VR) sickness remains a significant challenge in the widespread adoption of VR technologies. The absence of a standardized benchmark system hinders progress in understanding and effectively countering VR sickness. This paper proposes an initial step towards a benchmark system, utilizing a novel methodological framework to serve as a common platform for evaluating contributing VR sickness factors and mitigation strategies. Our benchmark, grounded in established theories and leveraging existing research, features both small and large environments. In two research studies, we validated our system by demonstrating its capability to (1) quickly, reliably, and controllably induce VR sickness in both environments, followed by a rapid decline post-stimulus, facilitating cost- and time-effective within-subject studies and increased statistical power, and (2) integrate and evaluate established VR sickness mitigation methods (static and dynamic field-of-view reduction, blur, and virtual nose), demonstrating their effectiveness in reducing symptoms in the benchmark and enabling their direct comparison within a standardized setting. Our proposed benchmark also enables broader, more comparative research into different technical, setup, and participant variables influencing VR sickness and overall user experience, ultimately paving the way for building a comprehensive database to identify the most effective strategies for specific VR applications.
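As an example of one mitigation method mentioned above, dynamic field-of-view reduction can be sketched as a simple function of simulated self-motion speed; the constants and the linear mapping below are invented for illustration and are not the benchmark's implementation.

```python
# Illustrative sketch of dynamic field-of-view (FOV) reduction, one of the
# mitigation methods integrated in the benchmark; constants are invented.
def vignette_fov(base_fov_deg: float, speed: float,
                 max_reduction_deg: float = 40.0, sensitivity: float = 0.5):
    """Shrink the visible FOV as simulated self-motion gets faster."""
    reduction = min(max_reduction_deg, sensitivity * speed)
    return base_fov_deg - reduction

for v in (0.0, 20.0, 120.0):                 # slow, moderate, fast motion
    print(v, vignette_fov(110.0, v))
```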
Some mobile robot developers already test their software on simulated robots in virtual environments or sceneries. However, the majority still shy away from simulation-based test campaigns because it remains challenging to specify and execute suitable testing scenarios, that is, models of the environment and the robots' tasks. Through developer interviews, we identified that managing the enormous variability of testing scenarios is a major barrier to the application of simulation-based testing in robotics. Furthermore, traditional CAD or 3D-modelling tools such as SolidWorks, 3ds Max, or Blender are not suitable for specifying sceneries that vary significantly and serve different testing objectives. For some testing campaigns, it is required that the scenery replicates the dynamic (e.g., opening doors) and static features of real-world environments, whereas for others, a simplified scenery is sufficient. Similarly, the task and mission specifications used for simulation-based testing range from simple point-to-point navigation tasks to more elaborate tasks that require advanced deliberation and decision-making. We propose the concept of composable and executable scenarios and associated tooling to support developers in specifying, reusing, and executing scenarios for the simulation-based testing of robotic systems. Our approach differs from traditional approaches in that it offers a means of creating scenarios that allow the addition of new semantics (e.g., dynamic elements such as doors or varying task specifications) to existing models without altering them. Thus, we can systematically construct richer scenarios that remain manageable. We evaluated our approach in a small simulation-based testing campaign, with scenarios defined around the navigation stack of a mobile robot. The scenarios gradually increased in complexity, composing new features into the scenery of previous scenarios. Our evaluation demonstrated how our approach can facilitate the reuse of models and revealed errors in the configuration of the publicly available navigation stack of our system under test (SUT), which had gone unnoticed despite its frequent use.
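The composition idea can be illustrated with a small Python sketch in which new semantics (a dynamic door, a more demanding task) are layered onto an existing scenario without modifying it; the Scenario class and its fields are hypothetical stand-ins, not the authors' scenario DSL.

```python
# Illustrative sketch of scenario composition: new semantics are layered
# onto an existing scenario model without altering the base model.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    scenery: tuple = ()          # static and dynamic environment features
    task: str = "idle"

def compose(base: Scenario, *, add_scenery=(), task=None) -> Scenario:
    """Return a richer scenario; the base model is left untouched."""
    return replace(base,
                   scenery=base.scenery + tuple(add_scenery),
                   task=task or base.task)

corridor = Scenario(scenery=("walls", "static_shelf"), task="point_to_point_nav")
with_door = compose(corridor, add_scenery=("opening_door",))
delivery = compose(with_door, task="fetch_and_deliver")
print(delivery)
```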
Open RAN: A Concise Overview
(2025)
Open RAN has emerged as a transformative approach in the evolution of cellular networks, addressing challenges posed by modern applications and high network density. By leveraging disaggregated, virtualized, and software-based elements interconnected through open standardized interfaces, Open RAN introduces agility, cost-effectiveness, and enhanced competition in the Radio Access Network (RAN) domain. The Open RAN paradigm, driven by the O-RAN Alliance specifications, is set to transform the telecom ecosystem. Despite extensive technical literature, there is a lack of succinct summaries for industry professionals, researchers, and policymakers. This paper addresses this gap by providing a concise yet comprehensive overview of Open RAN. Compared to previous work, our approach introduces Open RAN by gradually splitting up the different components known from previous RAN architectures. We believe that this approach leads to a better understanding for readers already familiar with the general concept of mobile communication networks. Building upon this general understanding of Open RAN, we introduce key architectural principles, interfaces, components, and use cases. Moreover, this work investigates potential security implications associated with adopting the Open RAN architecture, emphasizing the necessity of robust network protection measures.
The rapid progress in sensor technology has empowered smart home systems to efficiently monitor and control household appliances. AI-enabled smart home systems can forecast future household energy demand so that occupants can revise their energy consumption plans and be aware of optimal energy consumption practices. However, deep learning (DL)-based demand forecasting models are complex, and decisions from such black-box models are often considered opaque. Recently, eXplainable Artificial Intelligence (XAI) has garnered substantial attention for explaining the decisions of complex DL models. The primary objective is to enhance the acceptance, trust, and transparency of AI models by offering explanations for the decisions they provide. We propose ForecastExplainer, an explainable deep energy demand forecasting framework that leverages Deep Learning Important Features (DeepLIFT) to approximate Shapley values and map the contributions of different appliances and features over time. The generated explanations can shed light on the predictions, highlighting the impact of time-dependent energy consumption attributes such as the responsible appliances, consumption by household areas and activities, and seasonal effects. Experiments on household datasets demonstrated the effectiveness of our method in accurate forecasting. We designed a new metric to evaluate the effectiveness of the generated explanations, and the experimental results indicate the comprehensibility of the explanations. These insights might empower users to optimize energy consumption practices, fostering AI adoption in smart applications.
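As a hedged illustration of DeepLIFT-style attribution for a forecaster, the sketch below uses the third-party captum library on a toy model; the model, the six "appliance" features, and the all-zeros baseline stand in for the paper's actual framework.

```python
# Hedged sketch of DeepLIFT attribution for a demand forecaster, using the
# third-party captum library; the toy model stands in for the paper's.
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Toy forecaster: 6 appliance-level features -> next-step demand (scalar).
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

x = torch.rand(1, 6)                    # one time step of appliance readings
baseline = torch.zeros_like(x)          # "appliances off" reference input
attributions = DeepLift(model).attribute(x, baselines=baseline, target=0)
print(attributions)                     # per-appliance contribution estimates
```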
As voice assistants (VAs) become more advanced leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs’ accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that led to users’ alternative repair strategies over the long term, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users’ emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.
Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments
(2024)
When performing manipulation-based activities such as picking objects, a mobile robot needs to position its base at a location that supports successful execution. To address this problem, prominent approaches typically rely on costly grasp planners to provide grasp poses for a target object, which are then analysed to identify the best robot placements for achieving each grasp pose. In this paper, we propose instead to first find robot placements that would not result in collision with the environment and from where picking up the object is feasible, and then to evaluate them to find the best placement candidate. Our approach takes into account the robot's reachability, as well as RGB-D images and occupancy grid maps of the environment, to identify suitable robot poses. The proposed algorithm is embedded in a service robotics workflow, in which a person points to select the target object for grasping. We evaluate our approach in a series of grasping experiments, against an existing baseline implementation that sends the robot to a fixed navigation goal. The experimental results show how the approach allows the robot to grasp the target object from locations that are very challenging for the baseline implementation.
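The placement idea, filter collision-free poses from which picking is feasible and then rank them, can be illustrated on an occupancy grid as below; the reach and clearance thresholds and the distance-based scoring are invented for this sketch and are much simpler than the reachability analysis described in the paper.

```python
# Illustrative sketch of base-placement selection on an occupancy grid:
# filter collision-free cells near the object, then rank the candidates.
import numpy as np

def candidate_placements(occupancy, target, reach=5.0, clearance=1.0):
    """Return collision-free grid cells from which the target is reachable,
    sorted so that the most promising placement comes first."""
    rows, cols = np.nonzero(occupancy == 0)     # collision-free cells
    candidates = []
    for r, c in zip(rows, cols):
        dist = np.hypot(r - target[0], c - target[1])
        if clearance < dist <= reach:           # close enough, not colliding
            candidates.append((dist, (r, c)))
    # Prefer placements near an (invented) preferred manipulation distance.
    preferred = reach / 2
    candidates.sort(key=lambda dc: abs(dc[0] - preferred))
    return [cell for _, cell in candidates]

grid = np.zeros((10, 10), dtype=int)
grid[4:6, 4:6] = 1                              # table holding the object
print(candidate_placements(grid, target=(4, 4))[:3])
```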
Advanced Rapid Directional Over-Current Protection for DC Microgrids Using K-Means Clustering
(2024)
This textbook presents an in-depth introductory survey of several fundamental advanced control concepts and techniques, all rooted in modern ideas. The book emphasizes ideas and an understanding of key concepts, methodologies, and results. In line with this, it addresses master’s students at the intersection of engineering and computer science, as well as engineers working in various application fields who are interested in useful control techniques rather than in system theories that appeal primarily from a mathematical point of view.
Biometric fingerprint identification hinges on the reliability of its sensors; however, calibrating and standardizing these sensors poses significant challenges, particularly with regard to repeatability and data diversity. To tackle these issues, we propose methodologies for fabricating synthetic 3D fingerprint targets, or phantoms, that closely emulate real human fingerprints. These phantoms enable the precise evaluation and validation of fingerprint sensors under controlled and repeatable conditions. Our research employs laser engraving, 3D printing, and CNC machining techniques, utilizing different materials. We assess the phantoms’ fidelity to synthetic fingerprint patterns, intra-class variability, and interoperability across different manufacturing methods. The findings demonstrate that a combination of laser engraving or CNC machining with silicone casting produces finger-like phantoms with high accuracy and consistency for rolled fingerprint recordings. For slap recordings, direct laser engraving of flat silicone targets excels, and in the contactless fingerprint sensor setting, 3D printing and silicone filling provide the most favorable attributes. Our work enables a comprehensive, method-independent comparison of various fabrication methodologies, offering a unique perspective on the strengths and weaknesses of each approach. This facilitates a broader understanding of fingerprint recognition system validation and performance assessment.
Biometric authentication plays a vital role in various everyday applications, with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach has emerged that uses synthetic biometric data to address the resulting unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, and the quality metrics needed to prove that synthetic data compare to real data are open research tasks. The generation of 3D synthetic face data using game engines’ capabilities for generating varied, realistic virtual characters is explored as a possible alternative for generating synthetic face data while maintaining reproducibility and ground truth, as opposed to other creation methods. While synthetic data offer several benefits, including improved resilience against data privacy concerns, the limitations and challenges associated with their usage are also addressed. Our work shows concurrent behavior when comparing semi-synthetic data, as a digital representation of a real identity, with the corresponding real datasets. Despite slightly asymmetrical performance in comparison with a larger database of real samples, a promising performance in face data authentication is shown, which lays the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and their impact on advancing biometrics research are discussed.
Dark Patterns are deceptive designs that influence a user's interactions with an interface to benefit someone other than the user. Prior work has identified dark patterns in WIMP interfaces and ubicomp environments, but how dark patterns can manifest in Augmented and Virtual Reality (collectively XR) requires more attention. We therefore conducted ten co-design workshops with 20 experts in XR and deceptive design. Our participants co-designed 42 scenarios containing dark patterns, based on application archetypes presented in recent HCI/XR literature. In the co-designed scenarios, we identified ten novel dark patterns in addition to 39 existing ones, as well as ten examples in which specific characteristics associated with XR potentially amplified the effect dark patterns could have on users. Based on our findings and prior work, we present a classification of XR-specific properties that facilitate dark patterns: perception, spatiality, physical/virtual barriers, and XR device sensing. We also present the experts’ assessments of the likelihood and severity of the co-designed scenarios and highlight key aspects they considered for this evaluation, for example, technological feasibility, ease of upscaling and distributing malicious implementations, and the application's context of use. Finally, we discuss means to mitigate XR dark patterns and support regulatory bodies to reduce potential harms.
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
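A minimal sketch of the kind of reward function compared in the study follows, assuming engagement and performance signals normalised to [0, 1]; the linear weighting scheme is invented for illustration and is not the paper's reward design.

```python
# Hedged sketch of a reward combining user engagement and activity
# performance, as the compared reward functions do; weights are invented.
def therapy_reward(engagement: float, performance: float,
                   w_engagement: float = 0.5) -> float:
    """Both signals are assumed normalised to [0, 1]."""
    return w_engagement * engagement + (1.0 - w_engagement) * performance

print(therapy_reward(engagement=0.8, performance=0.4))  # 0.6 at equal weights
```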
Users should always play a central role in the development of (software) solutions. The human-centered design (HCD) process in the ISO 9241-210 standard proposes a procedure for systematically involving users. However, due to its abstraction level, the HCD process provides little guidance for how it should be implemented in practice. In this chapter, we propose three concrete practical methods that enable the reader to develop usable security and privacy (USP) solutions using the HCD process. This chapter equips the reader with the procedural knowledge and recommendations to: (1) derive mental models with regard to security and privacy, (2) analyze USP needs and privacy-related requirements, and (3) collect user characteristics on privacy and structure them by user group profiles and into privacy personas. Together, these approaches help to design measures for a user-friendly implementation of security and privacy measures based on a firm understanding of the key stakeholders.
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.