Fachbereich Informatik
H-BRS Bibliography
The success of any agent, human or artificial, ultimately depends on successfully accomplishing its given goals. Agents may, however, fail to do so for many reasons. With artificial agents, such as robots, this may be due to internal faults or exogenous events in the complex, dynamic environments in which they operate. The bottom line is that plans, even good ones, can fail. Despite decades of research, effective methods for artificial agents to cope with plan failure remain limited and are often impractical in the real world. One common reason for failure that plagues agents, human and artificial alike, is that objects expected to be used to get the job done are found to be missing or unavailable. Humans might, with little effort, accomplish their tasks by making substitutions. When they are not sure whether an object is available, they may even proceed optimistically and switch to making a substitution once they confirm that the object is indeed unavailable. In this work, the system uses Description Logics to enable open-world reasoning, making it possible to distinguish between cases where an object is missing or unavailable and cases where the failure to even generate a plan is due to the planner's use of the closed-world assumption (where a fact stating that something is true is missing from its knowledge base and is therefore assumed to be false). This ability to distinguish between something being missing and having incomplete information enables the agent to behave intelligently: it recognises whether it should identify and then plan with a suitable substitute or, in the case of incomplete information, create a placeholder. By representing the functional affordances of objects (i.e. what they are meant to be used for), socially expected and accepted object substitutions are made possible.
The system also uses the Conceptual Spaces approach to provide feature-based similarity measures that make the given task a first-class citizen in the identification of a suitable substitute. The generation of plans to `get the job done' is made possible by incorporating the Hierarchical Task Network planning approach. It is combined with a robust execution/monitoring system and contributes to the success of the robot in achieving its goals.
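The closed-world vs. open-world distinction at the heart of this abstract can be sketched in a few lines. This is illustrative Python only, not the authors' Description Logics machinery; all names are hypothetical:

```python
from enum import Enum

class Truth(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"  # only expressible under the open-world assumption

def closed_world_query(known_true: set, fact: str) -> Truth:
    # Closed world: anything not asserted in the knowledge base is assumed false,
    # so "object unavailable" and "no information" collapse into the same answer.
    return Truth.TRUE if fact in known_true else Truth.FALSE

def open_world_query(known_true: set, known_false: set, fact: str) -> Truth:
    # Open world: absence of an assertion means "unknown", not "false", which is
    # what lets the agent decide between substituting and using a placeholder.
    if fact in known_true:
        return Truth.TRUE
    if fact in known_false:
        return Truth.FALSE
    return Truth.UNKNOWN
```

Under the closed world, a missing `on_table(spoon)` assertion reads as "the spoon is not there"; under the open world it reads as "we do not know yet", prompting optimistic planning with a placeholder.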
This paper investigates the ongoing use of the A5/1 ciphering algorithm within 2G GSM networks. Despite its known vulnerabilities and the gradual phasing out of GSM technology by some operators, GSM security remains relevant due to potential downgrade attacks from 4G/5G networks and its use in IoT applications. We present a comprehensive overview of a historical weakness associated with the A5 family of cryptographic algorithms. Building on this, our main contribution is the design of a measurement approach using low-cost, off-the-shelf hardware to passively monitor Cipher Mode Command messages transmitted by base transceiver stations (BTS). We collected over 500,000 samples at 10 different locations, focusing on the three largest mobile network operators in Germany. Our findings reveal significant variations in algorithm usage among these providers. One operator favors A5/3, while another surprisingly retains a high reliance on the compromised A5/1. The third provider shows a marked preference for A5/3 and A5/4, indicating a shift towards more secure ciphering algorithms in GSM networks.
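The core of the measurement, tallying which ciphering algorithm each operator announces, can be illustrated as follows (hypothetical observations and operator names; the actual decoding of Cipher Mode Command messages from radio captures is not shown):

```python
from collections import Counter

# Hypothetical decoded observations: (operator, ciphering algorithm) pairs,
# standing in for Cipher Mode Command messages captured from a BTS.
observations = [
    ("OperatorA", "A5/3"), ("OperatorA", "A5/3"), ("OperatorA", "A5/1"),
    ("OperatorB", "A5/1"), ("OperatorB", "A5/1"),
    ("OperatorC", "A5/4"), ("OperatorC", "A5/4"), ("OperatorC", "A5/3"),
]

def algorithm_shares(obs):
    """Relative frequency of each ciphering algorithm per operator."""
    totals = Counter(op for op, _ in obs)
    pairs = Counter(obs)
    return {(op, alg): n / totals[op] for (op, alg), n in pairs.items()}
```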
In mobile network research, the integration of real-world components such as User Equipment (UE) with open-source network infrastructure is essential yet challenging. To address these challenges, we introduce open5Gcube, a modular framework designed to integrate popular open-source mobile network projects into a unified management environment. Our publicly available framework allows researchers to flexibly combine different open-source implementations, including different versions, and simplifies experimental setups through containerization and lightweight orchestration. We demonstrate the practical usability of open5Gcube by evaluating its compatibility with various commercial off-the-shelf (COTS) smartphones and modems across multiple mobile generations (2G, 4G, and 5G). The results underline the versatility and reproducibility of our approach, significantly advancing the accessibility of rigorous experimentation in mobile network laboratories.
Unmanned Aerial Vehicles (UAVs) are increasingly used for reforestation and forest monitoring, including seed dispersal in hard-to-reach terrains. However, a detailed understanding of the forest floor remains a challenge due to high natural variability, quickly changing environmental parameters, and ambiguous annotations due to unclear definitions. To address this issue, we adapt the Segment Anything Model (SAM), a vision foundation model with strong generalization capabilities, to segment forest floor objects such as tree stumps, vegetation, and woody debris. To this end, we employ parameter-efficient fine-tuning (PEFT) to fine-tune a small subset of additional model parameters while keeping the original weights fixed. We adjust SAM's mask decoder to generate masks corresponding to our dataset categories, allowing for automatic segmentation without manual prompting. Our results show that the adapter-based PEFT method achieves the highest mean intersection over union (mIoU), while Low-rank Adaptation (LoRA), with fewer parameters, offers a lightweight alternative for resource-constrained UAV platforms.
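The LoRA idea mentioned above, training only a low-rank update while the pretrained weights stay frozen, can be sketched as follows. This is a toy NumPy sketch of the PEFT principle, not the SAM adapter code; all names are hypothetical:

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style layer: frozen pretrained weight W plus a trainable
    low-rank update (alpha / r) * B @ A. In SAM, such updates sit inside the
    transformer blocks; here a single linear map illustrates the principle."""
    def __init__(self, w, rank, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w                                      # frozen, shape (out, in)
        out_dim, in_dim = w.shape
        self.a = rng.normal(0.0, 0.01, (rank, in_dim))  # trainable
        self.b = np.zeros((out_dim, rank))              # trainable, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # Zero-initialised B means the layer starts exactly at the pretrained W.
        return (self.w + self.scale * self.b @ self.a) @ x

    def trainable_parameters(self):
        return self.a.size + self.b.size
```

With `rank` much smaller than the weight dimensions, the trainable parameter count stays far below that of the frozen weight, which is what makes LoRA attractive for resource-constrained UAV platforms.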
The first Data Competence College was hosted on March 27–28, 2025 at the IT Center of RWTH Aachen. Based on the concept of the Wissenschaftskolleg in Berlin and the Institute for Advanced Study in Princeton, we invited two individuals with high data competence from different scientific fields (“Data Experts”) to participate as part of the Data Competence College:
Prof. Sebastian Houben (Hochschule Bonn-Rhein-Sieg, specialist in AI and autonomous systems)
Dr. Moritz Wolter (University of Bonn, expert in high performance computing and machine learning)
For two days, we aimed to create a space where local scientists, and especially early career researchers, could learn from the data experts and from each other about research data and methods, and where the data experts could also inspire one another. The schedule included keynote presentations by all data experts, poster and group presentations by the participants, 1:1 sessions between data experts and early career researchers, as well as a method- and data-related workshop. Above all, we aimed to create an environment in which everyone feels safe to give input, share their knowledge, and learn from the other participants and experts.
Quadruped robots excel in traversing complex, unstructured environments where wheeled robots often fail. However, enabling efficient and adaptable locomotion remains challenging due to the quadrupeds' nonlinear dynamics, high degrees of freedom, and the computational demands of real-time control. Optimization-based controllers, such as Nonlinear Model Predictive Control (NMPC), have shown strong performance, but their reliance on accurate state estimation and high computational overhead makes deployment in real-world settings challenging. In this work, we present a Multi-Task Learning (MTL) framework in which expert NMPC demonstrations are used to train a single neural network to predict actions for multiple locomotion behaviors directly from raw proprioceptive sensor inputs. We evaluate our approach extensively on the quadruped robot Go1, both in simulation and on real hardware, demonstrating that it accurately reproduces expert behavior, allows smooth gait switching, and simplifies the control pipeline for real-time deployment. Our MTL architecture enables learning diverse gaits within a unified policy, achieving high $R^{2}$ scores for predicted joint targets across all tasks.
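For reference, the R² score used to judge how closely the learned policy reproduces the expert NMPC joint targets can be computed as follows (plain Python over a single 1-D list of targets; the evaluation itself is per joint and per task):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination R^2: 1 minus the ratio of residual to
    total sum of squares. 1.0 means the predictions match the expert targets
    exactly; 0.0 means they do no better than predicting the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```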
The remarkable success of Deep Learning approaches is often built on, and demonstrated with, large public datasets. However, when applying such approaches to internal, private datasets, one frequently faces challenges arising from structural differences between the datasets, domain shift, and the lack of labels. In this work, we introduce Tabular Data Adapters (TDA), a novel method for generating soft labels for unlabeled tabular data in outlier detection tasks. By identifying statistically similar public datasets and transforming private data (via a shared autoencoder) into a format compatible with state-of-the-art public models, our approach enables the generation of weak labels. It can thereby help to mitigate the cold-start problem of labeling by building on existing outlier detection models for public datasets. In experiments on 50 tabular datasets across different domains, we demonstrate that our method provides more accurate annotations than baseline approaches while reducing computational time. Our approach offers a scalable, efficient, and cost-effective solution to bridge the gap between public research models and real-world industrial applications.
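The dataset-matching step, identifying a statistically similar public dataset, might be sketched like this (a toy signature-based match on flattened values; the paper's actual statistics and the shared-autoencoder transformation are not reproduced, and all dataset names are hypothetical):

```python
import statistics

def dataset_signature(values):
    """Crude statistical signature of a (flattened) tabular dataset."""
    return (statistics.mean(values), statistics.pstdev(values),
            statistics.median(values))

def most_similar_public(private, public_sets):
    """Pick the public dataset whose signature lies closest in Euclidean
    distance to the private dataset's signature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    sig = dataset_signature(private)
    return min(public_sets,
               key=lambda name: dist(sig, dataset_signature(public_sets[name])))
```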
Contrastive learning (CL) approaches have gained great recognition as a very successful subset of self-supervised learning (SSL) methods. SSL enables learning from unlabeled data, a crucial step in the advancement of deep learning, particularly in computer vision (CV), given the plethora of unlabeled image data. CL works by comparing different random augmentations (e.g., different crops) of the same image, thus achieving self-labeling. Nevertheless, randomly augmenting images, and especially random cropping, can result in an image that is semantically very distant from the original, leading to false labels and undermining the efficacy of these methods. In this research, two novel parameterized cropping methods are introduced that increase the robustness of self-labeling and consequently the efficacy. The results show that the use of these methods significantly improves the accuracy of the model by between 2.7% and 12.4%, depending on the crop size, on the downstream task of classifying CIFAR-10, compared to the non-parameterized random cropping method.
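One way to parameterize cropping so that both views keep shared semantic content is to enforce a minimum overlap between the two crop boxes. The sketch below is a hypothetical scheme for illustration, not the paper's exact methods:

```python
import random

def crop_overlap(a, b, crop):
    """Intersection of two equal-size axis-aligned crop boxes at positions
    a and b, as a fraction of the crop area."""
    ix = max(0, crop - abs(a[0] - b[0]))
    iy = max(0, crop - abs(a[1] - b[1]))
    return (ix * iy) / (crop * crop)

def paired_crops(img_w, img_h, crop, min_overlap=0.4, seed=0):
    """Sample two crop positions whose overlap fraction is at least
    min_overlap, so the two augmented views cannot drift semantically
    far apart (rejection sampling; identical positions always qualify)."""
    rng = random.Random(seed)
    def sample():
        return rng.randint(0, img_w - crop), rng.randint(0, img_h - crop)
    a = sample()
    b = sample()
    while crop_overlap(a, b, crop) < min_overlap:
        b = sample()
    return a, b
```

The `min_overlap` parameter plays the role of the crop-size-dependent knob the abstract alludes to: raising it tightens the semantic link between the two views at the cost of augmentation diversity.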
Force field (FF) based molecular modeling is a widely used method to investigate structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force-field parameters need to be adjusted. This force-field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of the various properties' errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the combined error for the reproduction of the targeted properties is reduced. Furthermore, the transferability of the force-field parameters (FFParams) to chemically similar systems is increased. One interesting outcome is the large variety in the resulting optimized FFParams and corresponding errors, suggesting that the optimization landscape is multi-modal and highly dependent on the weighting-factor setup. We conclude that adjusting the weighting factors can be an important means of lowering the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
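The weighting idea can be condensed into a tiny loss helper. This is a sketch under the assumption of a simple weighted sum with an optional non-linear projection; the paper's exact loss and property names may differ:

```python
import math

def weighted_objective(errors, weights, projection=math.log1p):
    """Weighted sum of property-specific errors (e.g. bulk-phase density and
    RCE), with a non-linear projection (here log1p) applied per property so
    that one large individual error cannot dominate the combined objective."""
    if errors.keys() != weights.keys():
        raise ValueError("errors and weights must cover the same properties")
    return sum(weights[p] * projection(errors[p]) for p in errors)
```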
Second International Workshop on Perception-driven Graphics and Displays for VR and AR (PerGraVAR)
(2025)
In Ghana, unreliable public grid infrastructure greatly impacts rural healthcare, where diesel generators are commonly used despite their high financial and environmental costs. Photovoltaic (PV)-hybrid systems offer a sustainable alternative, but require robust, predictive control strategies to ensure reliability. This study proposes a sector-specific Model Predictive Control (MPC) approach, integrating advanced load and meteorological forecasting for optimal energy dispatch. The methodology includes a long-short-term memory (LSTM)-based load forecasting model with probabilistic Monte Carlo dropout, a customized Numerical Weather Prediction (NWP) model based on the Weather Research and Forecasting (WRF) framework, and deep learning-based All-Sky Imager (ASI) nowcasting to improve short-term solar predictions. By combining these forecasting methods into a seamless prediction framework, the proposed MPC optimizes system performance while reducing reliance on fossil fuels. This study benchmarks the MPC against a traditional rule-based dispatch system, using data collected from a rural health facility in Kologo, Ghana. Results demonstrate that predictive control greatly reduces both economic and ecological costs. Compared to rule-based dispatch, diesel generator operation and fuel consumption are reduced by up to 61.62% and 47.17%, leading to economical and ecological cost savings of up to 20.7% and 31.78%. Additionally, system reliability improves, with battery depletion events during blackouts decreasing by up to 99.42%, while wear and tear on the diesel generator and battery are reduced by up to 54.93% and 37.34%, respectively. Furthermore, hyperparameter tuning enhances MPC performance, introducing further optimization potential. These findings highlight the effectiveness of predictive control in improving energy resilience for critical healthcare applications in rural settings.
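The rule-based baseline that the MPC is benchmarked against might look like this in its simplest form. This is an illustrative toy dispatch with hypothetical parameters and an implicit one-hour time step; the study's actual controllers and system model are far richer:

```python
def rule_based_dispatch(load_kw, pv_kw, soc_kwh, batt_capacity_kwh, batt_max_kw):
    """Naive priority dispatch: serve the load from PV first, then from the
    battery, and fall back to the diesel generator only when both are
    insufficient. Negative battery_kw means charging from PV surplus."""
    net = load_kw - pv_kw
    if net <= 0:
        # PV surplus charges the battery, limited by power and free capacity.
        charge = min(-net, batt_max_kw, batt_capacity_kwh - soc_kwh)
        return {"battery_kw": -charge, "diesel_kw": 0.0}
    from_batt = min(net, batt_max_kw, soc_kwh)
    return {"battery_kw": from_batt, "diesel_kw": net - from_batt}
```

Because this rule reacts only to the current instant, it cannot pre-charge the battery ahead of a forecast blackout or cloudy spell, which is exactly the gap the forecast-driven MPC exploits.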
Generative AI can considerably speed up the production of narrative content across different media. This may be particularly helpful for generating modular variations on narrative themes in hypermedia, crossmedia, or transmedia contexts, thereby enabling personalized access to the content by heterogeneous target groups. We present an example where GenAI has been applied to image creation and the translation of a text into multiple languages for a crossmedia edutainment project transferring IT security knowledge to vulnerable groups. GenAI still seems inadequate for producing interesting narrative text that integrates dedicated educational content, and AI-generated illustrations often require manual rework. However, LLM support in multilingual translation displays more intelligent solutions than expected, including the implementation of a password generation process from a narrated description.
This repository includes the following datasets:
Own Dataset: A collection of 394 typosquatting packages with source code, which we collected based on Sonatype, Phylum.io and Snyk listings.
Backstabber's Knife Collection: A snapshot of the Backstabber's Knife Collection taken during our analysis, for reproduction purposes.
MalOSS: A snapshot of the MalOSS dataset during our analysis for reproduction purposes.
Source code: The source code of our programs and algorithms, mainly the Random Forest models and the extended Damerau-Levenshtein metric. However, the source code of the packages provided by MalOSS and the Backstabber's Knife Collection must be retrieved from the corresponding owner/maintainer.
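For reference, the textbook (restricted) Damerau-Levenshtein distance that the repository's extended metric presumably builds on looks like this; the extension itself is not reproduced here:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance:
    insertions, deletions, substitutions, plus adjacent transpositions, the
    edit most characteristic of typosquatting package names."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

A distance of 1 between a candidate name and a popular package (e.g. a single swapped character pair) is a typical typosquatting signal.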
Universities of Applied Sciences (Hochschulen für Angewandte Wissenschaften, HAW) provide roughly half of Germany's computer science education at the Bachelor's and Master's levels. What is new is that doctoral degrees in computer science can now also be earned at some HAW. This opens up new perspectives for computer science and raises a number of questions. Besides the characteristics of a doctorate in computer science at a HAW, these include, for example, the qualification profile of doctoral graduates and their career prospects, forms of cooperation with external application partners, and the question of how this potential can be leveraged for the benefit of computer science. This article provides a foundation for this discussion. The background of the independent right of HAW to award doctorates is discussed, various implementation models are presented using examples from Hesse, North Rhine-Westphalia and Rhineland-Palatinate, and first experiences with the independent right to award doctorates are reported.
This contribution explores the opportunities and challenges of digitalizing cultural heritage, using the Digitalization of Cultural Heritage project as a case study. The project, a collaboration among universities from multiple countries, focuses on creating 3D models of historical artifacts, exemplified by the 3D modelling of Roman-period fragments using photogrammetry. The paper discusses the broader implications of digitalization with a particular focus on the use of AI technologies, including its potential to enhance education, accessibility, artifact preservation, and cultural tourism. It also addresses the technical and ethical challenges involved, emphasizing the need for ongoing innovation and interdisciplinary collaboration to maximize the benefits of digital cultural preservation.
torchtime
(2022)
The aim of torchtime is to apply PyTorch to the time series domain. By supporting PyTorch, torchtime follows the same philosophy of providing strong GPU acceleration, focusing on trainable features through the autograd system, and maintaining a consistent style (tensor names and dimension names). It is therefore primarily a machine learning library rather than a general signal processing library. The benefits of PyTorch can be seen in torchtime in that all computations are performed through PyTorch operations, which makes it easy to use and feel like a natural extension.
Diagnosis and prognosis of intermittent faults are, in general, difficult, as it is unknown when and for how long intermittent faults will reappear. This paper addresses the case of parametric incipient faults occurring simultaneously with intermittent faults whose magnitude increases over time, so that they may reach a failure alarm threshold and eventually lead to a component or even a system failure.
The presented Bond Graph-based approach consists of two parts. First, a Diagnostic Bond Graph (DBG) is developed offline for an online diagnosis of intermittent faults by means of Analytical Redundancy Relations (ARRs). Due to the magnitude of intermittent faults increasing with time, the time evolutions of ARRs indicate a degradation trend. In a second, data-based, part, values of this trend over a moving time window stored in a buffer are extrapolated concurrently to the monitoring and to the fault detection and isolation (FDI) part in a repeated failure prognosis resulting in a sequence of Remaining Useful Life (RUL) estimates. A case study considers a small switched electronic circuit.
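The prognosis step, extrapolating the buffered degradation trend to the failure alarm threshold, can be sketched with a simple least-squares line. This is illustrative only; the paper's data-based extrapolation over the moving window may use a different model:

```python
def estimate_rul(times, values, threshold):
    """Fit a least-squares line through the buffered trend samples and
    extrapolate to the failure alarm threshold; return the remaining time
    from the last sample (a single RUL estimate in the repeated sequence)."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    intercept = mv - slope * mt
    if slope <= 0:
        return float("inf")  # trend not growing toward the threshold
    t_fail = (threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])
```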
Understanding the interactions between the cervico-vaginal microbiome, immune responses, and sexually transmitted infections (STIs) is crucial for developing targeted diagnostic and therapeutic strategies. Although microbiome analyses are not yet standard practice, integrating them into routine diagnostics could enhance personalized medicine and therapies. We investigated the extent to which partial 16S short-read amplicon microbiome analyses could inform on the presence of six commonly encountered STI-causing pathogens in a patient cohort referred for colposcopy, and whether relevant taxonomic or diagnostic discrepancies occur when using vaginal rather than cervical swabs. The study cohort included cervical and vaginal samples collected from women referred for colposcopy at the University Hospital Bonn between November 2021 and February 2022, due to an abnormal Pap smear or positive hrHPV results. 16S rRNA gene sequencing libraries were prepared targeting the V1–V2 and V4 regions of the 16S rRNA gene and sequenced on the Illumina MiSeq. PCR diagnostics for common STI-causing pathogens were conducted using the Allplex STI Essential Assay Kit (Seegene, Seoul, Republic of Korea). Concerning the bacterial microbiome, no significant differences were found between vaginal and cervical samples in terms of prevalence of taxa present or diversity. A total of 95 patients and 171 samples tested positive for at least one among Ureaplasma parvum, Ureaplasma urealyticum, Mycoplasma hominis, Mycoplasma genitalium, Chlamydia trachomatis or Neisseria gonorrhoeae. Sequencing the V1–V2 region enabled detection of one-third to half of the PCR-positive samples, with the detection likelihood increasing at lower cycle threshold (Ct) values. In contrast, sequencing the V4 region was less effective overall, yielding fewer species-level identifications and a higher proportion of undetermined taxa.
We demonstrate that the vaginal microbiome closely mirrors the cervical microbiome, a relationship that has not been explored previously, but which broadens the possibilities for microbiome analysis and pathogen detection and establishes vaginal swabs as a reliable method for detecting the investigated pathogens, with sensitivities comparable with or superior to endocervical swabs. On the other hand, the sensitivity of partial 16S amplicon sequencing appears insufficient for effective STI diagnostics, as it fails to reliably identify or even detect pathogens at higher Ct values.
Implementation of the 3D Face Recognition algorithm described in M. Jribi, S. Mathlouthi, and F. Ghorbel, "A geodesic multipolar parameterization-based representation for 3D face recognition," Signal Processing: Image Communication, vol. 99, 2021.
There is a C++ implementation and a Python wrapper available (note that the wrapper is subject to BSD-3-Clause licensing).
As the code has so far only been tested with a single STL face image, it is currently in beta. Should you find any bugs, please report them to me by mail (alexandra.mielke@smail.emt.h-brs.de) or open an issue on GitHub.
If you use this software, please cite it as below.
To ensure reliable performance of Question Answering (QA) systems, evaluating their robustness is crucial. Common evaluation benchmarks typically include only performance metrics, such as Exact Match (EM) and the F1 score. However, these benchmarks overlook factors critical for the deployment of QA systems. This oversight can result in systems vulnerable to minor perturbations in the input, such as typographical errors. While several methods have been proposed to test the robustness of QA models, there has been minimal exploration of these approaches for languages other than English. This study focuses on the robustness evaluation of German-language QA models, extending methodologies previously applied primarily to English. The objective is to nurture the development of robust models by defining an evaluation method specifically tailored to the German language. We assess the applicability of perturbations used for English QA models to German and perform a comprehensive experimental evaluation with eight models. The results show that all models are vulnerable to character-level perturbations. Additionally, the comparison of monolingual and multilingual models suggests that the former are less affected by character- and word-level perturbations.
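A character-level perturbation of the kind such robustness studies apply can be sketched as follows (an illustrative adjacent-character swap emulating typographical errors; the paper's actual perturbation set may differ):

```python
import random

def char_swap_perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent alphabetic characters inside words with probability
    `rate` per position, producing typo-like inputs while preserving the
    multiset of characters (and thus the text length)."""
    rng = random.Random(seed)
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # do not swap the same character twice
        else:
            i += 1
    return "".join(chars)
```

Feeding such perturbed questions to a QA model and comparing EM/F1 against the clean inputs quantifies the character-level robustness gap the study reports.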
This paper presents dCTIDH, a CSIDH implementation that combines two recent developments into a novel state-of-the-art deterministic implementation. We combine the approach of deterministic variants of CSIDH with the batching strategy of CTIDH, which shows that the full potential of this key space has not yet been explored. This high-level adjustment in itself leads to a significant speed-up. To achieve an effective deterministic evaluation in constant time, we introduce Wombats, a new approach to performing isogenies in batches, specifically tailored to the behavior required for deterministic CSIDH using CTIDH batching.
Furthermore, we explore the two-dimensional space of optimal primes for dCTIDH, with regard to both the performance of dCTIDH in terms of finite-field operations per prime and the efficiency of finite-field operations, determined by the prime shape, in terms of cycles. This allows us to optimize both for choice of prime and scheme parameters simultaneously. Lastly, we implement and benchmark constant-time, deterministic dCTIDH. Our results show that dCTIDH not only outperforms state-of-the-art deterministic CSIDH, but even non-deterministic CTIDH: dCTIDH-2048 is faster than CTIDH-2048 by 17 percent, and is almost five times faster than dCSIDH-2048.
This article introduces a model-based design, implementation, deployment, and execution methodology, with tools supporting the systematic composition of algorithms from generic and domain-specific computational building blocks that prevent code duplication and enable robots to adapt their software themselves. The envisaged algorithms are numerical solvers based on graph structures. In this article, we focus on kinematics and dynamics algorithms, but examples such as message passing on probabilistic networks and factor graphs or cascade control diagrams fall under the same pattern. The tools rely on mature standards from the Semantic Web. They first synthesize algorithms symbolically, from which they then generate efficient code. The use case is an overactuated mobile robot with two redundant arms.