004 Datenverarbeitung; Informatik
torchtime
(2022)
The aim of torchtime is to apply PyTorch to the time-series domain. Building on PyTorch, torchtime follows the same philosophy: strong GPU acceleration, a focus on trainable features through the autograd system, and a consistent style (tensor names and dimension names). It is therefore primarily a machine learning library rather than a general signal processing library. The benefits of PyTorch show in torchtime through all computations being expressed as PyTorch operations, which makes the library easy to use and feel like a natural extension.
Diagnosis and prognosis of intermittent faults is, in general, difficult, as it is unknown when and for how long intermittent faults will reappear. This paper addresses the case of parametric incipient faults and simultaneously occurring intermittent faults whose magnitude increases over time, so that they may reach a failure alarm threshold and eventually lead to a component or even a system failure.
The presented Bond Graph-based approach consists of two parts. First, a Diagnostic Bond Graph (DBG) is developed offline for an online diagnosis of intermittent faults by means of Analytical Redundancy Relations (ARRs). Because the magnitude of the intermittent faults increases with time, the time evolutions of the ARRs indicate a degradation trend. In a second, data-based part, the values of this trend over a moving time window, stored in a buffer, are extrapolated concurrently with the monitoring and the fault detection and isolation (FDI) part in a repeated failure prognosis, resulting in a sequence of Remaining Useful Life (RUL) estimates. A case study considers a small switched electronic circuit.
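The data-based prognosis step can be sketched in a few lines: trend values collected in a moving window are extrapolated to the failure alarm threshold to obtain an RUL estimate. The linear trend model and all names here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: extrapolate a windowed ARR degradation trend
# linearly to estimate when it crosses the failure alarm threshold (RUL).

def fit_line(times, values):
    """Ordinary least-squares fit of values = a*t + b."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    var = sum((t - mean_t) ** 2 for t in times)
    a = cov / var
    b = mean_v - a * mean_t
    return a, b

def estimate_rul(times, trend_values, threshold):
    """Extrapolate the windowed trend to the alarm threshold.

    Returns the estimated remaining useful life relative to the last
    sample, or None if no increasing degradation trend is detected.
    """
    a, b = fit_line(times, trend_values)
    if a <= 0:
        return None  # trend not moving toward the threshold
    t_fail = (threshold - b) / a
    return max(t_fail - times[-1], 0.0)

# Example: a degradation trend sampled over a moving window of 5 steps.
window_t = [0.0, 1.0, 2.0, 3.0, 4.0]
window_v = [0.10, 0.22, 0.29, 0.41, 0.50]  # illustrative ARR trend values
rul = estimate_rul(window_t, window_v, threshold=1.0)
print(round(rul, 2))
```

Repeating this fit each time the buffer advances yields the sequence of RUL estimates described above.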
This thesis presents Niklas Luhmann's systems theory as a theoretical foundation for research on information security management systems (ISMS). Drawing on several use cases, it demonstrates the advantages of Luhmann's theory over approaches that use no theory or other theories.
Introduction of a Reasoning Approach to Enhance Virtual Agent Perception Capabilities
(2024)
This Master's thesis presents an extension of the FIVIS bicycle riding simulator [HHK+08, See22] with a concept for temporal reasoning. Temporal reasoning enables non-player characters (NPCs) to derive new knowledge from previously acquired information and to incorporate it into their decision-making process. The goal is to improve the believability of NPC behavior by allowing them to make faulty decisions based on ambiguous or misleading information. The existing simulator was extended with a rule-based reasoning system that enables motorized NPCs to recognize the intentions of pedestrians early and to integrate them into their decision-making. For evaluation, three realistic traffic situations were implemented within the FIVIS bicycle riding simulator. It was shown that rare simulation scenarios, such as traffic accidents between virtual pedestrians and drivers, can be simulated in a more plausible and non-deterministic way. These scenarios occur only under certain conditions, and only when the decision-making process has been influenced by incomplete, error-prone, or ambiguous input data, so that appropriate decisions were hard to make. This shows that the realized concept of temporal reasoning is able to model human error and ambiguity in the decision-making process of NPCs.
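A minimal sketch of the rule-based temporal reasoning idea: an NPC derives an intent fact from time-stamped observations and feeds it into its decision. All observation and rule names are hypothetical; the thesis's system is far richer.

```python
# Toy rule-based temporal reasoner for an NPC driver. Ambiguous input
# (a hesitating pedestrian) can tip the inferred intent either way,
# which is how faulty but plausible decisions arise.

from collections import deque

class TemporalReasoner:
    def __init__(self, window=3):
        self.history = deque(maxlen=window)  # recent observations only

    def observe(self, fact):
        self.history.append(fact)

    def infer_intent(self):
        """Rule: pedestrian seen moving toward the road in a majority
        of recent observations -> infer intent to cross."""
        toward = sum(1 for f in self.history if f == "moving_toward_road")
        if toward * 2 > len(self.history):
            return "will_cross"
        return "will_stay"

def npc_decision(intent):
    return "brake" if intent == "will_cross" else "keep_driving"

reasoner = TemporalReasoner(window=3)
for obs in ["moving_toward_road", "hesitating", "moving_toward_road"]:
    reasoner.observe(obs)
print(npc_decision(reasoner.infer_intent()))  # -> brake
```

With a different observation sequence (say, two "hesitating" facts), the same rule yields "keep_driving", which may turn out to be the faulty decision.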
Transport Layer Security (TLS) is a widely used protocol for secure channel establishment. However, TLS lacks any inherent mechanism for validating the security state of the endpoint software and its platform. To overcome this limitation, recent works have combined remote attestation (RA) and TLS, named attested TLS. The most popular attested TLS protocol for confidential computing is Intel's RA-TLS, which is used in multiple open-source industrial projects. However, there has been no formal reasoning about the security of attested TLS for confidential computing in general, or RA-TLS in particular. Using the state-of-the-art symbolic security analysis tool ProVerif, we found vulnerabilities in RA-TLS at both the RA and TLS layers, which have been acknowledged by Intel. We also propose mitigations for these vulnerabilities. During formalization, we found that despite several formal verification efforts to ensure the security of TLS, validation of the corresponding formal models has been largely overlooked. This work demonstrates that a simple validation framework can discover crucial issues in the state-of-the-art formalization of the TLS 1.3 key schedule. These issues have been acknowledged and fixed by the authors. Finally, we provide recommendations for protocol designers and the formal verification community based on the lessons learned during formal verification and validation.
The workshop XAI for U aims to address the critical need for transparency in Artificial Intelligence (AI) systems that integrate into our daily lives through mobile systems, wearables, and smart environments. Despite advances in AI, many of these systems remain opaque, making it difficult for users, developers, and stakeholders to verify their reliability and correctness. This workshop addresses the pressing need for Explainable AI (XAI) tools within Ubiquitous and Wearable Computing and highlights the unique challenges that come with it, such as XAI that deals with time-series and multimodal data, XAI that explains interconnected machine learning (ML) components, and XAI that provides user-centered explanations. The workshop aims to foster collaboration among researchers in related domains, share recent advancements, address open challenges, and propose future research directions to improve the applicability and development of XAI in Ubiquitous, Pervasive and Wearable Computing, and thereby seeks to enhance user trust, understanding, interaction, and adoption, ensuring that AI-driven solutions are not only more explainable but also better aligned with ethical standards and user expectations.
Time for an Explanation: A Mini-Review of Explainable Physio-Behavioural Time-Series Classification
(2024)
Time-series classification is seeing growing importance as device proliferation has led to the collection of an abundance of sensor data. Although black-box models, whose internal workings are difficult to understand, are a common choice for this task, their use in safety-critical domains has raised calls for greater transparency. In response, researchers have begun employing explainable artificial intelligence together with physio-behavioural signals in the context of real-world problems. Hence, this paper examines the current literature in this area and contributes principles for future research to overcome the limitations of the reviewed works.
This paper introduces a novel Wireshark dissector designed to facilitate the analysis of Service-Based Interface (SBI) communication in 5G Core Networks. Our approach involves parsing the OpenAPI schemes provided by the 5G specification to automatically generate the dissector code. Our tool enables the validation of 5G Core Network traces to ensure compliance with the specifications. Through testing against three open-source 5G Core Network projects, we identified several issues where messages deviate from specification standards, highlighting the significance of our implementation in ensuring protocol conformity and network reliability.
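The core generation idea can be illustrated on a tiny, handwritten OpenAPI fragment: walk the schema and derive flat field descriptors that a dissector could register. The schema content and output format are assumptions for illustration; the actual tool processes the full 3GPP OpenAPI files and emits Wireshark dissector code.

```python
# Illustrative sketch: flatten an OpenAPI object schema into field
# descriptors (path, type, required) that dissector generation could
# consume. The "PduSessionCreateData" fragment is a made-up example.

schema = {
    "PduSessionCreateData": {
        "type": "object",
        "properties": {
            "supi": {"type": "string"},
            "pduSessionId": {"type": "integer"},
            "dnn": {"type": "string"},
        },
        "required": ["supi", "pduSessionId"],
    }
}

def generate_fields(schemas):
    """Flatten each object schema into (field_path, type, required) tuples."""
    fields = []
    for name, spec in schemas.items():
        required = set(spec.get("required", []))
        for prop, prop_spec in spec.get("properties", {}).items():
            fields.append((f"{name}.{prop}", prop_spec["type"], prop in required))
    return fields

for path, ftype, req in generate_fields(schema):
    print(path, ftype, "required" if req else "optional")
```

Checking captured SBI messages against the `required` flags and declared types is also the basis for the compliance validation mentioned above.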
The Information and Communication Technology (ICT) sector is a significant global industry, and addressing climate change is of critical importance. This paper aims to assess the resources utilized by the ICT sector, the associated negative environmental impacts, and potential mitigation measures. In order to understand these aspects, this study attempts to categorize the resources used by ICT, analyze the amount consumed and the resulting negative impacts, and determine what measures exist to mitigate them. An economic and empirical evaluation shows a negative trend in ICT’s resource consumption, mainly due to increased energy consumption and rising carbon emissions from devices such as smartphones and data centers. The investigated countermeasures focus on Green IT strategies that encompass energy efficiency, carbon awareness, and hardware efficiency principles as outlined by the Green Software Foundation. Special attention is given to reducing the environmental footprint of data center operations and smartphones. This paper concludes that Green IT strategies, although promising in theory, are often not implemented at an industry level.
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size up to 191×191 to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
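The sub-quadratic cost of such huge kernels comes from evaluating the convolution in the frequency domain, the standard mechanism behind long-convolution operators like Hyena: an FFT-based convolution costs O(N log N) in the number of pixels instead of O(N·K) for direct evaluation. The toy below uses circular (wrap-around) convolution to keep the check exact; the paper's padding scheme and kernel parameterization differ.

```python
# FFT-based 2D circular convolution vs. a direct O(N*K) reference.
# By the convolution theorem both must agree.

import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2D convolution via the FFT (kernel zero-padded to image size)."""
    H, W = image.shape
    k = np.zeros((H, W))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))

def direct_circular_conv2d(image, kernel):
    """Naive circular convolution, used only to verify the FFT result."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            for i in range(kh):
                for j in range(kw):
                    out[y, x] += kernel[i, j] * image[(y - i) % H, (x - j) % W]
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((5, 5))
print(np.allclose(fft_conv2d(img, ker), direct_circular_conv2d(img, ker)))  # -> True
```

Since the FFT cost does not depend on the kernel size, growing the kernel to the full feature map (or beyond, as in HyenaPixel) leaves the asymptotic complexity unchanged.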
In recent years, eXtended Reality (XR) technologies such as Augmented Reality and Virtual Reality have become both technically feasible and affordable, which has led to a sharp rise in demand for professionally designed and developed applications. However, this demand, combined with a rapid pace of innovation, revealed a lack of design tool support for professional interaction designers as well as a knowledge gap regarding their approaches and needs. To address this gap, this thesis engages with the work of professional XR interaction designers in a qualitative study of XR interaction design approaches. It applies two complementary lenses stemming from the scientific design and social practice theory discourses to observe, describe, analyze, and understand professional XR interaction designers' challenges and approaches, with a focus on application prototyping.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach to grasp vertical objects, placement of objects by considering the empty space on a workstation, and the process of porting our code to ROS2.
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
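The neuronal dynamics underlying such an SNN can be illustrated with a single leaky integrate-and-fire (LIF) neuron. The parameter values below are illustrative; the thesis processes emulated event frames through a convolutional SNN rather than a single unit.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks,
# integrates input current, and emits a spike (with reset) at threshold.

def lif_run(inputs, tau=0.8, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)     # emit a spike ...
            v = 0.0              # ... and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Sustained input (e.g. a burst of events from a fast-moving obstacle)
# drives the neuron to spike; sparse input does not.
print(lif_run([0.6, 0.6, 0.6, 0.0, 0.2, 0.6, 0.6]))  # -> [0, 1, 0, 0, 0, 1, 0]
```

This input-rate dependence is consistent with the observation above that processing activity correlated with the magnitude of perceived motion captured in events.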
This thesis proposes a multi-label classification approach using the Multimodal Transformer (MulT) [80] to perform multi-modal emotion categorization on a dataset of oral histories archived at the Haus der Geschichte (HdG). Prior uni-modal emotion classification experiments conducted on the novel HdG dataset provided less than satisfactory results. They uncovered issues such as class imbalance, ambiguities in emotion perception between annotators, and a lack of representative training data for transfer learning [28]. Hence, the objectives of this thesis were to achieve better results by performing multi-modal fusion and resolving the problems arising from class imbalance and annotator-induced bias in emotion perception. A further objective was to assess the quality of the novel HdG dataset and benchmark the results using state-of-the-art (SOTA) techniques. Through a literature survey on the challenges, models, and datasets related to multi-modal emotion recognition, we created a methodology utilizing the MulT along with a multi-label classification approach. This approach produced a considerable improvement in overall emotion recognition, obtaining an average AUC of 0.74 and balanced accuracy of 0.70 on the HdG dataset, which is comparable to SOTA results on other datasets. In this manner, we were also able to benchmark the novel HdG dataset as well as introduce a novel multi-annotator learning approach to understand each annotator's relative strengths and weaknesses for emotion perception. Our evaluation results highlight the potential benefits of the novel multi-annotator learning approach in improving overall performance by resolving the problems arising from annotator-induced bias and variation in the perception of emotions. Complementing these results, we performed a further qualitative analysis of the HdG annotations with a psychologist to study the ambiguities found in the annotations.
We conclude that the ambiguities in annotations may have resulted from a combination of several socio-psychological factors and systemic issues associated with the process of creating these annotations. As these problems are also present in most multi-modal emotion recognition datasets, we conclude that the domain could benefit from a set of annotation guidelines to create standardized datasets.
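For reference, balanced accuracy, one of the metrics reported above, is the mean of per-class recalls, which makes it robust to exactly the class imbalance the thesis had to contend with. The toy labels below are made up for illustration.

```python
# Balanced accuracy computed by hand: average the recall of each class,
# so a majority class cannot dominate the score.

def balanced_accuracy(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(classes)

# Imbalanced toy data: 6 "neutral" samples, 2 "joy" samples.
y_true = ["neutral"] * 6 + ["joy"] * 2
y_pred = ["neutral"] * 6 + ["joy", "neutral"]
print(balanced_accuracy(y_true, y_pred))  # -> 0.75
```

Plain accuracy on the same toy data would be 7/8 = 0.875, masking that half of the minority class was missed; balanced accuracy exposes it.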
Object detection concerns the classification and localization of objects in an image. To cope with changes in the environment, such as when new classes are added or a new domain is encountered, the detector needs to update itself with the new information while retaining knowledge learned in the past. Previous works have shown that training the detector solely on new data would produce a severe "forgetting" effect, in which the performance on past tasks deteriorates through each new learning phase. However, in many cases, storing and accessing past data is not possible due to privacy concerns or storage constraints. This project aims to investigate promising continual learning strategies for object detection without storing and accessing past training images and labels. We show that by utilizing the pseudo-background trick to deal with missing labels, and knowledge distillation to deal with missing data, the forgetting effect can be significantly reduced in both class-incremental and domain-incremental scenarios. Furthermore, an integration of a small latent replay buffer can result in a positive backward transfer, indicating the enhancement of past knowledge when new knowledge is learned.
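The knowledge-distillation idea used against forgetting can be sketched compactly: the old detector's soft class scores supervise the new one, so past knowledge is retained without access to past images. The temperature value and logits below are illustrative, and real detection distillation also covers box regression, which is omitted here.

```python
# Sketch of a distillation loss: KL divergence between the teacher's
# (old model's) and student's (new model's) temperature-softened
# class distributions. A student that drifts from the teacher on old
# classes incurs a larger loss.

import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return kl_divergence(p, q)

teacher = [2.0, 0.5, -1.0]   # old model's scores on old classes
aligned = [2.1, 0.4, -0.9]   # student that still agrees with the teacher
drifted = [-1.0, 0.5, 2.0]   # student that has "forgotten" the old classes
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, drifted))  # -> True
```

Minimizing this term alongside the loss on new data is what keeps the forgetting effect small in the class- and domain-incremental scenarios above.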
The continuously increasing number of biomedical scholarly publications makes it challenging to construct document recommendation algorithms that can efficiently navigate through literature. Such algorithms would help researchers in finding similar, relevant, and related publications that align with their research interests. Natural Language Processing offers various alternatives to compare publications, ranging from entity recognition to document embeddings. In this paper, we present the results of a comparative analysis of vector-based approaches to assess document similarity in the RELISH corpus. We aim to determine the best approach that resembles relevance without the need for further training. Specifically, we employ five different techniques to generate vectors representing the text in the documents. These techniques employ a combination of various Natural Language Processing frameworks such as Word2Vec, Doc2Vec, dictionary-based Named Entity Recognition, and state-of-the-art models based on BERT. To evaluate the document similarity obtained by these approaches, we utilize different evaluation metrics that account for relevance judgment, relevance search, and re-ranking of the relevance search. Our results demonstrate that the most promising approach is an in-house version of document embeddings, starting with word embeddings and using centroids to aggregate them by document.
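The best-performing approach above aggregates word embeddings into a per-document centroid and compares documents by vector similarity. The toy below reproduces that aggregation with made-up 3-dimensional "embeddings"; real word vectors have hundreds of dimensions.

```python
# Centroid document embeddings: average the word vectors of a document's
# tokens, then compare documents with cosine similarity.

import math

word_vectors = {  # tiny illustrative embedding table
    "gene":    [0.9, 0.1, 0.0],
    "protein": [0.8, 0.2, 0.1],
    "network": [0.1, 0.9, 0.2],
    "graph":   [0.0, 0.8, 0.3],
}

def centroid(tokens):
    """Average the embeddings of the known tokens of a document."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

doc_bio = centroid(["gene", "protein"])
doc_cs  = centroid(["network", "graph"])
doc_mix = centroid(["gene", "network"])
# A mixed document sits closer to the biomedical one than a pure CS one does.
print(cosine(doc_bio, doc_mix) > cosine(doc_bio, doc_cs))  # -> True
```

Ranking all corpus documents by this cosine score against a query document is the relevance-search setting evaluated in the paper.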
Smart heating systems are one of the core components of smart homes. A large portion of domestic energy consumption derives from HVAC (heating, ventilation and air conditioning) systems, making them a relevant topic in efforts to support the energy transition in private housing. For that reason, the technology has attracted attention from both the academic and industry communities. User interfaces of smart heating systems have evolved from simple adjustment knobs to advanced data visualization interfaces that allow for more advanced settings, such as timetables and status information. With the advent of AI, we are interested in exploring how these interfaces will evolve to connect user needs with the underlying AI system. Hence, this paper aims to provide early design implications for an AI-based user interface for smart heating systems.
AI systems pose unknown challenges for designers, policymakers, and users, complicating the assessment of potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from legal assessments and explanations of AI hazards. To address this issue, we conducted three focus groups with 18 participants in total and discussed the European proposal for a legal framework for AI. Based on this, we aim to build a (conceptual) model that guides policymakers, designers, and researchers in understanding users' risk perception of AI systems. In this paper, we provide selected examples based on our preliminary results. Moreover, we argue for the benefits of such a perspective.