Departments, institutes and facilities
- Fachbereich Informatik (820)
- Institut für funktionale Gen-Analytik (IFGA) (488)
- Fachbereich Angewandte Naturwissenschaften (446)
- Fachbereich Ingenieurwissenschaften und Kommunikation (305)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (280)
- Institute of Visual Computing (IVC) (251)
- Institut für Cyber Security & Privacy (ICSP) (220)
- Fachbereich Wirtschaftswissenschaften (204)
- Institut für Verbraucherinformatik (IVI) (141)
- Graduierteninstitut (51)
Document Type
- Conference Object (1286)
- Article (1256)
- Part of a Book (191)
- Preprint (70)
- Doctoral Thesis (64)
- Book (monograph, edited volume) (55)
- Report (39)
- Master's Thesis (28)
- Research Data (23)
- Conference Proceedings (22)
Language
- English (3107)
Has Fulltext
- no (3107)
Keywords
- Robotics (14)
- FPGA (12)
- Virtual Reality (12)
- ENaC (10)
- Machine Learning (10)
- apoptosis (10)
- virtual reality (10)
- ICT (9)
- Privacy (9)
- sustainability (9)
Dynamic Programming
(2024)
Queueing Theory
(2024)
The Decision Tree Procedure
(2024)
Heuristic Methods
(2024)
Network Analysis Method
(2024)
The Peren-Clement Index
(2024)
Sequencing Problems
(2024)
Linear Optimization
(2024)
The Peren Theorem
(2024)
Pyrolysis–Gas Chromatography
(2024)
The methodology of analytical pyrolysis-GC/MS has been known for many years, but it is seldom used in research laboratories and in process control in the chemical industry. This is due to the relative difficulty of interpreting the identified pyrolysis products, as well as their sheer variety. This book contains full identifications of several classes of polymers/copolymers and biopolymers that can be very helpful to the user. In addition, the practical applications can encourage analytical chemists and engineers to use the techniques explored in this volume.
Social policy research on the ageing workforce from the perspective of employees and employers
(2024)
Process-induced changes in the thermo-mechanical viscoelastic properties and the corresponding morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blown film blends modified with four multifunctional chain-extending cross-linkers (CECL) were investigated. Introducing the CECL significantly modified the properties of the reference PBAT/PLA blend. Thermal analysis showed that the chemical reactions were incomplete after compounding and that film blowing extended them. SEM investigations of the fracture surfaces of blown extrusion films reveal the significant effect of the CECL on the morphology formed during processing. The anisotropic morphology introduced during film blowing proved to affect the degradation processes as well. Furthermore, the reactions of the CECL with PBAT/PLA induced by the processing depend on the deformation directions. The blow-up ratio was varied to investigate further process-induced changes, which proved to act synergistically with the mechanical and morphological features. In blown film extrusion, the elongational behavior is a very important characteristic, but its evaluation is often problematic; with the SER Universal Testing Platform, however, it was possible to determine changes in the duration of the time intervals corresponding to the rupture of elongated samples.
Traditional and newly developed testing methods were used for an extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure-sensitive adhesives (PSA). Large-amplitude oscillatory shear tests of the PSAs were correlated with the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude stretches and yields the PSA's microstructure, causing a consolidation of the network and a release with increasing strain amplitude. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed a clear dependence on resin content and rod geometry, and displayed the PSA's viscoelasticity, resulting in either high tack with long, stretched fibrils or non-adhesion with brittle behavior. Moreover, the diffusion of water/sweat during a TTS's application might influence its performance. Therefore, a dielectric-analysis-based evaluation method was developed that displays the water diffusion occurring into the PSA, from which the diffusion coefficient can be determined; it showed a clear dependence on material and resin content. All methods allow for advanced product-oriented material testing that can be utilized in further TTS development.
Is It Really You Who Forgot the Password? When Account Recovery Meets Risk-Based Authentication
(2024)
Force field (FF) based molecular modeling is an often used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of various properties’ errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
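The weighted-objective idea described in this abstract can be sketched in a few lines. The property surrogates, target values, and weights below are hypothetical stand-ins (a real setup would run a simulation per parameter set), not the study's actual force-field code:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch: balance two property errors (bulk density and relative
# conformational energies, RCE) with weighting factors in one loss function.
rho_target, rce_target = 0.703, 1.2   # illustrative targets

def predicted_density(p):
    return 0.7 * p[0] + 0.1 * p[1]    # toy surrogate for a simulation result

def predicted_rce(p):
    return 1.5 * p[1] - 0.2 * p[0]    # toy surrogate

def weighted_loss(p, w_rho=1.0, w_rce=1.0):
    e_rho = (predicted_density(p) - rho_target) ** 2
    e_rce = (predicted_rce(p) - rce_target) ** 2
    return w_rho * e_rho + w_rce * e_rce

# Emphasizing density ten-fold shifts the optimum accordingly.
res = minimize(weighted_loss, x0=[1.0, 1.0], args=(10.0, 1.0))
```

Changing the weight tuple re-balances which property error dominates the optimized parameters, which is the fine-tuning lever the abstract describes.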
In recent years, eXtended Reality (XR) technology such as Augmented Reality and Virtual Reality became both technically feasible and affordable, which led to a drastic demand for professionally designed and developed applications. However, this demand, combined with a rapid pace of innovation, revealed a lack of design tool support for professional interaction designers as well as a knowledge gap regarding their approaches and needs. To address this gap, this thesis engages with the work of professional XR interaction designers in qualitative research into XR interaction design approaches. To this end, the thesis applies two complementary lenses stemming from the scientific design and social practice theory discourses to observe, describe, analyze, and understand professional XR interaction designers' challenges and approaches, with a focus on application prototyping.
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size up to 191×191 to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
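The trick that keeps such near-map-sized kernels sub-quadratic is evaluating the convolution in the frequency domain. A minimal numpy sketch of a circular FFT convolution (illustrative shapes and a random kernel, not the HyenaPixel implementation):

```python
import numpy as np

def fft_conv2d(x, k):
    """Circular 2D convolution of feature map x with kernel k, computed in
    O(N log N) via FFT instead of O(N * K) direct sliding windows."""
    k_pad = np.zeros_like(x)
    k_pad[:k.shape[0], :k.shape[1]] = k   # zero-pad kernel to map size
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k_pad)))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
k = rng.standard_normal((63, 63))   # kernel almost as large as the feature map
y = fft_conv2d(x, k)
```

For a kernel that is a single shifted impulse, the result is a circular shift of the input, which is a quick sanity check of the frequency-domain product.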
Design and characterization of geopolymer foams reinforced with Miscanthus x giganteus fibers
(2024)
This paper presents the effects of different amounts of fibers and foaming agent, as well as different fiber sizes, on the mechanical and thermal properties of fly ash-based geopolymer foams reinforced with Miscanthus x giganteus fibers. The mechanical properties of the geopolymer foams were measured through compressive strength, and their thermal properties were characterized by thermal conductivity and X-ray micro-computed tomography. Furthermore, design of experiments (DoE) was used to optimize the thermal conductivity and compressive strength of the Miscanthus x giganteus reinforced geopolymer foams. In addition, the microstructure was studied using X-ray diffraction (XRD), field emission scanning electron microscopy (SEM), and Fourier-transform infrared spectroscopy (FTIR). Mixtures with a low thermal conductivity of 0.056 W (m K)−1 and a porosity of 79 vol% achieved a compressive strength of only 0.02 MPa. In comparison, mixtures with a thermal conductivity of 0.087 W (m K)−1 and a porosity of 58 vol% achieved a compressive strength of 0.45 MPa.
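A full-factorial DoE of the kind used here enumerates every combination of factor levels; one run per combination. The factor names and levels below are hypothetical illustrations, not the study's actual values:

```python
import itertools

# Illustrative full-factorial design over the mixture factors mentioned above.
factors = {
    "fiber_content_wt_pct": [0, 1, 2],
    "foaming_agent_wt_pct": [0.05, 0.10],
    "fiber_length_mm": [2, 6],
}

# Every combination of factor levels corresponds to one experimental run.
design = [dict(zip(factors, levels))
          for levels in itertools.product(*factors.values())]
```

With 3 × 2 × 2 levels this yields 12 runs, whose measured responses (thermal conductivity, compressive strength) would then be fitted for optimization.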
Introduction: Antimicrobial resistance (AMR) has emerged as one of the leading threats to public health. AMR poses a multidimensional challenge with social, economic, and environmental dimensions that encompass the food production system, influencing human and animal health. The One Health approach highlights the inextricable linkage and interdependence between the health of people, animals, agriculture, and the environment. Antibiotic use in any of these One Health areas can potentially impact the health of the other areas. There is a dearth of evidence on AMR from the natural environment, such as the plant-based agriculture sector. Antibiotics, antibiotic-resistant bacteria (ARB), and related AMR genes (ARGs) are assumed to be present in the natural environment and to disseminate resistance to fresh produce/vegetables, and thus to affect human health upon consumption. Therefore, this study aims to investigate the role of vegetables in the spread of AMR through an agroecosystem exploration from a One Health perspective in Ahmedabad, India.
Protocol: The present study will be executed in Ahmedabad, located in Gujarat state in the Western part of India, by adopting a mixed-method approach. First, a systematic review will be conducted to document the prevalence of ARB and ARGs on fresh produce in South Asia. Second, agriculture farmland surveys will be used to collect the general farming practices and the data on common vegetables consumed raw by the households in Ahmedabad. Third, vegetable and soil samples will be collected from the selected agriculture farms and analyzed for the presence or absence of ARB and ARGs using standard microbiological and molecular methods.
Discussion: The analysis will help to understand the spread of ARB/ARGs through the agroecosystem. It is anticipated to provide insight into the current state of ARB/ARG contamination of fresh produce/vegetables and will assist in identifying relevant strategies for effectively controlling and preventing the spread of AMR.
Although climate-induced liquidity risks can cause significant disruptions and instabilities in the financial sector, they are frequently overlooked in current debates and policy discussions. This paper proposes a macro-financial agent-based integrated assessment model to investigate the transmission channels of climate risks to financial instability and to study the emergence of liquidity crises through interbank market dynamics. Our simulations show that the financial system could experience serious funding and market liquidity shortages due to climate-induced liquidity crises. Our investigation contributes to the understanding of the impact of climate-induced liquidity crises, and of possible solutions to them, beyond the issue of asset stranding related to transition risks that is usually considered in existing studies.
A PM2.5 concentration prediction framework with vehicle tracking system: From cause to effect
(2023)
Representing 3D surfaces as level sets of continuous functions over R^3 is the common denominator of neural implicit representations, which recently enabled remarkable progress in geometric deep learning and computer vision tasks. To represent 3D motion within this framework, it is often assumed (either explicitly or implicitly) that the transformations a surface may undergo are homeomorphic: this is not necessarily true, for instance, in the case of fluid dynamics. To represent more general classes of deformations, we propose to apply this theoretical framework as a regularizer for the optimization of simple 4D implicit functions (such as signed distance fields). We show that our representation is capable of capturing both homeomorphic and topology-changing deformations, while also defining correspondences over the continuously reconstructed surfaces.
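The level-set idea behind such representations is compact enough to sketch directly. Below, the surface is the zero set of a signed distance field (SDF), and a time-dependent radius makes it a toy 4D (space + time) implicit function; the analytic sphere and its inflation are illustrative stand-ins for a learned function:

```python
import numpy as np

def sdf_sphere(p, t):
    """Signed distance to a sphere whose radius grows linearly in time:
    negative inside, zero on the surface, positive outside."""
    radius = 1.0 + 0.5 * t
    return np.linalg.norm(p, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # inside the surface
                [2.0, 0.0, 0.0],   # outside
                [1.0, 0.0, 0.0]])  # exactly on the t=0 surface
d0 = sdf_sphere(pts, t=0.0)
```

Evaluating the sign of the field at query points is all that is needed to decide inside/outside, which is what makes level sets convenient for reconstruction and correspondence.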
Spektroskopische Qualifizierung und Quantifizierung von Hyaluronsäure in Nahrungsergänzungsmitteln [Spectroscopic qualification and quantification of hyaluronic acid in dietary supplements]
(2023)
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. 
Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases - object and handle grasping, object stowing, pulling, and hand-over - and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
Loading of shipping containers for dairy products often includes a press-fit task, which involves manually stacking milk cartons in a container without using pallets or packaging. Automating this task with a mobile manipulator can reduce worker strain and also enhance the efficiency and safety of the container loading process. This paper proposes an approach called Adaptive Compliant Control with Integrated Failure Recovery (ACCIFR), which enables a mobile manipulator to reliably perform the press-fit task. We base the approach on a demonstration-learning-based compliant control framework, into which we integrate a monitoring and failure recovery mechanism for successful task execution. Concretely, we monitor the execution through distance and force feedback, detect collisions while the robot is performing the press-fit task, and use wrench measurements to classify the direction of collision; this information informs the subsequent recovery process. We evaluate the method on a miniature container setup, considering variations in the (i) starting position of the end effector, (ii) goal configuration, and (iii) object grasping position. The results demonstrate that the proposed approach outperforms the baseline demonstration-based learning framework regarding adaptability to environmental variations and the ability to recover from collision failures, making it a promising solution for practical press-fit applications.
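The idea of classifying a collision direction from wrench feedback can be sketched with a simple dominant-axis rule. The threshold and axis labels below are illustrative assumptions, not the paper's actual classifier:

```python
import numpy as np

CONTACT_THRESHOLD = 5.0  # N, hypothetical contact-force threshold

def classify_collision(wrench):
    """wrench = (fx, fy, fz, tx, ty, tz) from a force/torque sensor.
    Returns None when no contact is detected, else a signed axis label."""
    force = np.asarray(wrench[:3])
    if np.linalg.norm(force) < CONTACT_THRESHOLD:
        return None                        # free motion, no collision
    axis = int(np.argmax(np.abs(force)))   # dominant force component
    sign = "+" if force[axis] > 0 else "-"
    return sign + "xyz"[axis]

label = classify_collision((0.5, -12.0, 1.0, 0.0, 0.0, 0.0))
```

The returned label (e.g. "-y") is the kind of discrete direction estimate a recovery policy could branch on.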
In the design of robot skills, the focus generally lies on increasing the flexibility and reliability of the robot execution process; however, typical skill representations are not designed for analysing execution failures if they occur or for explicitly learning from failures. In this paper, we describe a learning-based hybrid representation for skill parameterisation called an execution model, which considers execution failures to be a natural part of the execution process. We then (i) demonstrate how execution contexts can be included in execution models, (ii) introduce a technique for generalising models between object categories by combining generalisation attempts performed by a robot with knowledge about object similarities represented in an ontology, and (iii) describe a procedure that uses an execution model for identifying a likely hypothesis of a parameterisation failure. The feasibility of the proposed methods is evaluated in multiple experiments performed with a physical robot in the context of handle grasping, object grasping, and object pulling. The experimental results suggest that execution models contribute towards avoiding execution failures, but also represent a first step towards more introspective robots that are able to analyse some of their execution failures in an explicit manner.
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, on extending these tests to various state-of-the-art object detectors, we illustrate that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation methods to model parameters and data labels varies class-wise, motivating sanity checks for each class. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, passing the sanity checks with few problems.
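The model-randomization sanity check can be illustrated on a toy linear "model", where the input-gradient saliency is just the weight vector: randomizing the weights must change the saliency map, and a method whose maps stay unchanged fails the check. Everything below is a stand-in for the detectors and saliency methods named above:

```python
import numpy as np

rng = np.random.default_rng(42)
w_trained = np.array([3.0, -1.0, 0.5, 2.0])

def saliency(w, x):
    # For f(x) = w . x the input gradient is w itself; |grad| as saliency.
    # (x is unused here precisely because the toy model is linear.)
    return np.abs(w)

x = rng.standard_normal(4)
w_random = rng.standard_normal(4)   # "randomized model" weights

s_trained = saliency(w_trained, x)
s_random = saliency(w_random, x)

# Sanity check: saliency must differ once the model is randomized.
passes_check = not np.allclose(s_trained, s_random)
```

A real evaluation would randomize layers of a trained detector and compare per-class saliency maps with a similarity measure, but the pass/fail logic is the same.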
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
Based on data from the WEF Travel & Tourism Report, this study deploys k-means cluster analysis to build a global typology of national destination governance. Previous studies have focused on case studies, while this chapter focuses on the classification of different destination types by deploying a set of relevant indicators: wastewater treatment, fixed broadband internet subscriptions, ground transport efficiency, quality of roads, quality of railroad infrastructure, reliability of police services, and ease of finding skilled employees. The results present a four-cluster solution of national destination governance types, as well as their major characteristics. The chapter then provides and discusses important implications for the theory and practice of destination governance.
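The clustering step can be sketched with a minimal Lloyd's k-means; the random matrix below is a toy stand-in for the destination-by-indicator data, with k=4 mirroring the four-cluster solution:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: alternate nearest-center assignment
    and centroid update until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array(
            [X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
             for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 7))   # 40 destinations x 7 indicators (toy data)
labels, centers = kmeans(X, k=4)
```

With standardized indicators, each cluster centroid summarizes one governance type, which is what the chapter interprets.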
AI systems pose unknown challenges for designers, policymakers, and users, which complicates the assessment of potential harms and outcomes. Although understanding risks is a requirement for building trust in technology, users are often excluded from legal assessments and explanations of AI hazards. To address this issue, we conducted three focus groups with 18 participants in total and discussed the European proposal for a legal framework for AI. Based on this, we aim to build a (conceptual) model that guides policymakers, designers, and researchers in understanding users’ risk perception of AI systems. In this paper, we provide selected examples based on our preliminary results. Moreover, we argue for the benefits of such a perspective.
Modern engineering relies heavily on utilizing computer technologies. This is especially true for thermoplastic manufacturing, such as blow molding. A crucial milestone for digitalization is the continuous integration of data in unified or interoperable systems. While new simulation technologies are constantly developed, data management standards such as STEP fail at integrating them. On the other hand, industrial standards such as "VMAP" manage to improve interoperability for Small and Medium-sized Enterprises. However, they do not provide Simulation Process and Data Management (SPDM) technologies. For SPDM integration of VMAP data, Ontology-Based Data Access is used to allow continuing the digital thread in custom semantic-based open-source solutions. An ontology of the database format (VMAP) was generated alongside an expandable knowledge graph of data access methods. A Python-based software architecture was developed, automatically using the semantic representations of database format and data access to query data and metadata within the VMAP file. The result is a software architecture template that can be adapted for other data standards and integrated into semantic data management systems. It allows semantic queries on simulation data down to element-wise resolution without integrating the whole model information. The architecture can instantiate a file in a knowledge graph, query a file's metadatum and, in case it is not yet available, find a semantically represented process that allows the creation and instantiation of the required metadatum. See Figure 1. The results of this thesis can be expected to form a basis for semantic SPDM tools.
When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to inquire why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. The project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users' needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogs that account for a VA's unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
Western consumption patterns are strongly associated with environmental pollution and climate change, which challenges us to transform our society and consumption towards a sustainable future. This thesis takes up this challenge and aims to contribute to this debate at the intersection of ICT artifacts and social practices through the examples of food and mobility consumption. The social practice lens is employed as an alternative to the predominant persuasive or motivational lens of design in the respective consumption domains. Against this background, this thesis first presents three research papers that contribute to a broader understanding of dynamic practices and their transformation towards a sustainable stable state. The subsequent research builds on these papers' empirical results and focuses more intensely on the appropriation of materials and infrastructures utilizing Recommender Systems. With this approach, the thesis contributes to three fields - practice-based Computing, Recommender Systems, and Consumer Informatics.
Smart heating systems are one of the core components of smart homes. A large portion of domestic energy consumption derives from HVAC (heating, ventilation and air conditioning) systems, making them a relevant topic for efforts to support an energy transition in private housing. For that reason, the technology has attracted attention from both the academic and industry communities. User interfaces of smart heating systems have evolved from simple adjustment knobs to advanced data visualization interfaces that allow for more advanced settings, such as timetables, and status information. With the advent of AI, we are interested in exploring how these interfaces will evolve to connect user needs with the underlying AI system. Hence, this paper aims to provide early design implications for an AI-based user interface for smart heating systems.
Machine learning-based solutions are frequently adopted in applications that require big data in operations. The performance of a model deployed in operations is subject to degradation due to unanticipated changes in the flow of input data. Hence, monitoring data drift becomes essential to maintain the model's desired performance. Based on the conducted review of the literature on drift detection, statistical hypothesis testing enables investigating whether incoming data is drifting from the training data. Because Maximum Mean Discrepancy (MMD) and Kolmogorov-Smirnov (KS) have been shown to be reliable distance measures between multivariate distributions in the literature, both were selected from several existing techniques for experimentation. For the scope of this work, an image classification use case was studied using the Stream-51 dataset. Based on the results from different drift experiments, both MMD and KS showed high Area Under Curve values. However, KS was faster than MMD and produced fewer false positives. Furthermore, the results showed that using the pre-trained ResNet-18 for feature extraction maintained the high performance of the experimented drift detectors. The results also showed that the performance of the drift detectors highly depends on the sample sizes of the reference (training) data and the test data that flow into the pipeline's monitor. Finally, the results showed that if the test data is a mixture of drifting and non-drifting data, the performance of the drift detectors does not depend on how the drifting data are scattered among the non-drifting ones, but rather on their amount in the test set.
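The two distance measures discussed above can be sketched on 1D feature streams: the two-sample KS test (via `scipy.stats.ks_2samp`) and a biased RBF-kernel MMD estimate. In the described pipeline these would consume per-dimension ResNet-18 features; the synthetic reference and drifted samples below are illustrative only:

```python
import numpy as np
from scipy.stats import ks_2samp

def mmd_rbf(x, y, gamma=1.0):
    """Biased squared-MMD estimate with an RBF kernel for 1D samples."""
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 500)   # training-time feature values
drifted = rng.normal(1.5, 1.0, 500)     # mean-shifted incoming data

ks_stat, ks_p = ks_2samp(reference, drifted)   # small p-value => drift
mmd = mmd_rbf(reference, drifted)              # large value => drift
```

A monitor would flag drift when the KS p-value falls below a threshold or the MMD exceeds a calibrated bound estimated from the reference data.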
There has been a growing interest in taste research in the HCI and CSCW communities. However, the focus is more on stimulating the senses, while the socio-cultural aspects have received less attention. Yet individual taste perception is mediated through social interaction and collective negotiation and does not depend on physical stimulation alone. Therefore, we study the digital mediation of taste by drawing on ethnographic research on four online wine tastings and one self-organized event. We investigated the materials, associated meanings, competences, procedures, and engagements that shaped the performative character of tasting practices. We illustrate how the tastings are built around the taste-making process and how online contexts differ in providing a more diverse and distributed environment. We then explore the implications of our findings for the further mediation of taste as a social and democratized phenomenon through online interaction.
In memoriam Willy Lehnert
(2023)
Microorganisms not only contribute to the spoilage of food but can also cause illnesses through consumption. Consumer concerns and doubts about the shelf life of products and the resulting enormous amounts of food waste have led to a demand for a rapid, robust, and non-destructive method for the detection of microorganisms, especially in the food sector. Therefore, a rapid and simple sampling method for the Raman- and infrared (IR)-microspectroscopic study of microorganisms associated with spoilage processes was developed. For subsequent evaluation, pre-processing routines as well as chemometric models for the classification of spoilage microorganisms were developed. The microbiological samples are taken using a disinfectable sampling stamp and measured by microspectroscopy without the usual pre-treatments such as purification, separation, washing, and centrifugation. The resulting complex multivariate data sets were pre-processed, reduced by principal component analysis, and classified by discriminant analysis. Classification of independent unlabeled test data showed that microorganisms could be classified at genus, species, and strain levels with accuracies of 96.5 % (Raman) and 94.5 % (IR), respectively, despite large biological differences and the novel sampling strategy. As bacteria are exposed to constantly changing conditions and their adaptation mechanisms may make them inaccessible to conventional measurement methods, the methods and models developed were investigated for their suitability for microorganisms exposed to stress. Compared to normal growth conditions, spectral changes in lipids, polysaccharides, nucleic acids, and proteins were observed in microorganisms exposed to stress. Models were developed to discriminate microorganisms independently of the involvement of various stress factors and storage times.
Classification of the investigated bacteria yielded accuracies of 97.6 % (Raman) and 96.6 % (IR), respectively, and a robust and meaningful model was developed to discriminate different microorganisms at the genus, species, and strain levels. The obtained results are very promising and show that the methods and models developed for the discrimination of microorganisms as well as the investigation of stress factors on microorganisms by means of Raman- and IR-microspectroscopy have the potential to be used, for example, in the food sector for the rapid determination of surface contamination.
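The chemometric pipeline the abstract describes (pre-processing, principal component analysis, classification) can be sketched in miniature. All data below are synthetic "spectra" invented for illustration, and a nearest-centroid classifier in PCA space stands in for the discriminant analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_channels = 30, 200

def make_class(center):
    # synthetic spectra: one Gaussian band per "class", plus noise
    base = np.exp(-0.5 * ((np.arange(n_channels) - center) / 8.0) ** 2)
    return base + 0.05 * rng.standard_normal((n_per_class, n_channels))

X = np.vstack([make_class(60), make_class(120)])
y = np.array([0] * n_per_class + [1] * n_per_class)

# pre-processing: vector normalization and mean centering
X = X / np.linalg.norm(X, axis=1, keepdims=True)
Xc = X - X.mean(axis=0)

# dimensionality reduction: PCA via SVD, keeping the first 5 components
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# classification: nearest centroid in PCA score space
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])
dists = ((scores[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On real spectra, the classifier would be validated on held-out data, as the abstract's independent test sets do; the sketch only shows how the reduced PCA scores feed the classification step.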
Intelligent virtual agents provide a framework for simulating more life-like behavior and increasing plausibility in virtual training environments. They can improve the learning process if they portray believable behavior that can also be controlled to support the training objectives. In the context of this thesis, cognitive agents are considered a subset of intelligent virtual agents (IVA), with a focus on emulating cognitive processes to achieve believable behavior. The complexity of the employed algorithms, however, is often limited since multiple agents need to be simulated in real-time. Available solutions focus on a subset of the indicated aspects: plausibility, controllability, or real-time capability (scalability). Within this thesis project, an agent architecture for attentive cognitive agents is developed that considers all three aspects at once. The result is a lightweight cognitive agent architecture that is customizable to application-specific requirements. A generic trait-based personality model influences all cognitive processes, facilitating the generation of consistent and individual behavior. An additional mapping process provides a formalized mechanism for transferring results of psychological studies to the architecture. Personality profiles are combined with an emotion model to achieve situational behavior adaptation. Which action an agent selects in a situation also influences plausibility. An integral element of this selection process is an agent's knowledge about its world. Therefore, synthetic perception is modeled and integrated into the architecture to provide a credible knowledge base. The developed perception module includes a unified sensor interface, a memory hierarchy, and an attention process. With the presented realization of the architecture (CAARVE), it is possible for the first time to simulate cognitive agents whose behavior is simultaneously controllable and computable in real-time.
The architecture's applicability is demonstrated by integrating an agent-based traffic simulation built with CAARVE into a bicycle simulator for road-safety education. The developed ideas and their realization are evaluated within this work using different strategies and scenarios. For example, it is shown how CAARVE agents utilize personality profiles and emotions to plausibly resolve deadlocks in traffic simulations. Controllability and adaptability are demonstrated in additional scenarios. Using the realization, 200 agents can be simulated in real-time (50 FPS), illustrating scalability. The achieved results verify that the developed architecture can generate plausible and controllable agent behavior in real-time. The presented concepts and realizations provide a sound foundation for anyone interested in simulating IVA in real-time environments.
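The interplay of personality traits and emotions in action selection, e.g. for resolving traffic deadlocks, can be illustrated with a deliberately tiny sketch. The trait names, utility values, and the frustration emotion below are all invented for illustration and do not reflect CAARVE's actual models.

```python
# Hypothetical trait profile on assumed [0, 1] scales
PROFILE = {"caution": 0.8, "assertiveness": 0.3}

ACTIONS = {
    # action -> (base utility, trait that amplifies it)
    "yield_to_cyclist": (0.5, "caution"),
    "proceed": (0.6, "assertiveness"),
}

def select_action(profile, frustration):
    # A frustration emotion in [0, 1] nudges the agent toward proceeding,
    # so mutually waiting agents eventually break a deadlock.
    scored = {}
    for action, (base, trait) in ACTIONS.items():
        utility = base * (0.5 + profile[trait])
        if action == "proceed":
            utility += 0.4 * frustration
        scored[action] = utility
    return max(scored, key=scored.get)

print(select_action(PROFILE, frustration=0.0))  # cautious profile yields
print(select_action(PROFILE, frustration=0.9))  # rising emotion tips the choice
```

The point of the sketch is only the mechanism: the same situation yields different actions as the emotion value shifts, while the stable personality profile keeps the agent's baseline behavior consistent and individual.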
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the thesis proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.