Departments, institutes and facilities
- Fachbereich Informatik (56)
- Fachbereich Ingenieurwissenschaften und Kommunikation (53)
- Fachbereich Wirtschaftswissenschaften (47)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (38)
- Fachbereich Sozialpolitik und Soziale Sicherung (28)
- Fachbereich Angewandte Naturwissenschaften (21)
- Institut für Verbraucherinformatik (IVI) (15)
- Graduierteninstitut (12)
- Institut für funktionale Gen-Analytik (IFGA) (8)
- Institut für Cyber Security & Privacy (ICSP) (7)
Document Type
- Conference Object (62)
- Article (52)
- Part of a Book (37)
- Book (monograph, edited volume) (18)
- Doctoral Thesis (13)
- Preprint (11)
- Video (9)
- Research Data (6)
- Master's Thesis (6)
- Contribution to a Periodical (4)
Year of publication
- 2023 (229)
Has Fulltext
- no (229)
Keywords
- Normen (9)
- Normen-ABC (9)
- Normenkompetenz (9)
- Normenwissen (9)
- Virtual Reality (4)
- Lehrbuch (3)
- Quality diversity (3)
- Wirtschaftsmathematik (3)
- Compositional Pattern Producing Networks (2)
- Digital Sovereignty (2)
Zur Geste der Medien
(2023)
Zertifizierungsnormen
(2023)
This video from the "Normen-ABC" video series gives an overview of certification standards from A to Z. The certification of "products", "processes", "systems", and "persons" is explained. Using the example of FFP2 masks with correct CE marking, the video demonstrates how important compliance with standards is for health and life.
Foresight means developing visions for the future and responsibly helping to shape them, in close exchange between applied science, society, and industry. This is an important concern for the Hochschule Bonn-Rhein-Sieg. In teaching, research, and transfer, the H-BRS has broken new ground and set accents, for example in the fields of sustainability, the energy transition, and cybersecurity. The annual report 2022/23 provides an overview of the most important topics in research, teaching, studies, and cooperation.
Verbraucherpolitik
(2023)
Introduction: Antimicrobial resistance (AMR) has emerged as one of the leading threats to public health. AMR poses a multidimensional challenge with social, economic, and environmental dimensions that encompass the food production system, influencing human and animal health. The One Health approach highlights the inextricable linkage and interdependence between the health of people, animals, agriculture, and the environment. Antibiotic use in any of these One Health areas can potentially impact the health of the other areas. There is a dearth of evidence on AMR from the natural environment, such as the plant-based agriculture sector. Antibiotics, antibiotic-resistant bacteria (ARB), and related AMR genes (ARGs) are assumed to be present in the natural environment and to disseminate resistance to fresh produce/vegetables, and thus to human health upon consumption. Therefore, this study aims to investigate the role of vegetables in the spread of AMR through an agroecosystem exploration from a One Health perspective in Ahmedabad, India.
Protocol: The present study will be executed in Ahmedabad, located in Gujarat state in the Western part of India, by adopting a mixed-method approach. First, a systematic review will be conducted to document the prevalence of ARB and ARGs on fresh produce in South Asia. Second, agriculture farmland surveys will be used to collect the general farming practices and the data on common vegetables consumed raw by the households in Ahmedabad. Third, vegetable and soil samples will be collected from the selected agriculture farms and analyzed for the presence or absence of ARB and ARGs using standard microbiological and molecular methods.
Discussion: The analysis will help to understand the spread of ARB/ARGs through the agroecosystem. This is anticipated to provide an insight into the current state of ARB/ARGs contamination of fresh produce/vegetables and will assist in identifying the relevant strategies for effectively controlling and preventing the spread of AMR.
When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to inquire why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users' needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogs that account for a VA's unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
Risk-based authentication (RBA) is an adaptive approach to strengthening password authentication. It monitors a set of features relating to login behavior during password entry. If the observed feature values differ significantly from those of previous logins, RBA requests additional proof of identity. Government agencies and an executive order of the US president recommend RBA to protect online accounts against attacks with stolen passwords. Despite this, RBA has suffered from a lack of open knowledge: there has been little to no research on its usability, security, and privacy, yet understanding these aspects is important for broad adoption.
This thesis aims to provide a comprehensive understanding of RBA through a series of studies. The results make it possible to create privacy-friendly RBA solutions that strengthen authentication while maintaining high user acceptance.
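The core mechanism of RBA described above, monitoring login features and requesting additional proof of identity when they deviate from past logins, can be illustrated with a minimal sketch. The feature names, the scoring rule, and the threshold below are hypothetical placeholders, not the thesis's implementation:

```python
# Minimal risk-based authentication sketch (illustrative only): the risk
# score is the fraction of monitored login features whose current value
# never occurred in the user's login history.
LOGIN_HISTORY = {
    "alice": [
        {"country": "DE", "browser": "Firefox", "device": "laptop"},
        {"country": "DE", "browser": "Firefox", "device": "phone"},
    ]
}

def risk_score(user, login):
    history = LOGIN_HISTORY.get(user, [])
    if not history:
        return 1.0  # unknown user: maximum risk
    unseen = sum(
        1 for feature, value in login.items()
        if all(past.get(feature) != value for past in history)
    )
    return unseen / len(login)

def requires_extra_proof(user, login, threshold=0.34):
    # Request an additional identity proof (e.g. an emailed code) when the
    # observed features deviate too much from previous logins.
    return risk_score(user, login) >= threshold
```

Real deployments weight features by how discriminative they are (e.g. IP address vs. browser version) rather than counting them equally; the uniform fraction here just keeps the sketch short.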
This thesis deals with corporate podcasts. Its goal is to obtain current insights into the state of development in the conception and production of corporate podcasts. The focus is on the perspective of the communicators, in the form of podcast agencies. It examines whether trends can be identified, whether different podcast agencies have accumulated experience-based knowledge, and whether overlaps can be observed. To answer these questions, the study conducts a qualitative survey in the form of expert interviews.
During their study on the situation of refugees in municipal accommodation, the war in Ukraine began: in this interview, Prof. Dr. Rosenow-Williams, Dr. Alina Bergedick, and Dr. Katharina Behmer-Prinz report on new challenges and opportunities and give insights into the practice of municipal refugee work.
Twitchen als Kulturtechnik
(2023)
TSEM: Temporally-Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
(2023)
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
Work-related thoughts in off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of research on overcommitment, a component within the effort-reward imbalance model, and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we apply exploratory factor analysis to self-report survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we apply confirmatory factor analysis to self-report survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to quantify the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study assists researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on effort-reward imbalance and work-related rumination.
There & Back again: Developing a tool for testing of antimicrobial surfaces for space habitat design
(2023)
Smart heating systems are one of the core components of smart homes. A large portion of domestic energy consumption derives from HVAC (heating, ventilation, and air conditioning) systems, making them relevant to efforts to support an energy transition in private housing. For that reason, the technology has attracted attention from both the academic and industry communities. User interfaces of smart heating systems have evolved from simple adjusting knobs to advanced data visualization interfaces that allow for more advanced settings such as timetables and status information. With the advent of AI, we are interested in exploring how these interfaces will evolve to connect user needs with the underlying AI system. Hence, this paper aims to provide early design implications for an AI-based user interface for smart heating systems.
The art of nudging
(2023)
Do simple and subtle changes in the living and study environment improve the eating behaviour of students in an educational setting? This dissertation provides a not-so-simple answer to this simple question based on the outcomes of four studies that explore the effects and design of artwork nudges (specifically the artwork of Alberto Giacometti) on the eating behaviour of students by applying different research designs. Study 1 explores the effects of a Giacometti-like nudge (a more contemporary version of the original nudge) regarding the dietary behaviour of high school students in a controlled setting. Study 2 applies different artwork nudges within a virtual vignette setting to measure their effects on virtual meal choices made. Also, the degree to which individuals were aware of the nudge’s presence is included as an influential factor in nudge effectiveness. Study 3 assesses the susceptibility to nudges as measured with a questionnaire. Susceptibility to nudges is defined as nudgeability. Study 4 assesses the effects of the original Giacometti nudge in a real-world university cafeteria setting. Specifically, the immediate and sustained effects of the original Giacometti nudge on students’ meal purchases in the university cafeteria are considered. In addition, the role of awareness of the nudge’s presence as well as the acceptance of this specific nudge are discussed. The conclusion is drawn that the original Giacometti nudge should only be applied in an educational setting to improve healthy eating behaviour if the intended target groups and environment meet certain conditions. Artwork nudges in general should be applied only after rigorous testing of various types of different nudges and more research reflecting healthy eating in its entirety.
Text und Bild/Illustrationen
(2023)
Technik(en) des Designs
(2023)
Statistik im Fokus
(2023)
Spektroskopische Qualifizierung und Quantifizierung von Hyaluronsäure in Nahrungsergänzungsmitteln
(2023)
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. 
Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases (object and handle grasping, object stowing, pulling, and hand-over) and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
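As a rough illustration of the hybrid representation described above, an execution model can pair qualitative success preconditions with a function mapping skill parameters to predicted success, and a failure can be diagnosed by searching for precondition violations. All names, rules, and numbers below are hypothetical toy stand-ins, not the dissertation's actual models:

```python
# Toy "execution model" for a parameterised grasping skill (illustrative):
# qualitative success preconditions plus a parameters -> success predictor.

def preconditions(params):
    """Qualitative success preconditions; returns the violated ones."""
    violated = []
    if not params["object_visible"]:
        violated.append("object_visible")
    if params["grasp_height"] < 0.0:
        violated.append("grasp_height_nonnegative")
    return violated

def predicted_success(params):
    """Toy stand-in for a learned parameters -> success-probability map."""
    if preconditions(params):
        return 0.0
    # assumed: success degrades with distance from a nominal height of 0.1 m
    return max(0.0, 1.0 - 5.0 * abs(params["grasp_height"] - 0.1))

def diagnose(failed_params):
    """After a failure, hypothesise its cause from precondition violations."""
    violated = preconditions(failed_params)
    return violated if violated else ["unexplained_by_qualitative_model"]
```

An experience-correction step would then search for nearby parameters whose predicted success is high and whose preconditions hold.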
Modern engineering relies heavily on computer technologies. This is especially true for thermoplastic manufacturing, such as blow molding. A crucial milestone for digitalization is the continuous integration of data in unified or interoperable systems. While new simulation technologies are constantly developed, data management standards such as STEP fail at integrating them. On the other hand, industrial standards such as "VMAP" manage to improve interoperability for small and medium-sized enterprises; however, they do not provide Simulation Process and Data Management (SPDM) technologies. For SPDM integration of VMAP data, Ontology-Based Data Access is used to allow continuing the digital thread in custom semantic-based open-source solutions. An ontology of the database format (VMAP) was generated alongside an expandable knowledge graph of data access methods. A Python-based software architecture was developed that automatically uses the semantic representations of the database format and data access to query data and metadata within the VMAP file. The result is a software architecture template that can be adapted to other data standards and integrated into semantic data management systems. It allows semantic queries on simulation data down to element-wise resolution without integrating the whole model information. The architecture can instantiate a file in a knowledge graph, query a file's metadatum and, in case it is not yet available, find a semantically represented process that allows the creation and instantiation of the required metadatum (see Figure 1). The results of this thesis can be expected to form a basis for semantic SPDM tools.
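The access pattern described above (query a file's metadatum and, if it is missing, look up a registered process that can create and instantiate it) can be sketched with toy structures. The dictionaries and names below are hypothetical stand-ins for the VMAP ontology and the knowledge graph of data access methods:

```python
# Toy sketch of the described access pattern (illustrative only).

FILE_METADATA = {"sim_001.vmap": {"solver": "toy-solver", "num_elements": 1200}}

# stand-in "knowledge graph": metadatum name -> process that can create it
CREATION_PROCESSES = {
    "max_stress": lambda fname: 42.0,  # placeholder post-processing step
}

def query_metadatum(fname, name):
    meta = FILE_METADATA[fname]
    if name in meta:
        return meta[name]
    if name in CREATION_PROCESSES:
        # create the missing metadatum and instantiate it in the store
        meta[name] = CREATION_PROCESSES[name](fname)
        return meta[name]
    raise KeyError(f"no process known to create metadatum '{name}'")
```

In the actual architecture the lookups run as semantic queries against RDF representations rather than Python dictionaries; the control flow is the point of the sketch.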
Saliency methods are frequently used to explain deep neural network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, on extending the tests to various state-of-the-art object detectors, we illustrate that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, motivating us to perform the sanity checks for each class. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, passing the sanity checks with few problems.
Western consumption patterns are strongly associated with environmental pollution and climate change, which challenges us to transform our society and consumption towards a sustainable future. This thesis takes up this challenge and aims to contribute to this debate at the intersection of ICT artifacts and social practices through the examples of food and mobility consumption. The social practice lens is employed as an alternative to the predominant persuasive or motivational lens of design in the respective consumption domains. Against this background, the thesis first presents three research papers that contribute to a broader understanding of dynamic practices and their transformation towards a sustainable stable state. The subsequent research builds on these sections' empirical results and focuses more intensely on the appropriation of materials and infrastructures by means of Recommender Systems. With this approach, the thesis contributes to three fields: practice-based Computing, Recommender Systems, and Consumer Informatics.
Transdermal therapeutic systems (TTS) represent an up-to-date form of medication applied to the human skin, consisting of a drug-containing pressure-sensitive adhesive (PSA) and a flexible backing layer. The development of a reliable TTS requires precise knowledge of the viscoelastic tack behavior of the PSA in terms of adhesion and detaching. Tailoring a PSA can be achieved by altering the resin content or modifying the chemical properties of the macromolecules. In this study, three different resin contents of two silicone-based PSAs (a non-amine-compatible one and a less tacky, amine-compatible one) were investigated with the help of the recently developed RheoTack method to characterize the retraction-speed-dependent tack behavior for various geometries of the testing rods. The obtained force-retraction displacement curves clearly depict the effect of the chemical structure as well as of the resin content. Decreasing the resin content shifts the start of fibril fracture to larger deformation states and significantly enhances the stretchability of the fibrils. To compare the various rod geometries precisely, the force-retraction displacement curves were normalized to account for effective contact areas. The flat and spherical rods led to completely different failure and tack behaviors. Furthermore, the adhesion formation between TTS with flexible backing layers and rods during the dwell phase happens in a different manner compared to rigid plates, in particular for flat rods, where maximum compression stresses occur at the edges and not uniformly over the cross-section. Thus, the approach of following ASTM D2949 has to be reconsidered for tests of these materials.
Microorganisms not only contribute to the spoilage of food but can also cause illnesses through consumption. Consumer concerns and doubts about the shelf life of products and the resulting enormous amounts of food waste have led to a demand for a rapid, robust, and non-destructive method for the detection of microorganisms, especially in the food sector. Therefore, a rapid and simple sampling method for the Raman- and infrared (IR)-microspectroscopic study of microorganisms associated with spoilage processes was developed. For the subsequent evaluation, pre-processing routines as well as chemometric models for the classification of spoilage microorganisms were developed. The microbiological samples are taken using a disinfectable sampling stamp and measured by microspectroscopy without the usual pre-treatments such as purification, separation, washing, and centrifugation. The resulting complex multivariate data sets were pre-processed, reduced by principal component analysis, and classified by discriminant analysis. Classification of independent unlabeled test data showed that microorganisms could be classified at genus, species, and strain levels with accuracies of 96.5 % (Raman) and 94.5 % (IR), respectively, despite large biological differences and the novel sampling strategy. As bacteria are exposed to constantly changing conditions and their adaptation mechanisms may make them inaccessible to conventional measurement methods, the developed methods and models were investigated for their suitability for microorganisms exposed to stress. Compared to normal growth conditions, spectral changes in lipids, polysaccharides, nucleic acids, and proteins were observed in microorganisms exposed to stress. Models were developed to discriminate microorganisms independently of the involvement of various stress factors and storage times.
Classification of the investigated bacteria yielded accuracies of 97.6 % (Raman) and 96.6 % (IR), respectively, and a robust and meaningful model was developed to discriminate different microorganisms at the genus, species, and strain levels. The obtained results are very promising and show that the methods and models developed for the discrimination of microorganisms as well as the investigation of stress factors on microorganisms by means of Raman- and IR-microspectroscopy have the potential to be used, for example, in the food sector for the rapid determination of surface contamination.
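The chemometric pipeline described above (dimensionality reduction by principal component analysis followed by a discriminant classification step) can be sketched on synthetic "spectra". The class names, data, and the simple centroid-based discriminant below are illustrative only, not the study's actual models:

```python
import numpy as np

# PCA + centroid-based discriminant classification on synthetic spectra.
rng = np.random.default_rng(0)

def make_spectra(center, n=20, dim=50):
    return center + 0.1 * rng.standard_normal((n, dim))

centers = {"Bacillus": np.sin(np.linspace(0, 3, 50)),
           "Pseudomonas": np.cos(np.linspace(0, 3, 50))}
X = np.vstack([make_spectra(c) for c in centers.values()])
y = np.repeat(list(centers), 20)

# PCA: project onto the three leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# discriminant step: classify by the nearest class centroid in PCA space
centroids = {label: scores[y == label].mean(axis=0) for label in centers}

def classify(spectrum):
    s = (spectrum - X.mean(axis=0)) @ Vt[:3].T
    return min(centroids, key=lambda l: np.linalg.norm(s - centroids[l]))

accuracy = np.mean([classify(x) == label for x, label in zip(X, y)])
```

The study uses a proper discriminant analysis on held-out test data rather than nearest centroids on the training set; the sketch only shows how PCA scores feed a class-separation step.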
Psychische Beeinträchtigungen nach Arbeitsunfällen – Probleme der Rechtsanwendung und Begutachtung
(2023)
Project Management
(2023)
Companies are increasingly developing into dynamic and project-oriented organizations. Globalization, innovation, and organizational dynamics require more and more projects, and thus a more project-oriented corporate organization and management. As a rule, managers as well as employees already work in projects in parallel to their line function, or entirely from project to project.
At the same time, cross-company and especially international value chains lead to cooperation in cross-departmental and intercultural teams. For this, specialists and executives above all need knowledge and experience in project management and the corresponding concepts, as well as in this special form of cooperation, team development, and communication. Most problems in project management are caused not by project goals and methods but by the many different problem-solving behaviors and attitudes, e.g. between engineers and business people, between different departments, or between different national cultures. The international IT project specialist Tom DeMarco puts it in a nutshell (in Peopleware: Productive Projects and Teams): the major problems of our work are not so much technological as sociological in nature. In terms of content, in contrast to traditional professional textbooks, priority is therefore given not only to the technologies but also to the social and intercultural aspects of project work.
The book is aimed equally at students of all disciplines with a focus on managerial and project-related work, and at practitioners and entrepreneurs in all private business sectors as well as in NGOs, public projects, and public-private partnerships (PPPs).
The programming training plan for everyone who wants to get ahead.
In this exercise book you train your programming skills with entertaining, practice-oriented tasks. Each chapter starts with a short warm-up on the programming concept covered; you then practise applying it in numerous workout exercises. You start with simple tasks and work your way up to more complex problems. To keep things interesting, there are more than 150 practical exercises. For example, you will learn to program a BMI calculator or a PIN generator, or how to display a time of day on an analogue clock. (Publisher's description)
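One of the exercises mentioned, the BMI calculator, might look like this in Python. This is a minimal sketch of the exercise; the book's own solution and category boundaries may differ:

```python
# BMI calculator exercise (minimal sketch).
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # commonly used WHO-style cut-offs (assumed here)
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"
```

For example, `bmi(70, 1.75)` is about 22.9, which falls into the "normal weight" band.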
Politische Ökonomie
(2023)
This book examines online grocery retailing in Germany from the provider and customer perspectives, derives forecasts for the future, and shows the consequences for retailers and manufacturers. Despite the tailwind during the corona pandemic, revenues in online grocery retail remain at a relatively low level; the development, however, is turbulent and controversially discussed. This book describes the status quo and stimulates discussion. It offers a systematic analysis of relevant studies as well as current findings based on qualitative interviews with experts from retail, industry, and academia. (Publisher's description)
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
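The multi-encoding archive idea, where candidates produced by different encodings compete for the same cells on the basis of phenotypic features, can be sketched as a minimal MAP-Elites-style loop. The encodings, features, and fitness below are toy stand-ins for the paper's voxel-mesh setting:

```python
# Minimal multi-encoding MAP-Elites-style sketch (illustrative only).
import random

random.seed(1)

def direct_encoding():
    return [random.random() for _ in range(8)]

def parametric_encoding():
    a = random.random()
    return [a * i / 7 for i in range(8)]

ENCODINGS = [direct_encoding, parametric_encoding]

def phenotype_features(genome):
    # 2-D feature descriptor: mean value and value range (toy stand-ins)
    return (round(sum(genome) / len(genome), 1),
            round(max(genome) - min(genome), 1))

def fitness(genome):
    return sum(genome)

archive = {}  # feature cell -> (fitness, encoding name, genome)
for _ in range(500):
    enc = random.choice(ENCODINGS)
    genome = enc()
    cell = phenotype_features(genome)
    best = archive.get(cell)
    if best is None or fitness(genome) > best[0]:
        archive[cell] = (fitness(genome), enc.__name__, genome)

coverage = len(archive)
encodings_used = {entry[1] for entry in archive.values()}
```

Because elites are kept per feature cell rather than globally, each encoding can survive in the niches where its phenotypes are competitive, which is the mechanism behind the species-of-encodings competition described above.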
Nur die halbe Wahrheit?
(2023)
Normen-ABC als Übersicht
(2023)
This video serves as motivation to engage with standardization topics. Using the International Classification for Standards (ICS), it explains why competence in standards is important for all students of all degree programmes and for professionals of every discipline, from A to Z, and gives examples of their usefulness. Finally, the Normen-ABC is presented as an overview, together with the learning and teaching objectives of the individual videos.
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
This thesis proposes a multi-label classification approach using the Multimodal Transformer (MulT) [80] to perform multi-modal emotion categorization on a dataset of oral histories archived at the Haus der Geschichte (HdG). Prior uni-modal emotion classification experiments conducted on the novel HdG dataset provided less than satisfactory results. They uncovered issues such as class imbalance, ambiguities in emotion perception between annotators, and lack of representative training data to perform transfer learning [28]. Hence, the objectives of this thesis were to achieve better results by performing a multi-modal fusion and resolving the problems arising from class imbalance and annotator-induced bias in emotion perception. A further objective was to assess the quality of the novel HdG dataset and benchmark the results using SOTA techniques. Through a literature survey on the challenges, models, and datasets related to multi-modal emotion recognition, we created a methodology utilizing the MulT along with a multi-label classification approach. This approach produced a considerable improvement in the overall emotion recognition by obtaining an average AUC of 0.74 and Balanced-accuracy of 0.70 on the HdG dataset, which is comparable to state-of-the-art (SOTA) results on other datasets. In this manner, we were also able to benchmark the novel HdG dataset as well as introduce a novel multi-annotator learning approach to understand each annotator’s relative strengths and weaknesses for emotion perception. Our evaluation results highlight the potential benefits of the novel multi-annotator learning approach in improving overall performance by resolving the problems arising from annotator-induced bias and variation in the perception of emotions. Complementing these results, we performed a further qualitative analysis of the HdG annotations with a psychologist to study the ambiguities found in the annotations. 
We conclude that the ambiguities in the annotations likely result from a combination of socio-psychological factors and systemic issues in the annotation process. As these problems are also present in most multi-modal emotion recognition datasets, the domain could benefit from a set of annotation guidelines for creating standardized datasets.
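The multi-label evaluation described above treats each emotion as an independent binary decision per clip and averages a per-label metric over the label set. A minimal sketch of that evaluation logic (the label names and predictions here are hypothetical; the thesis's actual pipeline uses the Multimodal Transformer on the HdG dataset):

```python
# Sketch of macro-averaged balanced accuracy for multi-label emotion
# classification. Labels and data below are hypothetical illustrations,
# not taken from the HdG dataset.

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for one binary label."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return 0.5 * (sens + spec)

# In a multi-label setting, one clip may carry several emotions at once,
# so each label gets its own binary ground truth and prediction vector.
labels = ["joy", "sadness", "anger"]          # hypothetical label set
y_true = {"joy": [1, 0, 1, 0], "sadness": [0, 1, 0, 0], "anger": [0, 0, 1, 1]}
y_pred = {"joy": [1, 0, 0, 0], "sadness": [0, 1, 0, 1], "anger": [0, 1, 1, 1]}

macro = sum(balanced_accuracy(y_true[l], y_pred[l]) for l in labels) / len(labels)
```

Macro averaging gives every emotion equal weight regardless of how rare it is, which is why balanced accuracy is a sensible choice for the class-imbalance problem the abstract describes.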
To this day, assessing performance development in cycling requires specific performance diagnostics using prescribed test protocols. At the same time, the greatly increased popularity of wearable devices now makes it very easy to record heart rate in everyday life and during sporting activities. However, a suitable model of heart rate that allows conclusions to be drawn about performance development has so far been lacking. Using heart rate recordings in combination with a phenomenologically interpretable model to draw such conclusions as directly as possible, without specific requirements on the training rides, offers the chance to substantially simplify insight into one's own performance development, both in professional cycling and in ambitious amateur practice. This thesis presents a novel, phenomenologically interpretable model for simulating and predicting heart rate in cycling and validates it in an empirical study. The model makes it possible to simulate heart rate (as well as other strain parameters derived from respiratory gas analysis) with adequate accuracy and to predict it for a given power profile. Furthermore, a method for reducing the number of calibratable free model parameters is presented and validated in two empirical studies. After an individualized parameter reduction, the model can be used with only a single free parameter. This remaining free parameter can then be tracked over time and compared with the course of performance development. Two separate studies indicate that this free model parameter appears fundamentally capable of reflecting the course of performance development over time.
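The core idea of simulating heart rate from power output can be illustrated with a generic first-order response model. This sketch is purely illustrative: the resting rate, gain, and time constant are assumed placeholder values, and the thesis's actual phenomenological model (with its individualized reduction to a single free parameter) is not reproduced here.

```python
# Generic first-order heart-rate response to power output (illustrative
# only; NOT the thesis model). Heart rate relaxes toward a steady-state
# value hr_rest + gain * power with time constant tau, via forward Euler.

def simulate_hr(power, hr_rest=60.0, gain=0.4, tau=40.0, dt=1.0):
    """Simulate heart rate (bpm) for a power series (W), one sample per dt seconds.
    hr_rest, gain, and tau are hypothetical parameters."""
    hr = hr_rest
    trace = []
    for p in power:
        target = hr_rest + gain * p        # assumed steady-state response
        hr += dt * (target - hr) / tau     # exponential relaxation step
        trace.append(hr)
    return trace

# A constant 200 W effort: heart rate rises toward 60 + 0.4 * 200 = 140 bpm.
trace = simulate_hr([200.0] * 600)
```

In such a model, a fitted parameter like `tau` is the kind of quantity one could track across months of training rides, which mirrors the abstract's idea of comparing one remaining free parameter with performance development over time.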
Medien – Aufklärung – Kritik
(2023)
The book series Medien – Aufklärung – Kritik aims to initiate theoretical reflection on the conditions of news enlightenment in democratic societies. News enlightenment is embedded in the communication-science debates on medialization, transnational communication, news selection/news value theory, and public sphere theory.
Lignin is an aromatic biopolymer found in plant cell walls. It is built mainly from three so-called monolignols (p-hydroxyphenyl (H), guaiacol (G), and syringol (S) units), which can be linked by various bond types, and it contains a large number of functional groups. Of particular interest for lignin utilization are its many phenolic hydroxyl groups, which can serve as a starting point for the synthesis of new products and are also responsible for its antioxidant properties. Since structure and properties depend on many factors, such as the biomass and the pulping process, a detailed characterization of the lignins is necessary to elucidate structure-property relationships and thus come a step closer to possible material use. This work investigates the influence of the biomass, including the particle size used, and of the organosolv pulping process on the monomer composition, molecular weight, and antioxidant activity of the isolated lignins.
The raw materials for lignin recovery are the three perennial, lignocellulose-rich low-input crops Miscanthus x giganteus, Silphium perfoliatum, and Paulownia tomentosa, which are currently used mainly for energy production. Under the European Union's bioeconomy strategy, however, future biorefineries are to focus on the holistic use of biomass, including material utilization. In addition to these three crops, organosolv lignins are also isolated from wheat straw and beech wood, two biomasses already well described in the literature, and two softwood kraft lignins serve as a reference. The results show that the type of biomass mainly influences the monomer composition: grasses contain all three monolignols, hardwoods consist mostly of S and G units, while softwoods are built from G units only. The wood lignins also have higher molecular weights and better antioxidant properties than the grass and herbaceous lignins. Finer milling of the biomass can influence the monomer composition: smaller particle sizes yield lignins with a higher content of H units, both for Miscanthus and for Paulownia. Moreover, for Paulownia the yield increases and the molecular weight rises when the smallest sieve fraction is used for the organosolv process. Autohydrolysis and the organosolv pulping process itself have a greater influence than the degree of milling. While the monomer composition hardly changes for a given biomass, the bond types between the monolignols do. With increasing process severity (time, temperature, ethanol concentration), ether bonds are cleaved, which increases the proportion of phenolic hydroxyl groups and thus the antioxidant activity. In addition to this depolymerization, recondensation reactions are also partially observed.
The results contribute to understanding how lignin source and recovery relate to the resulting lignin structure and antioxidant activity, and thus provide a basis for the shift from energetic use to sustainable material use of this renewable biopolymer. The choice of pulping parameters in particular allows structure and antioxidant activity to be influenced in a targeted way, which should be a focus of future studies.
Liebe Leserinnen und Leser!
(2023)
LiDAR-based Indoor Localization with Optimal Particle Filters using Surface Normal Constraints
(2023)
In this work, a compressible semi-Lagrangian lattice Boltzmann method is newly developed and tested. The lattice Boltzmann method is a technique for numerical flow simulation based on modeling particle densities and their mutual interaction. In its original form, however, the method is limited to weakly compressible flows at low Mach numbers. The main drawbacks of previous attempts to extend it to supersonic flows are either insufficient stability, impractically large velocity sets, or a restriction to small time steps. As an alternative to previous approaches, this work employs a semi-Lagrangian streaming step. Semi-Lagrangian schemes use interpolation to decouple the spatial, temporal, and velocity discretizations of the original lattice Boltzmann method. After the introduction, the second and third chapters cover the foundations and principles of the lattice Boltzmann method and review previous approaches to simulating compressible flows. The compressible semi-Lagrangian lattice Boltzmann method is then developed and described. The extension essentially combines the method with suitable equilibrium functions and velocity sets. In the fourth chapter, new cubature-based velocity sets are developed and tested, including a D3Q45 velocity set for computing compressible flows that considerably reduces the computational cost compared with conventional velocity discretizations. In the fifth chapter, simulations of one-dimensional shock tubes, two-dimensional Riemann problems, and shock-vortex interactions are performed for validation. Subsequent simulations of three-dimensional compressible Taylor-Green vortices and of wall-bounded test cases demonstrate the advantages of the method for compressible flow simulation. To this end, the supersonic flow around a two-dimensional NACA-0012 airfoil and around a three-dimensional sphere, as well as a supersonic channel flow, are investigated. The simulation part is followed by an extensive discussion of the semi-Lagrangian lattice Boltzmann method in comparison with other methods, highlighting its advantages, such as comparatively large time steps, body-fitted meshes, and stability.
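The key mechanism named in the abstract, a semi-Lagrangian streaming step, can be sketched in one dimension: instead of shifting each distribution by exactly one lattice cell per step, every node gathers its new value from the departure point x − c·Δt by interpolation, so the time step is no longer tied to the lattice spacing. This is an illustrative linear-interpolation sketch on a periodic grid, not the thesis's full compressible scheme with its equilibrium functions and cubature-based velocity sets.

```python
# Semi-Lagrangian streaming sketch for one discrete particle velocity c
# on a 1D periodic grid (illustrative; not the full compressible method).
import math

def semi_lagrangian_stream(f, c, dt, dx):
    """Advect one distribution f (values at grid nodes) by velocity c for
    a time step dt, gathering from departure points via linear interpolation.
    Because dt is decoupled from dx, c * dt / dx may be any real number."""
    n = len(f)
    shift = c * dt / dx                    # departure offset in cell units
    out = []
    for j in range(n):
        x = j - shift                      # departure point of node j
        j0 = math.floor(x)                 # left neighbor on the grid
        w = x - j0                         # linear interpolation weight
        out.append((1 - w) * f[j0 % n] + w * f[(j0 + 1) % n])
    return out
```

With c·Δt/Δx = 1 this reduces to the classical lattice Boltzmann shift by one cell; with fractional values the interpolation spreads each value over two neighbors while conserving the total, which is exactly the decoupling of time step and lattice the abstract refers to.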
Kreatives Schreiben
(2023)