H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (74)
- Fachbereich Wirtschaftswissenschaften (70)
- Fachbereich Ingenieurwissenschaften und Kommunikation (67)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (61)
- Fachbereich Angewandte Naturwissenschaften (44)
- Fachbereich Sozialpolitik und Soziale Sicherung (36)
- Institut für funktionale Gen-Analytik (IFGA) (24)
- Institut für Verbraucherinformatik (IVI) (20)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (20)
- Graduierteninstitut (12)
Document Type
- Article (119)
- Conference Object (74)
- Part of a Book (43)
- Book (monograph, edited volume) (18)
- Doctoral Thesis (13)
- Preprint (12)
- Video (9)
- Master's Thesis (8)
- Research Data (6)
- Report (5)
Year of publication
- 2023 (327)
Keywords
- Normen (9)
- Normen-ABC (9)
- Normenkompetenz (9)
- Normenwissen (9)
- Virtual Reality (4)
- Global horizontal irradiance (3)
- Lehrbuch (3)
- Machine Learning (3)
- Quality diversity (3)
- Wirtschaftsmathematik (3)
In this work, novel ionic agarose derivatives were first synthesized and then comprehensively characterized. Anionic agarose sulfates with regioselective derivatization at position G6 were obtained by homogeneous conversion in ionic liquid. Cationic agarose carbamates with an adjustable degree of functionalization were accessible via a two-step synthesis: agarose phenyl carbonates were first prepared homogeneously, followed by aminolysis to the desired functional agarose derivatives. The ionic agarose derivatives were fully water-soluble even at low degrees of functionalization. This made it possible to coat alginate microcapsules polyelectrolytically and to use them as carriers for controlled drug release. Composite gels of agarose, hydroxyapatite, and agarose derivatives were also prepared and characterized. In the second part, both the composite carrier materials and the alginate microcapsules were loaded with four model drugs (ATP, suramin, methylene blue, and A740003), and drug release was studied over a period of two weeks. For the ionic model drugs, composite carrier materials containing an ionic agarose derivative, the coated microcapsules, and the combination of composite and capsules proved effective in slowing the release to as little as 40%. For the poorly water-soluble substance A740003, a receptor ligand for the osteogenic differentiation of stem cells, a strongly delayed release from polyelectrolyte microcapsules was observed. Using fitting models known from the literature as well as newly developed ones, diffusion was identified as the main mechanism of drug release, the release curves were described mathematically with high accuracy, and conclusions were drawn about the individual phases of release.
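Release profiles of this kind are often fitted with semi-empirical diffusion models. As a generic illustration only (not the thesis's literature-known or newly developed models), a minimal Korsmeyer-Peppas power-law fit via log-log regression:

```python
import numpy as np

def fit_power_law(t, release):
    """Fit M_t/M_inf = k * t**n (Korsmeyer-Peppas) by log-log linear regression.

    Returns (k, n); for a thin slab, n <= 0.45 indicates Fickian diffusion
    as the dominant release mechanism.
    """
    t = np.asarray(t, dtype=float)
    f = np.asarray(release, dtype=float)
    mask = (t > 0) & (f > 0)  # log requires positive values
    slope, intercept = np.polyfit(np.log(t[mask]), np.log(f[mask]), 1)
    return np.exp(intercept), slope  # k, n

# Synthetic release data following f = 0.2 * t**0.5 (pure Fickian behavior)
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
f = 0.2 * t ** 0.5
k, n = fit_power_law(t, f)
```

On noisy experimental curves the same fit is usually restricted to the early portion of the release (below roughly 60% released), where the power law is valid.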
Elektronik für Entscheider
(2023)
This book enables non-engineers who deal with electronics professionally to venture a little way into this field in order to understand the tasks, language, and approach of engineers. The aim is not to be able to design an electronic circuit after reading this book; the focus is rather on a general understanding of the interrelationships and basic concepts of electronics. (Publisher's description)
(1) Background: the potency of drugs that interfere with glucose metabolism, i.e., glucose transporters (GLUT) and nicotinamide phosphoribosyltransferase (NAMPT), was analyzed in neuroendocrine tumor (NET; BON-1 and QPG-1 cells) and small cell lung cancer (SCLC; GLC-2 and GLC-36 cells) tumor cell lines. (2) Methods: the proliferation and survival rate of tumor cells was significantly affected by the GLUT inhibitors fasentin and WZB1127, as well as by the NAMPT inhibitors GMX1778 and STF-31. (3) Results: none of the NET cell lines treated with NAMPT inhibitors could be rescued with nicotinic acid (usage of the Preiss–Handler salvage pathway), although NAPRT expression was detected in two NET cell lines. We finally analyzed the specificity of GMX1778 and STF-31 in NET cells in glucose uptake experiments. As previously shown for STF-31 in a panel of NET-excluding tumor cell lines, both drugs specifically inhibited glucose uptake at higher (50 μM), but not at lower (5 μM), concentrations. (4) Conclusions: our data suggest that GLUT and especially NAMPT inhibitors are potential candidates for the treatment of NET tumors.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choices of collision model and stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
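The diagnostics named at the end (energy decay and dissipation rates) can be post-processed from velocity snapshots independently of the solver. A minimal sketch, with the dissipation rate approximated as the negative time derivative of the mean kinetic energy:

```python
import numpy as np

def kinetic_energy(u):
    """Mean kinetic energy 0.5 * <|u|^2> of a velocity snapshot u of shape (..., dim)."""
    return 0.5 * np.mean(np.sum(u * u, axis=-1))

def dissipation_rate(energies, dt):
    """Approximate eps(t) = -dE/dt from an energy time series by central differences."""
    return -np.gradient(np.asarray(energies, dtype=float), dt)

# Example: an exponentially decaying energy series E(t) = E0 * exp(-a*t),
# for which the exact dissipation rate is a * E(t).
dt, a, E0 = 0.1, 2.0, 1.0
times = np.arange(0.0, 1.0, dt)
E = E0 * np.exp(-a * times)
eps = dissipation_rate(E, dt)
```

In an actual LBM run, `kinetic_energy` would be evaluated on each stored velocity field and the resulting series fed to `dissipation_rate`.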
In this work, a compressible semi-Lagrangian lattice Boltzmann method is newly developed and tested. The lattice Boltzmann method is a numerical flow simulation technique based on modeling particle densities and their mutual interaction. In its original form, however, the method is limited to weakly compressible flows at low Mach numbers. The main drawbacks of previous attempts to extend it to supersonic flows are insufficient stability, impractically large velocity sets, or a restriction to small time step sizes. As an alternative to previous approaches, this work employs a semi-Lagrangian streaming step. Semi-Lagrangian methods use interpolation to decouple the spatial, temporal, and velocity discretizations of the original lattice Boltzmann method. After the introduction, the second and third chapters cover the fundamentals and principles of the lattice Boltzmann method and review previous approaches to simulating compressible flows. Subsequently, the compressible semi-Lagrangian lattice Boltzmann method is developed and described. The extension essentially consists in combining the method with suitable equilibrium functions and velocity sets. In the fourth chapter, new cubature-based velocity sets are developed and tested, including a D3Q45 velocity set for computing compressible flows that considerably reduces the computational effort compared with conventional velocity discretizations. In the fifth chapter, simulations of one-dimensional shock tubes, two-dimensional Riemann problems, and shock-vortex interactions are carried out for validation.
Subsequently, simulations of three-dimensional compressible Taylor-Green vortices and of wall-bounded test cases demonstrate the advantages of the method for compressible flow simulations. To this end, the supersonic flow around a two-dimensional NACA-0012 airfoil and around a three-dimensional sphere, as well as a supersonic channel flow, are investigated. The simulation part is followed by an extensive discussion of the semi-Lagrangian lattice Boltzmann method in comparison with other methods, highlighting its advantages, such as comparatively large time step sizes, body-fitted meshes, and stability.
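The interpolation-based streaming idea can be illustrated outside the LBM context with one-dimensional linear advection. This is a simplified sketch (not the thesis's compressible method): each grid node pulls its new value from an interpolated departure point, so the time step is not tied to the grid spacing by a CFL-type condition.

```python
import numpy as np

def semi_lagrangian_step(f, c, dt, dx):
    """One semi-Lagrangian step for df/dt + c * df/dx = 0 on a periodic 1D grid.

    The departure point x - c*dt may lie between nodes, so the value is
    obtained by linear interpolation; this decouples dt from dx.
    """
    n = f.size
    x = np.arange(n) * dx
    xd = (x - c * dt) % (n * dx)            # departure points, periodic wrap
    i0 = np.floor(xd / dx).astype(int) % n  # left neighbor of departure point
    i1 = (i0 + 1) % n                       # right neighbor
    w = (xd - i0 * dx) / dx                 # interpolation weight in [0, 1)
    return (1 - w) * f[i0] + w * f[i1]

# Advect a single peak by 2.5 cells in ONE step; an explicit Eulerian scheme
# would be restricted to less than one cell per step.
dx = 1.0
f = np.zeros(16)
f[4] = 1.0
g = semi_lagrangian_step(f, c=1.0, dt=2.5, dx=dx)
```

The peak lands halfway between nodes 6 and 7, so linear interpolation splits it evenly between them while conserving the total.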
Liebe Leserinnen und Leser!
(2023)
Statistik im Fokus
(2023)
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues on legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
Risk-based authentication (RBA) aims to protect users against attacks involving stolen passwords. RBA monitors features during login and requests re-authentication when feature values differ widely from those previously observed. It is recommended by various national security organizations, and users perceive it as more usable than, and equally secure as, equivalent two-factor authentication. Despite that, RBA is still used by very few online services. Reasons for this include a lack of validated open resources on RBA properties, implementation, and configuration, which effectively hinders progress in RBA research, development, and adoption.
To close this gap, we provide the first long-term RBA analysis on a real-world large-scale online service. We collected feature data of 3.3 million users and 31.3 million login attempts over more than 1 year. Based on the data, we provide (i) studies on RBA’s real-world characteristics plus its configurations and enhancements to balance usability, security, and privacy; (ii) a machine learning–based RBA parameter optimization method to support administrators finding an optimal configuration for their own use case scenario; (iii) an evaluation of the round-trip time feature’s potential to replace the IP address for enhanced user privacy; and (iv) a synthesized RBA dataset to reproduce this research and to foster future RBA research. Our results provide insights on selecting an optimized RBA configuration so that users profit from RBA after just a few logins. The open dataset enables researchers to study, test, and improve RBA for widespread deployment in the wild.
Risk-based authentication (RBA) is an adaptive approach to strengthening password authentication. It monitors a set of features relating to login behavior during password entry. If the observed feature values differ significantly from those of earlier logins, RBA requests additional proof of identity. Government agencies and an executive order of the US president recommend RBA to protect online accounts against attacks involving stolen passwords. Despite this, RBA has suffered from a lack of open knowledge: there has been little to no research on its usability, security, and privacy, even though understanding these aspects is important for broad adoption.
This thesis aims to provide a comprehensive understanding of RBA through a series of studies. The results make it possible to create privacy-preserving RBA solutions that strengthen authentication while achieving high user acceptance.
Work-related thoughts during off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of the research on overcommitment, a component within the effort–reward imbalance model, and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we apply exploratory factor analysis to self-reported survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we apply confirmatory factor analysis to self-reported survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to assess the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study is intended to assist researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on the effort–reward imbalance model and work-related rumination.
Work-related thoughts in off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of research on overcommitment, a component within the effort-reward imbalance model, and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we apply exploratory factor analysis to self-report survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we apply confirmatory factor analysis to self-report survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to quantify the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study assists researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on effort-reward imbalance and work-related rumination.
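Relative weight analysis, used in the third step, partitions the model R² among correlated predictors. A compact sketch of Johnson's orthogonal-approximation approach (illustrative only, not the study's code):

```python
import numpy as np

def relative_weights(X, y):
    """Johnson-style relative weight analysis (compact sketch).

    Returns one nonnegative importance weight per predictor; the weights
    sum to the model R^2, partitioning the explained variance among
    correlated predictors.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    Xs = (X - X.mean(0)) / (X.std(0) * np.sqrt(n))  # unit-norm z-scores
    ys = (y - y.mean()) / (y.std() * np.sqrt(n))
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    Z = U @ Vt                    # closest matrix with orthonormal columns
    Lam = Vt.T @ np.diag(S) @ Vt  # Xs = Z @ Lam
    beta = Z.T @ ys               # regression of y on the orthonormal Z
    return (Lam ** 2) @ (beta ** 2)

# Simulated data: predictor 0 dominates the criterion
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + 0.5 * rng.normal(size=500)
weights = relative_weights(X, y)  # weights sum to the model R^2
```

Because the weights sum to R², they are directly comparable across facets, which is what makes the method suitable for the kind of importance ranking reported above.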
Background
Consumers rely heavily on online user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, which is far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study, we survey the research examining how consumers detect fake reviews online.
Methods
We conducted a systematic literature review of the research on fake review detection from the consumer perspective, including academic literature presenting new empirical data. We provide a narrative synthesis comparing the theories, methods, and outcomes used across studies to identify how consumers detect fake reviews online.
Results
We found only 15 articles that met our inclusion criteria. We classify the most frequently identified cues into five categories: (1) review characteristics, (2) textual characteristics, (3) reviewer characteristics, (4) seller characteristics, and (5) characteristics of the platform where the review is displayed.
Discussion
We find that theory is applied inconsistently across studies and that cues to deception are often identified in isolation without any unifying theoretical framework. Consequently, we discuss how such a theoretical framework could be developed.
Protokoll 27
(2023)
This thesis proposes a multi-label classification approach using the Multimodal Transformer (MulT) [80] to perform multi-modal emotion categorization on a dataset of oral histories archived at the Haus der Geschichte (HdG). Prior uni-modal emotion classification experiments conducted on the novel HdG dataset provided less than satisfactory results. They uncovered issues such as class imbalance, ambiguities in emotion perception between annotators, and a lack of representative training data for transfer learning [28]. Hence, the objectives of this thesis were to achieve better results by performing multi-modal fusion and resolving the problems arising from class imbalance and annotator-induced bias in emotion perception. A further objective was to assess the quality of the novel HdG dataset and benchmark the results using SOTA techniques. Through a literature survey on the challenges, models, and datasets related to multi-modal emotion recognition, we created a methodology utilizing the MulT along with a multi-label classification approach. This approach produced a considerable improvement in overall emotion recognition, obtaining an average AUC of 0.74 and a balanced accuracy of 0.70 on the HdG dataset, which is comparable to state-of-the-art (SOTA) results on other datasets. In this manner, we were also able to benchmark the novel HdG dataset as well as introduce a novel multi-annotator learning approach to understand each annotator's relative strengths and weaknesses for emotion perception. Our evaluation results highlight the potential benefits of the novel multi-annotator learning approach in improving overall performance by resolving the problems arising from annotator-induced bias and variation in the perception of emotions. Complementing these results, we performed a further qualitative analysis of the HdG annotations with a psychologist to study the ambiguities found in the annotations.
We conclude that the ambiguities in annotations may have resulted from a combination of several socio-psychological factors and systemic issues associated with the process of creating these annotations. As these problems are also present in most multi-modal emotion recognition datasets, we conclude that the domain could benefit from a set of annotation guidelines to create standardized datasets.
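A multi-label formulation of the kind described treats each emotion as an independent binary decision, so several emotions can be active in the same segment. A minimal sketch (thresholds, label layout, and metric are illustrative, not the thesis implementation):

```python
import numpy as np

def multilabel_predict(logits, threshold=0.5):
    """Independent sigmoid per label: each emotion can be active simultaneously."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)

def balanced_accuracy(y_true, y_pred):
    """Mean of true-positive rate and true-negative rate for one binary label,
    which is robust to the class imbalance mentioned above."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tpr = np.mean(y_pred[y_true == 1] == 1)
    tnr = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * (tpr + tnr)

# Two samples, three emotion labels (e.g. joy / sadness / anger)
logits = np.array([[2.0, -1.0, 0.5],
                   [-2.0, 3.0, -0.5]])
pred = multilabel_predict(logits)
```

Per-label balanced accuracy is then averaged across emotions to obtain a single score of the kind reported (0.70).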
Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example
(2023)
Online services find it difficult to replace passwords with more secure user authentication mechanisms, such as Two-Factor Authentication (2FA), partly because users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing.
Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper contributes to the hypothesis that the low adoption of RBA in practice may be due to the complexity of implementing it. We provide an RBA implementation for the open source cloud management software OpenStack, which is the first fully functional open source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
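As a rough illustration of the likelihood-ratio scoring that underlies the Freeman et al. algorithm (a heavily simplified sketch with hypothetical feature names, not the OpenStack implementation):

```python
from collections import Counter

def rba_risk_score(features, user_history, global_history, smoothing=1.0):
    """Simplified likelihood-ratio risk score in the spirit of Freeman et al.

    features:       dict feature_name -> value observed for this login
    user_history:   dict feature_name -> Counter of the user's past values
    global_history: dict feature_name -> Counter of values across all users
    Higher scores mean the login looks less like this user's history,
    so the service should request re-authentication.
    """
    score = 1.0
    for name, value in features.items():
        g = global_history[name]
        u = user_history[name]
        # Add-one-style smoothing so unseen values get nonzero probability
        p_global = (g[value] + smoothing) / (sum(g.values()) + smoothing * (len(g) + 1))
        p_user = (u[value] + smoothing) / (sum(u.values()) + smoothing * (len(u) + 1))
        score *= p_global / p_user
    return score

# Hypothetical single-feature example: the user has only ever used one IP
user_hist = {"ip": Counter({"1.2.3.4": 10})}
global_hist = {"ip": Counter({"1.2.3.4": 10, "9.9.9.9": 90})}
low = rba_risk_score({"ip": "1.2.3.4"}, user_hist, global_hist)   # familiar value
high = rba_risk_score({"ip": "9.9.9.9"}, user_hist, global_hist)  # unseen value
```

A deployment would compare the score against configured thresholds to decide between granting access, requesting re-authentication, or blocking the attempt.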
Pursuant to Sustainable Development Goal (SDG) 15 of the 2030 Agenda for Sustainable Development of the United Nations, one pivotal target is to halt biodiversity loss. This paper's objective is to analyze why and how German farmers hesitate to implement more than the prescriptive measures with regard to cross compliance and direct payments under the European Common Agricultural Policy (CAP) and what their aspirations are for possible incentives to bring biodiversity into focus. By applying a mixed methods approach, we investigate the experience of individual farmers by means of a qualitative approach followed by a quantitative study. This analysis sheds light on how farmers perceive indirect influencing factors and how these factors play a non-negligible role in farmers' commitment to biodiversity. Economy, policy and society are intertwined and need to be considered from a multi-faceted perspective. In addition, an in-depth analysis is conducted based on online focus group discussions to determine whether farmers accept financial support, focusing on both action- and success-oriented payments. Our results highlight the importance of paying attention to the heterogeneity of farmers, their locations and, consequently, farmers' different views on indirect drivers influencing agricultural processes, showing the complexity of the problem. Although farmers' expectations can be met with financial allocations, other aspects must also be taken into account.
This paper aims to assess farmers' challenges in enhancing biodiversity. The so-called "trilemma" (WBGU 2021) of land use stems from the multiple demands made on land for the benefit of mitigating climate change, securing food, and maintaining biodiversity. Agriculture is accused of maladministration, causing soil contamination, animal cruelty, bee mortality, and climate change. However, farmers play a key role in overcoming upcoming sustainability challenges. While their supportive role is urgently needed, farmers find themselves caught between a "rock" and a "hard place". Consumers call for sustainable production and affordable food products without pesticide residues, demanding enough for all. Farmers are restricted by the wants and needs of consumers who are influenced by interest groups and exposed to interdependent direct and indirect influencing factors. They need to balance the scrutiny of the critical public as well as the regulatory control. In this paper, we collected and surveyed the data of farmers within or close to the 21 selected nature protected areas of the DINA (Diversity of Insects in Nature protected Areas) Project, using a mixed methods approach with a semi-structured questionnaire considering issues' interdependencies and the complexity of today's problems. The conflicts and obstacles faced by farmers were assessed. The results reflect the farmers' willingness and the importance of receiving appreciation for implementing biodiversity measures. These results, complemented by a following quantitative study, are the basis for recommendations for policymakers and farmers in all German nature protected areas.
Object detection concerns the classification and localization of objects in an image. To cope with changes in the environment, such as when new classes are added or a new domain is encountered, the detector needs to update itself with the new information while retaining knowledge learned in the past. Previous works have shown that training the detector solely on new data would produce a severe "forgetting" effect, in which the performance on past tasks deteriorates through each new learning phase. However, in many cases, storing and accessing past data is not possible due to privacy concerns or storage constraints. This project aims to investigate promising continual learning strategies for object detection without storing and accessing past training images and labels. We show that by utilizing the pseudo-background trick to deal with missing labels, and knowledge distillation to deal with missing data, the forgetting effect can be significantly reduced in both class-incremental and domain-incremental scenarios. Furthermore, an integration of a small latent replay buffer can result in a positive backward transfer, indicating the enhancement of past knowledge when new knowledge is learned.
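The knowledge-distillation idea mentioned above can be sketched generically: the updated (student) detector is penalized for drifting away from the frozen old (teacher) detector's softened class distributions, so past knowledge is retained without access to past data. A generic KD loss in plain NumPy (not the project's exact formulation, which also handles the pseudo-background trick and box regression):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class distributions.

    The T^2 factor keeps gradient magnitudes comparable across temperatures,
    following the usual knowledge-distillation convention.
    """
    p = softmax(teacher_logits, T)  # soft targets from the frozen old model
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)) * T * T)

teacher = np.array([[1.0, 2.0, 3.0]])
loss_same = distillation_loss(teacher, teacher)                  # no drift
loss_diff = distillation_loss(np.array([[3.0, 1.0, 2.0]]), teacher)
```

In training, this term is added to the regular detection loss on the new data, weighted to balance stability (retaining old classes) against plasticity (learning new ones).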
Several recent works have proposed automatic computer-aided diagnosis (CAD) models based on deep learning (DL) to diagnose coronavirus disease 2019 (COVID-19) from chest X-ray (CXR) images. In this study, seven different models, including convolutional neural network (CNN) models such as VGG-16 and vision transformer (ViT) models, are proposed with the aim of detecting COVID-19 automatically and with high accuracy. The models are trained on a balanced three-class dataset of 3,000 CXR images, with 1,000 images for each of the classes COVID-19, Normal, and Lung-Opacity, drawn from the publicly available Kaggle-COVID-19-Radiography-Dataset. In the experiments, the VGG-16 model reaches an accuracy of 93.44% and the ViT model 92.33%. In addition, a binary classification between COVID-19 and Normal CXR images with a limited set of just 100 images per class, using a transfer learning technique, achieves a validation accuracy of 97.5%.
The European General Data Protection Regulation requires the implementation of Technical and Organizational Measures (TOMs) to reduce the risk of illegitimate processing of personal data. For these measures to be effective, they must be applied correctly by employees who process personal data under the authority of their organization. However, even data processing employees often have limited knowledge of data protection policies and regulations, which increases the likelihood of misconduct and privacy breaches. To lower the likelihood of unintentional privacy breaches, TOMs must be developed with employees’ needs, capabilities, and usability requirements in mind. To reduce implementation costs and help organizations and IT engineers with the implementation, privacy patterns have proven to be effective for this purpose. In this chapter, we introduce the privacy pattern Data Cart, which specifically helps to develop TOMs for data processing employees. Based on a user-centered design approach with employees from two public organizations in Germany, we present a concept that illustrates how Privacy by Design can be effectively implemented. Organizations, IT engineers, and researchers will gain insight on how to improve the usability of privacy-compliant tools for managing personal data.
The French–Italian Concordia Research Station, situated on the Antarctic Polar Plateau at an elevation of 3233 m above sea level, offers a unique opportunity to study the presence and variation of microbes introduced by abiotic or biotic vectors and, consequently, appraise the amplitude of human impact in such a pristine environment. This research built upon a previous work, which explored microbial diversity in the surface snow surrounding the Concordia Research Station. While that study successfully characterized the bacterial assemblage, detecting fungal diversity was hampered by the low DNA content. To address this knowledge gap, in the present study, we optimized the sampling by increasing the amount of ice/snow collected to raise the final DNA yield. The V4 variable region of the 16S rDNA and the Internal Transcribed Spacer (ITS1) rDNA were used to evaluate bacterial and fungal diversity, respectively. From the sequencing, we obtained 3,352,661 and 4,433,595 reads clustered in 930 and 3182 amplicon sequence variants (ASVs) for fungi and bacteria, respectively. Amplicon sequencing revealed a predominance of Basidiomycota (49%) and Ascomycota (42%) in the fungal component; Bacteroidota (65.8%) is the main representative among the bacterial phyla. Basidiomycetes are almost exclusively represented by yeast-like fungi. Our findings provide the first comprehensive overview of both fungal and bacterial diversity in the Antarctic Polar Plateau's surface snow/ice near Concordia Station and identify seasonality as the main driver of microbial diversity. We also detected the microorganisms most sensitive to these factors, which could serve as indicators of human impact in this pristine environment and aid in planetary protection for future exploration missions.
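Diversity in amplicon datasets of this kind is typically summarized per sample from the ASV count table. As a generic illustration (not this study's pipeline), the Shannon diversity index:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over ASV relative abundances.

    counts: per-ASV read counts for one sample; zero counts are ignored.
    """
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()  # relative abundances of observed ASVs
    return float(-np.sum(p * np.log(p)))
```

For a sample with k equally abundant ASVs the index equals ln(k), its maximum for that richness, while a sample dominated by a single ASV approaches 0.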
Rosenbrock–Wanner methods for systems of stiff ordinary differential equations have been well known since the seventies. They have been continuously developed and are efficient for differential-algebraic equations of index 1 as well. Their disadvantage that the Jacobian matrix has to be updated in every time step becomes less significant when automatic differentiation is used. The family of Rodas methods in particular has become a standard in the Julia package DifferentialEquations. However, the fifth-order Rodas5 method undergoes order reduction for certain problem classes. The goal of this paper is therefore to compute a new set of coefficients for Rodas5 such that this order reduction is mitigated. The procedure is similar to the derivation of the methods Rodas4P and Rodas4P2. In addition, it is possible to provide new dense output formulas for Rodas5 and the new method Rodas5P. Numerical tests show that for higher accuracy requirements Rodas5P always belongs to the best methods within the Rodas family.
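The structure of a Rosenbrock-type step, one linear solve with the Jacobian per stage instead of a Newton iteration, can be shown with its simplest one-stage member, the linearly implicit Euler method. This is a sketch for illustration only, far simpler than Rodas5 or Rodas5P:

```python
import numpy as np

def rosenbrock_euler_step(f, jac, y, h, gamma=1.0):
    """One stage of a Rosenbrock-type method:
    solve (I - h*gamma*J) k = f(y), then y_new = y + h*k.

    With gamma = 1 this is the linearly implicit (Rosenbrock) Euler method;
    no Newton iteration is needed, only one linear solve per step.
    """
    y = np.atleast_1d(np.asarray(y, dtype=float))
    J = np.atleast_2d(jac(y))
    k = np.linalg.solve(np.eye(len(y)) - h * gamma * J, np.atleast_1d(f(y)))
    return y + h * k

# Stiff test problem y' = -1000*y: explicit Euler diverges for h = 0.01,
# while the linearly implicit step decays monotonically toward zero.
f = lambda y: -1000.0 * y
jac = lambda y: np.array([[-1000.0]])
y = np.array([1.0])
for _ in range(10):
    y = rosenbrock_euler_step(f, jac, y, h=0.01)
```

Higher-order members such as Rodas5 chain several such stages with carefully chosen coefficients; it is precisely those coefficients that the paper recomputes to mitigate order reduction.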
This paper presents a novel approach to address noise, vibration, and harshness (NVH) issues in electrically assisted bicycles (e-bikes) caused by the drive unit. By investigating and optimising the structural dynamics during early product development, NVH can be decisively improved and valuable resources can be saved, emphasising its significance for enhancing riding performance. The paper offers a comprehensive analysis of the mechanical interactions among the relevant components of the e-bike drive unit, culminating, to the best of our knowledge, in the development of the first high-fidelity model of an entire e-bike drive unit. The proposed model uses the principles of elastic multibody dynamics (eMBD) to elucidate the structural dynamics in dynamic-transient calculations. Comparing power spectra between measured and simulated motion variables validates the chosen model assumptions. The measurements of physical samples utilise accelerometers, contactless laser Doppler vibrometry (LDV) and various test arrangements, which are replicated in simulations and make it possible to measure vibrations on rotating shafts and stationary structures. In summary, this integrated system-level approach can serve as a viable starting point for comprehending and managing the NVH behaviour of e-bikes.
Alternative mobility solutions are becoming increasingly important today, and e-bikes have long since proven their potential. The corresponding market has grown enormously over the last 10 years, and so have the expectations placed on the product, such as a ride free of disturbing vibrations and noise. The motor freewheel has a decisive influence on the dynamic behavior. This contribution therefore presents a methodical approach for determining the influence of the motor freewheel on the dynamic behavior of the e-bike drive unit by means of experiment and simulation.
Trends of environmental awareness, combined with a focus on personal fitness and health, motivate many people to switch from cars and public transport to micromobility solutions, namely bicycles, electric bicycles, cargo bikes, or scooters. To accommodate urban planning for these changes, cities and communities need to know how many micromobility vehicles are on the road. In a previous work, we proposed a concept for a compact, mobile, and energy-efficient system to classify and count micromobility vehicles utilizing uncooled long-wave infrared (LWIR) image sensors and a neuromorphic co-processor. In this work, we elaborate on this concept by focusing on the feature extraction process with the goal of increasing the classification accuracy. We demonstrate that even with a reduced feature list compared with our early concept, we increase the detection precision to more than 90%. This is achieved by reducing the images from 160 × 120 pixels to only 12 × 18 pixels and combining them with contour moments into a feature vector of only 247 bytes.
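The two ingredients named in the abstract (aggressive downscaling plus image moments) can be sketched in a few lines of pure Python. This is an illustrative toy, assuming simple block averaging and raw spatial moments (the quantities contour moments are built from), not the authors' actual pipeline:

```python
def downsample(img, out_w, out_h):
    """Average-pool an image (list of pixel rows) to out_w x out_h pixels."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for i in range(out_h):
        y0, y1 = i * in_h // out_h, (i + 1) * in_h // out_h
        row = []
        for j in range(out_w):
            x0, x1 = j * in_w // out_w, (j + 1) * in_w // out_w
            block = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def raw_moments(img):
    """Raw spatial moments m00, m10, m01; contour moments build on these."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    return m00, m10, m01

# Synthetic 160x120 "thermal" frame with one warm blob centred near (40, 30).
frame = [[255 if abs(x - 40) < 10 and abs(y - 30) < 8 else 0
          for x in range(160)] for y in range(120)]
small = downsample(frame, 12, 18)  # 12 x 18 pixels, as in the abstract
m00, m10, m01 = raw_moments(small)
cx, cy = m10 / m00, m01 / m00      # blob centroid in the downsampled frame
```

Even after the drastic reduction, the moments still locate the blob, which is the intuition behind packing the downscaled image and moments into a compact feature vector.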
Modern engineering relies heavily on computer technologies. This is especially true for thermoplastic manufacturing, such as blow molding. A crucial milestone for digitalization is the continuous integration of data in unified or interoperable systems. While new simulation technologies are constantly being developed, data management standards such as STEP fail to integrate them. Industrial standards such as "VMAP", on the other hand, manage to improve interoperability for Small and Medium-sized Enterprises; however, they do not provide Simulation Process and Data Management (SPDM) technologies. For SPDM integration of VMAP data, Ontology-Based Data Access is used so that the digital thread can be continued in custom semantic-based open-source solutions. An ontology of the database format (VMAP) was generated alongside an expandable knowledge graph of data access methods. A Python-based software architecture was developed that automatically uses the semantic representations of the database format and the data access methods to query data and metadata within a VMAP file. The result is a software architecture template that can be adapted for other data standards and integrated into semantic data management systems. It allows semantic queries on simulation data down to element-wise resolution without integrating the whole model information. The architecture can instantiate a file in a knowledge graph, query a file's metadatum and, in case it is not yet available, find a semantically represented process that allows the creation and instantiation of the required metadatum. See Figure 1. The results of this thesis can be expected to form a basis for semantic SPDM tools.
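The resolver idea described at the end (look up a metadatum; if it is missing, find a registered process that can derive it, run it, and instantiate the result) can be sketched with plain dictionaries. All names here (`file_store`, `processes`, the keys) are hypothetical placeholders, not the thesis' actual VMAP ontology or API:

```python
# Hypothetical stand-ins: stored metadata per file, and a "knowledge graph"
# of data-access processes mapping a derivable metadatum to a function.
file_store = {"mesh.vmap": {"num_elements": 1200}}
processes = {
    # Placeholder derivation: a metadatum computable from existing metadata.
    "has_even_element_count": lambda meta: meta["num_elements"] % 2 == 0,
}

def get_metadatum(filename, key):
    """Return a metadatum; if absent, derive and instantiate it if possible."""
    meta = file_store[filename]
    if key in meta:
        return meta[key]                  # already instantiated
    if key in processes:
        meta[key] = processes[key](meta)  # run the process, cache the result
        return meta[key]
    raise KeyError(f"no stored value and no process for {key!r}")
```

A real OBDA stack would express `processes` as semantic descriptions in the knowledge graph rather than Python lambdas, but the control flow is the same.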
Electric vehicles (EVs) are rapidly growing in popularity, but range variability has become an important research area with significant implications for EV performance, usability, and overall market adoption. This study aims to unravel the complexities of range variability by examining the contributing factors and offering innovative strategies to mitigate these differences during pack design. Through a detailed analysis of cell parameter deviation, cell connections, battery configuration, battery pack size, and driving behavior, the research illuminates their impact on extractable energy and driving range. The study employed a comprehensive approach and conducted systematic simulation-based experimentation to identify the optimal battery pack configuration based on maximum extractable energy, minimal variability and maximum range. The results reveal insights into the relationship between discharge rate and battery pack performance, and the impact of cell parameter variations on pack energy output. This research advances the understanding of EV performance optimisation, reduces pack-to-pack variability, and extends battery pack lifespan.
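One mechanism behind the abstract's findings (cell-parameter deviation reducing extractable pack energy) can be shown with a tiny Monte Carlo sketch. This assumes the common simplification that a series string is limited by its weakest cell; all numbers are illustrative, not from the study:

```python
import random

def pack_capacity(n_cells, mean_ah, sigma_ah, rng):
    """Capacity (Ah) of one series string: limited by the weakest cell."""
    return min(rng.gauss(mean_ah, sigma_ah) for _ in range(n_cells))

def expected_capacity(sigma_ah, n_cells=100, trials=500, seed=42):
    """Average extractable capacity over many randomly sampled packs."""
    rng = random.Random(seed)
    return sum(pack_capacity(n_cells, 5.0, sigma_ah, rng)
               for _ in range(trials)) / trials

tight = expected_capacity(0.01)  # well-matched cells
loose = expected_capacity(0.10)  # larger cell-parameter deviation
# Larger spread -> weaker worst cell -> less extractable pack capacity.
```

The same seeded sampling makes the comparison deterministic: the loosely matched pack always loses more capacity to its worst cell than the tightly matched one.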
Loading of shipping containers for dairy products often includes a press-fit task, which involves manually stacking milk cartons in a container without using pallets or packaging. Automating this task with a mobile manipulator can reduce worker strain and also enhance the efficiency and safety of the container loading process. This paper proposes an approach called Adaptive Compliant Control with Integrated Failure Recovery (ACCIFR), which enables a mobile manipulator to reliably perform the press-fit task. We base the approach on a demonstration learning-based compliant control framework, into which we integrate a monitoring and failure recovery mechanism for successful task execution. Concretely, we monitor the execution through distance and force feedback, detect collisions while the robot is performing the press-fit task, and use wrench measurements to classify the direction of collision; this information informs the subsequent recovery process. We evaluate the method on a miniature container setup, considering variations in the (i) starting position of the end effector, (ii) goal configuration, and (iii) object grasping position. The results demonstrate that the proposed approach outperforms the baseline demonstration-based learning framework regarding adaptability to environmental variations and the ability to recover from collision failures, making it a promising solution for practical press-fit applications.
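Classifying the collision direction from wrench measurements can be illustrated with a deliberately simple rule: threshold the force components and report the dominant axis. This is a hedged toy sketch with hypothetical names and thresholds, not the ACCIFR classifier itself:

```python
def collision_direction(fx, fy, fz, threshold=5.0):
    """Classify the dominant contact direction from force readings (N).

    Returns e.g. "-x" for a reaction force pushing along negative x,
    or None when all components stay below the contact threshold.
    """
    forces = {"x": fx, "y": fy, "z": fz}
    axis, value = max(forces.items(), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return None  # no collision detected
    sign = "+" if value > 0 else "-"
    return sign + axis

# A strong push against the end effector from the front (-x reaction force):
print(collision_direction(-12.0, 1.5, 0.8))  # -> "-x"
```

A recovery policy can then branch on the returned label, e.g. retreating along the opposite axis before retrying the press-fit.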
Climate change is increasingly affecting vulnerable groups and resulting in dire social and economic consequences, especially for those in the Global South. Managing current and emerging climate-related risks will require increasing individuals' and communities' resilience, including enhancing absorptive, adaptive, and transformative capacities. Policymakers are now considering the role that social protection policies and programmes can play in building climate resilience by contributing to these capacities. However, there is a limited understanding of the extent to which social protection instruments can influence these three resilience-related capacities. A lack of assessment tools or frameworks might contribute to the limited evidence of social protection's ability to increase climate resilience. In particular, there appear to be no frameworks or tools that help assess the role of social cash transfers (SCT) in building adaptive capacity. Based on a multi-staged literature review, we develop an adaptive capacity outcomes framework (ACOF) that can help assess SCTs' contribution to building adaptive capacity and, consequently, resilience. The framework is then tested using impact evaluation and assessment reports from SCT programmes in Indonesia, Zambia, Ethiopia, Bangladesh, and Tanzania. The exercise finds that SCTs alone make a limited contribution to adaptive capacity outcomes, but interventions that combine cash transfers with other components such as nutrition or livelihood training show positive impacts. We find that the ACOF can support assessments of SCTs' contribution towards adaptive capacity. It can help build evidence, evaluate impacts, and, through further research, facilitate learning on SCTs' role in increasing climate resilience.
Intelligent virtual agents provide a framework for simulating more life-like behavior and increasing plausibility in virtual training environments. They can improve the learning process if they portray believable behavior that can also be controlled to support the training objectives. In the context of this thesis, cognitive agents are considered a subset of intelligent virtual agents (IVA) with the focus on emulating cognitive processes to achieve believable behavior. The complexity of employed algorithms, however, is often limited since multiple agents need to be simulated in real-time. Available solutions focus on a subset of the indicated aspects: plausibility, controllability, or real-time capability (scalability). Within this thesis project, an agent architecture for attentive cognitive agents is developed that considers all three aspects at once. The result is a lightweight cognitive agent architecture that is customizable to application-specific requirements. A generic trait-based personality model influences all cognitive processes, facilitating the generation of consistent and individual behavior. An additional mapping process provides a formalized mechanism to transfer results of psychological studies to the architecture. Personality profiles are combined with an emotion model to achieve situational behavior adaptation. Which action an agent selects in a situation also influences plausibility. An integral element of this selection process is an agent's knowledge about its world. Therefore, synthetic perception is modeled and integrated into the architecture to provide a credible knowledge base. The developed perception module includes a unified sensor interface, a memory hierarchy, and an attention process. With the presented realization of the architecture (CAARVE), it is possible for the first time to simulate cognitive agents whose behavior is simultaneously computable in real time and controllable.
The architecture's applicability is demonstrated by integrating an agent-based traffic simulation built with CAARVE into a bicycle simulator for road-safety education. The developed ideas and their realization are evaluated within this work using different strategies and scenarios. For example, it is shown how CAARVE agents utilize personality profiles and emotions to plausibly resolve deadlocks in traffic simulations. Controllability and adaptability are demonstrated in additional scenarios. Using the realization, 200 agents can be simulated in real-time (50 FPS), illustrating scalability. The achieved results verify that the developed architecture can generate plausible and controllable agent behavior in real-time. The presented concepts and realizations provide sound fundamentals to everyone interested in simulating IVA in real-time environments.
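The idea of a trait-based personality profile steering action selection can be sketched with a trait-weighted utility over candidate actions. This is a hypothetical toy in the spirit of the description, not CAARVE's actual model; all traits, actions, and weights are invented for illustration:

```python
def select_action(actions, traits):
    """Pick the action with the highest trait-weighted utility.

    actions: {name: {trait: weight}} -- how strongly each trait favours it.
    traits:  {trait: value in [0, 1]} -- the agent's personality profile.
    """
    def utility(prefs):
        return sum(traits.get(t, 0.0) * w for t, w in prefs.items())
    return max(actions, key=lambda name: utility(actions[name]))

# Two candidate actions at an unsignalled crossing in a traffic simulation:
actions = {
    "yield": {"caution": 1.0, "aggression": -0.5},
    "go":    {"aggression": 1.0, "caution": -0.5},
}
careful_cyclist = {"caution": 0.9, "aggression": 0.2}
print(select_action(actions, careful_cyclist))  # -> "yield"
```

Because the same mechanism scores every agent, differently parameterized profiles yield consistently different yet individually stable behavior, which is how such a model can help resolve deadlocks plausibly.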
The UN Declaration on the Right to Development (UNDRTD) adopted in 1986 and the 2030 Agenda for Sustainable Development adopted in 2015 share a universal concept of development that refers both to individual and collective dimensions of prosperity and thus includes the rights of future generations. They thus offer a definition of the relationship between development and human rights that is very relevant for the 21st century. The core norm of the UNDRTD was later defined as “the right of peoples and individuals to the constant improvement of their wellbeing and to a national and global enabling environment conducive to just, equitable, participatory and human-centred development respectful of all human rights”.