The 50 most recently published documents
The accurate treatment of outflow boundary conditions remains a critical challenge in computational fluid dynamics when predicting aerodynamic forces and/or acoustic emissions. This is particularly evident when employing the lattice Boltzmann method (LBM) as the numerical solution technique, which often suffers from inaccuracies induced by artificial reflections from outflow boundaries. This paper investigates the use of neural networks (NN) to mitigate these adverse boundary effects and allow the computational domain to be truncated. Two distinct NN-based approaches are proposed: (1) direct reconstruction of unknown particle distribution functions at the outflow boundary; and (2) enhancement of established characteristic boundary conditions (CBC) by dynamically tuning their parameters. The direct reconstruction model was trained on data generated from a 2D flow over a cylindrical obstruction. The drag, lift, and Strouhal number were used to test the new boundary condition. Results for various Reynolds numbers and restricted domain sizes show significantly improved predictions compared with the traditional Zou & He boundary condition. To examine the robustness of the NN-based reconstruction, the same condition was applied to the simulation of a NACA0012 airfoil, again providing accurate aerodynamic performance predictions. The neural-enhanced CBCs were evaluated on a 2D convected vortex benchmark and showed superior performance in minimizing density errors compared to CBCs with fixed parameters. These findings highlight the potential of NN-integrated boundary conditions to improve the accuracy and reduce the computational expense of aerodynamic and acoustic emissions simulations with the LBM.
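To make the reconstruction idea concrete, the following is a minimal sketch (not the paper's implementation) of a small network that maps the known post-streaming populations at a right-hand outflow node of a D2Q9 lattice to the unknown incoming ones; the input/output sizes, architecture, and training setup are illustrative assumptions.

```python
# Illustrative sketch only: an MLP that reconstructs the unknown incoming
# populations at a right-hand outflow node of a D2Q9 lattice from the
# populations known after streaming. Sizes and depth are assumptions.
import torch
import torch.nn as nn

class OutflowReconstructor(nn.Module):
    def __init__(self, n_known: int = 6, n_unknown: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_known, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_unknown),
        )

    def forward(self, f_known: torch.Tensor) -> torch.Tensor:
        # f_known: (batch, n_known) known post-streaming populations per boundary node
        return self.net(f_known)

# Training would regress against reference populations extracted from a larger,
# non-truncated simulation, e.g. flow past a cylinder at several Reynolds numbers.
model = OutflowReconstructor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```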
The role of gut microbiome in obesity and metabolic dysfunctions: Insights and therapeutic potential
(2025)
Obesity is a chronic inflammatory disease defined by an excessive accumulation of body fat. The human gut microbiota (GM) is an intricate ecosystem of microorganisms living symbiotically within the gastrointestinal tract and has emerged as a key player in health and metabolic diseases. Recently, research has increasingly focused on understanding the specific compositions and strains of the GM and their potential impact on obesity. This review provides a summary of the most recent findings regarding obesity and newly developed therapies that show exceptional efficacy in treating this condition. In addition, it explores different GM strains that may contribute to the development and progression of obesity. This article summarizes the molecular insights involved in the relationship between obesity and the GM, the characteristics of this ecosystem, and its involvement in human metabolism, energy balance, and inflammation leading to obesity. Furthermore, it examines the bacteria most engaged in managing obesity. These findings contribute to a better understanding of this significant and intricate relationship, ultimately aiding in obesity prevention.
Polyphenols, a diverse group of phytochemicals, are an indispensable component of the antioxidant defense system, given their capacity to neutralize free radicals and modulate redox reactions. This review examines the chemical diversity and antioxidant potential of polyphenols derived from viticultural byproducts, including grape skins, seeds, pomace, and stems. These biomass sources provide a sustainable reservoir of bioactive compounds with potential applications in the development of functional and biobased materials. This review also addresses the methodological challenges inherent to this field, such as the variability of extraction procedures and test conditions. This study critically examines the influence of the structural characteristics of polyphenols, including the number, nature, and distribution of hydroxyl groups as well as molecular size, on antioxidant activity. Additionally, innovative extraction techniques that enhance yield and bioactivity are presented and evaluated. Besides conventional monomeric and oligomeric polyphenolic compounds, lignins, a class of high-molecular-weight polyphenols that are of industrial importance and stable in oxidative environments, are also addressed. The results underscore the necessity for standardized multiassay approaches to precisely assess antioxidant capacity and facilitate targeted polyphenol application in diverse fields. Future research should address the intricate interplay between biomass composition, extraction parameters, and polyphenol functionality to tailor their utilization.
Hepatic insulin resistance is an important pathophysiology in type 2 diabetes, and the mechanisms by which high-caloric diets induce insulin resistance are unclear. Among vertebrate animals, mammals have retained a unique molecular change that allows an intracellular arrestin domain-containing protein called Thioredoxin-Interacting Protein (TXNIP) to bind covalently to thioredoxin, allowing TXNIP to "sense" oxidative stress(1). Here, we show that a single cysteine in TXNIP mediates the development of hepatic insulin resistance in the setting of a high-fat diet (HFD). Mice with an exchange of TXNIP Cysteine 247 for Serine (C247S) showed improved whole-body and hepatic insulin sensitivity compared to wild-type (WT) controls following an 8-week HFD. HFD-fed TXNIP C247S mouse livers also showed improved insulin signaling. The Transmembrane 7 superfamily member 2 (Tm7sf2) gene encodes for a sterol reductase involved in the process of cholesterol biosynthesis (2). We identified TM7SF2 as a potential mediator of enhanced insulin signaling in HFD-fed TXNIP C247S mouse livers. TM7SF2 increased Akt phosphorylation and suppressed gluconeogenic markers PCK1 and G6Pc specifically under oxidative-stress-induced conditions in HepG2 cells. We also present data suggesting that a heterozygous variant of TXNIP C247 is well-tolerated in humans. Thus, mammals have a single redox-sensitive amino acid in TXNIP that mediates insulin resistance in the setting of an HFD. Our results reveal an evolutionarily conserved mechanism for hepatic insulin resistance in obesity.
Since the public release of ChatGPT in late 2022, the role of Generative AI chatbots in education has been widely debated. While some see their potential as automated tutors, others worry that inaccuracies and hallucinations could harm student learning. This study assesses ChatGPT models (GPT-3.5, GPT-4o, and o1-preview) across important dimensions of student learning by evaluating their capabilities and limitations to serve as a non-interactive, automated tutor. In particular, we analyse performance in two tasks commonly used in principles of economics courses: explaining economic concepts and answering multiple-choice questions. Our findings indicate that newer models generate very accurate responses, although some inaccuracies persist. A key concern is that ChatGPT presents all responses with full confidence, making errors difficult for students to recognize. Furthermore, explanations are often quite narrow, lacking holistic perspectives, and the quality of examples remains poor. Despite these limitations, we argue that ChatGPT can serve as an effective automated tutor for basic, knowledge-based questions—supporting students while posing a relatively low risk of misinformation. Educators can hence recommend Generative AI chatbots for student learning, but should also teach students the limitations of the technology.
In the present work, the nematic liquid crystal mixtures E7 and E8 are doped with a reactive, optically active substance for gas-sensing purposes. The doping induces a chiral-nematic phase that forms a one-dimensional photonic crystal with reflection maxima in the visible range of the electromagnetic spectrum. As a result of a chemical reaction between the dopant and an analyte, the dopant's chemical composition changes and with it its helical twisting power (HTP). This change shifts the reflected wavelength range, which can be perceived with the naked eye as a change in color. In this work, coaxial electrospinning is used to encapsulate liquid crystals in polymer fibers a few micrometers in diameter. Encapsulated and non-encapsulated doped liquid crystals are compared by UV/VIS spectroscopy using a reaction chamber developed for this purpose. The reactions taking place are investigated by FTIR spectroscopy. The fibers and the liquid crystals used are characterized by light microscopy. In addition, ways of improving the water resistance of the produced fibers are investigated in order to increase their suitability for future technical applications. To this end, triaxial electrospinning is used to coat the fibers with an additional water-resistant polymer sheath. The possibility of subsequently cross-linking coaxially spun fibers to achieve water resistance is also examined.
The electrolytic in situ generation of oxidants is an increasingly widespread technique for producing sanitized and thus safe process water in ultrapure water distribution systems. In particular, the anodic production of ozone on functionalized electrodes is a commercially available option for providing pharmaceutical-grade water. The present work therefore investigates the use of a newly developed electrolyzer with a polymer electrolyte membrane (PEM) and lead dioxide (PbO2) electrodes for drinking and ultrapure water treatment. The selective analysis of electrolytically generated oxidizing agents or reactive oxygen species such as ozone (O3), hydrogen peroxide (H2O2) and hydroxyl radicals (·OH) is often impeded by cross-sensitivities of commonly used photometric assays. To account for these imperfections, a step-by-step procedure to consolidate different analytical methods was developed. Depending on the applied current density, different electrolytically generated species can be detected selectively; this reveals that the electrolytic generation of ozone only increases significantly for current densities above 0.5 A cm⁻². In addition, the evolution of H2O2 only occurs in significant amounts in the presence of an organic impurity. The resulting rapid decomposition of ozone via the peroxone process requires several equivalents of H2O2, depending on the amount of dissolved O3 present.
In order to provide sensitive in-line or on-line detection of ozone in ultrapure water, electrode materials based on Pt-functionalized ionomers were developed using a modified impregnation-reduction process. The metal loading on the sensor material was determined satisfactorily using a non-destructive approach by means of computed tomography (CT). Different synthesis conditions led to different sensor properties in terms of sensitivity and applicable concentration range. After evaluation of different models by an objective information criterion, the potentiometric sensor behavior is best described by a Langmuir pseudo-isotherm. On average, a detection limit of 2.9 μg L⁻¹ of dissolved ozone was found for all sensor materials produced, which is comparable to the complex reference analysis.
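As an illustration of the sensor model mentioned above, the following sketch fits a generic Langmuir-type pseudo-isotherm to hypothetical potentiometric readings; the functional form, parameter names, and data values are assumptions for illustration, not the thesis' calibration.

```python
# Minimal sketch: fitting a generic Langmuir-type pseudo-isotherm
# E = E0 + a * K*c / (1 + K*c) to hypothetical potentiometric readings.
import numpy as np
from scipy.optimize import curve_fit

def langmuir_response(c, E0, a, K):
    # c: dissolved ozone concentration (µg/L); returns sensor potential (mV)
    return E0 + a * (K * c) / (1.0 + K * c)

c = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])        # µg/L (hypothetical)
E = np.array([120.0, 165.0, 190.0, 225.0, 245.0, 258.0])  # mV (hypothetical)

popt, pcov = curve_fit(langmuir_response, c, E, p0=[120.0, 150.0, 0.05])
E0_fit, a_fit, K_fit = popt
print(f"E0 = {E0_fit:.1f} mV, a = {a_fit:.1f} mV, K = {K_fit:.3f} L/µg")
```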
Extending the application range of PEM electrolysis to the drinking water sector was evaluated by exposing the feed water to different hardness levels. The electroosmosis of water is a direct function of the current density and can be estimated at 95 ± 2 mmol A⁻¹ h⁻¹. The transport rates of sodium, potassium, calcium and magnesium ions were modeled as a function of the current density and water hardness and were directly related to the ion mobility, independent of the water quality. Permeation leads to higher pH values of the catholyte within a few minutes and consequently to insoluble hydroxides and carbonates of the formerly dissolved hardness constituents. The introduction of an auxiliary cathode in the anode compartment was able to reduce tap water cation permeation non-selectively by 18 ± 4 %.
The results show that the selected methods are suitable and directly applicable for the sensitive and selective detection of in situ produced disinfectants, in particular electrolytically generated ozone in the aqueous phase. An initial transfer of this PEM electrolyzer into the tap and drinking water environment is showcased using the example of temporarily stagnant water, with a constructive solution for suppressing unwanted ion crossover.
The success of any agent, human or artificial, ultimately depends on their successfully accomplishing the given goals. Agents may, however, fail to do so for many reasons. With artificial agents, such as robots, this may be due to internal faults or exogenous events in the complex, dynamic environments in which they operate. The bottom line is that plans, even good ones, can fail. Despite decades of research, effective methods for artificial agents to cope with plan failure remain limited and are often impractical in the real world. One common reason for failure that plagues agents, human and artificial alike, is that objects that are expected to be used to get the job done are often found to be missing or unavailable. Humans might, with little effort, accomplish their tasks by making substitutions. When they are not sure if an object is available, they may even proceed optimistically and switch to making a substitution when they confirm that an object is indeed unavailable. In this work, the system uses Description Logics to enable open-world reasoning --- making it possible to distinguish between cases where an object is missing/unavailable and cases where the failure to even generate a plan is due to the planner's use of the closed-world assumption (where the fact stating that something is true is missing from its knowledge base and so it is assumed to be not true). This ability to distinguish between something being missing and having incomplete information enables the agent to behave intelligently: recognising whether it should identify and then plan with a suitable substitute or create a placeholder, in the case of incomplete information. By representing the functional affordances of objects (i.e. what they are meant to be used for), socially-expected and accepted object substitutions are made possible. The system also uses the Conceptual Spaces approach to provide feature-based similarity measures that make the given task a first-class citizen in the identification of a suitable substitute. The generation of plans to `get the job done' is made possible by incorporating the Hierarchical Task Network planning approach. It is combined with a robust execution/monitoring system and contributes to the success of the robot in achieving its goals.
This paper investigates the ongoing use of the A5/1 ciphering algorithm within 2G GSM networks. Despite its known vulnerabilities and the gradual phasing out of GSM technology by some operators, GSM security remains relevant due to potential downgrade attacks from 4G/5G networks and its use in IoT applications. We present a comprehensive overview of the historical weaknesses associated with the A5 family of cryptographic algorithms. Building on this, our main contribution is the design of a measurement approach using low-cost, off-the-shelf hardware to passively monitor Cipher Mode Command messages transmitted by base transceiver stations (BTS). We collected over 500,000 samples at 10 different locations, focusing on the three largest mobile network operators in Germany. Our findings reveal significant variations in algorithm usage among these providers. One operator favors A5/3, while another surprisingly retains a high reliance on the compromised A5/1. The third provider shows a marked preference for A5/3 and A5/4, indicating a shift towards more secure ciphering algorithms in GSM networks.
In mobile network research, the integration of real-world components such as User Equipment (UE) with open-source network infrastructure is essential yet challenging. To address these issues, we introduce open5Gcube, a modular framework designed to integrate popular open-source mobile network projects into a unified management environment. Our publicly available framework allows researchers to flexibly combine different open-source implementations, including different versions, and simplifies experimental setups through containerization and lightweight orchestration. We demonstrate the practical usability of open5Gcube by evaluating its compatibility with various commercial off-the-shelf (COTS) smartphones and modems across multiple mobile generations (2G, 4G, and 5G). The results underline the versatility and reproducibility of our approach, significantly advancing the accessibility of rigorous experimentation in mobile network laboratories.
Unmanned Aerial Vehicles (UAVs) are increasingly used for reforestation and forest monitoring, including seed dispersal in hard-to-reach terrain. However, a detailed understanding of the forest floor remains a challenge due to high natural variability, rapidly changing environmental parameters, and ambiguous annotations arising from unclear class definitions. To address this issue, we adapt the Segment Anything Model (SAM), a vision foundation model with strong generalization capabilities, to segment forest floor objects such as tree stumps, vegetation, and woody debris. To this end, we employ parameter-efficient fine-tuning (PEFT) to fine-tune a small subset of additional model parameters while keeping the original weights fixed. We adjust SAM's mask decoder to generate masks corresponding to our dataset categories, allowing for automatic segmentation without manual prompting. Our results show that the adapter-based PEFT method achieves the highest mean intersection over union (mIoU), while Low-rank Adaptation (LoRA), with fewer parameters, offers a lightweight alternative for resource-constrained UAV platforms.
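For context, the following is a hedged sketch of the LoRA mechanism referenced above, not the fine-tuning code used in the study: a frozen linear layer is augmented with a trainable low-rank update, so only a small number of additional parameters are learned. The rank, scaling, and layer placement are assumptions.

```python
# Sketch of the LoRA idea: W is frozen and a trainable low-rank update
# (alpha/r) * B @ A is added; only lora_a and lora_b receive gradients.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original (e.g. SAM) weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(256, 256))
out = layer(torch.randn(2, 256))             # only the low-rank factors are trainable
```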
The first Data Competence College was hosted from March 27th to 28th, 2025, at the IT Center of RWTH Aachen. Based on the concept of the Wissenschaftskolleg zu Berlin and the Institute for Advanced Study in Princeton, we invited two individuals with high data competence from different scientific fields (“Data Experts”) to participate as part of the data competence college:
Prof. Sebastian Houben (Hochschule Bonn-Rhein-Sieg, specialist in AI and autonomous systems)
Dr. Moritz Wolter (University of Bonn, expert in high performance computing and machine learning)
For two days, we aimed to create a space in which not only local scientists, and especially early-career researchers, could learn from the data experts and from each other about research data and methods, but in which the data experts could also inspire one another. The schedule included keynote presentations by all data experts, poster and group presentations by the participants, 1:1 sessions between data experts and early-career researchers, as well as a method- and data-related workshop. Above all, we aimed to create an environment in which everyone feels safe to give input, share their knowledge, and learn from the other participants and experts.
For analysis with liquid chromatography (LC), samples and calibration standards generally require a dilution by a factor of 10³ to 10⁶. To guarantee a high accuracy, sample preparation usually employs high-volume pipettes and volumetric flasks for dilution series. Consequently, sample preparation is a prominent driving factor for consumption of solvents in the LC laboratory. Miniaturisation in sample preparation can thus be a means of reducing the required amount of solvent within the laboratory, saving valuable resources. In the context of dilution series, this can be achieved by the use of low-volume dispensing tools, which usually have a higher relative instrument error, resulting in a less accurate overall method. Another approach is the transition to a gravimetric sample preparation, in which the dilution steps are not measured in volume but weight, only depending on the much lower error of the analytical balance. By implementing weighing robots, one can fully automate the sample preparation workflow. This study deals with the comparison of various dilution methods. Gravimetric, robot-aided dilution allows for the reduction of the solvent down to the amount of sample needed for analyses. Including the initial dissolution of the sample, using gravimetric dilution can reliably and repeatedly reduce the required solvent amount by over 90 %, while still generating the same analytical results. Overall, this application leads to significant economic, ecological, social, and technological benefits for the LC laboratory.
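As a simple illustration of the gravimetric approach described above (not the study's automated workflow), the following sketch converts a nominal dilution factor into target weights, so that only the balance accuracy limits the precision; all numbers are placeholders.

```python
def gravimetric_dilution(total_mass_g: float, dilution_factor: float):
    """Return (sample_mass_g, solvent_mass_g) for a mass-based dilution step.

    Because the dilution is controlled by weight, only the balance error
    (not pipette or flask tolerances) limits the achievable accuracy.
    """
    sample_mass = total_mass_g / dilution_factor
    solvent_mass = total_mass_g - sample_mass
    return sample_mass, solvent_mass

# Example: 1 g of final solution at a 1:1000 dilution needs only ~1 mg of sample,
# instead of preparing tens of millilitres in a volumetric flask.
sample_g, solvent_g = gravimetric_dilution(total_mass_g=1.0, dilution_factor=1000)
print(f"weigh {sample_g * 1000:.3f} mg of sample and {solvent_g:.4f} g of solvent")
```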
Background: Soluble CD21 (sCD21) is the product of metalloprotease-mediated proteolysis of CD21, a mechanism in which the entire extracellular domain of CD21 is shed from the cell surface. Through its retained ligand-binding ability and presence in human serum, sCD21 joins the growing list of surface proteins shed from the leukocyte cell surface which allows modulation of the immune response. Summary: sCD21 plays a multifaceted role in the body, including the promotion of inflammatory responses through receptor-ligand interactions with monocyte CD23, acting as a decoy receptor during Epstein-Barr virus infection preventing lymphoproliferation, and suppression of IgG and IgE responses by competitively inhibiting cell surface CD21. Clinical studies have shown that in comparison with healthy individuals, levels of sCD21 in serum are significantly altered in various diseases, highlighted by diverse viral infections, B-cell leukemias, and autoimmune disorders. Key Messages: Although findings of prevalence and functionality suggest sCD21 to be a key modulator of cellular and humoral immunity, questions remain about its origins and the regulation of its responses. Here, we aim to clarify and connect the advances in understanding sCD21 over time with emphasis on its generation by surface cleavage, binding partners, and functional roles. We also provide an outlook on its clinical significance and usage as a diagnostic target and therapeutic biomarker to monitor treatment efficacy in the context of chronic autoimmune disorders.
Multiple myeloma (MM) is a clonal hematologic malignancy characterized by a low rate of complete remissions. Cytokine-induced killer (CIK) cell therapy has shown promising benefits in MM treatment. In this study, we investigated whether the pro-inflammatory cytokines secreted by macrophages could upregulate MICA/B expression and thus the cytotoxicity of CIK cells. Flow cytometry was used for phenotypic measurement and the cytotoxicity assay of CIK cells. Soluble MICA/B and macrophage-derived cytokines were measured by ELISA. The CCK-8 assay was applied to evaluate cell viability. Gene expression levels were investigated using RT-qPCR. The expression of MICA/B and PD-L1 in MM cells was upregulated by pro-inflammatory cytokines. Pro-inflammatory cytokines enhanced the cytotoxicity of CIK cells against MM cells, with TNF-α exhibiting a more potent effect than IL-1β and IL-6, as it strengthened both components of the NKG2D-MICA/B axis. PD-L1 blockade promoted the cytotoxic ability of CIK cells. Mechanistically, IL-1β, IL-6, and TNF-α enhanced the transcription of MICA/B and PD-L1 genes via the PI3K/AKT, JAK/STAT3, and MKK/p38 MAPK pathways. Pro-inflammatory cytokines upregulated the expression of MICA/B and PD-L1, thereby promoting the cytotoxicity of CIK cells against MM by strengthening the NKG2D pathway, while PD-L1 blockade further enhanced the cytotoxicity of CIK cells.
This contribution attempts to read science communication also as a media-aesthetic practice by examining the development of a playful format in a humanities research laboratory. The laboratory is understood as a hybrid between a spatial format and an academic formation, whose design, equipment, and specific practices are interwoven. The focus therefore lies on the dynamic interaction between space and actors, which is conceived as an interdependent system comprising five practices: documenting, researching, playing, negotiating, and producing. These practices are illustrated using the example of the development of a board game as a format of science communication, a process accompanied and documented by means of ethnographic methods. On the basis of this case study, the innovation potential of the game as a (research) format, (research) method, and (research) object is finally reflected upon. From this perspective, this framework ultimately gives rise to a specific 'laboratory atmosphere'. Such a 'game laboratory' can thus be characterized as an experimental space that confronts conventional methods with aesthetic explorations.
The third edition of this book shows concretely what business process management is and how it can be used. To this end, the central aspects are explained and practical tools are presented using examples. Make the daily practice of analyzing and optimizing business processes easier for yourself!
Quadruped robots excel in traversing complex, unstructured environments where wheeled robots often fail. However, enabling efficient and adaptable locomotion remains challenging due to the quadrupeds' nonlinear dynamics, high degrees of freedom, and the computational demands of real-time control. Optimization-based controllers, such as Nonlinear Model Predictive Control (NMPC), have shown strong performance, but their reliance on accurate state estimation and high computational overhead makes deployment in real-world settings challenging. In this work, we present a Multi-Task Learning (MTL) framework in which expert NMPC demonstrations are used to train a single neural network to predict actions for multiple locomotion behaviors directly from raw proprioceptive sensor inputs. We evaluate our approach extensively on the quadruped robot Go1, both in simulation and on real hardware, demonstrating that it accurately reproduces expert behavior, allows smooth gait switching, and simplifies the control pipeline for real-time deployment. Our MTL architecture enables learning diverse gaits within a unified policy, achieving high R² scores for predicted joint targets across all tasks.
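The following is a conceptual sketch of such a multi-task policy (not the Go1 deployment code): a shared trunk with one output head per locomotion task, trained to imitate NMPC joint targets from proprioceptive input. All dimensions and the head-per-task layout are assumptions.

```python
# Conceptual multi-task imitation policy: shared trunk, one head per gait.
import torch
import torch.nn as nn

class MultiTaskPolicy(nn.Module):
    def __init__(self, obs_dim: int = 48, act_dim: int = 12, n_tasks: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
        )
        self.heads = nn.ModuleList([nn.Linear(256, act_dim) for _ in range(n_tasks)])

    def forward(self, obs: torch.Tensor, task_id: int) -> torch.Tensor:
        # obs: (batch, obs_dim) proprioceptive readings; returns joint targets
        return self.heads[task_id](self.trunk(obs))

policy = MultiTaskPolicy()
action = policy(torch.randn(1, 48), task_id=1)   # switching gait = changing task_id
```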
Hybrid energy storage systems (HESS), consisting of a battery, hydrogen storage, an electrolyzer, and a fuel cell, have received increasing attention from the scientific community in recent years as they can help increase the use of renewable energy. In this paper, a novel metamodel-based sizing optimization workflow is developed. The workflow is demonstrated on residential building scenarios with real solar power data. Sobol sequences are utilized to vary the component sizes in simulation, while radial basis functions are employed to approximate simulation results, which are then used for the optimization. The utilization of waste heat from Power-to-Gas and Gas-to-Power processes is evaluated using thermal equations by extending an existing coupled electrochemical and thermodynamic HESS model built in the multiphysical energy system simulator MEgy. The results show a functional workflow that optimizes sizing for a given scenario and outputs the components' daily energy balances and states of charge.
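A minimal sketch of the metamodel-based workflow is given below, with a stand-in objective in place of the MEgy simulation: Sobol samples of the component sizes, a radial-basis-function surrogate of the simulated cost, and an optimization on that surrogate. Bounds, sample counts, and the cost function are placeholders.

```python
# Sketch of a Sobol-sampled RBF metamodel used for sizing optimization.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def simulate(x):
    # placeholder for a simulator run: x = [battery_kWh, electrolyzer_kW, fuelcell_kW]
    return (x[0] - 12) ** 2 + (x[1] - 4) ** 2 + (x[2] - 3) ** 2 + 50

lower, upper = np.array([1.0, 0.5, 0.5]), np.array([30.0, 10.0, 10.0])
X = qmc.scale(qmc.Sobol(d=3, scramble=True).random(64), lower, upper)
y = np.array([simulate(x) for x in X])

surrogate = RBFInterpolator(X, y)                      # metamodel of the simulator
res = minimize(lambda x: surrogate(x[None, :])[0],
               x0=X[y.argmin()], bounds=list(zip(lower, upper)))
print("suggested sizing:", res.x)
```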
Every authority, organization, and company worldwide is currently facing the challenge of determining which applications of artificial intelligence (AI) are meaningful. This article lists application examples to demonstrate the opportunities. It also explains the difference between GPAI models and GPAI systems. The risk classification follows the risk classes defined in the AI Act in Europe. To ensure trustworthy application and risk limitation, an overview of the most important international standards is provided. Based on the international concept of competence, "AI competencies" are defined and formulated along the competence levels according to Bloom's taxonomy. These AI competencies are assigned to different functional roles in organizations and companies. Using six typical application examples, from simple users to AI system manufacturers, the required AI competencies are mapped to both international standards and the risk classes from the European AI Act. Finally, eight recommendations for AI implementation are provided that are useful for any organization or company.
This thesis investigates how consumers perceive the value of products in the context of online shopping, particularly on Amazon, and how this perception influences their ability to detect fake reviews. With the shift from traditional to digital retail, challenges arise in how consumers assess product value without direct interaction. Given the rise of deceptive online practices like fake reviews, understanding human detection of such misinformation is crucial.
The study integrates two theoretical perspectives: the human detection of fake reviews and the concept of perceived customer value. While most research focuses on algorithmic detection, this study emphasizes consumer decision-making and review evaluation. It builds on Holbrook’s customer value typology and tests Leroi-Werelds’ 2019 update, which introduces 24 value types (positive and negative) relevant to modern digital consumption.
A quantitative approach was employed using a survey that examined participants' responses to seven Amazon products of varying value levels (e.g. laptop, wireless headphones, robot vacuums). The Consumer Value Index was operationalized with 21 value types tailored to product assessment and the data was analyzed using structural equation modeling.
The results indicate that Leroi-Werelds' typology effectively captures perceived customer value. This contributes to a better understanding of consumer behavior and can help design more effective e-commerce strategies and safeguards against fake reviews.
This bachelor's thesis shows how the "New Right" instrumentalizes nature conservation topics for its political purposes. The focus is on right-wing narratives and patterns of argumentation, whose effects on society and democracy are analyzed using Gramsci's theory of hegemony. The aim is to make the ideological infiltration of nature conservation visible and to point out possible courses of action.
The remarkable success of Deep Learning approaches is often based and demonstrated on large public datasets. However, when applying such approaches to internal, private datasets, one frequently faces challenges arising from structural differences in the datasets, domain shift, and the lack of labels. In this work, we introduce Tabular Data Adapters (TDA), a novel method for generating soft labels for unlabeled tabular data in outlier detection tasks. By identifying statistically similar public datasets and transforming private data (via a shared autoencoder) into a format compatible with state-of-the-art public models, our approach enables the generation of weak labels. It can thereby help to mitigate the cold-start problem of labeling by building on existing outlier detection models for public datasets. In experiments on 50 tabular datasets across different domains, we demonstrate that our method is able to provide more accurate annotations than baseline approaches while reducing computational time. Our approach offers a scalable, efficient, and cost-effective solution to bridge the gap between public research models and real-world industrial applications.
Contrastive learning (CL) approaches have gained great recognition as a very successful subset of self-supervised learning (SSL) methods. SSL enables learning from unlabeled data, a crucial step in the advancement of deep learning, particularly in computer vision (CV), given the plethora of unlabeled image data. CL works by comparing different random augmentations (e.g., different crops) of the same image, thus achieving self-labeling. Nevertheless, randomly augmenting images, and especially random cropping, can result in an image that is semantically very distant from the original and therefore leads to false labeling, undermining the efficacy of these methods. In this research, two novel parameterized cropping methods are introduced that increase the robustness of self-labeling and consequently the efficacy of the approach. The results show that the use of these methods significantly improves the accuracy of the model by between 2.7% and 12.4% on the downstream task of classifying CIFAR-10, depending on the crop size, compared with the non-parameterized random cropping method.
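The following hedged sketch illustrates one possible parameterized crop (the paper's exact parameterizations are not reproduced here): the crop position is constrained to overlap a central image region by a minimum fraction, reducing semantically empty crops. All parameters are illustrative.

```python
# Sketch of a parameterized crop that must overlap a central region of the image.
import random

def parameterized_crop(img_w: int, img_h: int, crop: int,
                       center_frac: float = 0.5, min_overlap: float = 0.3):
    """Return (left, top) of a crop x crop window whose intersection with the
    central region covers at least min_overlap of the crop area."""
    cx0, cy0 = img_w * (1 - center_frac) / 2, img_h * (1 - center_frac) / 2
    cx1, cy1 = img_w - cx0, img_h - cy0
    while True:
        left = random.uniform(0, img_w - crop)
        top = random.uniform(0, img_h - crop)
        ix = max(0.0, min(left + crop, cx1) - max(left, cx0))
        iy = max(0.0, min(top + crop, cy1) - max(top, cy0))
        if ix * iy >= min_overlap * crop * crop:
            return int(left), int(top)

print(parameterized_crop(img_w=32, img_h=32, crop=20))   # e.g. for CIFAR-10 images
```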
Force field (FF) based molecular modeling is an often used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force-field parameters need to be adjusted. This force-field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of various properties’ errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the combined error for the reproduction of the properties targeted in this optimization is reduced. Furthermore, the transferability of the force field parameters (FFParams) to chemically similar systems is increased. One interesting outcome is a large variety in the resulting optimized FFParams and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
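As a minimal illustration of the weighted objective discussed above (not the actual optimization code), the sketch below combines the property-specific errors with weighting factors and applies a non-linear projection, here a square root as one possible choice; the function and parameter names are assumptions.

```python
# Sketch of a weighted, non-linearly projected loss for FF parameter optimization.
import numpy as np

def weighted_loss(err_density: float, err_rce: float,
                  w_density: float = 1.0, w_rce: float = 1.0,
                  project=np.sqrt) -> float:
    """Combine bulk-phase density and relative conformational energy errors;
    the projection damps whichever term currently dominates."""
    return w_density * project(err_density) + w_rce * project(err_rce)

# Shifting weight toward the RCE term changes where the optimizer spends its budget:
print(weighted_loss(err_density=0.04, err_rce=2.5, w_density=1.0, w_rce=1.0))
print(weighted_loss(err_density=0.04, err_rce=2.5, w_density=1.0, w_rce=4.0))
```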
Background
The increased application of oxidative water treatment is associated with the presence of halogenated disinfection by-products (DBPs) in aqueous systems. Due to their hazard potential, permanent measurements of selected analytes are required to monitor their compliance with regulatory limits and guidelines. However, the simultaneous acquisition of alarming inorganic oxyhalide species by conventional ion chromatography (IC) is often impeded by co-elution of interfering analyte or matrix components, especially when DBPs of multiple halogens are present in solution. This necessitates a complementary, orthogonal detection setup that allows for an element-specific analysis.
Regulations on financial assistance of the student body of Hochschule Bonn-Rhein-Sieg of May 12, 2025
(2025)
Active 3D imaging systems are increasingly important in fields such as logistics, biometric authentication, and industrial safety because they enable precise three-dimensional measurements. Unlike passive systems, they generate their own illumination, allowing measurements to be performed independently of ambient light. The active modulation of the illumination also makes it possible to obtain additional information. Active 3D imaging systems require precise, efficient, and compact illumination components. In this work, existing illumination solutions are reviewed, with particular emphasis on two approaches: randomized microlens arrays (rMLAs) and computer-generated holograms (CGHs). A robust development process for the design, fabrication, and validation of rMLAs and CGHs is presented and demonstrated using the design of diffusers for time-of-flight (ToF) cameras as an example. These diffusers convert the circular output of an uncollimated array of vertical-cavity surface-emitting lasers (VCSELs) into a rectangular beam profile with higher intensity at the edges, known as a batwing profile, to compensate for vignetting in the ToF camera. Vignetting describes the drop-off in illumination at the edges of the image. Two different target batwing profiles are selected, and for each, one CGH and one rMLA are designed. In addition to these four state-of-the-art designs, four novel diffusers are designed that incorporate a collimating lens in the original diffuser surface.
The designed diffusers are first validated through simulations, and subsequently fabricated using a two-photon polymerization 3D printer. The beam profiles generated by the manufactured diffusers, when illuminated with a VCSEL array, are then investigated using a camera-based beam profile measurement setup.
The efficiency with which the fabricated diffusers direct light into the desired region is up to 81 %. The diffusers correctly illuminate the intended rectangular field of illumination (FOI), although minor errors in the FOI size are observed. The cause of these errors is shown in simulations and can be avoided in future work. The proposed integrated collimation successfully results in sharper and more efficient beam profiles.
Germany aims to achieve a national climate-neutral energy system by 2045. The residential sector still accounts for 29% of final energy consumption, with 74% attributed to the direct use of fossil fuels for heating and hot water. In order to reduce fossil energy use in the household sector, great efforts are being made to design new energy concepts that expand the use of renewable energies to supply electricity and heat. One possibility is to convert parts of the natural gas grid to a hydrogen-based gas grid to deliver and store energy for urban quarters of buildings, especially with older building stock where electrification of heat via heat pumps is difficult due to technical, acoustical, and economic reasons. A comprehensive dataset was generated by a bottom-up analysis with open governmental and statistical data to determine regional building types regarding energy demand, solar potential, and existing grid infrastructure. The buildings’ connections to the electricity, gas, and district heating networks are considered. From this, a representative sample dataset was chosen as input for a newly developed energy system model based on energy flow simulation. The model simulates the interaction of hydrogen generation (HG) from excess solar energy by electrolysis, storage in a metal-hydride storage (MHS) tank, and hydrogen use in a connected fuel cell (FC), forming a local PVPtGtHP (Photovoltaic Power-to-Gas-to-Heat-and-Power) network. In addition to the seasonal hydrogen storage path (HSP), a battery completes the system to form a hybrid energy storage system (HESS). Paired with seasonal time series for PV power, electricity and heat demand, and a model for connection to grid infrastructure, the simulation of different hydrogen applications and MHS placements aims to analyze operating times and energy shares of the system's equipment and existing infrastructure. The method for obtaining the dataset, together with the simulation model presented, can be used by energy planners for cities, communities, and building developers to analyze the potentials of a quarter or region and plan a transition towards a more energy-efficient and sustainable energy system.
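To illustrate the simulated interaction of the storage paths (a simplified sketch, not the model described above), the following hourly dispatch routine charges the battery first, routes remaining PV surplus to the electrolyzer and hydrogen store, and meets deficits from the battery and then the fuel cell; efficiencies, capacities, and input values are placeholders.

```python
# Simplified hourly dispatch of a PV + battery + hydrogen (electrolyzer/fuel cell) system.
def dispatch_step(pv_kwh, demand_kwh, batt, h2, batt_cap=10.0, h2_cap=200.0,
                  eta_ely=0.6, eta_fc=0.5):
    surplus = pv_kwh - demand_kwh
    if surplus >= 0:
        charge = min(surplus, batt_cap - batt)
        batt += charge
        h2 += min((surplus - charge) * eta_ely, h2_cap - h2)   # store surplus as hydrogen
    else:
        deficit = -surplus
        discharge = min(deficit, batt)
        batt -= discharge
        h2 -= min((deficit - discharge) / eta_fc, h2)          # re-electrify hydrogen;
        # any remaining deficit would be covered by the grid in this sketch
    return batt, h2

batt, h2 = 5.0, 50.0
for pv, demand in [(6.0, 2.0), (0.5, 3.0), (8.0, 1.5)]:        # hypothetical hours (kWh)
    batt, h2 = dispatch_step(pv, demand, batt, h2)
    print(f"battery {batt:.1f} kWh, hydrogen store {h2:.1f} kWh")
```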
N-Nitrosamines have long been identified as relevant contaminants in potable water due to their classification as probable human carcinogens. Thus, highly sensitive detection of these pollutants in the ultra-trace range is imperative to comply with strict regulatory specifications. To this end, many institutions rely on mass spectrometry-based analysis methods, which have the disadvantage of being cost- and resource-intensive. This study aims to develop, optimise, and evaluate a gas chromatography-drift tube ion mobility spectrometry (GC-IMS) based method with a twofold enrichment strategy consisting of solid phase extraction (SPE) followed by in-tube extraction (ITEX) of the eluate for nine different nitrosamines in drinking water, in order to offer a sensitive alternative to the current state of the art. Optimisation of the ITEX parameters was successfully performed using a simplex self-directing design approach, so that a calibration range between 5 and 50 ng/L could be achieved. The suitability of a linear regression model was demonstrated via analysis of variance (ANOVA) criteria. The analysis of different spiked drinking water samples allowed for the determination of the method's accuracy (27.3 - 114.5 % across different nitrosamine analytes and matrices, with most above 70 % recovery) and detection limits (1.12 - 12.48 ng/L across different nitrosamine analytes and matrices), which fall within the range of required limit values. The tested drinking waters show innate nitrosamine concentrations well below the detection limits and can thus be deemed free from these contaminants.
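As a rough illustration of simplex-based parameter optimization (using Nelder-Mead as a stand-in for the sequential simplex self-directing design applied in the study), the sketch below searches over two hypothetical ITEX parameters against a placeholder response surface.

```python
# Nelder-Mead simplex search over two hypothetical ITEX parameters.
import numpy as np
from scipy.optimize import minimize

def negative_peak_area(params):
    # placeholder response surface: peak area as a function of
    # extraction strokes and desorption temperature (°C)
    strokes, desorb_temp = params
    return -(100 - (strokes - 60) ** 2 / 50 - (desorb_temp - 220) ** 2 / 400)

res = minimize(negative_peak_area, x0=np.array([30.0, 180.0]), method="Nelder-Mead")
print("suggested settings (strokes, desorption T):", res.x)
```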
Russia's Maternity Capital program, launched in 2007, is a key policy with an explicit pro-natalist focus, aimed at countering the country's long-term depopulation trend by encouraging families to have more children. Initially, it provided financial incentives for second and subsequent births, and the program was moderately effective in influencing reproductive decisions within this target group. In 2020, the government revised the Maternity Capital program to include first-born children. This redesign shifted the program's emphasis from promoting higher-order births, which are more likely to raise fertility, to broader poverty reduction among families with children, weakening its demographic effectiveness. This study provides a literature review and demographic analysis to evaluate the program's effectiveness in achieving its stated goals.
The data sources include state legislation, strategic demographic and family policy documents, official government reports, peer-reviewed research, and expert opinions. The findings show that while Maternity Capital investments have supported families in improving housing and educational access, the program's limited flexibility, due to its paternalistic design restricting fund usage, may constrain its ability to meet diverse family needs. The study observed short-term increases in fertility; still, the program alone has proven insufficient to reverse long-term demographic decline. This paper identifies gaps between the policy's intentions and the socioeconomic realities faced by families, and it offers recommendations for improving the Maternity Capital program to achieve more favourable demographic outcomes and address social protection needs, providing insights for policymakers and researchers on its impacts on family social protection and child well-being. It argues that financial support must be substantial, well targeted, particularly toward families considering a second or subsequent child, and paired with complementary pro-natalist measures rather than replaced by them.