H-BRS Bibliography
After more than twenty years of research, the molecular events of apoptotic cell death can be succinctly stated: different pathways, activated by diverse signals, increase the activity of proteases called caspases that rapidly and irreversibly dismantle the condemned cell by cleaving specific substrates. During this time, the ideas that apoptosis protects us from tumourigenesis and that cancer chemotherapy works by inducing apoptosis also emerged. Currently, apoptosis research is shifting away from the intracellular events within the dying cell to focus on the effect of apoptotic cells on surrounding tissues. This is producing counterintuitive data showing that our understanding of the role of apoptosis in tumourigenesis and cancer therapy is too simple, with some interesting and provocative implications. Here, we will consider evidence supporting the idea that dying cells signal their presence to the surrounding tissue and, in doing so, elicit repair and regeneration that compensates for any loss of function caused by cell death. We will discuss evidence suggesting that cancer cell proliferation may be driven by inappropriate or corrupted tissue-repair programmes that are initiated by signals from apoptotic cells, and show how this may dramatically modify how we view the role of apoptosis in both tumourigenesis and cancer therapy.
"Entdecken wir die Welt"
(2007)
Whoever begins a degree programme sets out on a new, as yet unknown path in life. There is much to learn at first, so that knowledge and skills can enable competent action and later be applied in everyday professional life. New knowledge is one thing; the other is the personal experiences that, in the short time of a degree programme, can enrich and shape a life.
Students and graduates of the new Bachelor's programmes report on their experiences in the practical semester. In doing so, they want to encourage other students to find their own way abroad, to take the first step and, above all, to start planning a practical semester abroad of their own. Their contributions show that personal experiences and discoveries abroad can enrich life and create friendships across cultures and countries.
Software developers build complex systems using a multitude of third-party libraries. Documentation is key to understanding and using the functionality provided via the libraries' APIs. Functionality is therefore the main focus of contemporary API documentation, while cross-cutting concerns such as security are almost never considered, especially when the API itself does not provide security features. For example, the documentation of JavaScript libraries for use in web applications does not specify how to add or adapt a Content Security Policy (CSP) to mitigate content injection attacks such as Cross-Site Scripting (XSS). This is unfortunate, as security-relevant API documentation might influence secure coding practices and prevailing major vulnerabilities such as XSS. For the first time, we study the effects of integrating security-relevant information into non-security API documentation. For this purpose, we took CSP as an exemplary study object and extended the official Google Maps JavaScript API documentation with security-relevant CSP information in three distinct manners. Then, we evaluated the usage of these variations in a between-group eye-tracking lab study involving N=49 participants. Our observations suggest: (1) Developers focus on elements with code examples. They mostly skim the documentation while searching for a quick solution to their programming task. This finding gives further evidence to results of related studies. (2) The location where CSP-related code examples are placed in non-security API documentation significantly impacts the time it takes to find this security-relevant information. In particular, the study results showed that the proximity to function-related code examples in the documentation is a decisive factor. (3) Examples significantly help to produce secure CSP solutions. (4) Developers have additional information needs that our approach cannot meet.
Overall, our study contributes to a first understanding of the impact of security-relevant information in non-security API documentation on CSP implementation. Although further research is required, our findings emphasize that API producers should take responsibility for adequately documenting security aspects, thus supporting the awareness and training of developers to implement secure systems. This responsibility also holds in seemingly non-security-relevant contexts.
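As an illustration of the kind of security-relevant snippet at issue in this study, the following sketch assembles a CSP header value for a page that embeds the Google Maps JavaScript API. The directive set and source hosts are assumptions chosen for illustration; they are not the exact policy used in the study's documentation variants.

```python
# Hedged sketch: serializing a Content-Security-Policy for a page embedding the
# Google Maps JavaScript API. Hosts and directives below are illustrative
# assumptions, not the policy from the study's documentation variants.
def build_csp(directives: dict) -> str:
    """Serialize a directive map into a CSP header value."""
    return "; ".join(f"{name} {' '.join(sources)}" for name, sources in directives.items())

maps_csp = build_csp({
    "default-src": ["'self'"],
    # Allow the Maps loader script (host assumed for illustration).
    "script-src": ["'self'", "https://maps.googleapis.com"],
    # Map tiles and marker images are typically served from Google hosts.
    "img-src": ["'self'", "data:", "https://maps.gstatic.com"],
    "style-src": ["'self'", "'unsafe-inline'"],
})
print(maps_csp)
```

Such a value would be delivered in the `Content-Security-Policy` HTTP response header; a stricter policy would also replace `'unsafe-inline'` with nonces or hashes.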
The so-called "Deutschlandstipendium" scholarship programme was launched in 2010. Under the statutory provisions, the scholarships are to be awarded on the basis of talent and achievement. In addition, social engagement and particular social, family or personal circumstances are to be taken into account. For funding, the universities first have to solicit private sponsorship, which is then matched by the same amount from the federal and state governments. Private sponsors may specify, for the scholarships they co-finance, the degree programmes from which their scholarship holders are to be selected. The universities must, however, ensure that one third of all scholarships are awarded without any such earmarking. Sponsors may not exert direct influence on the selection of individual candidates. Against this background, universities are required to create incentives for private sponsors while designing application and selection procedures that comply with the statutory provisions. This creates considerable administrative effort for the universities. To reduce it, this article describes a transparent, traceable, time- and cost-saving process implemented as a programmed workflow.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2020)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. Two hundred forty-four patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients, at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first 2 years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% (157 of 204 patients) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome when compared with many other organic acidurias.
Nitrile-type inhibitors are known to interact with cysteine proteases in a covalent-reversible manner. The chemotype of 3-cyano-3-aza-β-amino acid derivatives was designed in which the N-cyano group is centrally arranged in the molecule to allow for interactions with the nonprimed and primed binding regions of the target enzymes. These compounds were evaluated as inhibitors of the human cysteine cathepsins K, S, B, and L. They exhibited slow-binding behavior and were found to be exceptionally potent, in particular toward cathepsin K, with second-order rate constants up to 52,900 × 10³ M⁻¹ s⁻¹.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method: We performed a systematic literature search to identify all published cases. Two hundred eleven patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview of all published cases with HMGCLD, including a list of all known HMGCL mutations.
3-Hydroxyisobutyrate Dehydrogenase (HIBADH) deficiency - a novel disorder of valine metabolism
(2021)
3-Hydroxyisobutyric acid (3HiB) is an intermediate in the degradation of the branched-chain amino acid valine. Disorders in valine degradation can lead to 3HiB accumulation and its excretion in the urine. This article describes the first two patients with a new metabolic disorder, 3-hydroxyisobutyrate dehydrogenase (HIBADH) deficiency, its phenotype and its treatment with a low-valine diet. The detected mutation in the HIBADH gene leads to nonsense-mediated mRNA decay of the mutant allele and to a complete loss-of-function of the enzyme. Under strict adherence to a low-valine diet a rapid decrease of 3HiB excretion in the urine was observed. Due to limited patient numbers and intrafamilial differences in phenotype with one affected and one unaffected individual, the clinical phenotype of HIBADH deficiency needs further evaluation.
In service robotics, hardly any task can be accomplished without involving objects, as in searching, fetching or delivering tasks. Service robots are supposed to capture object-related information in real-world scenes efficiently, for instance in the presence of clutter and noise, while also being flexible and scalable enough to memorize a large set of objects. Besides object perception tasks such as object recognition, where the object's identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based on e.g. their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on Growing Neural Gas (GNG), combined with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties of the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories.
Experimental results are shown for different shapes related to domestically occurring object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false positive handling to enhance the robustness of the categorization. Also, an initial approach towards incremental shape category learning is proposed that learns a new category based on the set of previously learned shape categories.
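A shape distribution-based descriptor of the kind mentioned above can be sketched as a normalized histogram of random pairwise point distances (the classic "D2" distribution). Note this is an illustrative stand-in: the thesis couples its descriptor with a GNG surface reconstruction, whereas here raw sampled points are used directly.

```python
import numpy as np

def d2_descriptor(points, bins=32, n_pairs=2000, seed=0):
    """Normalized histogram of random pairwise point distances (D2 distribution)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, float(d.max()) + 1e-9))
    return hist / hist.sum()  # normalize so descriptors are comparable

# Two synthetic "categories": points on a unit sphere vs. an elongated box.
rng = np.random.default_rng(1)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
box = rng.uniform([-0.2, -0.2, -1.0], [0.2, 0.2, 1.0], size=(500, 3))
print(np.abs(d2_descriptor(sphere) - d2_descriptor(box)).sum())
```

The resulting fixed-length histograms are the kind of feature vector that a dictionary approach with a supervised learner can consume.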
The ability to detect people has become a crucial subtask, especially for robotic systems aimed at applications in public or domestic environments. Robots already provide their services, e.g., in real home-improvement stores, and guide people to a desired product. In such a scenario, many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person, or even a group of people, within a reasonable range in front of the robot. Challenges of such a real-world task include changing lighting conditions, a dynamic environment and different people shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected with a rate of 87.29% and sitting people with 74.94% using a Random Forest classifier. Certain objects caused several false detections. To eliminate these, a verification step is proposed that further evaluates the person's shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz).
Finally, the component has been embedded into a complete people-search task that explores the environment, finds all people and approaches each detected person.
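The classification stage described above can be sketched with a Random Forest trained on geometric features of candidate point-cloud segments. The feature names and values below are synthetic placeholders, not the thesis's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_segments(n, height_mean, width_mean, label):
    """Synthetic (height, width, point-density) features for candidate segments."""
    feats = np.column_stack([
        rng.normal(height_mean, 0.05, n),  # segment height in metres
        rng.normal(width_mean, 0.05, n),   # segment width in metres
        rng.normal(1.0, 0.1, n),           # relative point density
    ])
    return feats, np.full(n, label)

X_person, y_person = make_segments(200, 1.7, 0.5, 1)  # standing-person-like segments
X_object, y_object = make_segments(200, 0.8, 1.2, 0)  # furniture-like segments
X = np.vstack([X_person, X_object])
y = np.concatenate([y_person, y_object])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

In the real pipeline, each segment produced by the top-down/bottom-up segmentation would yield one such feature vector, and positive predictions would then pass through the 2D shape verification step.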
Animal models are often needed in cancer research but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We, therefore, developed a simple eight-step process to fabricate a 3D replica from computer tomography (CT) data using solely open access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and use of open access software allows for a wide range of possible uses.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
The objective of this research project is to develop a user-friendly and cost-effective interactive input device that allows intuitive and efficient manipulation of 3D objects (6 DoF) in virtual reality (VR) visualization environments with flat projection walls. Within this project, it was planned to develop an extended version of a laser pointer with multiple laser beams arranged in specific patterns. Using stationary cameras observing projections of these patterns from behind the screens, an algorithm is to be developed for reconstructing the emitter's absolute position and orientation in space. The laser-pointer concept is an intuitive way of interaction that provides the user with familiar, mobile and efficient navigation through a 3D environment. In order to navigate in a 3D world, the absolute position (x, y and z) and orientation (roll, pitch and yaw angles) of the device must be known, a total of 6 degrees of freedom (DoF). Ordinary laser-based pointers, when captured on a flat surface with a video camera system and then processed, only provide x and y coordinates, effectively reducing the available input to 2 DoF. To overcome this problem, an additional set of multiple (invisible) laser pointers should be used in the pointing device. These laser pointers should be arranged so that the projection of their rays forms a fixed dot pattern when intersected with the flat surface of the projection screens. Images of such a pattern will be captured via a real-time camera-based system and then processed using mathematical re-projection algorithms. This allows the reconstruction of the full absolute 3D pose (6 DoF) of the input device. Additionally, multi-user or collaborative work should be supported by the system, allowing several users to interact with a virtual environment at the same time.
Possibilities to port the processing algorithms to embedded processors or FPGAs will also be investigated during this project.
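The geometry underlying the re-projection idea can be sketched with a forward model: a fixed bundle of laser rays is emitted from a posed device and intersected with the screen plane. The real system would solve the inverse problem, recovering the 6-DoF pose from the captured dot pattern, by inverting exactly this mapping. The ray bundle below is an illustrative assumption.

```python
import numpy as np

def project_pattern(position, yaw_deg, ray_dirs):
    """Intersect rays (origin = position, directions rotated by yaw) with the screen plane z = 0."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about the z axis
    dots = []
    for d in ray_dirs @ R.T:
        t = -position[2] / d[2]        # ray parameter where the ray meets z = 0
        dots.append(position + t * d)
    return np.array(dots)[:, :2]       # (x, y) dot coordinates on the screen

# Five beams: one central, four tilted, all pointing towards the screen (negative z).
rays = np.array([[0, 0, -1], [0.1, 0, -1], [-0.1, 0, -1], [0, 0.1, -1], [0, -0.1, -1]], float)
pattern = project_pattern(np.array([0.0, 0.0, 2.0]), yaw_deg=0.0, ray_dirs=rays)
print(pattern)
```

Because translating the emitter shifts every projected dot rigidly while rotation and distance change the pattern's shape and spread, the dot constellation carries enough information to estimate all six degrees of freedom.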
Background: Cancer heterogeneity poses a serious challenge concerning the toxicity and adverse effects of therapeutic inhibitors, especially when it comes to combinatorial therapies that involve multiple targeted inhibitors. In particular, in non-small cell lung cancer (NSCLC), a number of studies have reported synergistic effects of drug combinations in preclinical models, while they were only partially successful in the clinical setting, suggesting that alternative clinical strategies (taking genetic background and immune response into account) should be considered. Herein, we investigated the antitumor effect of cytokine-induced killer (CIK) cells in combination with ALK and PD-1 inhibitors in vitro on genetically variable NSCLC cell lines.
Methods: We co-cultured the three genetically different NSCLC cell lines NCI-H2228 (EML4-ALK), A549 (KRAS mutation), and HCC-78 (ROS1 rearrangement) with and without nivolumab (PD-1 inhibitor) and crizotinib (ALK inhibitor). Additionally, we profiled the variability of surface expression of multiple immune checkpoints, absolute dead-cell counts, and intracellular granzyme B on CIK cells using flow cytometry as well as RT-qPCR. ELISA and Western blot were performed to verify the activation of CIK cells.
Results: Our analysis showed that (a) nivolumab significantly weakened PD-1 surface expression on CIK cells without affecting other immune checkpoints or PD-1 mRNA expression, (b) this combination strategy showed an effective response in terms of cell viability, IFN-γ production, and intracellular release of granzyme B in CD3+CD56+ CIK cells, but solely in NCI-H2228, (c) the intrinsic expression of Fas ligand (FasL) as a T-cell activation marker in CIK cells was upregulated by this additive effect, and (d) nivolumab significantly increased Foxp3 expression in the CD4+CD25+ subpopulation of CIK cells. Taken together, we showed that CIK cells in combination with crizotinib and nivolumab can enhance the anti-tumor immune response through FasL activation, leading to increased IFN-γ and granzyme B, but only in NCI-H2228 cells with the EML4-ALK rearrangement. We therefore hypothesize that CIK therapy may be a potential alternative for NSCLC patients harboring the EML4-ALK rearrangement, and we support the idea that combination therapies offer significant potential when they are optimized on a patient-by-patient basis.
The simultaneous operation of multiple different semiconducting metal oxide (MOX) gas sensors is demanding for the readout circuitry. The challenge results from the strongly varying signal intensities of the various sensor types to the target gas. While some sensors change their resistance only slightly, other types can react with a resistive change over a range of several decades. Therefore, a suitable readout circuit has to be able to capture all these resistive variations, requiring it to have a very large dynamic range. This work presents a compact embedded system that provides a full, high range input interface (readout and heater management) for MOX sensor operation. The system is modular and consists of a central mainboard that holds up to eight sensor-modules, each capable of supporting up to two MOX sensors, therefore supporting a total maximum of 16 different sensors. Its wide input range is achieved using the resistance-to-time measurement method. The system is solely built with commercial off-the-shelf components and tested over a range spanning from 100Ω to 5 GΩ (9.7 decades) with an average measurement error of 0.27% and a maximum error of 2.11%. The heater management uses a well-tested power-circuit and supports multiple modes of operation, hence enabling the system to be used in highly automated measurement applications. The experimental part of this work presents the results of an exemplary screening of 16 sensors, which was performed to evaluate the system’s performance.
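The resistance-to-time principle behind the wide dynamic range can be sketched as follows: the unknown sensor resistance discharges a known capacitor, and the time to cross a comparator threshold is measured, so resistance maps linearly onto time. The component values below are assumptions for illustration, not the paper's actual circuit parameters.

```python
import math

C = 100e-12          # reference capacitor, 100 pF (assumed)
V0, VTH = 3.3, 1.2   # supply and comparator threshold voltages (assumed)

def discharge_time(resistance):
    """Time for an RC discharge from V0 down to VTH: t = R*C*ln(V0/VTH)."""
    return resistance * C * math.log(V0 / VTH)

def resistance_from_time(t):
    """Invert the measurement: R = t / (C*ln(V0/VTH))."""
    return t / (C * math.log(V0 / VTH))

# A 9.7-decade range (100 ohm .. 5 Gohm, as in the paper) maps onto timing alone:
for R in (100.0, 1e6, 5e9):
    t = discharge_time(R)
    print(f"R = {R:.0e} ohm -> t = {t:.3e} s -> recovered {resistance_from_time(t):.3e} ohm")
```

Because only a time interval is digitized, the same counter hardware covers the whole resistance range; the practical limits are timer resolution at the low end and leakage currents at the high end.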
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming since the sensor’s sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor’s integrated heater is stable, regulated and error-free (or error-tolerant). Especially the error-free requirement can only be achieved if the power supply implements failure-avoiding and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
A Comparative Study of Uncertainty Estimation Methods in Deep Learning Based Classification Models
(2020)
Deep learning models produce overconfident predictions even for misclassified data. This work aims to improve the safety guarantees of software-intensive systems that use deep learning based classification models for decision making by performing comparative evaluation of different uncertainty estimation methods to identify possible misclassifications.
In this work, uncertainty estimation methods applicable to deep learning models are reviewed and those which can be seamlessly integrated to existing deployed deep learning architectures are selected for evaluation. The different uncertainty estimation methods, deep ensembles, test-time data augmentation and Monte Carlo dropout with its variants, are empirically evaluated on two standard datasets (CIFAR-10 and CIFAR-100) and two custom classification datasets (optical inspection and RoboCup@Work dataset). A relative ranking between the methods is provided by evaluating the deep learning classifiers on various aspects such as uncertainty quality, classifier performance and calibration. Standard metrics like entropy, cross-entropy, mutual information, and variance, combined with a rank histogram based method to identify uncertain predictions by thresholding on these metrics, are used to evaluate uncertainty quality.
The results indicate that Monte Carlo dropout combined with test-time data augmentation outperforms all other methods by identifying more than 95% of the misclassifications and representing uncertainty in the highest number of samples in the test set. It also yields a better classifier performance and calibration in terms of higher accuracy and lower Expected Calibration Error (ECE), respectively. A python based uncertainty estimation library for training and real-time uncertainty estimation of deep learning based classification models is also developed.
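The uncertainty metrics named above (entropy and mutual information) can be computed from stochastic forward passes such as Monte Carlo dropout samples. The sketch below is a simplified illustration of those metrics, not the thesis's full evaluation pipeline.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution; probs has shape (T, classes)."""
    mean = probs.mean(axis=0)
    return -np.sum(mean * np.log(mean + 1e-12))

def mutual_information(probs):
    """Predictive entropy minus expected per-sample entropy (the epistemic part)."""
    per_sample = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return predictive_entropy(probs) - per_sample.mean()

confident = np.tile([0.98, 0.01, 0.01], (10, 1))  # T=10 agreeing stochastic passes
disagreeing = np.array([[0.9, 0.05, 0.05]] * 5 + [[0.05, 0.9, 0.05]] * 5)

print(predictive_entropy(confident), predictive_entropy(disagreeing))
print(mutual_information(confident), mutual_information(disagreeing))
```

Thresholding on such metrics is what flags likely misclassifications: samples whose passes disagree score high on both measures, while unanimous confident predictions score near zero.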
The continuously increasing number of biomedical scholarly publications makes it challenging to construct document recommendation algorithms that can efficiently navigate through literature. Such algorithms would help researchers in finding similar, relevant, and related publications that align with their research interests. Natural Language Processing offers various alternatives to compare publications, ranging from entity recognition to document embeddings. In this paper, we present the results of a comparative analysis of vector-based approaches to assess document similarity in the RELISH corpus. We aim to determine the best approach that resembles relevance without the need for further training. Specifically, we employ five different techniques to generate vectors representing the text in the documents. These techniques employ a combination of various Natural Language Processing frameworks such as Word2Vec, Doc2Vec, dictionary-based Named Entity Recognition, and state-of-the-art models based on BERT. To evaluate the document similarity obtained by these approaches, we utilize different evaluation metrics that account for relevance judgment, relevance search, and re-ranking of the relevance search. Our results demonstrate that the most promising approach is an in-house version of document embeddings, starting with word embeddings and using centroids to aggregate them by document.
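The best-performing approach described above, representing a document by the centroid of its word embeddings and comparing documents by similarity, can be sketched as follows. The toy vocabulary and 4-dimensional vectors are placeholders for a trained Word2Vec model.

```python
import numpy as np

# Placeholder word vectors; a real system would load a trained Word2Vec model.
word_vectors = {
    "cell":    np.array([0.9, 0.1, 0.0, 0.0]),
    "protein": np.array([0.8, 0.2, 0.1, 0.0]),
    "robot":   np.array([0.0, 0.1, 0.9, 0.2]),
    "sensor":  np.array([0.1, 0.0, 0.8, 0.3]),
}

def doc_embedding(tokens):
    """Centroid of the known word vectors occurring in a document."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

bio_doc = doc_embedding(["cell", "protein", "cell"])
robo_doc = doc_embedding(["robot", "sensor"])
print(cosine(bio_doc, robo_doc), cosine(bio_doc, doc_embedding(["protein"])))
```

Ranking all corpus documents by this cosine score against a query document yields the relevance-search lists that the evaluation metrics above assess.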
Research on autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that is fundamental to enabling the desired adaptation and accounting for the challenges the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start acquiring knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises and to “forget” old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation.
A brief introduction to the context of this application is provided, as well as a discussion of related issues such as the relationship of the surprise mechanism with other components of the robot's conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.
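A measure in the spirit described above can be sketched as Shannon surprisal, -log2 p, with p estimated from a fixed-length short-term memory: repeated events habituate, and events that fall out of the window become surprising again. All modelling details here are illustrative, not the thesis's actual formulation.

```python
import math
from collections import deque

class SurpriseModel:
    def __init__(self, window=20):
        self.memory = deque(maxlen=window)  # short-term memory of recent events

    def observe(self, event):
        # Laplace-smoothed probability estimate from the memory window.
        p = (self.memory.count(event) + 1) / (len(self.memory) + 2)
        self.memory.append(event)
        return -math.log2(p)                # surprisal in bits

model = SurpriseModel()
first = model.observe("red_ball")           # novel event: high surprisal
for _ in range(10):
    habituated = model.observe("red_ball")  # repeated event: surprisal decays
novel = model.observe("green_cube")         # unseen event: surprisal spikes again
print(first, habituated, novel)
```

The long-term-memory subsumption principle would sit in front of this: events explainable by prior knowledge bypass the surprisal computation entirely.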
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model as they are by nature vague, depend on the user’s background and are not always deterministic, changing depending on the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many, diverse solutions, with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
Electrical signal transmission in power electronic devices takes place through high-purity aluminum bonding wires. Cyclic mechanical and thermal stresses during operation lead to fatigue loads, resulting in premature failure of the wires, which cannot be reliably predicted. The following work presents two fatigue lifetime models calibrated and validated based on experimental fatigue results of an aluminum bonding wire and subsequently transferred and applied to other wire types. The lifetime modeling of Wöhler curves for different load ratios shows good but limited applicability for the linear model. The model can only be applied above 10,000 cycles and within the investigated load range of R = 0.1 to R = 0.7. The nonlinear model shows very good agreement between model prediction and experimental results over the entire investigated cycle range. Furthermore, the predicted Smith diagram is not only consistent in the investigated load range but also in the extrapolated load range from R = −1.0 to R = 0.8. A transfer of both model approaches to other wire types by using their tensile strengths can be implemented as well, although the nonlinear model is more suitable since it covers the entire load and cycle range.
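The "linear" lifetime-model idea can be sketched as a Wöhler (S-N) curve that is linear in log-log coordinates, i.e. a Basquin-type relation N = C·S^(-k). The Basquin form, the synthetic data and the parameter values are assumptions for illustration; the paper's actual model formulations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true, C_true = 5.0, 1e18
S = np.array([120.0, 140.0, 160.0, 180.0, 200.0])             # load amplitudes (synthetic)
N = C_true * S**(-k_true) * rng.lognormal(0.0, 0.05, S.size)  # cycles to failure with scatter

# Least-squares fit in log-log space recovers the Basquin parameters:
# log10(N) = log10(C) - k * log10(S).
b, a = np.polyfit(np.log10(S), np.log10(N), 1)
k_fit, C_fit = -b, 10**a
print(f"fitted exponent k = {k_fit:.2f}, intercept C = {C_fit:.2e}")
```

Fitting such curves separately per load ratio R, and transferring the parameters between wire types via tensile strength, mirrors the workflow the abstract describes.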
Climate change is increasingly affecting vulnerable groups and resulting in dire social and economic consequences, especially for those in the Global South. Managing current and emerging climate-related risks will require increasing individual’s and communities’ resilience, including enhancing absorptive, adaptive, and transformative capacities. Policymakers are now considering the role that social protection policies and programmes can play in building climate resilience by contributing to these capacities. However, there is a limited understanding of the extent to which social protection instruments can influence these three resilience-related capacities. Lack of assessment tools or frameworks might contribute to limited evidence of social protection’s ability to increase climate resilience. In particular, there appear to be no frameworks or tools that help assess the role of social cash transfers (SCT) in building adaptive capacity. Based on a multi-staged literature review, we develop an adaptive capacity outcomes framework (ACOF) that can help assess SCT’s contribution to building adaptive capacity, and, consequently, resilience. The framework is then tested using impact evaluation and assessment reports from SCT programmes in Indonesia, Zambia, Ethiopia, Bangladesh, and Tanzania. The exercise finds that SCTs alone have a limited contribution to adaptive capacity outcomes, but interventions that combine cash transfers with other components such as nutrition or livelihood training show positive impacts. We find that the ACOF can support assessments of SCT’s contribution towards adaptive capacity. It can help build evidence, evaluate impacts, and through further research, can facilitate learning on SCTs' role in increasing climate resilience.
Failure prognostic builds on constant data acquisition and processing as well as fault diagnosis, and is an essential part of the predictive maintenance of smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by the evaluation of equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently to the constant monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeated projection of the parameter degradation into the future based on the use of values in a sliding time window. Existing software can be used to determine the best fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognostics themselves.
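The data-driven projection step can be sketched as follows: fit a trend to the buffered parameter values in the sliding window and extrapolate it to a failure threshold. A straight-line degradation trend is assumed here purely for illustration; the paper's method uses existing fitting software to select the best fitting function and to account for its random parameters.

```python
def remaining_useful_life(times, values, threshold):
    """Project a degrading parameter to a failure threshold.

    Fits a straight line (illustrative assumption) to the (time, value)
    pairs in the sliding window and returns the time remaining until
    the fitted line crosses `threshold`.
    """
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    intercept = mv - slope * mt
    if slope <= 0:
        return float("inf")  # no degradation trend detected in the window
    t_fail = (threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)
```

Calling this repeatedly as the window slides yields the sequence of remaining-useful-life estimates described in the abstract, one per monitored parameter.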
A Method for the Sustainable Documentation of Operations Processes in Parcel Distribution Centers
(2018)
There is often no common understanding of operational processes in logistics companies as they are not properly documented. Hence, people execute the same process differently and training is conducted by experienced operators on an ad-hoc basis. Furthermore, continuous process improvement is hampered as neither the ideal process nor current issues in as-is processes are visible. A major reason for the missing documentation is the complexity of existing business process modelling languages. Modelling experts are required for initially describing the processes and also for updating the models after process changes. Furthermore, operations people are usually not used to reading complex process models in EPCs or BPMN diagrams. In order to overcome these limitations, a domain-specific modelling language which facilitates maintaining up-to-date process models has been designed with a large logistics company in Germany. The paper at hand briefly describes this language and illustrates how to apply it in operations environments.
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases mostly use other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing the data structures with other domains is still a challenge, as current standards such as the ISO standard STEP (ISO 10303 “Standard for the Exchange of Product model data”) fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink simulation results integration. This research describes a new software architecture and application methodology based on the industrial standard “Virtual Material Modelling in Manufacturing” (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It is capable of providing data permeability from the global digital twin level to the detailed numerical values of data entries and even new key indicators in a three-step approach: It represents a file as an instance in a knowledge graph, queries the file’s metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
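The three-step approach can be made concrete with a toy in-memory triple store: (1) represent a file as an instance in a knowledge graph, (2) query its metadata, (3) derive and instantiate new metadata. All identifiers below are hypothetical; the actual architecture builds on VMAP and semantic-web tooling, not on this sketch.

```python
# Toy triple store: a set of (subject, predicate, object) tuples.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# (1) register a simulation result file as an instance (hypothetical names)
add("sim_run_001.vmap", "rdf:type", "ex:SimulationResult")
add("sim_run_001.vmap", "ex:solver", "ex:FEM")

# (2) query the file's metadata
meta = query(s="sim_run_001.vmap")

# (3) derive and instantiate a new key indicator from the metadata
if ("sim_run_001.vmap", "ex:solver", "ex:FEM") in meta:
    add("sim_run_001.vmap", "ex:hasIndicator", "ex:MeshQualityScore")
```

In the real architecture these triples would live in a knowledge graph queried with SPARQL rather than a Python set, but the data-permeability idea is the same: the file, its metadata, and derived indicators all share one graph.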
Recessive mutations in the MPV17 gene cause mitochondrial DNA depletion syndrome, a fatal infantile genetic liver disease in humans. Loss of function in mice leads to glomerulosclerosis and sensorineural deafness accompanied by mitochondrial DNA depletion. Mutations in the yeast homolog Sym1 and in the zebrafish homolog tra cause interesting, but not obviously related phenotypes, although the human gene can complement the yeast Sym1 mutation. The MPV17 protein is a hydrophobic membrane protein of 176 amino acids and unknown function. Initially localised in murine peroxisomes, it was later reported to be a mitochondrial inner membrane protein in humans and in yeast. To resolve this contradiction we tested two new mouse monoclonal antibodies directed against the human MPV17 protein in Western blots and immunohistochemistry on human U2OS cells. One of these monoclonal antibodies showed specific reactivity to a protein of 20 kDa absent in MPV17-negative mouse cells. Immunofluorescence studies revealed colocalisation with peroxisomal, endosomal and lysosomal markers, but not with mitochondria. These data reveal a novel connection between a possible peroxisomal/endosomal/lysosomal function and mitochondrial DNA depletion.
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of “spikes”. Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
With the increasing demand for ultrapure water in the pharmaceutical and semiconductor industry, the need for precise measuring instruments for those applications is also growing. One critical parameter of water quality is the amount of total organic carbon (TOC). This work presents a system that exploits the increased oxidation power of the UV/O3 advanced oxidation process (AOP) for TOC measurement, combined with a significant miniaturization compared to the state of the art. The miniaturization is achieved by using polymer-electrolyte membrane (PEM) electrolysis cells for ozone generation in combination with UV-LEDs for irradiation of the measuring solution, as both components are significantly smaller than standard equipment. The measuring principle is conductivity measurement after oxidation, and measurements were carried out in the TOC range between 10 and 1000 ppb. The suitability of the system for TOC measurement is demonstrated using the oxidation by ozonation combined with UV irradiation of defined concentrations of isopropyl alcohol (IPA).
This paper presents a novel approach to address noise, vibration, and harshness (NVH) issues in electrically assisted bicycles (e-bikes) caused by the drive unit. By investigating and optimising the structural dynamics during early product development, NVH can decisively be improved and valuable resources can be saved, emphasising its significance for enhancing riding performance. The paper offers a comprehensive analysis of the e-bike drive unit’s mechanical interactions among relevant components, culminating—to the best of our knowledge—in the development of the first high-fidelity model of an entire e-bike drive unit. The proposed model uses the principles of elastic multi body dynamics (eMBD) to elucidate the structural dynamics in dynamic-transient calculations. Comparing power spectra between measured and simulated motion variables validates the chosen model assumptions. The measurements of physical samples utilise accelerometers, contactless laser Doppler vibrometry (LDV) and various test arrangements, which are replicated in simulations and provide accessibility to measure vibrations onto rotating shafts and stationary structures. In summary, this integrated system-level approach can serve as a viable starting point for comprehending and managing the NVH behaviour of e-bikes.
In this paper, a gas-to-power (GtoP) system for power outages is digitally modeled and experimentally developed. The design includes a solid-state hydrogen storage system composed of TiFeMn as a hydride forming alloy (6.7 kg of alloy in five tanks) and an air-cooled fuel cell (maximum power: 1.6 kW). The hydrogen storage system is charged under room temperature and 40 bar of hydrogen pressure, reaching about 110 g of hydrogen capacity. In an emergency use case of the system, hydrogen is supplied to the fuel cell, and the waste heat coming from the exhaust air of the fuel cell is used for the endothermic dehydrogenation reaction of the metal hydride. This GtoP system demonstrates fast, stable, and reliable responses, providing from 149 W to 596 W under different constant as well as dynamic conditions. A comprehensive and novel simulation approach based on a network model is also applied. The developed model is validated under static and dynamic power load scenarios, demonstrating excellent agreement with the experimental results.
In the context of the Franco-German research project Re(h)strain, this work focuses on a global system analysis integrating both safety and security analysis of international and/or urban railway stations. The Re(h)strain project focuses on terrorist attacks on high speed train systems and investigates prevention and mitigation measures to reduce the overall vulnerability and strengthen the system resilience. One main criterion regarding public transport issues is the number of passengers. For example, the Paris railway station “Gare du Nord” handles more passengers than the world’s busiest airport, the Atlanta airport (SNCF Open Data 2014), yet in terms of passengers it ranks only around 23rd among railway stations worldwide. This enormous mass of people leads to the system approach of breaking the station down into several classes of zones, e.g. entrance, main hall, quays, trains, etc. All classes are analysed considering state-of-the-art parameters, like target attractiveness, feasibility of attack, possible damage, and possible mitigation and defences. Then, the safety impact of security defences is discussed in order to refine security requirements with regard to the considered zone. Finally, global requirements of security defence correlated to the corresponding classes of zones are proposed.
A qualitative study of Machine Learning practices and engineering challenges in Earth Observation
(2021)
Machine Learning (ML) is ubiquitously on the advance. Like many domains, Earth Observation (EO) also increasingly relies on ML applications, where ML methods are applied to process vast amounts of heterogeneous and continuous data streams to answer socially and environmentally relevant questions. However, developing such ML-based EO systems remains challenging: Development processes and employed workflows are often barely structured and poorly reported. The application of ML methods and techniques is considered to be opaque and the lack of transparency is contradictory to the responsible development of ML-based EO applications. To improve this situation a better understanding of the current practices and engineering-related challenges in developing ML-based EO applications is required. In this paper, we report observations from an exploratory study where five experts shared their view on ML engineering in semi-structured interviews. We analysed these interviews with coding techniques as often applied in the domain of empirical software engineering. The interviews provide informative insights into the practical development of ML applications and reveal several engineering challenges. In addition, interviewees participated in a novel workflow sketching task, which provided a tangible reflection of implicit processes. Overall, the results confirm a gap between theoretical conceptions and real practices in ML development, even though workflows were sketched abstractly in textbook-like form. The results pave the way for a large-scale investigation on requirements for ML engineering in EO.
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user’s gaze, exploiting the human visual system’s limitations to increase rendering performance. Foveated rendering has great potential especially when certain requirements have to be fulfilled, like low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants’ accuracy in focusing the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users’ quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
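Eye-tracker precision is commonly estimated as the root mean square of the angular distances between successive gaze samples during a fixation; whether the study used exactly this metric is an assumption, so the sketch below is generic.

```python
import math

def rms_s2s_precision(gaze_angles_deg):
    """RMS sample-to-sample precision (degrees): root mean square of the
    angular differences between successive gaze samples. A common
    precision measure for eye trackers; input is a 1-D sequence of gaze
    angles recorded during a fixation."""
    diffs = [b - a for a, b in zip(gaze_angles_deg, gaze_angles_deg[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Low values indicate that consecutive samples scatter little around the fixation point, which is what distinguishes tracker noise (precision) from the participant's offset to the target (accuracy).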
This paper introduces a random number generator (RNG) based on the avalanche noise of two diodes. A true random number generator (TRNG) generates true random numbers with the use of the electronic noise produced by two avalanche diodes. The amplified outputs of the diodes are sampled and digitized. The difference between the two concurrently sampled and digitized outputs is calculated and used to select a seed and to drive a pseudo-random number generator (PRNG). The PRNG is an xorshift generator that generates 1024 bits in each cycle. Every sequence of 1024 bits is moderately modified and output. The TRNG delivers the next seed and the next cycle begins. The statistical behavior of the generator is analyzed and presented.
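The xorshift stage of such a generator can be sketched as follows. The snippet uses Marsaglia's classic 64-bit xorshift variant (shifts 13, 7, 17) and concatenates sixteen outputs into one 1024-bit block; the paper's exact xorshift parameters, seeding scheme and output modification are not specified here, so treat everything below as illustrative.

```python
def xorshift64(seed):
    """Generator yielding Marsaglia's 64-bit xorshift sequence (13/7/17)."""
    state = seed & 0xFFFFFFFFFFFFFFFF
    if state == 0:
        state = 1  # xorshift must never be seeded with zero
    while True:
        state ^= (state << 13) & 0xFFFFFFFFFFFFFFFF
        state ^= state >> 7
        state ^= (state << 17) & 0xFFFFFFFFFFFFFFFF
        yield state

def block_1024(seed):
    """Concatenate sixteen 64-bit outputs into one 1024-bit block,
    mirroring the paper's 1024 bits per cycle. The seed would come from
    the digitized avalanche-noise difference in the described TRNG."""
    gen = xorshift64(seed)
    out = 0
    for _ in range(16):
        out = (out << 64) | next(gen)
    return out
```

Reseeding `block_1024` from fresh noise every cycle, as the abstract describes, prevents the deterministic xorshift sequence from being predictable across cycles.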
Jet engines of airplanes are designed such that damage occurs and accumulates in some components during service without being critical up to a certain damage level. Since maintenance, repair, and component exchange are very cost-intensive, it is necessary to predict the component lifetime efficiently and with high accuracy. A previously developed lifetime model, based on interpolated results of aerodynamic and structural mechanics simulations, uses material parameters estimated from literature values of standard creep experiments. For improved accuracy, an experimental procedure is developed for the characterization of the short-time creep behavior, which is relevant for the operation of turbine blades of jet engines. To consider microstructural influences resulting from the manufacturing of thin-walled single crystal turbine blades, small-scale specimens from used turbine blades are extracted and tested in short- and medium-time creep experiments. Based on experimental results and literature values, a creep model, which describes the fracture behavior for a wide range of creep loads, is calibrated and is now used for the lifetime prediction of turbine blades under real loading conditions.
For research in audiovisual interview archives, it is often of interest not only what is said but also how. Sentiment analysis and emotion recognition can help capture, categorize and make these different facets searchable. In particular, for oral history archives, such indexing technologies can be of great interest. These technologies can help understand the role of emotions in historical remembering. However, humans often perceive sentiments and emotions ambiguously and subjectively. Moreover, oral history interviews have multi-layered levels of complex, sometimes contradictory, sometimes very subtle facets of emotions. Therefore, the question arises what chance machines and humans have of capturing these facets and assigning them to predefined categories. This paper investigates the ambiguity in the human perception of emotions and sentiment in German oral history interviews and its impact on machine learning systems. Our experiments reveal substantial differences in human perception for different emotions. Furthermore, we report from ongoing machine learning experiments with different modalities. We show that the human perceptual ambiguity and other challenges, such as class imbalance and lack of training data, currently limit the opportunities of these technologies for oral history archives. Nonetheless, our work uncovers promising observations and possibilities for further research.
Pozzolanic properties of Pennisetum purpureum grass ash were tested on Portland cement. Results show that the ash can be blended with cements without compromising the binding strength of the cement. It was found that Portland cement could be blended with Pennisetum purpureum ash up to a ratio of 3:2 without compromising the compressive strength of mortar. Mortar with lower cement replacement took longer to set, as evidenced by lower compressive strength within the 28-day aging time. Mortar with higher cement replacement had lower water absorption capacity, an indication that the test pozzolan was of smaller particulate size. XRF analysis and the FTIR spectrum showed that the ash has a high silica content. The XRD pattern showed that the ash was predominantly amorphous. SEM images showed that the ash produced at 600 °C had residual carbon material.
Host-derived succinate accumulates in the airways during bacterial infection. Here, we show that luminal succinate activates murine tracheal brush (tuft) cells through a signaling cascade involving the succinate receptor 1 (SUCNR1), phospholipase Cβ2, and the cation channel transient receptor potential channel subfamily M member 5 (TRPM5). Stimulated brush cells then trigger a long-range Ca2+ wave spreading radially over the tracheal epithelium through a sequential signaling process. First, brush cells release acetylcholine, which excites nearby cells via muscarinic acetylcholine receptors. From there, the Ca2+ wave propagates through gap junction signaling, reaching also distant ciliated and secretory cells. These effector cells translate activation into enhanced ciliary activity and Cl- secretion, which are synergistic in boosting mucociliary clearance, the major innate defense mechanism of the airways. Our data establish tracheal brush cells as a central hub in triggering a global epithelial defense program in response to a danger-associated metabolite.
The development of advanced robotic systems is challenging as expertise from multiple domains needs to be integrated conceptually and technically. Model-driven engineering promises an efficient and flexible approach for developing robotics applications that copes with this challenge. Domain-specific modeling allows robotics concerns to be described with concepts and notations closer to the respective problem domain. This raises the level of abstraction and results in models that are easier to understand and validate. Furthermore, model-driven engineering allows the level of automation to be increased, e.g. through code generation, and bridges the gap between modeling and implementation. The anticipated results are improved efficiency and quality of the robotics systems engineering process. Within this contribution, we survey the available literature on domain-specific modeling and languages that target core robotics concerns. In total, 137 publications were identified that comply with a set of defined criteria, which we consider essential for contributions in this field. With the presented survey, we provide an overview of the state of the art of domain-specific modeling approaches in robotics. The surveyed publications are investigated from the perspective of users and developers of model-based approaches in robotics along a set of quantitative and qualitative research questions. The presented quantitative analysis clearly indicates the rising popularity of applying domain-specific modeling approaches to robotics in the academic community. Beyond this statistical analysis, we map the selected publications to a defined set of robotics subdomains and typical development phases in robotic systems engineering as a reference for potential users.
Furthermore, we analyze these contributions from a language engineering viewpoint and discuss aspects such as the methods and tools used for their implementation as well as their documentation status, platform integration, typical use cases and the evaluation strategies used for validation of the proposed approaches. Finally, we conclude with recommendations for discussion in the model-driven engineering and robotics community based on the insights gained in this survey.
Background
Consumers rely heavily on online user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, but automated detectors are far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study, we survey the research examining how consumers detect fake reviews online.
Methods
We conducted a systematic literature review of the research on fake review detection from the consumer perspective. We included academic literature providing new empirical data. We provide a narrative synthesis comparing the theories, methods and outcomes used across studies to identify how consumers detect fake reviews online.
Results
We found only 15 articles that met our inclusion criteria. We classify the most frequently used cues into five categories: (1) review characteristics, (2) textual characteristics, (3) reviewer characteristics, (4) seller characteristics, and (5) characteristics of the platform where the review is displayed.
Discussion
We find that theory is applied inconsistently across studies and that cues to deception are often identified in isolation without any unifying theoretical framework. Consequently, we discuss how such a theoretical framework could be developed.
WiFi-based Long Distance (WiLD) networks have emerged as a promising alternative approach for Internet in rural areas. However, the MAC layer, which is based on the IEEE 802.11 standard, comprises contiguous stations in a cell and is spatially restricted to a few hundred meters at most. In this work, we summarize efforts by different researchers to use IEEE 802.11 over long distances. In addition, we introduce WiLDToken, our solution for optimizing throughput and fairness and reducing delay on WiLD links. Compared to previous alternative MAC layer protocols for WiLD, our focus is on optimizing a single link in a multi-radio multi-channel mesh. We implement our protocol in the ns-3 network simulator and show that WiLDToken is superior to an adapted version of the Distributed Coordination Function (DCF) for different link distances. We find that the throughput on a single link is close to the physical data rate without a major decrease over longer distances.
Classical ballet requires dancers to exercise significant muscle control and strength both while stationary and when moving. Following the Royal Academy of Dance (RAD) syllabus, 8 male and 27 female dancers (aged 20.2 ± 1.9 yr) in a full-time university undergraduate dance training program were asked to stand in first position for 10 seconds and then perform 10 repeats of a demi-plié exercise to a counted rhythm. Accelerometer records from the wrist, sacrum, knee and ankle were compared with the numerical scores from a professional dance instructor. The sacrum-mounted sensor detected lateral tilts of the torso in dances with lower scores (Spearman’s rank correlation coefficient r = -0.64, p < 0.005). The RMS acceleration amplitude of the wrist-mounted sensor was linearly correlated with the movement scores (Spearman’s rank correlation coefficient r = 0.63, p < 0.005). The application of sacrum- and wrist-mounted sensors for biofeedback during dance training is a realistic, low-cost option.
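The two reported statistics, RMS acceleration amplitude and Spearman's rank correlation, can be computed as sketched below; this is a generic illustration, not the study's analysis code.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an acceleration record."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def _ranks(xs):
    """Ranks of xs, with tied values assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Applied to this study, `xs` would be per-dancer sensor statistics (e.g. RMS wrist acceleration) and `ys` the instructor's scores.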
This article is concerned with the accessibility of business process modelling tools (BPMo tools) and business process modelling languages (BPMo languages). First, the reader is introduced to business process management and the authors’ motivation behind this inquiry. Afterwards, the paper reflects on problems that arise when applying inaccessible BPMo tools. To illustrate these problems, the authors distinguish between two different categories of issues and provide practical examples. Finally, the article presents three approaches to improve the accessibility of BPMo tools and BPMo languages.
Users should always play a central role in the development of (software) solutions. The human-centered design (HCD) process in the ISO 9241-210 standard proposes a procedure for systematically involving users. However, due to its abstraction level, the HCD process provides little guidance for how it should be implemented in practice. In this chapter, we propose three concrete practical methods that enable the reader to develop usable security and privacy (USP) solutions using the HCD process. This chapter equips the reader with the procedural knowledge and recommendations to: (1) derive mental models with regard to security and privacy, (2) analyze USP needs and privacy-related requirements, and (3) collect user characteristics on privacy and structure them by user group profiles and into privacy personas. Together, these approaches help to design measures for a user-friendly implementation of security and privacy measures based on a firm understanding of the key stakeholders.
This thesis work presents the implementation and validation of image processing problems in hardware to estimate the performance and precision gain. It compares the implementation for the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation costs for their development are an important aspect of the validation. The analysis of the flexibility and extensibility that can be achieved by a modular implementation for the FPGA design was another major aspect. This work is based upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for the FPGA platform and the GPP architecture. Both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group, which is located at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light emitting device to the user for tracking the created light dots on the projection surface of the immersive environment. Having the center points of those light dots would allow the estimation of the user’s position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real time. Therefore, one major target for the implementation was to reach a processing speed of more than 30 frames per second. This would allow the system to provide feedback to the user within a response time faster than human visual perception.
One problem that comes with the idea of using a light emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a single pixel may cover an area on the order of square centimeters. A precision error of only a few pixels may therefore lead to an offset of several centimeters in the estimated user's position. In this research work, a detection and tracking system for BLOBs was developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding and for sub-pixel precision of the computed BLOB center points have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation was realized. The system can transmit the detection or tracking results on two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution. Designing such high-level functionality in hardware is, in this case, more expensive than implementing it in software. The search and match steps of the tracking approach could be realized more efficiently and reliably in software.
The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.
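The two pre-processing ideas can be sketched compactly in software. The following Python fragment is an illustrative stand-in for the thesis's hardware modules (the function names and toy data are assumptions, not part of the original design): run-length encoding compresses a binary image row into (start, length) pairs, and an intensity-weighted centroid yields BLOB center points with sub-pixel precision.

```python
import numpy as np

def run_length_encode(row):
    """Compress a binary image row into (start, length) runs of foreground pixels."""
    padded = np.concatenate(([0], row, [0]))
    diff = np.diff(padded)
    starts = np.where(diff == 1)[0]   # 0 -> 1 transitions
    ends = np.where(diff == -1)[0]    # 1 -> 0 transitions
    return list(zip(starts.tolist(), (ends - starts).tolist()))

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a grayscale BLOB patch (sub-pixel precision)."""
    patch = patch.astype(float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total
```

Weighting each pixel by its intensity is what pushes the center estimate below one-pixel granularity, which matters when a single pixel of the projection surface covers several square centimeters.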
Actors
(2021)
Social protection is, for many international organizations, a state's affair. While the state definitely plays an important role, it is by far not the only actor, and there is no predefined institutional arrangement for how social protection should be implemented. An exclusive focus on the state would therefore be short-sighted when assessing and comparing the performance of social protection systems. It is hence important to understand the mix of actors involved, the type of contribution they can make to social protection and their modes of cooperation. This contribution will therefore first sketch out the role and interplay of the main actors in social protection and then challenge some of the common assumptions about how roles are best allocated in the social protection system concerning the providers of informal social protection, the private sector, civil society organizations (CSOs) as well as international actors.
Low power dissipation is a current topic in digital design, and therefore, it should be covered in a state-of-the-art electrical engineering curriculum. This paper describes how low-power design can be addressed within a digital design course. Doing so would be beneficial for both topics because low-power design is not detached from the systems perspective, and the digital design course would be enriched by references to current challenges and applications. Thus, the presented course should serve as an example of how a course can be developed to also teach students about sustainable engineering.
Introduction: Drawing on reports and other documents created by different parts of the International Labour Organisation (ILO), the paper analyses the process which led to the adoption of the Social Protection Floor Recommendation No. 202 and the shift in focus of social policy advice towards basic protection and towards the countries of the Global South. We look at the actions of the different actors which shape the standard setting and policy stance of the organisation. Objective: To provide a comprehensive analysis of the historical trajectory of ILO social security standards, examining the evolution of principles, conventions, and the global dynamics that have shaped the organization's approach to social protection over time. Materials and methods: The methods include examining ILO documents, relevant subject literature, and the author's participant observations from over twenty years of service in the ILO's Social Security Department, aiming to provide insights into the decision-making processes within the organization. Conclusion: We conclude that change was brought about by: 1) a shift in the membership of the ILO and of its decision-making bodies towards an increased presence and power of representatives from countries of the Global South, 2) a shift in the policy priorities of the global development community towards poverty reduction, and 3) the emergence of experimental social assistance schemes in Global South countries, with designs often ignoring principles embedded in the ILO standards. The Social Protection Floor Recommendation complements previous standards in response to the challenges of widespread poverty and informality and the spread of atypical forms of employment. It provides two directions of policy responses: 1) formalizing informal employment relationships and 2) expanding universal or targeted rights-based social assistance schemes.
Assistance provided by the ILO to member states now focuses more on building non-contributory schemes and on identifying the fiscal space necessary to close the coverage gaps. Nowadays, the ILO must collaborate more than before with other development partners, and the main challenge is to build among them awareness and acceptance of the principles of the ILO social security standards.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, such as large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that make it easy to get started. Existing frameworks and libraries that provide support for multi-display rendering are often difficult to understand, configure and extend. This is critical especially in an educational context, where the time that students have for their projects is limited and quite short. These tools are also known and used mostly in research communities, thus providing less benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows, with a small overhead, for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in the context of distributed and multi-display rendering, such as frame, camera and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it provides students with a qualification in high demand.
AErOmAt Abschlussbericht
(2020)
The AErOmAt project aimed to develop new methods for saving a substantial share of the aerodynamic simulations in computationally expensive optimization domains. In this way, the Hochschule Bonn-Rhein-Sieg (H-BRS) has made a contribution to energy efficiency research that is both socially relevant and commercially exploitable. The project also led to a faster integration of the newly appointed applicants into the existing research structures.
The clear-sky radiative effect of aerosol-radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. Considered are measurements of the short-wave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are the Modified MAC model (MMAC), the Meteorological Radiation Model (MRM) v6.1, the Meteorological-Statistical solar radiation model (METSTAT), the European Solar Radiation Atlas (ESRA), Heliosat-1, the Center for Environment and Man solar radiation model (CEM) and the simplified Solis model. The definition of the aerosol and atmospheric characteristics of the models is examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated to −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the radiative effect of aerosol–radiation interactions (REari) at the surface for the 25 DWD stations is −12.8 ± 5 W m−2, averaged over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterisation, the annual cycle of the aerosol radiative effect cannot be reproduced. Out of this set of clear-sky models, the best agreement is shown by the ESRA and MRM v6.1 models.
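The clear-sky fitting technique can be illustrated with a deliberately simplified, single-band Beer-Lambert model. The sketch below is not one of the clear-sky models named above; the toy transmission formula, the fixed geometry and the grid-search retrieval are all illustrative assumptions:

```python
import numpy as np

I0 = 1361.0  # solar constant in W m-2

def toy_clear_sky(tau, mu):
    """Toy direct irradiance on a horizontal surface for aerosol optical depth tau.

    mu is the cosine of the solar zenith angle; 1/mu approximates the air mass.
    """
    return I0 * mu * np.exp(-tau / mu)

def retrieve_tau(measured, mu, taus=np.linspace(0.0, 1.0, 1001)):
    """Grid-search the aerosol optical depth that best fits cloud-free observations."""
    errors = [np.mean((toy_clear_sky(t, mu) - measured) ** 2) for t in taus]
    return taus[int(np.argmin(errors))]

# synthetic cloud-free observations over a range of solar elevations
mu = np.cos(np.radians(np.linspace(20.0, 60.0, 50)))
observations = toy_clear_sky(0.2, mu)
tau_hat = retrieve_tau(observations, mu)

# direct radiative effect: irradiance with aerosol minus the aerosol-free case
rea_surface = np.mean(toy_clear_sky(tau_hat, mu) - toy_clear_sky(0.0, mu))
```

The retrieved effect is negative, reflecting that aerosols reduce the direct irradiance reaching the surface; the real models additionally treat diffuse radiation, water vapour and other atmospheric constituents.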
Recent findings in South Africa have once again underlined that the earliest humans evidently originated in Africa. Thus, historically, this continent has a very special significance. However, its history in more recent times, especially from the mid-19th century onwards, was strongly shaped by colonisation by European states. Many deep wounds from that time still affect society as a whole today. Beyond this legacy, the continent is currently confronted with a number of further challenges of a different nature.
On the one hand, Africa is trying to strengthen internal cohesion by means of a number of regional organisations and the African Union as a globally active institution; on the other hand, the continent has been marked by political and military conflicts between neighbouring states over the past decades until the recent present. In addition, there are regular internal social upheavals in individual countries due to violent or manipulated political change.
Yet the continent could well be on a good development path, since it has a large number of important raw materials, also in comparison to other continents. However, the individual African states, and especially their citizens, often do not benefit from this to an adequate extent. The result is a social imbalance in large parts of the continent, which leads to considerable internal tensions. To make matters worse, Africa is the continent most affected by climate change.
A closer look at the partly very different economic, political and social situations of the large continent (data collection until the end of June 2023) leads to an overall predominantly critical assessment of Africa's further development, which is explained in more detail in the final chapter with regard to the foreseeable consequences for the continent.
Agent systems are used in many ways, yet current implementations are primarily only able to reproduce rule-conforming or "scripted" behaviour, even when randomized methods are employed. A realistic representation, however, also requires deviations from the rules that occur not randomly but depending on context. Within this research project, a realistic road traffic simulator was realized that, by means of a system for cognitive agents defined in detail, also generates these irregular behaviours and thus simulates realistic traffic behaviour for use in VR applications. By extending the agents with psychological personality profiles based on the five-factor model, the agents exhibit individualized yet consistent behaviour patterns. A dynamic emotion model additionally provides situation-dependent adaptation of behaviour, for example during long waiting times. Since the detailed simulation of cognitive processes, personality influences and emotional states demands considerable computing power, a multi-layered simulation approach was developed that allows the level of detail of each agent's computation and rendering to be changed stepwise during the simulation, so that all agents in the system can be simulated consistently. In several evaluation iterations in an existing VR application, the applicant's FIVIS bicycle riding simulator, it was impressively demonstrated that the realized concepts solve the originally formulated research questions convincingly and efficiently.
While the corporate working world is shifting ever more towards agility, IT controlling still remains in old, classical structures. This work examines whether and to what extent agile approaches can be used in IT controlling. This contribution is a modified version of the article "Agiles IT-Controlling" published in the journal "HMD Praxis der Wirtschaftsinformatik" (https://link.springer.com/article/10.1365/s40702-022-00837-0).
Agiles IT-Controlling
(2022)
While agile methods have found favour in IT project management practice for many years, IT controlling still predominantly uses classical methods. This contribution examines whether and how the methods used in IT controlling can also follow agile paradigms and how methods of agile IT project management can be adapted.
The following work presents algorithms for semi-automatic validation, feature extraction and ranking of time series measurements acquired from MOX gas sensors. Semi-automatic measurement validation is accomplished by extending established curve similarity algorithms with a slope-based signature calculation. Furthermore, a feature-based ranking metric is introduced. It allows for individual prioritization of each feature and can be used to find the best performing sensors regarding multiple research questions. Finally, the functionality of the algorithms, as well as the developed software suite, are demonstrated with an exemplary scenario, illustrating how to find the most power-efficient MOX gas sensor in a data set collected during an extensive screening consisting of 16,320 measurements, all taken with different sensors at various temperatures and analytes.
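A feature-based ranking metric of this kind can be sketched as a weighted sum over normalized features. The Python fragment below is a minimal illustration, not the published algorithm; the feature names and weights are hypothetical:

```python
def rank_sensors(features, weights):
    """Rank sensors by a weighted sum of min-max normalized feature values.

    features: {sensor: {feature: value}}
    weights:  {feature: weight}; a negative weight means lower is better,
              e.g. power consumption.
    """
    names = {name for values in features.values() for name in values}
    lo = {n: min(v[n] for v in features.values()) for n in names}
    hi = {n: max(v[n] for v in features.values()) for n in names}

    def norm(n, x):
        return 0.0 if hi[n] == lo[n] else (x - lo[n]) / (hi[n] - lo[n])

    scores = {sensor: sum(w * norm(n, values[n]) for n, w in weights.items())
              for sensor, values in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

sensors = {
    "MOX-A": {"power_mw": 10.0, "sensitivity": 0.9},
    "MOX-B": {"power_mw": 20.0, "sensitivity": 0.5},
}
# prioritize low power draw and high sensitivity equally
ranking = rank_sensors(sensors, {"power_mw": -1.0, "sensitivity": 1.0})
```

Changing the weight dictionary re-ranks the same screening data for a different research question, for example favouring response time over power efficiency.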
Many workers experience their jobs as effortful or even stressful, which can result in strain. Although recovery from work would be an adaptive strategy to prevent the adverse effects of work-related strain, many workers face problems finding enough time to rest and to mentally disconnect from work during nonwork time. What goes on in workers’ minds after a stressful workday? What is it about their jobs that makes them think about their work? This special issue aims to bridge the gap between research on recovery processes mainly examined in Occupational Health Psychology, and research on work stress and working hours, often investigated in the field of Human Resource Management. We first summarize conceptual and theoretical streams from both fields of research. In the following, we discuss the contributions of the five special issue papers and conclude with key messages and directions for further research.
Amino acids perform multiple essential physiological roles in humans, and accordingly, their importance to health has been the subject of extensive attention. In this special issue of the Journal of Nutrition and Metabolism, we focus on the various inborn errors of amino acid metabolism, their diagnostic challenges, new treatment approaches, and recent advances in patient monitoring as well as clinical outcomes.
This paper proposes an approach to ANN-based temperature controller design for a plastic injection moulding system. The approach is applied to develop a controller based on the combination of a classical ANN and an integrator. The controller provides a fast temperature response and zero steady-state error for three typical heaters (bar, nozzle, and cartridge) of a plastic moulding system. Simulation results in Matlab Simulink, compared against an industrial PID regulator, show the advantages of the controller, such as significantly less overshoot and faster transients (compared to a PID with autotuning) for all examined heaters. To verify the proposed approach, the designed ANN controller was implemented and tested on an experimental setup based on an STM32 board.
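The core idea, augmenting a nonlinear controller with an integrator so that the steady-state error vanishes, can be sketched on a toy first-order thermal plant. The saturating tanh term below is only a stand-in for the trained ANN, and all plant constants are assumptions, not the paper's Simulink model:

```python
import math

# toy first-order heater model: dT/dt = -(T - T_AMB)/TAU + B*u
T_AMB, TAU, B, DT = 20.0, 50.0, 0.5, 1.0

def step_plant(T, u):
    """Advance the plant temperature by one time step DT under control input u."""
    return T + DT * (-(T - T_AMB) / TAU + B * u)

def control(error, integral):
    """Saturating nonlinearity (stand-in for the trained ANN) plus integral action."""
    return 10.0 * math.tanh(error / 20.0) + 0.05 * integral

T, integral, setpoint = 20.0, 0.0, 80.0
for _ in range(3000):
    error = setpoint - T
    integral += error * DT      # the integrator removes the steady-state error
    T = step_plant(T, control(error, integral))
```

Without the integral term this loop settles with a persistent offset, because the saturating term alone cannot hold the constant control effort the plant needs at the setpoint; the integrator supplies it, which is the role it plays alongside the ANN in the paper's design.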
The design of a fully superconducting wind power generator is influenced by several factors. Among them, a low number of pole pairs is desirable to achieve low AC losses in the superconducting stator winding, which greatly influences the cooling system design and, consecutively, the efficiency of the entire wind power plant. However, it has been identified that a low number of pole pairs in a superconducting generator tends to greatly increase its output voltage, which in turn creates challenging conditions for the necessary power electronic converter. This study highlights the interdependencies between the design of a fully superconducting 10 MW wind power generator and the corresponding design of its power electronic converter.
An Empirical Evaluation of the Received Signal Strength Indicator for fixed outdoor 802.11 links
(2015)
For the evaluation of the received signal strength indicator (RSSI), this paper introduces a methodology that differs from previous publications by exploiting the spectral scan feature of recent Qualcomm Atheros WiFi NICs. This method is compared to driver reports and to an industrial-grade spectrum analyzer. During the conducted outdoor experiments, a decreased scattering of the RSSI compared to previous publications is observed. By applying well-known mathematical tests for normality, it is shown that the RSSI does not follow a normal distribution in a line-of-sight outdoor environment. The evaluated spectral scan feature offers additional possibilities for developing interference classifiers, which is an important step towards frequency allocation in long-distance 802.11 networks.
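The abstract does not name the normality tests used, but one well-known choice, the Jarque-Bera test on sample skewness and kurtosis, is easy to sketch. The synthetic RSSI samples below are illustrative assumptions:

```python
import numpy as np

def jarque_bera(samples):
    """Jarque-Bera normality statistic; values above ~5.99 reject normality
    at the 5 % level (chi-squared distribution with 2 degrees of freedom)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    z = x - x.mean()
    m2 = (z ** 2).mean()
    skewness = (z ** 3).mean() / m2 ** 1.5
    excess_kurtosis = (z ** 4).mean() / m2 ** 2 - 3.0
    return n / 6.0 * (skewness ** 2 + excess_kurtosis ** 2 / 4.0)

rng = np.random.default_rng(0)
gaussian_rssi = rng.normal(-60.0, 3.0, 5000)      # behaves like normal noise
skewed_rssi = -60.0 + rng.exponential(3.0, 5000)  # clearly non-normal
```

An RSSI distribution that fails such a test, as reported above for the line-of-sight outdoor case, cautions against link models that simply assume Gaussian measurement noise.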
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that are capable of producing high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternate regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of the boundary-adhering object instances is warranted. However, current object detection techniques seem to be unaffected by this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering to generate the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space defined by two coordinate axes representing the natural log transformations of the width and height of the ground-truth bounding boxes.
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by ≈ 6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all the layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈ 0.77 mAP. In addition, the forward pass latencies of both SqueezeDet and SqueezeDetOcta are close to ≈ 19 ms. Considering boundary adhesion during training resulted in an improvement of ≈ 2.62 mAP over the baseline SqueezeDet network. A SqueezeDet network paired with logarithmically extracted anchors improved the performance of the baseline SqueezeDet network by ≈ 1.85 mAP.
In summary, this work demonstrates that if given sufficient fine instance annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations, whilst having the same forward pass latency as that of the bounding box predicting network. The results justify the merits of logarithmically extracted anchors to boost the performance of any anchor-based object detection network. The results also showed that the special handling of image boundary adhering object instances produces more performant object detectors.
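The proposed log-space anchor clustering can be sketched with a small k-means in (ln w, ln h) coordinates. This is an illustrative reimplementation of the idea, not the thesis code; the cluster count and the synthetic boxes are assumptions:

```python
import numpy as np

def cluster_anchors(boxes_wh, k=5, iterations=50, seed=0):
    """k-means over (ln w, ln h) of ground-truth boxes; returns k anchor (w, h) priors.

    Clustering in log space matches the logarithmic width/height encoding
    that anchor-based detectors use for their regression targets.
    """
    points = np.log(np.asarray(boxes_wh, dtype=float))
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return np.exp(centers)  # back to pixel units
```

In log space a cluster is tight when boxes differ by a common scale factor, so the resulting anchors reflect relative rather than absolute size differences, consistent with the encoding used at training time.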
This research project examines sustainable behaviour with regard to the use of coffee containers at the HBRS. The motivation is that paper cups are difficult to recycle due to a plastic coating and thus considerably harm the environment. In this context, 204 students took part in an online survey. According to the results, single-use paper cups are currently the most common choice. Modifying this environmentally harmful behaviour requires suitable intervention strategies. Based on the results, measures should be implemented that counteract the deficit in action-related knowledge and the high effort associated with using one's own cups or the available porcelain cups. Once the ecological benefits and financial feasibility have been ensured, the existing deposit system should be extended with more practical cups and flexible return options. In support, a reward in the form of free drinks or a small financial discount makes sense in order to curb the habitual use of paper cups.
The analysis of used engine oils from industrial engines enables the study of engine wear and oil degradation in order to evaluate the necessity of oil changes. As the matrix composition of an engine oil strongly depends on its intended application, meaningful diagnostic oil analyses bear considerable challenges. Owing to the broad spectrum of available oil matrices, we have evaluated the applicability of using an internal standard and/or preceding sample digestion for elemental analysis of used engine oils via inductively coupled plasma optical emission spectroscopy (ICP OES). Elements originating from both wear particles and additives as well as particle size influence could be clearly recognized by their distinct digestion behaviour. While a precise determination of most wear elements can be achieved in oily matrix, the measurement of additives is performed preferably after sample digestion. Considering a dataset of physicochemical parameters and elemental composition for several hundred used engine oils, we have further investigated the feasibility of predicting the identity and overall condition of an unknown combustion engine using the machine learning system XGBoost. A maximum accuracy of 89.6% in predicting the engine type was achieved, a mean error of less than 10% of the observed timeframe in predicting the oil running time and even less than 4% for the total engine running time, based purely on common oil check data. Furthermore, obstacles and possibilities to improve the performance of the machine learning models were analysed and the factors that enabled the prediction were explored with SHapley Additive exPlanation (SHAP). Our results demonstrate that both the identification of an unknown engine as well as a lifetime assessment can be performed for a first estimation of the actual sample without requiring meticulous documentation.
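The study used the XGBoost library; to make the underlying mechanism concrete, the sketch below implements gradient boosting from scratch with one-split decision stumps on hypothetical oil-analysis features. It illustrates the principle only and is not the published model:

```python
import numpy as np

LR = 0.3  # learning rate shared by training and prediction

def fit_stump(x, residual):
    """Find the single-feature threshold split that minimizes squared error."""
    best = (np.inf, 0, 0.0, residual.mean(), residual.mean())
    for f in range(x.shape[1]):
        for t in np.unique(x[:, f])[:-1]:
            left = x[:, f] <= t
            lm, rm = residual[left].mean(), residual[~left].mean()
            err = ((residual[left] - lm) ** 2).sum() + ((residual[~left] - rm) ** 2).sum()
            if err < best[0]:
                best = (err, f, t, lm, rm)
    return best[1:]

def boost(x, y, rounds=50):
    """Fit an additive model: each stump corrects the current residual."""
    prediction = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        f, t, lm, rm = fit_stump(x, y - prediction)
        prediction += LR * np.where(x[:, f] <= t, lm, rm)
        stumps.append((f, t, lm, rm))
    return y.mean(), stumps

def predict(model, x):
    base, stumps = model
    prediction = np.full(len(x), base)
    for f, t, lm, rm in stumps:
        prediction += LR * np.where(x[:, f] <= t, lm, rm)
    return prediction
```

XGBoost adds regularization, second-order gradients and deeper trees on top of this scheme; attribution methods such as SHAP then explain which features, for example wear metals versus additives, drive each prediction.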
Nitrosamines have been classified as probable human carcinogens and are therefore of high concern in many manufacturing industries and in various matrices (for example pharmaceutical, cosmetic and food products, workplace air, or potable water and wastewater). This study aims to analyse nine nitrosamines relevant to occupational safety using a gas chromatography-drift tube ion mobility spectrometry (GC-DT-IMS) system. To this end, single nitrosamine standards as well as a standard mix, each at 0.1 g/L, were introduced via liquid injection. A GC-DT-IMS method was developed that separates the nitrosamine signals by retention time (first dimension) and drift time (second dimension) within 10 min. The system shows excellent selectivity, as each nitrosamine gives two signals in the second dimension, corresponding to the monomer and the dimer. For the first time, reduced ion mobility values for nitrosamines were determined, ranging from 1.18 to 2.03 cm² V⁻¹ s⁻¹. The high selectivity of the GC-DT-IMS method could provide a definite advantage for monitoring nitrosamines in different manufacturing industries and consumer products.
PURPOSE
Cervical cancer (CC) is caused by a persistent high-risk human papillomavirus (hrHPV) infection. The cervico-vaginal microbiome may influence the development of (pre)cancerous lesions. The aims of the study were (i) to evaluate the new CC screening program in Germany with regard to the detection of high-grade CC precursor lesions, and (ii) to elucidate the role of the cervico-vaginal microbiome and its potential impact on cervical dysplasia.
METHODS
The microbiome of 310 patients referred to colposcopy was determined by amplicon sequencing and correlated with clinicopathological parameters.
RESULTS
Most patients were referred for colposcopy because of a positive hrHPV result in two consecutive years combined with a normal Pap smear. In 2.1% of these cases, a CIN III lesion was detected. There were significant positive associations between the Pap stage and colonization with Lactobacillus vaginalis, and between the severity of CC precursor lesions and Ureaplasma parvum.
CONCLUSION
In our cohort, the new cervical cancer screening program resulted in a low rate of additionally detected CIN III lesions. It remains questionable whether these cases were merely identified earlier through the additional HPV testing, before the appearance of cytological abnormalities, or whether the new screening program will truly increase the CIN III detection rate in the long run. Colonization with U. parvum was associated with histologically dysplastic lesions. Whether targeted therapy against this pathogen or optimization of the microbiome can prevent dysplasia remains speculative.
More and more devices are being connected to the internet [3]. Many of these devices are part of the so-called Internet of Things (IoT), which comprises many low-power devices often powered by a battery. These devices mainly communicate with the manufacturer's back end and transmit personal data and secrets such as passwords.
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of the European and American populations suffer from back pain at some point in their lives. One successful approach to addressing lower back pain is postural training, which can be supported by wearable devices that provide real-time feedback about the user's posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes that are meaningful to professional posture trainers.
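The unsupervised-clustering step described above can be sketched in a few lines. This is an illustration only, not the paper's implementation: a minimal k-means over invented "after minus before" posture-change vectors (e.g. changes in two hypothetical spine angles, in degrees).

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: partition feature vectors into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [
            [sum(p[d] for p in c) / len(c) for d in range(len(points[0]))]
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Invented before/after change features per participant:
# (change in angle 1, change in angle 2) in degrees.
changes = [(5.1, 2.0), (4.8, 2.3), (5.5, 1.8),    # strong postural correction
           (0.2, -0.1), (-0.3, 0.4), (0.1, 0.0)]  # little change
centers, clusters = kmeans(changes, k=2)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # -> [3, 3]: the two groups of postural change are recovered
```

In the study, each cluster of such change vectors can then be inspected by posture trainers and mapped to an interpretable group of postural changes.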
Annual Report 2011 - 2012
(2013)
Annual Report 2013 - 2014
(2015)
In its general formulation, the cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P is not equal to NP (P != NP), no efficient, i.e. polynomial-time, algorithms exist for solving the general problem.
This paper presents approaches by which certain instances of the problem can be computed efficiently. Parameters that are important for the computation are characterized and their interrelationships are analysed.
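As a hedged illustration only — the paper's exact formulation is not reproduced here — one common reading of a cutting-sticks decision problem asks whether a stick with marked positions can be cut at some subset of the marks so that the resulting pieces realize a given multiset of lengths. The brute-force search below makes the exponential cost of the naive approach concrete; all instance data are invented.

```python
from itertools import combinations
from collections import Counter

def can_cut(length, marks, target_pieces):
    """Brute force: is there a subset of marks whose cuts produce exactly
    the multiset target_pieces?  Exponential in len(marks)."""
    target = Counter(target_pieces)
    for r in range(len(marks) + 1):
        for chosen in combinations(sorted(marks), r):
            positions = [0, *chosen, length]
            pieces = Counter(b - a for a, b in zip(positions, positions[1:]))
            if pieces == target:
                return True
    return False

# Stick of length 10, marks at 2, 3, 5, 7: can we obtain pieces {2, 3, 5}?
print(can_cut(10, [2, 3, 5, 7], [2, 3, 5]))  # -> True  (e.g. cut at 2 and 5)
print(can_cut(10, [2, 3, 5, 7], [1, 4, 5]))  # -> False (no cut set fits)
```

Efficient algorithms for special instance classes, as studied in the paper, exploit structure in the parameters instead of enumerating all 2^n cut subsets.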
Cytokine-induced killer (CIK) cells in combination with dendritic cells (DCs) have shown favorable outcomes in renal cell carcinoma (RCC), yet some patients exhibit recurrence or do not respond to this therapy. From a broader perspective, enhancing the antitumor response of DC-CIK cells may help to address this issue. With this in mind, we investigated the effect of anti-CD40 and anti-CTLA-4 antibodies on the antitumor response of DC-CIK cells against RCC cell lines. Our analysis showed that (a) the anti-CD40 antibody (G28.5) increased the proportion of CD3+CD56+ effector cells among CIK cells by promoting the maturation and activation of DCs; (b) G28.5 also increased CTLA-4 expression in CIK cells via DCs, an increase that could be hindered by the CTLA-4 inhibitor ipilimumab; (c) adding ipilimumab was also able to significantly increase the proportion of CD3+CD56+ cells in DC-CIK cells; (d) the anti-CD40 antibody predominated over the anti-CTLA-4 antibody with respect to the cytotoxicity, apoptotic effect and IFN-γ secretion of DC-CIK cells against RCC cells; and (e) after ipilimumab treatment, the population of Tregs in CIK cells remained unaffected, but ipilimumab combined with G28.5 significantly reduced the expression of CD28 in CIK cells. Taken together, we suggest that the agonistic anti-CD40 antibody, rather than the CTLA-4 inhibitor, may improve the antitumor response of DC-CIK cells, particularly in RCC. In addition, we point towards the as yet unknown contribution of CD28 to the crosstalk between anti-CTLA-4 antibodies and CIK cells.
The antiradical and antimicrobial activities of lignin and lignin-based films are of great interest for applications such as food-packaging additives. The polyphenolic structure of lignin, together with the presence of O-containing functional groups, is potentially responsible for these activities. This study used DPPH assays to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. The scavenging activity (SA) of both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems was affected by the percentage of added lignin: the 5% addition showed the highest activity and the 30% addition the lowest. Both the scavenging and the antimicrobial activity depend on the biomass source, showing the following trend: organosolv of softwood > kraft of softwood > organosolv of grass. Testing the antimicrobial activities of lignins and lignin-containing films showed high antimicrobial activity against Gram-positive and Gram-negative bacteria at 35 °C as well as at low temperatures (0-7 °C). Purification of kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin release from the produced films affected the activity positively, and the addition of chitosan enhances the activity even further against both Gram-positive and Gram-negative bacteria. Testing the films against spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both B. thermosphacta and P. fluorescens.
Due to the global ecological and economic challenges associated with the transition from fossil-based to renewable resources, fundamental studies are being performed worldwide to replace fossil raw materials in plastic production. One aspect of current research is the development of lignin-derived polyols to substitute expensive fossil-based polyol components in polyurethane and polyester production. This article describes the synthesis of bioactive lignin-based polyurethane coatings using unmodified and demethylated Kraft lignins. Demethylation was performed to enhance the reaction selectivity toward polyurethane formation. The antimicrobial activity was tested according to a slightly modified standard test (JIS Z 2801:2010). Besides the effects caused by the lignins themselves, triphenylmethane derivatives (brilliant green and crystal violet) were used as additional antimicrobial substances. The results showed increased antimicrobial capacity against Staphylococcus aureus. Furthermore, the coating color could be varied from dark brown to green and blue, respectively.
The optimization goal for a logistics warehouse is high utilization of its transport system. This raises the question of how to select the orders that are processed simultaneously within the warehouse without causing congestion, blockages, or overload. This selection process is also known as path packing. This master's thesis studies path packing at the graph-theoretic level and compares several greedy heuristics, an optimal solution based on linear programming, and a combined approach. The approaches are evaluated in terms of measured running times and utilization on variously randomized test data.
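The greedy side of this comparison can be sketched as follows. This is a hedged illustration, not the thesis's algorithm: candidate transport orders are modeled as node paths, conflicts as shared edges, and routes are accepted shortest-first as one simple priority rule. All route data are invented.

```python
def greedy_path_packing(paths):
    """Greedy heuristic: accept candidate routes (lists of nodes) one by one,
    rejecting any route that shares an edge with an already accepted one.
    Sorting shortest-first is one simple priority rule."""
    used_edges = set()
    accepted = []
    for path in sorted(paths, key=len):
        # treat edges as undirected so A->B and B->A conflict
        edges = {frozenset(e) for e in zip(path, path[1:])}
        if edges & used_edges:
            continue  # conflict: would share a lane with a running order
        used_edges |= edges
        accepted.append(path)
    return accepted

# Hypothetical warehouse routes over locations A..F:
orders = [
    ["A", "B", "C"],  # order 1
    ["B", "C", "D"],  # order 2: shares edge B-C with order 1
    ["D", "E", "F"],  # order 3: edge-disjoint
    ["A", "F"],       # order 4: edge-disjoint
]
print(greedy_path_packing(orders))
# -> [['A', 'F'], ['A', 'B', 'C'], ['D', 'E', 'F']]
```

A greedy rule like this is fast but gives no optimality guarantee, which is exactly why the thesis compares it against an exact linear-programming formulation.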
This study investigated the application potential of black soldier fly larvae (Hermetia illucens, Stratiomyidae: Diptera, L. 1758) for wastewater treatment and their potential to remove chemical oxygen demand (COD), ammonia, and phosphorus from liquid manure residue and from municipal wastewater with a 1% solids content. The larvae were found to reduce the COD concentration but, unfortunately, to increase the concentrations of ammonia and phosphorus. Feeding trials on liquid manure residue showed that the larvae increased their weight by 365% in a solution with 12% solids content and by 595% in a solution with 6% solids content. The study also showed that the larvae can survive in a solution with 1% solids content and can reduce COD by up to 86.4% for liquid manure residue and 46.9% for municipal wastewater within 24 hours. Overall, ammonia increased by 43.9% for liquid manure residue and by 98.6% for municipal wastewater, while total phosphorus increased by 11.0% and 88.6%, respectively, over the 8-day study. Transparent environments tended to reduce the COD content more than dark environments, both for liquid manure residue (55.8% and 65.4%) and for municipal wastewater (71.5% and 66.4%).
This book chapter describes application examples of gas chromatography/mass spectrometry and pyrolysis-gas chromatography/mass spectrometry in failure analysis for the identification of chemical materials such as mineral oils and nitrile rubber gaskets. Furthermore, failure cases requiring the identification of polymers/copolymers are demonstrated: fouling on the compressor wall of a car air conditioner and fouling on the surface of a bearing race from the automotive industry. The analytical results obtained were then used for troubleshooting and remedial action in the technological process.
Analytical pyrolysis hyphenated with gas chromatography/mass spectrometry (Py-GC/MS) has extended the range of available tools for the characterization of synthetic polymers and copolymers. Pyrolysis involves the thermal fragmentation of the analytical sample at elevated temperatures between 500 and 1400 °C. In the presence of an inert gas, reproducible decomposition products characteristic of the original polymer/copolymer sample are formed. The pyrolysis products are separated chromatographically on a fused-silica capillary column and subsequently identified by interpretation of the obtained mass spectra or by using mass spectral libraries. The technique eliminates the need for sample pre-treatment, as analyses are performed directly on the solid or liquid polymer sample.
In this paper, application examples of analytical pyrolysis hyphenated with gas chromatography/mass spectrometry for the identification of different polymeric materials in the plastics and automotive industries, in dentistry and in occupational safety are demonstrated. For the first time, results of the identification of a commercial light-curing dental filling material and of a car wrapping foil by pyrolysis-GC/MS are presented.
The main objective of this chapter is to give insights into how H-BRS, as a German University of Applied Sciences, supports small and medium-sized enterprises (SMEs) in exploring African markets. The university achieves this by engaging its Bachelor's and Master's students in applied market research, which the students conduct as part of their final theses. The chapter lays out a nine-step process for successful market research projects for German SMEs.
The most prominent education reform in Europe started in Bologna, Italy, in 1999, when the European Ministers responsible for higher education met to set the foundation for the European Higher Education Area (EHEA). The following process to reform and unify higher education and its systems in Europe is therefore known as the Bologna Process.
The white-ground crater by the Phiale Painter (450–440 BC), exhibited in the "Pietro Griffo" Archaeological Museum in Agrigento (Italy), depicts two scenes from the Perseus myth. The vase is of utmost importance to archaeologists because the figures are drawn on a white background with remarkable daintiness and attention to detail. Although white-ground ceramics are well documented from an archaeological and historical point of view, questions concerning the composition of the pigments and binders and the production technique remain unresolved. This kind of vase is a valuable rarity, whose use is documented in elitist funeral rituals. The study aims to investigate the constituent materials and the execution technique of this magnificent crater. The investigation was carried out in situ using non-destructive and non-invasive techniques. Portable X-ray fluorescence (XRF) and Fourier-transform total reflection infrared spectroscopy complemented visible- and ultraviolet-light photography to obtain both an overview of and specific information on the vase. The XRF data were used to produce false-colour maps showing the location of the various elements detected, using the program SmART_scan. The identification of gypsum as the material of the white ground is an important result that deserves further investigation in similar vases.