After more than twenty years of research, the molecular events of apoptotic cell death can be succinctly stated: different pathways, activated by diverse signals, increase the activity of proteases called caspases that rapidly and irreversibly dismantle the condemned cell by cleaving specific substrates. During this time, the ideas that apoptosis protects us from tumourigenesis and that cancer chemotherapy works by inducing apoptosis also emerged. Currently, apoptosis research is shifting away from the intracellular events within the dying cell to focus on the effect of apoptotic cells on surrounding tissues. This is producing counterintuitive data showing that our understanding of the role of apoptosis in tumourigenesis and cancer therapy is too simple, with some interesting and provocative implications. Here, we will consider evidence supporting the idea that dying cells signal their presence to the surrounding tissue and, in doing so, elicit repair and regeneration that compensates for any loss of function caused by cell death. We will discuss evidence suggesting that cancer cell proliferation may be driven by inappropriate or corrupted tissue-repair programmes that are initiated by signals from apoptotic cells, and show how this may dramatically modify how we view the role of apoptosis in both tumourigenesis and cancer therapy.
Software developers build complex systems using a multitude of third-party libraries. Documentation is key to understanding and using the functionality provided via the libraries’ APIs. Therefore, functionality is the main focus of contemporary API documentation, while cross-cutting concerns such as security are almost never considered at all, especially when the API itself does not provide security features. The documentation of JavaScript libraries for use in web applications, for example, does not specify how to add or adapt a Content Security Policy (CSP) to mitigate content injection attacks like Cross-Site Scripting (XSS). This is unfortunate, as security-relevant API documentation might have an influence on secure coding practices and on prevailing major vulnerabilities such as XSS. For the first time, we study the effects of integrating security-relevant information in non-security API documentation. For this purpose, we took CSP as an exemplary study object and extended the official Google Maps JavaScript API documentation with security-relevant CSP information in three distinct manners. Then, we evaluated the usage of these variations in a between-group eye-tracking lab study involving N=49 participants. Our observations suggest: (1) Developers are focused on elements with code examples. They mostly skim the documentation while searching for a quick solution to their programming task. This finding gives further evidence to results of related studies. (2) The location where CSP-related code examples are placed in non-security API documentation significantly impacts the time it takes to find this security-relevant information. In particular, the study results showed that the proximity to functionality-related code examples in documentation is a decisive factor. (3) Examples significantly help to produce secure CSP solutions. (4) Developers have additional information needs that our approach cannot meet.
Overall, our study contributes to a first understanding of the impact of security-relevant information in non-security API documentation on CSP implementation. Although further research is required, our findings emphasize that API producers should take responsibility for adequately documenting security aspects and thus supporting the sensibility and training of developers to implement secure systems. This responsibility also holds in seemingly non-security relevant contexts.
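To make the security-relevant information concrete: a CSP is delivered as an HTTP header (or meta tag) listing allowed content sources per directive. The sketch below assembles such a header value in Python; the helper `build_csp` and the listed origins are illustrative assumptions, not the policy recommended by the study or by any vendor documentation.

```python
# Sketch: assembling a Content Security Policy (CSP) header value.
# The directive names are standard CSP; the source lists are examples only.

def build_csp(directives):
    """Serialize a {directive: [sources]} mapping into a CSP header value."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

csp = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://maps.googleapis.com"],
    "img-src": ["'self'", "data:", "https://maps.gstatic.com"],
})
```

The resulting string would be sent as the `Content-Security-Policy` response header; scripts from origins not listed under `script-src` are then blocked by the browser, which is what mitigates XSS-style content injection.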
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2020)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. Two hundred forty-four patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first 2 years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% (157 of 204) of patients showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome when compared with many other organic acidurias.
Nitrile-type inhibitors are known to interact with cysteine proteases in a covalent-reversible manner. The chemotype of 3-cyano-3-aza-β-amino acid derivatives was designed in which the N-cyano group is centrally arranged in the molecule to allow for interactions with the nonprimed and primed binding regions of the target enzymes. These compounds were evaluated as inhibitors of the human cysteine cathepsins K, S, B, and L. They exhibited slow-binding behavior and were found to be exceptionally potent, in particular toward cathepsin K, with second-order rate constants up to 52,900 × 10³ M⁻¹ s⁻¹.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method: We performed a systematic literature search to identify all published cases. Two hundred eleven patients of whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview on all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview on all published cases with HMGCLD including a list of all known HMGCL mutations.
3-Hydroxyisobutyrate Dehydrogenase (HIBADH) deficiency - a novel disorder of valine metabolism
(2021)
3-Hydroxyisobutyric acid (3HiB) is an intermediate in the degradation of the branched-chain amino acid valine. Disorders in valine degradation can lead to 3HiB accumulation and its excretion in the urine. This article describes the first two patients with a new metabolic disorder, 3-hydroxyisobutyrate dehydrogenase (HIBADH) deficiency, its phenotype and its treatment with a low-valine diet. The detected mutation in the HIBADH gene leads to nonsense-mediated mRNA decay of the mutant allele and to a complete loss-of-function of the enzyme. Under strict adherence to a low-valine diet a rapid decrease of 3HiB excretion in the urine was observed. Due to limited patient numbers and intrafamilial differences in phenotype with one affected and one unaffected individual, the clinical phenotype of HIBADH deficiency needs further evaluation.
In service robotics, tasks without the involvement of objects are barely applicable, as in searching, fetching or delivering tasks. Service robots are supposed to efficiently capture object-related information in real-world scenes while, for instance, coping with clutter and noise, and they must be flexible and scalable enough to memorize a large set of objects. Besides object perception tasks like object recognition, where the object’s identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based on e.g. their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to a more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories.
Experimental results are shown for different shapes related to typical domestic object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false-positive handling to enhance the robustness of the categorization. Also, an initial approach towards incremental shape category learning is proposed that learns a new category based on the set of previously learned shape categories.
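A shape distribution-based descriptor of the kind mentioned above can be sketched generically: sample many random point pairs from the candidate's surface points and histogram their distances (the classic D2 shape distribution). This is an illustrative stand-in under that assumption, not the exact descriptor of the thesis, and the toy "plate" point cloud is invented for demonstration.

```python
import math
import random

def d2_descriptor(points, bins=8, samples=2000, seed=0):
    """D2-style shape distribution: histogram of Euclidean distances
    between randomly sampled point pairs, normalized to sum to 1."""
    rng = random.Random(seed)
    dists = [math.dist(rng.choice(points), rng.choice(points))
             for _ in range(samples)]
    dmax = max(dists) or 1.0          # guard against degenerate clouds
    hist = [0] * bins
    for d in dists:
        hist[min(int(d / dmax * bins), bins - 1)] += 1
    return [h / samples for h in hist]

# Toy point cloud roughly resembling a flat plate on a table:
plate = [(random.uniform(0, 1), random.uniform(0, 1), 0.0) for _ in range(200)]
desc = d2_descriptor(plate)
```

Because the histogram is normalized and depends only on relative distances, such a descriptor is invariant to translation and rotation, which is what makes it usable as input for a supervised shape-category learner.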
The ability to detect people has become a crucial subtask, especially in robotic systems which aim at applications in public or domestic environments. Robots already provide their services, e.g., in real home improvement markets, and guide people to a desired product. In such a scenario many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people in a reasonable range in front of the robot. Challenges of such a real-world task are, e.g., changing lighting conditions, a dynamic environment and different people shapes. In this thesis a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected with a rate of 87.29% and sitting people with 74.94% using a Random Forest classifier. Certain objects caused several false detections. To eliminate those, a verification step is proposed which further evaluates the person's shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz).
Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.
Animal models are often needed in cancer research but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We, therefore, developed a simple eight-step process to fabricate a 3D replica from computer tomography (CT) data using solely open access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and use of open access software allows for a wide range of possible uses.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
The objective of this research project is to develop a user-friendly and cost-effective interactive input device that allows intuitive and efficient manipulation of 3D objects (6 DoF) in virtual reality (VR) visualization environments with flat projection walls. During this project, it was planned to develop an extended version of a laser pointer with multiple laser beams arranged in specific patterns. Using stationary cameras observing projections of these patterns from behind the screens, it is planned to develop an algorithm for reconstruction of the emitter’s absolute position and orientation in space. The laser pointer concept is an intuitive way of interaction that provides the user with a familiar, mobile and efficient means of navigating through a 3D environment. In order to navigate in a 3D world, it is required to know the absolute position (x, y and z position) and orientation (roll, pitch and yaw angles) of the device, a total of 6 degrees of freedom (DoF). Ordinary laser-based pointers, when captured on a flat surface with a video camera system and then processed, will only provide x and y coordinates, effectively reducing the available input to 2 DoF. In order to overcome this problem, an additional set of multiple (invisible) laser pointers should be used in the pointing device. These laser pointers should be arranged in a way that the projection of their rays forms one fixed dot pattern when intersected with the flat surface of the projection screens. Images of such a pattern will be captured via a real-time camera-based system and then processed using mathematical re-projection algorithms. This would allow the reconstruction of the full absolute 3D pose (6 DoF) of the input device. Additionally, multi-user or collaborative work should be supported by the system, allowing several users to interact with a virtual environment at the same time.
Possibilities to port processing algorithms into embedded processors or FPGAs will be investigated during this project as well.
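The geometric core of the approach is the forward model: a laser ray emitted from a given 6-DoF pose intersects the flat screen plane at a predictable dot position, and pose reconstruction inverts this mapping from several observed dots. The sketch below shows only the forward ray-plane intersection under the assumption that the screen is the plane z = 0; the function name and numbers are illustrative.

```python
def project_ray(origin, direction, plane_z=0.0):
    """Intersect a laser ray with the flat screen plane z = plane_z.
    Returns the (x, y) dot position on the screen, or None if the ray
    is parallel to the plane or points away from it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-12:
        return None                      # ray parallel to the screen
    t = (plane_z - oz) / dz
    if t <= 0:
        return None                      # screen is behind the emitter
    return (ox + t * dx, oy + t * dy)

# Emitter held 2 m in front of the screen, pointing straight at it:
dot = project_ray((0.5, 1.0, 2.0), (0.0, 0.0, -1.0))
```

Given the fixed relative geometry of the multiple beams, solving for the pose that best reproduces all observed dot positions is a small nonlinear least-squares problem, closely related to the perspective-n-point formulation used in camera pose estimation.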
Background: Cancer heterogeneity poses a serious challenge concerning the toxicity and adverse effects of therapeutic inhibitors, especially when it comes to combinatorial therapies that involve multiple targeted inhibitors. In particular, in non-small cell lung cancer (NSCLC), a number of studies have reported synergistic effects of drug combinations in preclinical models, while they were only partially successful in the clinical setup, suggesting that alternative clinical strategies (considering genetic background and immune response) should be considered. Herein, we investigated the antitumor effect of cytokine-induced killer (CIK) cells in combination with ALK and PD-1 inhibitors in vitro on genetically variable NSCLC cell lines.
Methods: We co-cultured the three genetically different NSCLC cell lines NCI-H2228 (EML4-ALK), A549 (KRAS mutation), and HCC-78 (ROS1 rearrangement) with and without nivolumab (PD-1 inhibitor) and crizotinib (ALK inhibitor). Additionally, we profiled the variability of surface expression of multiple immune checkpoints, the absolute number of dead cells, and intracellular granzyme B on CIK cells using flow cytometry as well as RT-qPCR. ELISA and Western blot were performed to verify the activation of CIK cells.
Results: Our analysis showed that (a) nivolumab significantly weakened PD-1 surface expression on CIK cells without impacting other immune checkpoints or PD-1 mRNA expression, (b) this combination strategy showed an effective response in terms of cell viability, IFN-γ production, and intracellular release of granzyme B in CD3+ CD56+ CIK cells, but solely in NCI-H2228, (c) the intrinsic expression of Fas ligand (FasL) as a T-cell activation marker in CIK cells was upregulated by this additive effect, and (d) nivolumab significantly increased Foxp3 expression in the CD4+CD25+ subpopulation of CIK cells. Taken together, we could show that CIK cells in combination with crizotinib and nivolumab can enhance the anti-tumor immune response through FasL activation, leading to increased IFN-γ and granzyme B, but only in NCI-H2228 cells with EML4-ALK rearrangement. We therefore hypothesize that CIK therapy may be a potential alternative for NSCLC patients harboring EML4-ALK rearrangement; in addition, we support the idea that combination therapies offer significant potential when they are optimized on a patient-by-patient basis.
The simultaneous operation of multiple different semiconducting metal oxide (MOX) gas sensors is demanding for the readout circuitry. The challenge results from the strongly varying signal intensities of the various sensor types to the target gas. While some sensors change their resistance only slightly, other types can react with a resistive change over a range of several decades. Therefore, a suitable readout circuit has to be able to capture all these resistive variations, requiring it to have a very large dynamic range. This work presents a compact embedded system that provides a full, high-range input interface (readout and heater management) for MOX sensor operation. The system is modular and consists of a central mainboard that holds up to eight sensor modules, each capable of supporting up to two MOX sensors, therefore supporting a total maximum of 16 different sensors. Its wide input range is achieved using the resistance-to-time measurement method. The system is solely built with commercial off-the-shelf components and tested over a range spanning from 100 Ω to 5 GΩ (9.7 decades) with an average measurement error of 0.27% and a maximum error of 2.11%. The heater management uses a well-tested power circuit and supports multiple modes of operation, hence enabling the system to be used in highly automated measurement applications. The experimental part of this work presents the results of an exemplary screening of 16 sensors, which was performed to evaluate the system’s performance.
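The resistance-to-time measurement method achieves its large dynamic range by converting the unknown resistance into a time interval: a capacitor of known value discharges through the sensor, and the time for the voltage to fall from a start level to a comparator threshold follows the RC law, giving R = t / (C · ln(V_start/V_threshold)). The component values below are illustrative, not those used in the paper.

```python
import math

def resistance_from_time(t_seconds, c_farads, v_start, v_threshold):
    """Resistance-to-time principle: a capacitor C discharges through
    the unknown sensor resistance R; from V(t) = V0 * exp(-t / (R*C))
    the crossing time of v_threshold gives
    R = t / (C * ln(v_start / v_threshold))."""
    return t_seconds / (c_farads * math.log(v_start / v_threshold))

# Example: 100 nF capacitor, discharge from 3.3 V to 1.0 V takes 120 ms,
# implying a sensor resistance of roughly 1 MΩ.
r = resistance_from_time(0.120, 100e-9, 3.3, 1.0)
```

Because the measured quantity is a time interval rather than a voltage, the same timer hardware covers many decades of resistance simply by counting longer, which matches the 100 Ω to 5 GΩ span reported above.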
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming, since the sensor’s sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor’s integrated heater is stable, regulated and error-free (or error-tolerant). Especially the error-free requirement can only be achieved if the power supply implements failure-avoidance and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
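The per-channel monitoring described above amounts to continuously comparing measured voltage and current against limits and flagging any deviation. A minimal supervision routine might look like the following sketch; the function name, tolerance, and current limit are illustrative assumptions, not values from the paper.

```python
def check_channel(v_meas, i_meas, v_set, rel_tol=0.05, i_max=0.5):
    """Per-channel supervision sketch: flag a heater channel whose measured
    voltage deviates more than rel_tol (relative) from its setpoint, or
    whose current exceeds i_max amperes. Thresholds are examples only."""
    faults = []
    if abs(v_meas - v_set) > rel_tol * v_set:
        faults.append("voltage out of regulation")
    if i_meas > i_max:
        faults.append("overcurrent")
    return faults
```

Running such a check on every sample of every channel is one way to realize the failure-detection requirement: a non-empty fault list would abort the heating cycle before an unregulated heater invalidates the characterization run.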
A Comparative Study of Uncertainty Estimation Methods in Deep Learning Based Classification Models
(2020)
Deep learning models produce overconfident predictions even for misclassified data. This work aims to improve the safety guarantees of software-intensive systems that use deep learning based classification models for decision making by performing comparative evaluation of different uncertainty estimation methods to identify possible misclassifications.
In this work, uncertainty estimation methods applicable to deep learning models are reviewed and those which can be seamlessly integrated to existing deployed deep learning architectures are selected for evaluation. The different uncertainty estimation methods, deep ensembles, test-time data augmentation and Monte Carlo dropout with its variants, are empirically evaluated on two standard datasets (CIFAR-10 and CIFAR-100) and two custom classification datasets (optical inspection and RoboCup@Work dataset). A relative ranking between the methods is provided by evaluating the deep learning classifiers on various aspects such as uncertainty quality, classifier performance and calibration. Standard metrics like entropy, cross-entropy, mutual information, and variance, combined with a rank histogram based method to identify uncertain predictions by thresholding on these metrics, are used to evaluate uncertainty quality.
The results indicate that Monte Carlo dropout combined with test-time data augmentation outperforms all other methods by identifying more than 95% of the misclassifications and representing uncertainty in the highest number of samples in the test set. It also yields a better classifier performance and calibration in terms of higher accuracy and lower Expected Calibration Error (ECE), respectively. A Python-based uncertainty estimation library for training and real-time uncertainty estimation of deep learning based classification models is also developed.
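Two of the evaluation quantities named above, predictive entropy and Expected Calibration Error, are standard metrics and can be computed from the classifier outputs alone. The following is a minimal plain-Python sketch of both (the evaluated library itself is not reproduced here):

```python
import math

def entropy(probs):
    """Predictive entropy of one softmax output vector (natural log).
    High entropy signals an uncertain, possibly misclassified sample."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the absolute gap
    between mean confidence and empirical accuracy, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(1 for _, ok in b if ok) / len(b)
            ece += (len(b) / n) * abs(avg_conf - acc)
    return ece
```

In a Monte Carlo dropout setting, `probs` would be the mean of the softmax outputs over several stochastic forward passes, and thresholding the resulting entropy is one simple way to flag likely misclassifications.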
The continuously increasing number of biomedical scholarly publications makes it challenging to construct document recommendation algorithms that can efficiently navigate through literature. Such algorithms would help researchers in finding similar, relevant, and related publications that align with their research interests. Natural Language Processing offers various alternatives to compare publications, ranging from entity recognition to document embeddings. In this paper, we present the results of a comparative analysis of vector-based approaches to assess document similarity in the RELISH corpus. We aim to determine the best approach that resembles relevance without the need for further training. Specifically, we employ five different techniques to generate vectors representing the text in the documents. These techniques employ a combination of various Natural Language Processing frameworks such as Word2Vec, Doc2Vec, dictionary-based Named Entity Recognition, and state-of-the-art models based on BERT. To evaluate the document similarity obtained by these approaches, we utilize different evaluation metrics that account for relevance judgment, relevance search, and re-ranking of the relevance search. Our results demonstrate that the most promising approach is an in-house version of document embeddings, starting with word embeddings and using centroids to aggregate them by document.
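The centroid-based aggregation that performed best above is simple to state: average the word vectors of a document into one document vector, then compare documents by cosine similarity. A self-contained sketch with tiny invented vectors (real embeddings would come from Word2Vec or a BERT-based model):

```python
import math

def centroid(vectors):
    """Aggregate word embeddings into a document vector by averaging."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 2-D "word embeddings" for two short documents:
doc_a = centroid([[1.0, 0.0], [0.8, 0.2]])
doc_b = centroid([[0.9, 0.1], [1.0, 0.0]])
similarity = cosine(doc_a, doc_b)
```

Ranking all corpus documents by this similarity against a query document yields the relevance-search and re-ranking evaluations described in the paper.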
The research on autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that becomes fundamental to enable the desired adaptation and to account for the challenges that the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start the acquisition of knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises, and to “forget” about old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation.
A brief introduction to the context of such application is provided, as well as a discussion on related issues like the relationship of the surprise mechanism with other components of the robot conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.
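The information-theoretic core of such a measure can be sketched very compactly: the surprise of an event is the negative log of its estimated probability, and updating the estimate after each observation produces the habituation effect attributed above to short-term memory. This toy class is an illustration of the principle, not the thesis' actual model.

```python
import math

class SurpriseModel:
    """Minimal surprise signal with habituation: surprise is -log2 of the
    event's estimated probability (Laplace-smoothed frequency), so repeated
    events become progressively less surprising."""
    def __init__(self):
        self.counts = {}
        self.total = 0

    def observe(self, event):
        # Surprise of the event under the current (pre-update) estimate.
        p = (self.counts.get(event, 0) + 1) / (self.total + 2)
        surprise = -math.log2(p)
        # Update the short-term memory with the new observation.
        self.counts[event] = self.counts.get(event, 0) + 1
        self.total += 1
        return surprise
```

A long-term-memory subsumption rule, as described above, would wrap `observe` and return zero whenever prior knowledge already explains the event.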
Computers can help us trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model as they are by nature vague, depend on the user’s background and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
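The quality diversity component can be illustrated with the canonical MAP-Elites loop: an archive keeps the fittest solution per behaviour-descriptor bin, so the outcome is a diverse set of good solutions rather than a single optimum. The 1-D genome, fitness, and descriptor below are toy assumptions for illustration; Hyper-Pref additionally couples such an archive with a variational autoencoder, which is not shown.

```python
import random

def map_elites(fitness, behavior, n_bins=10, iters=2000, seed=1):
    """Minimal MAP-Elites-style quality diversity loop over genomes in
    [0, 1]: mutate a random elite (or sample uniformly) and keep the
    best solution found in each behaviour bin."""
    rng = random.Random(seed)
    archive = {}                      # bin index -> (fitness, genome)
    for _ in range(iters):
        if archive and rng.random() < 0.8:
            x = rng.choice(list(archive.values()))[1] + rng.gauss(0, 0.1)
        else:
            x = rng.uniform(0, 1)     # occasional random exploration
        x = min(max(x, 0.0), 1.0)
        b = min(int(behavior(x) * n_bins), n_bins - 1)
        f = fitness(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)
    return archive

# Toy problem: fitness rewards closeness to 0.5; the behaviour descriptor
# is the genome value itself, so the archive spans the whole [0, 1] range.
archive = map_elites(lambda x: -abs(x - 0.5), lambda x: x)
```

The filled archive is exactly the kind of diverse candidate set that can be shown to a user, whose picks then update a preference model in the cyclic process described above.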
Electrical signal transmission in power electronic devices takes place through high-purity aluminum bonding wires. Cyclic mechanical and thermal stresses during operation lead to fatigue loads, resulting in premature failure of the wires, which cannot be reliably predicted. The following work presents two fatigue lifetime models calibrated and validated based on experimental fatigue results of an aluminum bonding wire and subsequently transferred and applied to other wire types. The lifetime modeling of Wöhler curves for different load ratios shows good but limited applicability for the linear model. The model can only be applied above 10,000 cycles and within the investigated load range of R = 0.1 to R = 0.7. The nonlinear model shows very good agreement between model prediction and experimental results over the entire investigated cycle range. Furthermore, the predicted Smith diagram is not only consistent in the investigated load range but also in the extrapolated load range from R = −1.0 to R = 0.8. A transfer of both model approaches to other wire types by using their tensile strengths can be implemented as well, although the nonlinear model is more suitable since it covers the entire load and cycle range.
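Lifetime models of the Wöhler-curve type relate a cyclic load amplitude to the number of cycles to failure. As a generic illustration of the principle (not the paper's calibrated linear or nonlinear model), a Basquin-type power law can be evaluated as follows; the constants C and k are arbitrary example values, not fitted to bond-wire data.

```python
def basquin_cycles(stress_amplitude, c=1e12, k=4.0):
    """Basquin-type S-N (Wöhler) relation N = C * S**(-k): the predicted
    number of cycles to failure for a given cyclic stress amplitude S.
    C and k are illustrative constants, to be fitted per wire type."""
    return c * stress_amplitude ** (-k)

# With k = 4, halving the stress amplitude multiplies the predicted
# lifetime by 2**4 = 16:
ratio = basquin_cycles(50.0) / basquin_cycles(100.0)
```

The load ratio R mentioned above is defined as R = σ_min/σ_max of the cycle; the paper's models make the curve parameters depend on R, which is what allows the Smith diagram to be predicted across the range R = −1.0 to R = 0.8.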
Climate change is increasingly affecting vulnerable groups and resulting in dire social and economic consequences, especially for those in the Global South. Managing current and emerging climate-related risks will require increasing individual’s and communities’ resilience, including enhancing absorptive, adaptive, and transformative capacities. Policymakers are now considering the role that social protection policies and programmes can play in building climate resilience by contributing to these capacities. However, there is a limited understanding of the extent to which social protection instruments can influence these three resilience-related capacities. Lack of assessment tools or frameworks might contribute to limited evidence of social protection’s ability to increase climate resilience. In particular, there appear to be no frameworks or tools that help assess the role of social cash transfers (SCT) in building adaptive capacity. Based on a multi-staged literature review, we develop an adaptive capacity outcomes framework (ACOF) that can help assess SCT’s contribution to building adaptive capacity, and, consequently, resilience. The framework is then tested using impact evaluation and assessment reports from SCT programmes in Indonesia, Zambia, Ethiopia, Bangladesh, and Tanzania. The exercise finds that SCTs alone have a limited contribution to adaptive capacity outcomes, but interventions that combine cash transfers with other components such as nutrition or livelihood training show positive impacts. We find that the ACOF can support assessments of SCT’s contribution towards adaptive capacity. It can help build evidence, evaluate impacts, and through further research, can facilitate learning on SCTs' role in increasing climate resilience.
Failure prognosis builds on continuous data acquisition and processing and on fault diagnosis, and is an essential part of the predictive maintenance of smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by the evaluation of equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently with the constant monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeated projection of the parameter degradation into the future, based on the values in a sliding time window. Existing software can be used to determine the best-fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognostics themselves.
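The data-driven projection step described above, fitting a degradation model to the parameter values in a sliding time window and extrapolating to a failure threshold, can be sketched as follows. The exponential model, the threshold, and the synthetic noiseless window are illustrative assumptions, not the paper's actual degradation functions:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_degradation(t, a, b):
    # assumed exponential degradation model: R(t) = a * exp(-b * t)
    return a * np.exp(-b * t)

def estimate_rul(times, values, threshold):
    """Fit the model to the sliding-window values and project forward
    to the time at which the degradation crosses the failure threshold."""
    (a, b), _ = curve_fit(exp_degradation, times, values, p0=(values[0], 1e-3))
    # solve a * exp(-b * t_f) = threshold  =>  t_f = ln(a / threshold) / b
    t_fail = np.log(a / threshold) / b
    return t_fail - times[-1]  # remaining useful life after the last sample

# synthetic window of parameter estimates (e.g. a slowly degrading resistance)
t = np.linspace(0.0, 500.0, 50)
r = exp_degradation(t, 100.0, 2e-3)  # noiseless ground truth for illustration
rul = estimate_rul(t, r, threshold=30.0)
```

In the setting of the paper, each monitored parameter would receive its own window and fit, so the estimates for multiple simultaneous faults can run in parallel.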
A Method for the Sustainable Documentation of Operations Processes in Parcel Distribution Centers
(2018)
There is often no common understanding of operational processes in logistics companies, as they are not properly documented. Hence, people execute the same process differently, and training is conducted by experienced operators on an ad-hoc basis. Furthermore, continuous process improvement is hampered, as neither the ideal process nor current issues in as-is processes are visible. A major reason for the missing documentation is the complexity of existing business process modelling languages. Modelling experts are required for initially describing the processes and for updating the models after process changes. Furthermore, operations people are usually not used to reading complex process models in EPC or BPMN diagrams. To overcome these limitations, a domain-specific modelling language that facilitates maintaining up-to-date process models has been designed with a large logistics company in Germany. The paper at hand briefly describes this language and illustrates how to apply it in operations environments.
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases mostly handle other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing its data structures with those of other domains remains a challenge, as current standards such as ISO 10303 STEP ("Standard for the Exchange of Product model data") fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink the integration of simulation results. This research describes a new software architecture and application methodology based on the industrial standard "Virtual Material Modelling in Manufacturing" (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It provides data permeability from the global digital-twin level down to the detailed numerical values of data entries, and even new key indicators, in a three-step approach: it represents a file as an instance in a knowledge graph, queries the file's metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
Recessive mutations in the MPV17 gene cause mitochondrial DNA depletion syndrome, a fatal infantile genetic liver disease in humans. Loss of function in mice leads to glomerulosclerosis and sensorineural deafness accompanied by mitochondrial DNA depletion. Mutations in the yeast homolog Sym1 and in the zebrafish homolog tra cause interesting, but not obviously related, phenotypes, although the human gene can complement the yeast Sym1 mutation. The MPV17 protein is a hydrophobic membrane protein of 176 amino acids and unknown function. Initially localised to murine peroxisomes, it was later reported to be a mitochondrial inner membrane protein in humans and in yeast. To resolve this contradiction, we tested two new mouse monoclonal antibodies directed against the human MPV17 protein in Western blots and immunohistochemistry on human U2OS cells. One of these monoclonal antibodies showed specific reactivity to a protein of 20 kDa absent in MPV17-negative mouse cells. Immunofluorescence studies revealed colocalisation with peroxisomal, endosomal and lysosomal markers, but not with mitochondria. These data reveal a novel connection between a possible peroxisomal/endosomal/lysosomal function and mitochondrial DNA depletion.
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
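The core idea of decoding spiking activity into avoidance motions can be sketched with a minimal numpy example: leaky integrate-and-fire neurons accumulate event-driven input, and their spike counts are decoded into a repulsive motion vector. The network size, neuron parameters, and decoding scheme are illustrative assumptions, far simpler than the convolutional SNN and dynamic motion primitive formulation described above:

```python
import numpy as np

def lif_step(v, input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """One Euler step of leaky integrate-and-fire dynamics
    (illustrative parameters, not those of the paper's network)."""
    v = v + dt / tau * (-v + input_current)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)  # reset neurons that fired
    return v, spikes

# emulate a burst of events on the left half of a coarse 2x2 activation map
v = np.zeros(4)
current = np.array([3.0, 3.0, 0.0, 0.0])  # strong drive on the "left" neurons
spike_counts = np.zeros(4)
for _ in range(100):
    v, s = lif_step(v, current)
    spike_counts += s

# each neuron's receptive-field direction (left, left, right, right)
directions = np.array([[-1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
# decode: push the end-effector away from the side with the most activity
avoidance = -(spike_counts @ directions) / max(spike_counts.sum(), 1.0)
```

With events concentrated on the left, only the left neurons spike and the decoded vector points right, i.e. away from the perceived motion; in the paper's formulation such a vector would perturb the planned trajectory rather than replace it.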