Background: Cancer heterogeneity poses a serious challenge concerning the toxicity and adverse effects of therapeutic inhibitors, especially when it comes to combinatorial therapies that involve multiple targeted inhibitors. In particular, in non-small cell lung cancer (NSCLC), a number of studies have reported synergistic effects of drug combinations in preclinical models, while they were only partially successful in the clinical setting, suggesting that alternative clinical strategies (accounting for genetic background and immune response) should be considered. Herein, we investigated the antitumor effect of cytokine-induced killer (CIK) cells in combination with ALK and PD-1 inhibitors in vitro on genetically variable NSCLC cell lines.
Methods: We co-cultured the three genetically different NSCLC cell lines NCI-H2228 (EML4-ALK), A549 (KRAS mutation), and HCC-78 (ROS1 rearrangement) with and without nivolumab (PD-1 inhibitor) and crizotinib (ALK inhibitor). Additionally, we profiled the variability of surface expression of multiple immune checkpoints, absolute dead cell counts, and intracellular granzyme B in CIK cells using flow cytometry as well as RT-qPCR. ELISA and Western blot were performed to verify the activation of CIK cells.
Results: Our analysis showed that (a) nivolumab significantly weakened PD-1 surface expression on CIK cells without impacting other immune checkpoints or PD-1 mRNA expression, (b) this combination strategy showed an effective response in terms of cell viability, IFN-γ production, and intracellular release of granzyme B in CD3+ CD56+ CIK cells, but solely in NCI-H2228, (c) the intrinsic expression of Fas ligand (FasL) as a T-cell activation marker in CIK cells was upregulated by this additive effect, and (d) nivolumab significantly increased Foxp3 expression in the CD4+CD25+ subpopulation of CIK cells. Taken together, we could show that CIK cells in combination with crizotinib and nivolumab can enhance the anti-tumor immune response through FasL activation, leading to increased IFN-γ and granzyme B, but only in NCI-H2228 cells with EML4-ALK rearrangement. Therefore, we hypothesize that CIK therapy may be a potential alternative in NSCLC patients harboring EML4-ALK rearrangement; in addition, we support the idea that combination therapies offer significant potential when they are optimized on a patient-by-patient basis.
The simultaneous operation of multiple different semiconducting metal oxide (MOX) gas sensors is demanding for the readout circuitry. The challenge results from the strongly varying signal intensities of the various sensor types to the target gas. While some sensors change their resistance only slightly, other types can react with a resistive change over a range of several decades. A suitable readout circuit therefore has to capture all of these resistive variations, requiring a very large dynamic range. This work presents a compact embedded system that provides a full, high-range input interface (readout and heater management) for MOX sensor operation. The system is modular and consists of a central mainboard that holds up to eight sensor modules, each capable of supporting up to two MOX sensors, for a total maximum of 16 different sensors. Its wide input range is achieved using the resistance-to-time measurement method. The system is built solely with commercial off-the-shelf components and tested over a range spanning from 100 Ω to 5 GΩ (9.7 decades) with an average measurement error of 0.27% and a maximum error of 2.11%. The heater management uses a well-tested power circuit and supports multiple modes of operation, enabling the system to be used in highly automated measurement applications. The experimental part of this work presents the results of an exemplary screening of 16 sensors, which was performed to evaluate the system’s performance.
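A minimal sketch of the resistance-to-time measurement principle behind the wide input range: the unknown sensor resistance charges a known reference capacitor, and the time until a comparator threshold is crossed grows linearly with the resistance. All component values below are illustrative assumptions, not the actual circuit.

```python
import math

# Sketch of the resistance-to-time principle: the unknown sensor
# resistance R charges a known reference capacitor C, and a timer stops
# when a comparator threshold V_TH is crossed. The charge time grows
# linearly with R, so one timer covers many decades of resistance.
C_REF = 1e-9       # reference capacitor in farads (assumed)
V_SUPPLY = 3.3     # supply voltage in volts (assumed)
V_TH = 2.0         # comparator threshold in volts (assumed)

def resistance_from_time(t_charge: float) -> float:
    """Recover R from the measured charge time of the RC circuit.

    From V(t) = V_SUPPLY * (1 - exp(-t / (R * C))), solving for R gives
    R = t / (C * ln(V_SUPPLY / (V_SUPPLY - V_TH))).
    """
    return t_charge / (C_REF * math.log(V_SUPPLY / (V_SUPPLY - V_TH)))

# 100 ohm yields a charge time around 100 ns, 5 Gohm around 5 s.
for r_true in (1e2, 1e5, 1e8, 5e9):
    t = r_true * C_REF * math.log(V_SUPPLY / (V_SUPPLY - V_TH))
    print(f"R = {r_true:.0e} ohm -> t = {t:.3e} s, "
          f"recovered R = {resistance_from_time(t):.3e} ohm")
```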
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming since the sensor’s sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor’s integrated heater is stable, regulated and error-free (or error-tolerant). The error-free requirement, in particular, can only be achieved if the power supply implements failure-avoiding and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
A Comparative Study of Uncertainty Estimation Methods in Deep Learning Based Classification Models
(2020)
Deep learning models produce overconfident predictions even for misclassified data. This work aims to improve the safety guarantees of software-intensive systems that use deep learning based classification models for decision making by performing a comparative evaluation of different uncertainty estimation methods to identify possible misclassifications.
In this work, uncertainty estimation methods applicable to deep learning models are reviewed, and those which can be seamlessly integrated into existing deployed deep learning architectures are selected for evaluation. The different uncertainty estimation methods, deep ensembles, test-time data augmentation and Monte Carlo dropout with its variants, are empirically evaluated on two standard datasets (CIFAR-10 and CIFAR-100) and two custom classification datasets (optical inspection and RoboCup@Work dataset). A relative ranking between the methods is provided by evaluating the deep learning classifiers on various aspects such as uncertainty quality, classifier performance and calibration. Standard metrics like entropy, cross-entropy, mutual information, and variance, combined with a rank histogram based method to identify uncertain predictions by thresholding on these metrics, are used to evaluate uncertainty quality.
The results indicate that Monte Carlo dropout combined with test-time data augmentation outperforms all other methods by identifying more than 95% of the misclassifications and representing uncertainty in the highest number of samples in the test set. It also yields a better classifier performance and calibration in terms of higher accuracy and lower Expected Calibration Error (ECE), respectively. A Python-based uncertainty estimation library for training and real-time uncertainty estimation of deep learning based classification models is also developed.
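A minimal sketch of one of the evaluated methods, Monte Carlo dropout with predictive entropy: dropout layers are kept stochastic at test time, the softmax outputs of several forward passes are averaged, and high-entropy predictions are flagged as uncertain. The network architecture and entropy threshold below are illustrative assumptions, not the thesis library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Stand-in classifier; any network containing dropout works."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 256), nn.ReLU(),
            nn.Dropout(p=0.5),            # kept active at test time
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Average softmax over stochastic forward passes; return the mean
    prediction together with its predictive entropy."""
    model.eval()
    # Re-enable only the dropout layers so other layers stay in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

model = SmallClassifier()
x = torch.randn(4, 3, 32, 32)             # stand-in for CIFAR-10 images
mean_probs, entropy = mc_dropout_predict(model, x)
uncertain = entropy > 1.5                  # assumed threshold, tuned per dataset
print(mean_probs.argmax(dim=-1), entropy, uncertain)
```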
This paper gives an overview of the development of Fair Trade in six European countries: Austria, France, Germany, the Netherlands, Switzerland and the United Kingdom. After the description of the food retail industry and its market structures in these countries, the main European Fair Trade organizations are analyzed regarding their role within the Fair Trade system. The following part deals with the development of Fair Trade sales in general and with respect to the products coffee, tea, bananas, fruit juice and sugar. An overview of the main activities of national Fair Trade organizations, e.g. public relations activities, completes the analysis. This study shows the enormous upswing of Fair Trade during the last decade and the reasons for this development. Nevertheless, it comes to the conclusion that Fair Trade is still far away from being an essential part of the food retail industry in Europe.
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the Jamaica JamaicaVM (JJVM). The basic architectural difference between the two machines is that the JJVM uses fixed-size blocks for allocating objects on the heap. This means that objects have to be split into several connected blocks if they are bigger than the specified block size, while for small objects a full block must still be allocated. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines. The experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
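A worked example of the kind of theoretical overhead analysis described: with fixed-size blocks, an object occupies a whole number of blocks, so small objects waste block remainder space and large objects pay per-block chaining costs. The 32-byte block size and 4-byte link word are assumed values for illustration, not the JamaicaVM's actual object layout.

```python
import math

BLOCK_SIZE = 32   # bytes per heap block (assumed)
LINK_BYTES = 4    # bytes per block spent linking the block chain (assumed)

def blocks_needed(obj_size: int) -> int:
    """Number of fixed-size blocks required to store an object."""
    payload_per_block = BLOCK_SIZE - LINK_BYTES
    return math.ceil(obj_size / payload_per_block)

def overhead(obj_size: int) -> int:
    """Allocated bytes minus the object's own size."""
    return blocks_needed(obj_size) * BLOCK_SIZE - obj_size

# Small objects pay for a whole block; large objects pay per-block links.
for size in (8, 24, 100, 1000):
    print(f"object of {size:4d} B -> {blocks_needed(size):3d} block(s), "
          f"{overhead(size):3d} B overhead")
```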
The continuously increasing number of biomedical scholarly publications makes it challenging to construct document recommendation algorithms that can efficiently navigate through literature. Such algorithms would help researchers in finding similar, relevant, and related publications that align with their research interests. Natural Language Processing offers various alternatives to compare publications, ranging from entity recognition to document embeddings. In this paper, we present the results of a comparative analysis of vector-based approaches to assess document similarity in the RELISH corpus. We aim to determine the best approach that resembles relevance without the need for further training. Specifically, we employ five different techniques to generate vectors representing the text in the documents. These techniques employ a combination of various Natural Language Processing frameworks such as Word2Vec, Doc2Vec, dictionary-based Named Entity Recognition, and state-of-the-art models based on BERT. To evaluate the document similarity obtained by these approaches, we utilize different evaluation metrics that account for relevance judgment, relevance search, and re-ranking of the relevance search. Our results demonstrate that the most promising approach is an in-house version of document embeddings, starting with word embeddings and using centroids to aggregate them by document.
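A minimal sketch of the centroid-based aggregation used by the best-performing approach: embed each token, then average the word vectors per document and compare documents by cosine similarity. The toy four-dimensional vectors below stand in for a trained Word2Vec model.

```python
import numpy as np

word_vectors = {                        # hypothetical pretrained embeddings
    "protein": np.array([0.9, 0.1, 0.0, 0.2]),
    "binding": np.array([0.8, 0.2, 0.1, 0.1]),
    "neural":  np.array([0.1, 0.9, 0.3, 0.0]),
    "network": np.array([0.2, 0.8, 0.4, 0.1]),
}

def doc_embedding(tokens: list[str]) -> np.ndarray:
    """Aggregate word embeddings into one document vector (centroid)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = doc_embedding(["protein", "binding"])
doc_b = doc_embedding(["neural", "network"])
print(f"similarity(A, B) = {cosine(doc_a, doc_b):.3f}")
```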
Recent years have seen extensive adoption of domain generation algorithms (DGA) by modern botnets. The main goal is to generate a large number of domain names and then use a small subset for actual C&C communication. This makes DGAs very compelling for botmasters to harden the infrastructure of their botnets and make it resilient to blacklisting and attacks such as takedown efforts. While early DGAs were used as a backup communication mechanism, several new botnets use them as their primary communication method, making it extremely important to study DGAs in detail.
In this paper, we perform a comprehensive measurement study of the DGA landscape by analyzing 43 DGA-based malware families and variants. We also present a taxonomy for DGAs and use it to characterize and compare the properties of the studied families. By reimplementing the algorithms, we pre-compute all possible domains they generate, covering the majority of known and active DGAs. Then, we study the registration status of over 18 million DGA domains and show that corresponding malware families and related campaigns can be reliably identified by pre-computing future DGA domains. We also give insights into botmasters’ strategies regarding domain registration and identify several pitfalls in previous takedown efforts of DGA-based botnets. We will share the dataset for future research and will also provide a web service to check domains for potential DGA identity.
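For illustration, a generic seed-and-hash DGA in the style of the studied families (a hypothetical algorithm, not a reimplementation of any of the 43 families). Because the domains derive deterministically from the date, defenders who know the algorithm can pre-compute future domains exactly as described above; label length and TLD are arbitrary choices for the sketch.

```python
import hashlib
from datetime import date, timedelta

def generate_domains(day: date, count: int = 5) -> list[str]:
    """Derive candidate C&C domains deterministically from a date seed."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map hex digits to letters to get a plausible-looking label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".com")
    return domains

# Pre-compute tomorrow's candidate domains for blocking or sinkholing.
print(generate_domains(date.today() + timedelta(days=1)))
```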
The research of autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that is fundamental to enable the desired adaptation and to account for the challenges that the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start the acquisition of knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises and to “forget” about old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation. A brief introduction to the context of this application is provided, as well as a discussion of related issues such as the relationship of the surprise mechanism with other components of the robot's conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.
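One possible reading of such a surprise measure as code, combining information-theoretic surprise with short-term-memory habituation; the exact formulation in the thesis may differ, and the halving rule and memory size below are assumptions for illustration.

```python
import math
from collections import deque

class SurpriseModel:
    """Surprise as -log2 P(event), damped for recently repeated events."""

    def __init__(self, stm_size: int = 10):
        self.stm = deque(maxlen=stm_size)   # short-term memory of events

    def surprise(self, event: str, probability: float) -> float:
        base = -math.log2(probability)       # unexpectedness of the event
        # Habituation: each recent occurrence halves the response; old
        # events fall out of the deque and become surprising again.
        repeats = sum(1 for e in self.stm if e == event)
        self.stm.append(event)
        return base / (2 ** repeats)

model = SurpriseModel()
for _ in range(3):                           # repeated event habituates
    print(model.surprise("obstacle_appeared", probability=0.05))
```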
The objective of this thesis is to implement a computer-game-based motivation system for maximal strength testing on the Biodex System 3 isokinetic dynamometer. The prototype game has been designed to improve the peak torque produced in an isometric knee extensor strength test. An extensive analysis is performed on a torque data set from a previous study. The torque responses for five-second maximal voluntary contractions of the knee extensor are analyzed to understand the torque response characteristics of different subjects. The parameters identified in the data analysis are used in the implementation of the 'Shark and School of Fish' game. The behavior of the game for different torque responses is analyzed on a separate torque data set from the previous study. The evaluation shows that the game rewards and motivates continuously over a repetition to reach the peak torque value. It also shows that the game rewards users more if they overcome a baseline torque value within the first second and then gradually increase the torque to reach peak torque.
The aim of design science research (DSR) in information systems is the user-centred creation of IT artifacts with regard to specific social environments. For culture research in the field, which is necessary for a proper localization of IT artifacts, models and research approaches from the social sciences are usually adopted. Descriptive, dimension-based culture models are most commonly applied for this purpose; they assume culture to be a national phenomenon and tend to reduce it to basic values. Such models are useful for investigations in behavioural culture research because that field aims to isolate, describe and explain culture-specific attitudes and characteristics within a selected society. In contrast, given the necessity to derive concrete decisions for artifact design, research results from DSR need to go beyond this aim. As its hypothesis, this contribution questions the applicability of such generic culture dimension models for DSR and focuses on their theoretical foundation, which goes back to Hofstede’s conceptual Onion Model of Culture. The literature-based analysis applied herein confirms the hypothesis. Consequently, an alternative conceptual culture model is introduced and discussed as a theoretical foundation for culture research in DSR.
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
During exercise, heart rate has proven to be a good measure for planning workouts. It is not only simple to measure but also well understood and has been used for many years in workout planning. To use heart rate to control physical exercise, a model that predicts the future heart rate response to a given strain can be utilized. In this paper, we present a mathematical model based on convolution for predicting the heart rate response to strain with four physiologically explainable parameters. The model is based on the general idea of the Fitness-Fatigue model for performance analysis, but is revised here for heart rate analysis. Comparisons show that the convolution model can compete with other known heart rate models. Furthermore, the new model can be improved by reducing the number of parameters. The remaining parameter appears to be a promising indicator of the subject’s actual fitness.
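A minimal sketch of a convolution-based heart rate model in the spirit of the one described: predicted heart rate is the resting rate plus the strain signal convolved with a decaying impulse response. The exponential kernel and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def predict_heart_rate(strain, hr_rest=60.0, gain=0.8, tau=45.0, dt=1.0):
    """strain: 1-D array of exercise intensity samples (one per dt seconds).

    Convolves strain with an exponential impulse response so heart rate
    rises toward hr_rest + gain * strain and decays back after exercise.
    """
    t = np.arange(0, 10 * tau, dt)
    kernel = (gain / tau) * np.exp(-t / tau)          # impulse response
    response = np.convolve(strain, kernel)[: len(strain)] * dt
    return hr_rest + response

# One minute rest, five minutes of constant strain, four minutes recovery.
strain = np.concatenate([np.zeros(60), np.full(300, 150.0), np.zeros(240)])
hr = predict_heart_rate(strain)
print(f"peak predicted HR: {hr.max():.1f} bpm, final: {hr[-1]:.1f} bpm")
```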
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model, as they are by nature vague, depend on the user’s background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
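For readers unfamiliar with quality diversity algorithms, a minimal MAP-Elites sketch: an archive keeps the best solution found in each behaviour niche, producing many diverse alternatives rather than a single optimum. Objective, behaviour descriptor and grid are toy assumptions; Hyper-Pref additionally couples the archive with a variational autoencoder, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 10                                 # niches per behaviour dimension
archive = {}                              # niche index -> (fitness, solution)

def fitness(x):                           # toy objective to maximize
    return -np.sum((x - 0.5) ** 2)

def behaviour(x):                         # toy behaviour descriptor
    return min(int(x[0] * GRID), GRID - 1)

for _ in range(5000):
    if archive and rng.random() < 0.9:    # usually mutate a random elite
        parent = archive[rng.choice(list(archive))][1]
        x = np.clip(parent + rng.normal(0, 0.1, size=2), 0, 1)
    else:                                 # otherwise sample randomly
        x = rng.random(2)
    niche, f = behaviour(x), fitness(x)
    if niche not in archive or f > archive[niche][0]:
        archive[niche] = (f, x)           # keep the best elite per niche

print(f"{len(archive)} diverse elites found")
```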
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled, and faults in the sensors are detected when these dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method on simulated signals with the Pearson correlation and show that it elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but it is also prone to detecting false positives. This suggests that the method can be used as a weak fault detector, while other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
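A minimal sketch of a pairwise Granger causality check on two synthetic sensor streams using statsmodels; the noise level, lag and significance threshold are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 2) + 0.1 * rng.normal(size=500)   # y depends on past x

# Column order: test whether the 2nd column Granger-causes the 1st.
data = np.column_stack([y, x])
result = grangercausalitytests(data, maxlag=3, verbose=False)
p_value = result[2][0]["ssr_ftest"][1]           # p-value at lag 2
print(f"p = {p_value:.4f} -> dependency "
      f"{'present' if p_value < 0.05 else 'absent'}")
```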
A trace of the execution of a concurrent object-oriented program can be displayed in two dimensions as a diagram of a non-metric finite geometry. The actions of a program are represented by points, its objects and threads by vertical lines, its transactions by horizontal lines, its communications and resource sharing by sloping arrows, and its partial traces by rectangular figures.
Electrical signal transmission in power electronic devices takes place through high-purity aluminum bonding wires. Cyclic mechanical and thermal stresses during operation lead to fatigue loads, resulting in premature failure of the wires, which cannot be reliably predicted. The following work presents two fatigue lifetime models that are calibrated and validated on experimental fatigue results of an aluminum bonding wire and subsequently transferred and applied to other wire types. Modeling Wöhler curves for different load ratios shows good but limited applicability of the linear model: it can only be applied above 10,000 cycles and within the investigated load range of R = 0.1 to R = 0.7. The nonlinear model shows very good agreement between model prediction and experimental results over the entire investigated cycle range. Furthermore, the predicted Smith diagram is consistent not only in the investigated load range but also in the extrapolated load range from R = −1.0 to R = 0.8. Both model approaches can also be transferred to other wire types by using their tensile strengths, although the nonlinear model is more suitable since it covers the entire load and cycle range.
Since the late 1970s, an impressive number of innovative electronic payment systems have been developed and tested commercially. However, the resulting variety and complexity of these systems have turned out to be one of the obstacles to the broad acceptance of electronic payment. In this paper we propose a process- and service-oriented framework that offers a structural and conceptual orientation in the field of electronic payment. It enables an integral view of electronic payment that goes beyond the frame of an individual system. To do this, we have generalized the systems-oriented approaches into a phase-oriented payment model. Using this model, requirements and supporting services for electronic payment can be sorted systematically and described from both the customers' and the merchants' viewpoints. Providing integrated payment processes and services is proving to be a difficult task. With this paper we would like to outline the necessity for a Payment Service Provider to act as a mediator between suppliers and users of electronic payment systems.
A framework of decision‐support systems in advanced manufacturing enterprises ‐ a systems view
(1997)
Climate change is increasingly affecting vulnerable groups and resulting in dire social and economic consequences, especially for those in the Global South. Managing current and emerging climate-related risks will require increasing individuals’ and communities’ resilience, including enhancing absorptive, adaptive, and transformative capacities. Policymakers are now considering the role that social protection policies and programmes can play in building climate resilience by contributing to these capacities. However, there is a limited understanding of the extent to which social protection instruments can influence these three resilience-related capacities. A lack of assessment tools or frameworks might contribute to the limited evidence of social protection’s ability to increase climate resilience. In particular, there appear to be no frameworks or tools that help assess the role of social cash transfers (SCTs) in building adaptive capacity. Based on a multi-staged literature review, we develop an adaptive capacity outcomes framework (ACOF) that can help assess SCTs’ contribution to building adaptive capacity and, consequently, resilience. The framework is then tested using impact evaluation and assessment reports from SCT programmes in Indonesia, Zambia, Ethiopia, Bangladesh, and Tanzania. The exercise finds that SCTs alone make a limited contribution to adaptive capacity outcomes, but interventions that combine cash transfers with other components such as nutrition or livelihood training show positive impacts. We find that the ACOF can support assessments of SCTs’ contribution towards adaptive capacity. It can help build evidence, evaluate impacts, and, through further research, facilitate learning on SCTs’ role in increasing climate resilience.
Failure prognosis builds on constant data acquisition and processing and on fault diagnosis, and is an essential part of the predictive maintenance of smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by evaluating equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently with the constant monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeatedly projecting the parameter degradation into the future based on the values in a sliding time window. Existing software can be used to determine the best-fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognosis themselves.
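A minimal sketch of the data-driven projection step: fit a degradation model to buffered parameter values from a sliding window and extrapolate to a failure threshold to obtain a remaining-useful-life estimate. The exponential degradation law and the threshold are assumptions for illustration, not the method's prescribed model.

```python
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b):
    """Assumed degradation law: parameter drifts as a * exp(b * t)."""
    return a * np.exp(b * t)

# Simulated buffer of parameter values from the sliding time window.
rng = np.random.default_rng(3)
t_window = np.arange(0.0, 50.0)
values = 1.0 * np.exp(0.02 * t_window) * (1 + 0.01 * rng.normal(size=50))

# Fit the degradation model to the windowed values.
(a, b), _ = curve_fit(degradation, t_window, values, p0=(1.0, 0.01))

THRESHOLD = 3.0                           # assumed end-of-life value
t_fail = np.log(THRESHOLD / a) / b        # solve a * exp(b * t) = THRESHOLD
rul = t_fail - t_window[-1]
print(f"estimated RUL: {rul:.1f} time units")
```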
In the design of robot skills, the focus generally lies on increasing the flexibility and reliability of the robot execution process; however, typical skill representations are not designed for analysing execution failures if they occur or for explicitly learning from failures. In this paper, we describe a learning-based hybrid representation for skill parameterisation called an execution model, which considers execution failures to be a natural part of the execution process. We then (i) demonstrate how execution contexts can be included in execution models, (ii) introduce a technique for generalising models between object categories by combining generalisation attempts performed by a robot with knowledge about object similarities represented in an ontology, and (iii) describe a procedure that uses an execution model for identifying a likely hypothesis of a parameterisation failure. The feasibility of the proposed methods is evaluated in multiple experiments performed with a physical robot in the context of handle grasping, object grasping, and object pulling. The experimental results suggest that execution models contribute towards avoiding execution failures, but also represent a first step towards more introspective robots that are able to analyse some of their execution failures in an explicit manner.