Departments, institutes and facilities
- Fachbereich Wirtschaftswissenschaften (92)
- Fachbereich Angewandte Naturwissenschaften (62)
- Fachbereich Informatik (55)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (41)
- Fachbereich Sozialpolitik und Soziale Sicherung (33)
- Fachbereich Ingenieurwissenschaften und Kommunikation (32)
- Institut für Verbraucherinformatik (IVI) (28)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (25)
- Präsidium (25)
- Institute of Visual Computing (IVC) (22)
Document Type
- Article (110)
- Part of a Book (94)
- Conference Object (93)
- Part of Periodical (26)
- Book (monograph, edited volume) (20)
- Report (13)
- Contribution to a Periodical (9)
- Working Paper (8)
- Doctoral Thesis (4)
- Preprint (3)
Year of publication
- 2018 (389)
Keywords
- Digitalisierung (5)
- ICT (5)
- Lehrbuch (4)
- Betriebswirtschaftslehre (3)
- Dementia (3)
- FPGA (3)
- Qualitätsmanagement (3)
- User Experience (3)
- drug release (3)
- lignin (3)
With the aim of advancing quality in medical rehabilitation, 13 clinics run by eight providers joined forces in 2007. Today, a good decade later, the Qualitätsverbund Gesundheit unites around 30 rehabilitation clinics from eleven providers, among them municipal and church institutions, private companies and the rehabilitation centres of the Deutsche Rentenversicherung Baden-Württemberg.
Pyrolysis–Gas Chromatography
(2018)
The methodology of analytical pyrolysis-GC/MS has been known for several years, but is seldom used in research laboratories or in process control in the chemical industry. This is due to the relative difficulty of interpreting the identified pyrolysis products, as well as their sheer variety. This book contains full identifications of several classes of polymers/copolymers and biopolymers, which can be very helpful to the user. In addition, the practical applications can encourage analytical chemists and engineers to use the techniques explored in this volume.
The structure and the functions of various types of pyrolyzers and the results of the pyrolysis–gas chromatographic–mass spectrometric identification of synthetic polymers/copolymers and biopolymers at 700°C are described. Practical applications of these techniques are also included, detailing the analysis of microplastics, failure analysis in the automotive industry and solutions for technological problems.
Surrogate-assistance approaches have long been used in computationally expensive domains to improve the data-efficiency of optimization algorithms. Neuroevolution, however, has so far resisted the application of these techniques because it requires the surrogate model to make fitness predictions based on variable topologies instead of a vector of parameters. Our main insight is that we can sidestep this problem by using kernel-based surrogate models, which require only the definition of a distance measure between individuals. Our second insight is that the well-established Neuroevolution of Augmenting Topologies (NEAT) algorithm provides a computationally efficient distance measure between dissimilar networks in the form of "compatibility distance", originally designed to maintain topological diversity. Combining these two ideas, we introduce a surrogate-assisted neuroevolution algorithm that couples NEAT with a surrogate model built on a compatibility-distance kernel. We demonstrate the data-efficiency of this new algorithm on the low-dimensional cart-pole swing-up problem as well as on the higher-dimensional half-cheetah running task. In both tasks the surrogate-assisted variant achieves the same or better results with several times fewer function evaluations than the original NEAT.
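A minimal sketch of the two ingredients, distance and kernel, might look as follows. The genome encoding, the coefficient values and the use of a Nadaraya-Watson estimator as the surrogate are assumptions made for this illustration, not details taken from the paper:

```python
# Minimal sketch of a kernel surrogate over NEAT's compatibility
# distance. The genome encoding (dict of innovation number -> weight),
# the coefficients and the Nadaraya-Watson estimator are illustrative
# assumptions, not the paper's implementation.
import math

C1, C2, C3 = 1.0, 1.0, 0.4  # weights for excess, disjoint, weight-diff terms

def compatibility_distance(g1, g2):
    """NEAT compatibility distance: d = c1*E/N + c2*D/N + c3*W_bar."""
    innov1, innov2 = set(g1), set(g2)
    matching = innov1 & innov2
    cutoff = min(max(innov1), max(innov2))
    non_matching = innov1 ^ innov2
    excess = sum(1 for i in non_matching if i > cutoff)   # E
    disjoint = len(non_matching) - excess                 # D
    w_bar = (sum(abs(g1[i] - g2[i]) for i in matching) / len(matching)
             if matching else 0.0)                        # mean weight diff
    n = max(len(g1), len(g2))                             # N
    return C1 * excess / n + C2 * disjoint / n + C3 * w_bar

def surrogate_fitness(genome, archive, sigma=1.0):
    """Predict fitness as an RBF-kernel-weighted average over genomes
    whose true fitness has already been evaluated."""
    weights = [math.exp(-compatibility_distance(genome, g) ** 2
                        / (2 * sigma ** 2)) for g, _ in archive]
    total = sum(weights)
    if total == 0.0:
        return None  # too far from all evaluated genomes: evaluate for real
    return sum(w * f for w, (_, f) in zip(weights, archive)) / total

# archive holds (genome, true_fitness) pairs from real evaluations
archive = [({1: 0.5, 2: -0.3}, 10.0), ({1: 0.4, 3: 0.8}, 12.5)]
print(surrogate_fitness({1: 0.45, 2: -0.1}, archive))
```

Under these assumptions, genomes close in compatibility distance dominate the weighted average, so the surrogate interpolates fitness locally in topology space and real evaluations are only needed for genomes far from the archive.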
Improving the Performance of Parallel SpMV Operations on NUMA Systems with Adaptive Load Balancing
(2018)
For a parallel Sparse Matrix Vector Multiply (SpMV) on a multiprocessor, rather simple and efficient work distributions often produce good results. In cases where this is not true, adaptive load balancing can improve both balance and performance. This paper introduces a low-overhead framework for adaptive load balancing of parallel SpMV operations. It uses statistical filters to gather relevant runtime performance data and to detect imbalance situations. Three different algorithms that adaptively balance the load with high quality and low overhead were compared. Results show that for sparse matrices where adaptive load balancing was enabled, our best algorithm achieved an average speedup of 1.15 (in total execution time) across four different matrix formats and two different NUMA systems.
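As a rough, hypothetical sketch of the rebalancing idea (the paper's framework additionally uses statistical filters and NUMA-aware placement; all names below are invented for the example), partition boundaries can be re-drawn in proportion to measured per-partition times:

```python
# Toy sketch of cost-based row rebalancing for a parallel CSR SpMV.
import numpy as np

def spmv_partition(indptr, indices, data, x, y, rows):
    """CSR SpMV restricted to a contiguous row range (one thread's share)."""
    for i in rows:
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]

def rebalance(n_rows, parts, times):
    """Re-draw partition boundaries so each partition gets an equal share
    of the measured cost; each partition's time is spread evenly over its
    rows as a per-row cost estimate."""
    row_cost = np.empty(n_rows)
    for p in range(len(parts) - 1):
        lo, hi = parts[p], parts[p + 1]
        row_cost[lo:hi] = times[p] / max(hi - lo, 1)
    cum = np.concatenate(([0.0], np.cumsum(row_cost)))
    targets = np.linspace(0.0, cum[-1], len(parts))
    new_parts = np.searchsorted(cum, targets)
    new_parts[0], new_parts[-1] = 0, n_rows
    return new_parts

# Toy demonstration: four partitions, the first measured far slower.
parts = np.array([0, 250, 500, 750, 1000])
times = np.array([0.90, 0.05, 0.03, 0.02])     # seconds, per partition
print(rebalance(1000, parts, times))           # the slow first block shrinks
```

In an actual SpMV loop one would time each partition, feed the measurements through a smoothing filter, and only trigger the rebalance when a genuine imbalance is detected, to keep the overhead low.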
Background: Local injection of autologous conditioned serum (ACS) is a well-known therapy for inflammatory diseases (IDs). While the patient's blood is incubated to generate ACS (with subsequent centrifugation), immune cells produce large amounts of growth factors and cytokines. These include, amongst others, interleukin-1 receptor antagonist (IL-1ra), interleukins 6 and 10, tumour necrosis factor alpha (TNF-α) and transforming growth factor beta 1 (TGF-β1). The aim of this study was to analyse exosome release into ACS as well as the exosomes' cytokine cargo.
Softwarenutzung im Umbruch: Von der Software-Lizenz zum Cloudbasierten Business Process Outsourcing
(2018)
Major progress occurred in understanding inborn errors of ketone body transport and metabolism between the International Congresses on Inborn Errors of Metabolism in Barcelona (2013) and Rio de Janeiro (2017). These conditions impair either ketogenesis (mHMGCS, HMGCS2; presenting as episodes of hypoketotic hypoglycemia) or ketolysis (presenting as ketoacidotic episodes); for both groups, immediate intravenous glucose administration is the most critical and effective treatment measure.
The Fragile X Syndrome (FXS) is one of the most common forms of inherited intellectual disability in all human societies. Caused by the transcriptional silencing of a single gene, the fragile X mental retardation gene FMR1, FXS is characterized by a variety of symptoms, which range from mental disabilities to autism and epilepsy. More than 20 years ago, a first animal model was described, the Fmr1 knock-out mouse. Several other models have been developed since then, including conditional knock-out mice, knock-out rats, a zebrafish model and a Drosophila model. Using these model systems, various targets for potential pharmaceutical treatments have been identified, and many treatments have proven efficient in preclinical studies. However, all attempts to turn these findings into a therapy for patients have failed thus far. In this review, I will discuss the underlying difficulties and address potential alternatives for our future research.
Estimation of Prediction Uncertainty for Semantic Scene Labeling Using Bayesian Approximation
(2018)
With the advancement of technology, autonomous and assisted driving are close to becoming reality. A key component of such systems is the understanding of the surrounding environment, which can be attained by performing semantic labeling of the driving scenes. Deep-learning-based models developed over the years outperform classical image processing algorithms for the task of semantic labeling. However, the existing models only produce semantic predictions and do not provide a measure of uncertainty about those predictions. Hence, this work focuses on developing a deep-learning-based semantic labeling model that produces both semantic predictions and their corresponding uncertainties. Autonomous driving needs a model that operates in real time; however, the Full Resolution Residual Network (FRRN) [4], found to be the best-performing architecture during the literature search, cannot satisfy this condition. Hence, a smaller network similar to FRRN has been developed and used in this work. Based on the work of [13], the developed network is then extended with dropout layers, and dropout is kept active during testing to perform approximate Bayesian inference. Existing work on uncertainty estimation lacks quantitative metrics for evaluating the quality of the uncertainties a model estimates. Hence, the area under the curve (AUC) of the receiver operating characteristic (ROC) is proposed and used as an evaluation metric in this work. Further, a comparative analysis of the influence of dropout layer position, drop probability and number of samples on the quality of uncertainty estimation is performed. Finally, based on the insights gained from this analysis, a model with an optimal dropout configuration is developed. It is evaluated on the Cityscapes dataset and shown to outperform the baseline model, achieving an AUC-ROC of about 90% versus about 80% for the baseline.
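The core inference step described above, dropout kept active at test time with multiple stochastic forward passes, can be sketched as follows. The tiny network is a stand-in for the FRRN-like model, and predictive entropy is one common uncertainty measure, not necessarily the thesis's exact formulation:

```python
# Sketch of Monte Carlo dropout inference for per-pixel uncertainty,
# in the spirit of [13]. The toy network stands in for the FRRN-like
# model described in the abstract.
import torch
import torch.nn as nn

model = nn.Sequential(                    # toy segmentation head, 5 classes
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                  # sampled at test time as well
    nn.Conv2d(16, 5, 1),
)

def mc_dropout_predict(model, image, n_samples=20):
    """Average the softmax over stochastic forward passes; return the
    per-pixel label and the predictive entropy as its uncertainty."""
    model.train()   # keeps dropout active (in a full model, switch only
                    # the dropout modules, to leave batch norm untouched)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(n_samples)]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs.argmax(dim=1), entropy

labels, uncertainty = mc_dropout_predict(model, torch.rand(1, 3, 64, 64))
```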
Scientific and statistical research has long been the domain of dedicated programming languages such as R, SPSS or SAS. A few years ago, other competitors entered the arena, among them Python with its powerful SciPy package. The following article introduces SciPy by applying a small subset of its functionality to a well-known dataset.
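As a taste of what the article covers, here is a small, self-contained SciPy example; it uses synthetic data rather than the article's dataset:

```python
# A small taste of scipy.stats: descriptive statistics, a two-sample
# t-test and a correlation. Synthetic data stands in for the article's
# dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=5.0, scale=1.2, size=100)   # measurements, group A
b = rng.normal(loc=5.4, scale=1.2, size=100)   # measurements, group B

print(stats.describe(a))                 # n, min/max, mean, variance, ...
t, p = stats.ttest_ind(a, b)             # two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")
r, p_r = stats.pearsonr(a, b)            # correlation between the groups
print(f"Pearson r = {r:.2f} (p = {p_r:.4f})")
```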
Text is one of the key sources of information for the social sciences and humanities and, with the rise of computational technologies, has become mostly available via digital libraries, archives and websites. This enables researchers to deal with increasingly large-scale text corpora that require advanced software tools to process them and extract information. Computational linguistics - a discipline that has emerged on the border of computer science, linguistics and statistics - has achieved notable results in automated text analysis and information extraction: tools for part-of-speech tagging, grammar parsing, semantic role labelling, sentiment analysis and anaphora resolution have been developed and successfully used in many scientific projects. However, a gap still exists between the available technology and the needs of the social sciences: named entity recognizers are incapable of identifying actors; sentiment analysis provides only the overall mood of an expression but cannot identify how the utterer evaluates the information; and topic modeling tools can only assign a topic to a document, falling short of measuring its frame.
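To make the contrast concrete, the following sketch shows what the off-the-shelf tools deliver, using NLTK as one representative toolkit (an assumption for this example; the text names no particular tool): part-of-speech tags per token, and a single overall sentiment score with no notion of who evaluates what.

```python
# Off-the-shelf NLP capabilities and their limits, with NLTK as a
# stand-in: POS tags per token, plus one overall sentiment score that
# flags the sentence as negative but not whose evaluation it is.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

for pkg in ("punkt", "averaged_perceptron_tagger", "vader_lexicon"):
    nltk.download(pkg, quiet=True)        # fetch models on first run

sentence = "The minister sharply criticized the new policy."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))     # [('The', 'DT'), ('minister', 'NN'), ...]

print(SentimentIntensityAnalyzer().polarity_scores(sentence))
# one aggregate score: {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```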
Designing an efficient and safe aircraft requires many disciplinary aspects to be taken into account. Aerodynamics, structural mechanics and flight mechanics all play an important role and depend on one another. An iterative design process is therefore required to find the compromise that best fits the requirements. For this purpose, research at the Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR, the German Aerospace Center) develops automated process chains that serve to evaluate and develop new aircraft concepts.
Variable stars are stars that vary in certain measured parameters; in our case, this parameter is the stars' brightness. Fundamentally, there are two kinds of variability: intrinsic and extrinsic processes. Intrinsic processes are those whose cause lies within the star itself.
Science Track FrOSCon 2016
(2018)
In 2015, the Free and Open Source Software Conference celebrated its tenth anniversary. Born from an idea of students, research staff and professors of the Department of Computer Science, it has grown into one of the most important conferences for free and open-source software in Germany.
This technical report is the final report of a cooperation project, commissioned by the Ministry for the Environment, Agriculture, Nature and Consumer Protection, between the LANUV and the Internationales Zentrum für Nachhaltige Entwicklung (Hochschule Bonn-Rhein-Sieg). It investigated the quantities of, and the reasons for, food losses in fruit, vegetables and potatoes in the winter of 2016/2017, and developed avoidance strategies.
This compact reference book gives an overview of the possibilities of "big data" in healthcare and describes potential fields of application based on selected scenarios. The authors explain central system components and IT standards and, drawing on important healthcare data, address the need for structuring and modelling data. The book gives guidance on how business processes in healthcare can be documented, analysed and improved. Application scenarios such as data analyses for hospitals, laboratories, insurers and the pharmaceutical industry demonstrate the practical relevance of the topic. Legal and ethical aspects are also touched upon. A book for decision-makers in the medical management and administration of hospitals, specialists, physicians in private practice and pharmacists, as well as students and trainees in healthcare.