Refine
Departments, institutes and facilities
- Fachbereich Informatik (89)
- Fachbereich Wirtschaftswissenschaften (75)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (54)
- Fachbereich Angewandte Naturwissenschaften (53)
- Fachbereich Ingenieurwissenschaften und Kommunikation (36)
- Institut für Cyber Security & Privacy (ICSP) (31)
- Präsidium (30)
- Institute of Visual Computing (IVC) (21)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (21)
- Fachbereich Sozialpolitik und Soziale Sicherung (20)
Document Type
- Conference Object (135)
- Article (105)
- Part of a Book (33)
- Part of Periodical (26)
- Book (monograph, edited volume) (23)
- Report (12)
- Patent (11)
- Working Paper (10)
- Contribution to a Periodical (9)
- Bachelor Thesis (6)
Year of publication
- 2017 (395)
Keywords
- Entrepreneurship (5)
- Nachhaltigkeit (5)
- Controlling (3)
- GC/MS (3)
- Lehrbuch (3)
- stem cells (3)
- Aerodynamics (2)
- Analytical pyrolysis (2)
- Approximated Jacobian (2)
- Biomineralization (2)
In order to help journalists investigate inside large audiovisual archives, as maintained by news broadcast agencies, the multimedia data must be indexed by text-based search engines. By automatically creating a transcript through automatic speech recognition (ASR), the spoken word becomes accessible to text search, and queries for keywords are made possible. But still, important contextual information such as the identity of the speaker is not captured. Especially when gathering original footage in the political domain, the identity of the speaker can be the most important query constraint, although this name may not be prominent in the words spoken. It is thus desirable to provide this information explicitly to the search engine. To do so, the archive must be analyzed by automatic speaker identification (SID). While this research topic has seen substantial gains in accuracy and robustness over recent years, it has not yet established itself as a helpful, large-scale tool outside the research community. This thesis sets out to establish a workflow that provides automatic speaker identification. Its application is to help journalists search speeches given in the German parliament (Bundestag). This is a contribution to the News-Stream 3.0 project, a BMBF-funded research project that addresses the accessibility of various data sources for journalists.
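The matching step of such a SID workflow can be sketched as follows. This is a generic illustration, not the pipeline from the thesis: it assumes each utterance has already been reduced to a fixed-length speaker embedding (e.g., an i-vector or x-vector), and the enrolled names, toy vectors and threshold are made up.

```python
import numpy as np

def identify_speaker(utterance_emb, enrolled, threshold=0.6):
    """Return (best-matching enrolled speaker, score), or (None, score)
    if the best cosine similarity falls below the decision threshold."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(utterance_emb, emb) for name, emb in enrolled.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy enrollment: unit vectors standing in for real speaker embeddings
enrolled = {
    "Speaker A": np.array([1.0, 0.0, 0.0]),
    "Speaker B": np.array([0.0, 1.0, 0.0]),
}
name, score = identify_speaker(np.array([0.9, 0.1, 0.0]), enrolled)
```

In a real archive the enrollment embeddings would be averaged over several known speeches per politician, and the threshold tuned on held-out data to trade off false accepts against misses.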
This thesis develops a method for segmenting outdoor scenes and classifying terrain. For this purpose, 360-degree laser scans of roads, building facades and forest paths are recorded. From these recordings, various 2D visual representations are created, using the distance information and angular transitions of the polar coordinates, the remission values and the normal vector. The normal vector is computed with a modern, low-runtime method. Surface properties within a point cloud are then analyzed and four classes are distinguished: ground, vegetation, obstacle and sky. Segmentation and classification happen in a single step: the variance of the normals is computed over a filter mask and a descriptor is built. The descriptor contains the normal vectors and the normal variance for the x-, y- and z-axes. The results are displayed as an overlay on the remission image. The evaluation is carried out on purpose-built ground-truth data: the remission image is used and the ground truth is drawn in with different colors. The classification results are presented in precision-recall diagrams.
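The core of the descriptor, the per-axis variance of normal vectors over a filter mask, can be sketched as below. This is a minimal illustration of the idea, not the thesis implementation; the window size and the brute-force loop are assumptions for clarity.

```python
import numpy as np

def normal_variance(normals, k=3):
    """Per-pixel variance of the normal components over a k x k window.

    normals: H x W x 3 array of unit normals derived from a 2D range image
    returns: H x W x 3 array of per-axis variances (x, y, z)
    """
    H, W, _ = normals.shape
    pad = k // 2
    padded = np.pad(normals, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(normals)
    for i in range(H):
        for j in range(W):
            win = padded[i:i + k, j:j + k].reshape(-1, 3)
            out[i, j] = win.var(axis=0)  # low on flat ground, high on vegetation
    return out

# Flat ground (constant normals) should yield zero variance everywhere
flat = np.tile(np.array([0.0, 0.0, 1.0]), (5, 5, 1))
var_flat = normal_variance(flat)
```

Flat surfaces such as roads produce near-zero variance, while rough structures such as vegetation scatter the normals and raise it, which is what makes the variance a useful classification feature.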
Selbstlokalisation eines Mikroflugsystems mit Laserscannern zur 3D-Kartierung von Innenräumen
(2017)
The bachelor thesis "Selbstlokalisation eines Mikroflugsystems mit Laserscannern zur 3D-Kartierung von Innenräumen" presents a method that can determine the position of a flying robot in all six degrees of freedom. Using two laser scanners, three-dimensional maps of the environment are created simultaneously. Hector SLAM is used to determine the robot's motion in the horizontal plane. To additionally obtain height information, two different methods are implemented: the first measures the height directly with a laser scanner, the second uses Hector SLAM to derive the height information from the vertical scan plane.
Wissen für die Wirtschaft
(2017)
Bringing science and business together and jointly developing something new has been part of the founding mission of Hochschule Bonn-Rhein-Sieg. The brochure presented here is intended above all as an inspiration: it shows the successes that grow out of collaboration and makes it easier for companies to take the first step towards a cooperation with H-BRS.
OpenDaylight (ODL) is a collaborative, open-source platform to accelerate the adoption of and innovation in Software Defined Networking (SDN) and Network Functions Virtualization (NFV). This paper describes the novel ODL architecture in a simplified way to provide a better understanding of it. The ODL architecture intends to foster innovation and accelerate the adoption of network programming. The Model-Driven Service Abstraction Layer (MD-SAL) at the heart of the architecture makes it possible to develop models for automatic management and configuration of networks. MD-SAL gives ODL the ability to support any protocol talking to the network elements, as well as any network application. The flexibility inherent in the ODL architecture could enable ODL to shape next-generation networks.
The universities of the "Hochschulallianz für den Mittelstand" chose this name quite deliberately: we want to work together on behalf of small and medium-sized enterprises (SMEs) in Germany. In many regions, universities of applied sciences are the most important training, research and development partner for SME businesses. And yet the current innovation indicator of the BDI notes that there are still too many reservations between scientists and SME managers. Unfortunately, little has changed in this regard for decades.
Exosomes derived from human autologous conditioned serum are nanocarriers for IL-6 and TNF-alpha
(2017)
The present invention relates to an analysis system and a library-independent analysis method for the qualitative detection and classification of energetic materials, in particular for the detection of explosives and blasting agents as well as complex material compositions used in IEDs (improvised explosive devices).
Solid-Phase Microextraction (SPME) is a very simple and efficient, solventless sample preparation method, invented by Pawliszyn and coworkers at the University of Waterloo (Canada) in 1989. This method has been widely used in different fields of analytical chemistry since its first applications to environmental and food analysis. SPME integrates sampling, extraction, concentration and sample introduction into a single solvent-free step. The method saves preparation time, reduces disposal costs and can improve detection limits. It has been routinely used in combination with gas chromatography (GC) and gas chromatography/mass spectrometry (GC/MS) and successfully applied to a wide variety of compounds, especially for the extraction of volatile and semi-volatile organic compounds from environmental, biological and food samples.
For the last twenty years, SPME in headspace (HS) mode has been used as a valuable sample preparation technique for identifying degradation products in polymers and for determining residual monomers and other low-boiling substances in polymeric materials. For more than ten years, our laboratory has been involved in projects focused on the application of HS-SPME-GC/MS for the characterization of polymeric materials from many branches of the manufacturing and building industries. This book chapter describes application examples of this technique for identifying volatile organic compounds (VOCs), additives and degradation products in industrial plastics, rubber, and packaging materials.
Elektronik für Entscheider
(2017)
This book gives non-engineers who deal with electronics professionally the opportunity to step into this field and to understand the tasks, language and approach of engineers. The goal is not to be able to design an electronic circuit after reading this book; the focus is rather on a general understanding of the interrelations and basic concepts of electronics.
Emotional communication is a key element of habilitation care of persons with dementia. It is, therefore, highly preferable for assistive robots that are used to supplement human care provided to persons with dementia to possess the ability to recognize and respond to emotions expressed by those being cared for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly.
Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces. Facial expressions on older faces have been largely excluded. This is due to the fact that the facial expression databases that were available and that have been used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely, the FACES database, which was developed to address exactly the same problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with almost identical and systematic representation of the young, middle-aged and older age-groups.
In this work, we evaluate and compare the performance of two of the existing image-based approaches for facial expression recognition, over a broad spectrum of age ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We have experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotions. We also identified that, among the three age-groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to the performance of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing the expressions on older faces, and difficulty in recognizing the expression of sadness.
The findings of our work establish the need for developing approaches for facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
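One of the feature extractors named above, the uniform LBP histogram, can be sketched in a few lines. This is a textbook 8-neighbour, radius-1 variant for illustration, not the exact implementation evaluated in the thesis; window partitioning and the Gabor pipeline are omitted.

```python
import numpy as np

def lbp_uniform_histogram(img):
    """Histogram of uniform local binary patterns (8 neighbours, radius 1).

    A pattern is 'uniform' if its circular bit string has at most two
    0-1 transitions; all non-uniform patterns share one bin (59 bins total).
    """
    def transitions(p):
        bits = [(p >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [p for p in range(256) if transitions(p) <= 2]   # 58 patterns
    bin_of = {p: i for i, p in enumerate(uniform)}
    nonuniform_bin = len(uniform)                              # 59th bin

    H, W = img.shape
    hist = np.zeros(len(uniform) + 1, dtype=int)
    # 8 neighbours in circular order (dy, dx)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            c = img[y, x]
            code = 0
            for b, (dy, dx) in enumerate(offsets):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << b
            hist[bin_of.get(code, nonuniform_bin)] += 1
    return hist

# A constant image gives the all-ones (uniform) pattern at every interior pixel
hist = lbp_uniform_histogram(np.full((5, 5), 7, dtype=np.uint8))
```

The resulting 59-bin histograms, typically computed per image region and concatenated, serve as the feature vector fed to the boosted classifier.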
Population ageing and the growing prevalence of disability have resulted in a growing need for personal care and assistance. The insufficient supply of personal care workers and the rising costs of long-term care have turned this phenomenon into a greater social concern. This has resulted in a growing interest in assistive technology in general, and assistive robots in particular, as a means of substituting or supplementing the care provided by humans, and as a means of increasing the independence and overall quality of life of persons with special needs. Although many assistive robots have been developed in research labs world-wide, very few are commercially available. One of the reasons for this is cost. One way of optimising cost is to develop solutions that address specific needs of users. As a precursor to this, it is important to identify gaps between what the users need and what the technology (assistive robots) currently provides. This information is obtained through technology mapping.
The current literature lacks a mapping between user needs and assistive robots, at the level of individual systems. The user needs are not expressed in uniform terminology across studies, which makes comparison of results difficult. In this research work, we have illustrated the technology mapping of assistive robots using the International Classification of Functioning, Disability and Health (ICF). ICF provides standard terminology for expressing user needs in detail. Expressing the assistive functions of robots also in ICF terminology facilitates communication between different stakeholders (rehabilitation professionals, robotics researchers, etc.).
We also investigated existing taxonomies for assistive robots. It was observed that there is no widely accepted taxonomy for classifying assistive robots. However, there exists an international standard, ISO 9999, which classifies commercially available assistive products. The applicability of the latest revision of ISO 9999 standard for classifying mobility assistance robots has been studied. A partial classification of assistive robots based on ISO 9999 is suggested. The taxonomy and technology mapping are illustrated with the help of four robots that have the potential to provide mobility assistance. These are the SmartCane, the SmartWalker, MAid and Care-O-bot (R) 3. SmartCane, SmartWalker and MAid provide assistance by supporting physical movement. Care-O-bot (R) 3 provides assistance by reducing the need to move.
Recent work in image captioning and scene segmentation has shown significant results in the context of scene understanding. However, most of these developments have not been extrapolated to research areas such as robotics. In this work we review the current state-of-the-art models, datasets and metrics in image captioning and scene segmentation. We introduce an anomaly detection dataset for the purpose of robotic applications, and we present a deep learning architecture that describes and classifies anomalous situations. We report a METEOR score of 16.2 and a classification accuracy of 97%.
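The METEOR metric reported above is, at its core, a weighted harmonic mean of unigram precision and recall between a candidate caption and a reference. A stripped-down sketch of that core is shown here; full METEOR additionally uses stemming, synonym matching and a chunk-fragmentation penalty, all omitted in this illustration, and the example sentences are invented.

```python
from collections import Counter

def unigram_fmean(candidate, reference, alpha=0.9):
    """Simplified METEOR-style score: weighted harmonic mean of unigram
    precision and recall (no stemming, synonyms or chunk penalty)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    matches = sum((Counter(cand) & Counter(ref)).values())  # clipped overlap
    if matches == 0:
        return 0.0
    precision = matches / len(cand)
    recall = matches / len(ref)
    # METEOR weights recall heavily (alpha close to 1)
    return precision * recall / (alpha * precision + (1 - alpha) * recall)

score = unigram_fmean("a robot detects an anomaly",
                      "the robot detects an anomaly")
```

With four of five unigrams matching in both directions, precision and recall are both 0.8, so the harmonic mean is 0.8 regardless of alpha.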
Smart home systems change the way we experience the home. While there are established research fields within HCI for visualizing specific use cases of a smart home, studies targeting user demands on visualizations spanning across multiple use cases are rare. Especially, individual data-related demands pose a challenge for usable visualizations. To investigate potentials of an end-user development (EUD) approach for flexibly supporting such demands, we developed a smart home system featuring both pre-defined visualizations and a visualization creation tool. To evaluate our concept, we installed our prototype in 12 households as part of a Living Lab study. Results are based on three interview studies, a design workshop and system log data. We identified eight overarching interests in home data and show how participants used pre-defined visualizations to get an overview and the creation tool to not only address specific use cases but also to answer questions by creating temporary visualizations.
Smart home systems are becoming an integral feature of the emerging home IT market. Under this general term, products mainly address issues of security, energy savings and comfort. Comprehensive systems that cover several use cases are typically operated and managed via a unified dashboard. Unfortunately, research targeting user experience (UX) design for smart home interaction that spans several use cases or covers the entire system is scarce. Furthermore, existing comprehensive and user-centered long-term studies on challenges and needs throughout the phases of information collection, installation and operation of smart home systems are technologically outdated. Our 18-month Living Lab study covering 14 households equipped with smart home technology provides insights on how to design for improving smart home appropriation. This includes a stronger sensibility for household practices during setup and configuration, flexible visualizations for evolving demands and an extension of smart home beyond the location.
Von Spechten, Regentropfen und Herzschlägen: Vergleichende Frequenzanalyse periodischer Signale
(2017)
Almost every introduction to popular-science publications on heart-rate variability begins with the quotation attributed to Wang Shu-He: "If the heartbeat becomes as regular as the tapping of a woodpecker or the dripping of rain on the roof, the patient will die within four days." Despite the frequent use of this quotation, no comparative analyses of heart-rate variability, woodpecker drumming and raindrops are available. This prompted the measurements presented here and the attempt, through suitable processing, to compare the recordings obtained with the human heartbeat and to relate them to the quotation.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique to 'illuminate' the problem space through the lens of chosen features has the potential to be a powerful tool for exploring design spaces, but is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination (SAIL) algorithm, introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high-performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
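The MAP-Elites loop that SAIL builds on can be sketched in a few lines. This is a generic, minimal 1-D-feature variant with an invented toy objective, not the SAIL algorithm itself (the surrogate models and acquisition sampling are omitted); all parameter values are illustrative assumptions.

```python
import random

def map_elites(evaluate, feature, random_solution, mutate,
               n_bins=10, n_init=50, n_iters=500, seed=0):
    """Minimal MAP-Elites over a 1-D feature space discretised into n_bins.

    evaluate: solution -> fitness (higher is better)
    feature:  solution -> value in [0, 1)
    """
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution); one elite per bin

    def try_insert(x):
        b = min(int(feature(x) * n_bins), n_bins - 1)
        f = evaluate(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)  # keep only the best solution per bin

    for _ in range(n_init):                       # random initialisation
        try_insert(random_solution(rng))
    for _ in range(n_iters):                      # mutate a random elite
        _, parent = archive[rng.choice(list(archive))]
        try_insert(mutate(parent, rng))
    return archive

# Toy problem: maximise -(x - 0.5)^2 while the feature dimension is x itself
archive = map_elites(
    evaluate=lambda x: -(x - 0.5) ** 2,
    feature=lambda x: min(max(x, 0.0), 0.999),
    random_solution=lambda rng: rng.random(),
    mutate=lambda x, rng: x + rng.gauss(0, 0.1),
)
```

Each bin ends up holding its own locally best solution, illuminating how fitness varies across the feature dimension; SAIL replaces the expensive `evaluate` calls with a surrogate model to cut the evaluation budget.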