H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (69)
- Fachbereich Angewandte Naturwissenschaften (68)
- Fachbereich Wirtschaftswissenschaften (64)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (50)
- Fachbereich Ingenieurwissenschaften und Kommunikation (43)
- Fachbereich Sozialpolitik und Soziale Sicherung (33)
- Institut für Verbraucherinformatik (IVI) (18)
- Institute of Visual Computing (IVC) (18)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (13)
- Präsidium (13)
Document Type
- Article (105)
- Conference Object (75)
- Part of a Book (32)
- Book (monograph, edited volume) (20)
- Part of Periodical (17)
- Report (15)
- Preprint (10)
- Doctoral Thesis (8)
- Master's Thesis (6)
- Working Paper (3)
Year of publication
- 2019 (300)
Keywords
- Lehrbuch (4)
- lignin (4)
- Lignin (3)
- Navigation (3)
- work engagement (3)
- Aminoacylase (2)
- Chemie (2)
- Design (2)
- Drosophila (2)
- Exergame (2)
Zweite Ordnung über die Änderung der Grundordnung der Hochschule Bonn-Rhein-Sieg vom 18.06.2015
(2019)
The Potential of Sustainable Antimicrobial Additives for Food Packaging from Native Plants in Benin
(2019)
The media is considered to be the fourth pillar in a democratic country. It acts as an effective control mechanism to check the other branches of the government. But this is only consequential when the media functions in an independent and transparent fashion, with trained and neutral professionals who are aware of the accountability and consequences of their work. All these factors together further the country as a democratic institution. Traditionally, legacy media was responsible for a one-to-many communication process. Its goal was to provide information to the citizens. But this changed with developments in technology and the use of social media in daily life. The internet brought with it new media formats which are easily accessible but also unstructured. These lowered barriers of entry into the media enabled citizens to become active participants in the communication process. As a result, these citizens developed a different relationship with the already existing media, in which they were not only the receivers of information but also co-producers. Real-time information allows users to communicate with each other and in turn widely generate public opinion on internet platforms. A many-to-many communication style emerged. While, on the one hand, this type of discourse can be an opportunity for citizens to exercise their fundamental freedom of speech and expression, on the other hand it is proving to have a detrimental effect in two parts: a lack of neutrality, polarized views and pre-existing misconceptions on the part of citizens, as well as algorithms and the formation of echo chambers on the part of technology. Some questions arise in this scenario about the capability of citizen journalists, the duties they should adhere to along with the enjoyment of their rights and freedoms, the risks involved in an unchecked method of communication, and the effect of citizen journalism on the democratic process.
Background & Objective: Due to the policy goals for sustainable energy production, renewable energy plants such as photovoltaics are increasingly in use. The energy production from solar radiation depends strongly on atmospheric conditions. As the weather changes constantly, electrical power generation fluctuates, making technical planning and control of power grids a complex problem. Due to the materials used (semiconductors, e.g. silicon, gallium arsenide, cadmium telluride), photovoltaic cells are spectrally selective. This means that only radiation of certain wavelengths is converted into electrical energy. A material property called spectral response characterizes the degree of conversion of solar radiation into electric current at each wavelength of solar light.
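The spectral-response idea described above reduces to a weighted integral over wavelength. The following is a minimal illustrative sketch of that arithmetic; all names and numbers are hypothetical and not taken from the study:

```python
import numpy as np

# Hypothetical illustration: the cell's photocurrent density follows from
# integrating spectral response SR(lambda) [A/W] times spectral irradiance
# E(lambda) [W/(m^2 nm)] over wavelength.
def photocurrent_density(wavelengths_nm, spectral_response, irradiance):
    """Trapezoidal integration of SR * E over wavelength -> A/m^2."""
    y = spectral_response * irradiance
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(wavelengths_nm)))

# Toy data: a flat 1 W/(m^2 nm) spectrum over 400-1100 nm and a constant
# response of 0.5 A/W integrate to 0.5 * 700 = 350 A/m^2.
wavelengths = np.linspace(400.0, 1100.0, 701)
current = photocurrent_density(
    wavelengths,
    np.full_like(wavelengths, 0.5),
    np.ones_like(wavelengths),
)
```

A wavelength-dependent response curve would simply replace the constant array; the integral weights each wavelength by how efficiently the cell material converts it.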
Erfassung der N-Dynamik verschiedener Wirtschaftsdünger in Wintergerste und Senf im Dürrejahr 2018
(2019)
Energy Profiles of the Ring Puckering of Cyclopentane, Methylcyclopentane and Ethylcyclopentane
(2019)
BonaRes (Modul A): Überwindung der Bodenmüdigkeit mithilfe eines integrierten Ansatzes - ORDIAmur
(2019)
The German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) conducts extensive research and studies in the field of aeronautics and space. Studies on health and medicine also play a very important role at DLR. To this end, DLR is conducting the Artificial Gravity Bed Rest Study (AGBRESA) on behalf of the European Space Agency (ESA) and in cooperation with NASA. This study simulates the negative effects of weightlessness on humans in space, and experiments are carried out to counteract these negative effects. The results of the experiments are documented at DLR digitally, but also on paper. In this master's project, my task is to replace the paper protocols for blood sampling and laboratory documentation with a digital form.
Incoming solar radiation is an important driver of our climate and weather. Several studies (see for instance Frank et al. 2018) have revealed discrepancies between ground-based irradiance measurements and the predictions of regional weather models. In the realm of electricity generation, accurate forecasts of solar photovoltaic (PV) energy yield are becoming indispensable for cost-effective grid operation: in Germany there are 1.6 million PV systems installed, with a nominal power of 46 GW (Bundesverband Solarwirtschaft 2019). The proliferation of PV systems provides a unique opportunity to characterise global irradiance with unprecedented spatiotemporal resolution, which in turn will allow for highly resolved PV power forecasts.
Renewable energies play an increasingly important role for energy production in Europe. Unlike coal or gas power plants, solar energy production is highly variable in space and time. This is due to the strong variability of clouds and their influence on the surface solar irradiance. Especially in regions with a large contribution from photovoltaic power production, the intermittent energy feed-in to the power grid can be a risk for grid stability. Therefore, good forecasts of the temporal and spatial variability of surface irradiance are necessary to be able to properly regulate the power supply.
Due to the policy goals for sustainable energy production, renewable energy plants such as photovoltaics are increasingly in use. The energy production from solar radiation depends strongly on atmospheric conditions. As the weather changes constantly, electrical power generation fluctuates, making technical planning and control of power grids a complex problem.
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% in the IMDB gender dataset and 66% in the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
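The abstract does not spell out the modules, but a common way to cut convolutional parameter counts in such real-time architectures is the depthwise separable convolution. The arithmetic below is a generic sketch of that reduction, not the authors' exact design:

```python
# Generic parameter-count comparison (illustrative assumption, not the
# paper's published module): a depthwise separable convolution replaces one
# dense k x k filter bank with a per-channel k x k depthwise filter followed
# by a 1x1 pointwise mixing layer.
def conv_params(kernel, c_in, c_out):
    """Parameter count of a standard kernel x kernel convolution (no biases)."""
    return kernel * kernel * c_in * c_out

def separable_conv_params(kernel, c_in, c_out):
    """Depthwise filter per input channel plus a 1x1 pointwise layer."""
    return kernel * kernel * c_in + c_in * c_out

# Example: a 3x3 layer mapping 64 to 128 channels.
standard = conv_params(3, 64, 128)             # 73728 parameters
separable = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
ratio = standard / separable                   # roughly 8.4x fewer
```

Stacking several such layers is what makes sub-10-ms CPU inference plausible for this kind of model.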
The need for innovation around the control functions of inverters is great. PV inverters were initially expected to be passive followers of the grid and to disconnect as soon as abnormal conditions occurred. Since future power systems will be dominated by generation and storage resources interfaced through inverters, these converters must move from following to forming and sustaining the grid. As “digital natives”, PV inverters can also play an important role in the digitalisation of distribution networks. In this short review we identified a large potential to make the PV inverter the smart local hub in a distributed energy system. At the micro level, costs and coordination can be improved with bidirectional inverters between the AC grid and PV production, stationary storage, car chargers and DC loads. At the macro level, the distributed nature of PV generation means that the same devices will support both the local distribution network and the global stability of the grid. Much success has been obtained in the former. The latter remains a challenge, in particular in terms of scaling. Yet there is some urgency in researching and demonstrating such solutions. And while digitalisation offers promise in all control aspects, it also raises significant cybersecurity concerns.
Meine Zeitung geht online
(2019)
This work introduces a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows (with or without discontinuities). It makes use of a cell-wise representation of the simulation domain and utilizes interpolation polynomials up to fourth order to conduct the streaming step. The SLLBM solver allows for an independent time step size due to the absence of a time integrator and for the use of unusual velocity sets, like a D2Q25, which is constructed by the roots of the fifth-order Hermite polynomial. The properties of the proposed model are shown in diverse example simulations of a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
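The D2Q25 construction mentioned above can be made concrete. Assuming, as is standard for Gauss-Hermite-based lattice Boltzmann velocity sets, that the two-dimensional abscissae are the tensor product of the five roots of the fifth-order (probabilists') Hermite polynomial He_5, the set can be generated as:

```python
import numpy as np
from itertools import product

# Sketch (assumption): build a D2Q25 velocity set as the tensor product of
# the five roots of He_5 (probabilists' convention), which are
# 0 and +/- sqrt(5 -/+ sqrt(10)).
he5 = np.polynomial.hermite_e.HermiteE([0, 0, 0, 0, 0, 1])  # coefficients of He_5
roots = he5.roots()                                  # five 1D abscissae, incl. 0
velocities = np.array(list(product(roots, roots)))   # 25 two-dimensional velocities
```

The corresponding Gauss-Hermite weights (not shown) would supply the lattice weights of the model.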
The perception of perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of various gravity-related and body-based cues. The goal of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity of approximately 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and to confirm the hypothesis above. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing under normal gravity in eliciting the sense of perceptual upright. The data obtained further indicate that a gravitational field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which contains many low-power devices often powered by a battery. These devices mainly communicate with the manufacturer's back-end and deliver personal data and secrets like passwords.
Analytische Chemie I
(2019)
In this thesis, unique administrative data, a relevant time of follow-up and advanced statistical measures to handle confounding have been utilized in order to provide new and informative evidence on the effects of vocational rehabilitation programs on work participation outcomes in Germany. While re-affirming the important role of micro-level determinants, the present study provides an extensive example of the individual and fiscal effects that are possible through meaningful vocational rehabilitation measures. The analysis showed that the principal objective, namely, to improve participation in employment, was generally achieved. Contrary to the common misconception that “off-the-job training” is relatively ineffective, this thesis has provided an empirical example of the positive impact of the programs.
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, the pixel densities, and refresh rates are constantly increasing. Display systems enable a larger visual field to be addressed by covering a wider field-of-view, due either to their size or to their form as head-mounted devices. Currently, research prototypes range from stereo and multi-view systems, head-mounted devices with adaptable lenses, up to retinal projection, and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time-frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload: a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems including benchmarks, comparative examination with image metrics as well as user studies and experiments demonstrated that the methods introduced are visually superior or on the same qualitative level as ground truth, whilst having a significantly reduced computational complexity.
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature influence the viscoelastic behavior significantly in certain temperature ranges. Flow induced orientations due to higher draw ratio and higher mold temperature lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected by local crystallinities and local complex moduli. Additionally, a model to calculate three-dimensional anisotropic coefficients of thermal expansion as a function of the process dependent crystallinity was developed based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
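The crystallinity-dependent thermal expansion model mentioned above can be illustrated with a generic stacked-layer mixture rule. This is a hedged sketch of the standard series/parallel coupling for layered two-phase materials, not the authors' published unit-cell model:

```python
# Illustrative assumption (not the paper's exact model): for a stack of
# amorphous and crystalline layers with crystallinity x, the CTE across the
# stack couples in series (volume-weighted average), while the CTE along the
# layers couples in parallel (stiffness-weighted, iso-strain).
def cte_series(alpha_amorphous, alpha_crystalline, x):
    """Volume-weighted CTE through the layer stack."""
    return (1.0 - x) * alpha_amorphous + x * alpha_crystalline

def cte_parallel(alpha_a, modulus_a, alpha_c, modulus_c, x):
    """Stiffness-weighted CTE along the layers."""
    weight_a = (1.0 - x) * modulus_a
    weight_c = x * modulus_c
    return (weight_a * alpha_a + weight_c * alpha_c) / (weight_a + weight_c)

# Sanity check: zero crystallinity reduces both directions to the amorphous CTE.
alpha_through = cte_series(2.0e-4, 0.6e-4, 0.0)
```

Because the two directions weight the phases differently, any crystallinity gradient across the wall thickness directly produces anisotropic expansion, which is the effect the abstract describes.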
The aim of this study was to investigate whether beneficial vacation effects can be strengthened and prolonged with a smartphone-based intervention. In a four-week longitudinal study among 79 Finnish teachers, we investigated the development of recovery, well-being, and job performance before, during, and after a one-week vacation in three groups: non-users (n = 51), passive (n = 18) and active (n = 10) users. Participants were instructed to actively use a recovery app (called Holidaily) and complete five digital questionnaires. Most recovery experiences and well-being indicators increased during the vacation. Job performance and concentration capacity showed no significant time effects. Among active app users, creativity at work increased from baseline to after the vacation, whereas among non-users it decreased and among passive users it decreased a few days after the vacation but increased again one and a half weeks after the vacation. The fading of beneficial vacation effects on negative affect seems to have been slower among active app users. Only few participants used the app actively. Still, results suggest that a smartphone-based recovery intervention may support beneficial vacation effects.
If the required temporally limited event acting on the body from outside cannot be established by full proof, the required liability-establishing causality is lacking, with the consequence that the claimed accident event cannot be recognized as an occupational accident. If it remains unclear which actions the insured person performed at the time of the accident, the principles of so-called prima facie evidence are also inapplicable, because a typical course of events cannot be proven.
If an accident occurs on the way home from the workplace while the employee concerned is talking on a mobile phone (a so-called "mixed activity"), a commuting accident is ruled out if the occurrence of the accident is predominantly attributable to the phone call, which was therefore essential to the accident's occurrence.
Gefährdet die Nutzung von Gesundheits-Apps und Wearables die solidarische Krankenversicherung?
(2019)
Measuring blood pressure, tracking step counts, monitoring sleep, keeping an eye on blood sugar levels, and even performing ECGs: these are just some of the applications that a common mobile phone or a smartwatch with the appropriate software can carry out. Apps and wearables (the term for computer technologies worn on the body) are becoming ever more versatile in their possible uses.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2019)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. 244 patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first two years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% of patients (157 of 204) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview on all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with often favourable outcome, when compared with many other organic acidurias.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Methods: We performed a systematic literature search to identify all published cases. 211 patients of whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable, with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview of all published cases with HMGCLD, including a list of all known HMGCL mutations.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
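The constant-time lookup property of a linkless octree can be illustrated with a small sketch. The assumption here (not the paper's exact scheme) is that nodes are stored in a hash map keyed by Morton codes, which interleave the bits of the cell coordinates so that no child pointers are needed:

```python
# Illustrative sketch (an assumption, not the paper's exact data structure):
# a "linkless" octree keeps its nodes in a hash map keyed by Morton codes,
# so a cached sample is found in constant expected time.
def morton3(x, y, z, depth):
    """Interleave bits of integer cell coordinates into one Morton key.
    A leading 1 bit is kept so keys of different depths never collide."""
    key = 1
    for i in reversed(range(depth)):
        key = (key << 3) | (((x >> i) & 1) << 2) | (((y >> i) & 1) << 1) | ((z >> i) & 1)
    return key

cache = {}  # Morton key -> cached diffuse illumination sample
cache[morton3(3, 1, 2, depth=2)] = "diffuse sample"
```

Looking a hitpoint up then amounts to quantizing its position to cell coordinates, computing the key, and probing the hash map.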
The complex nature of multifactorial diseases, such as Morbus Alzheimer, has produced a strong need to design multitarget-directed ligands to address the involved complementary pathways. We performed a purposive structural modification of a tetratarget small-molecule, that is contilisant, and generated a combinatorial library of 28 substituted chromen-4-ones. The compounds comprise a basic moiety which is linker-connected to the 6-position of the heterocyclic chromenone core. The syntheses were accomplished by Mitsunobu- or Williamson-type ether formations. The resulting library members were evaluated at a panel of seven human enzymes, all of which being involved in the pathophysiology of neurodegeneration. A concomitant inhibition of human acetylcholinesterase and human monoamine oxidase B, with IC50 values of 5.58 and 7.20 μM, respectively, was achieved with the dual-target 6-(4-(piperidin-1-yl)butoxy)-4H-chromen-4-one (7).
Geschäftsprozess-Management
(2019)
In the competition for qualified employees, start-ups face strong rivalry from established companies and corporations. The demand for skilled professionals (such as software developers) is greater than ever [1]. How do start-ups present themselves as employers in order to attract staff? This question was investigated in the study "Start-ups als Arbeitgeber" ("Start-ups as Employers").
The images from Silicon Valley are well known: the open-plan office with seating corners for retreating; swings, table football, and video games for relaxing during work breaks; food and drink available everywhere, and of course free of charge. Many people have these images in mind. Are they reflected in the self-presentation of German start-ups as employers?
The study presented here does not aim to deliver generalizable results; rather, it is exploratory in design and is intended to stimulate further engagement with this field of research in academia and practice.
Bone tissue engineering is an ever-changing, rapidly evolving, and highly interdisciplinary field of study, where scientists try to mimic natural bone structure as closely as possible in order to facilitate bone healing. New insights from cell biology, specifically from mesenchymal stem cell differentiation and signaling, lead to new approaches in bone regeneration. Novel scaffold and drug release materials based on polysaccharides gain increasing attention due to their wide availability and good biocompatibility to be used as hydrogels and/or hybrid components for drug release and tissue engineering. This article reviews the current state of the art, recent developments, and future perspectives in polysaccharide-based systems used for bone regeneration.
Internet-Ökonomie
(2019)
This book shows how Apple, Amazon, Facebook, and Google were able to develop into the most valuable companies in the world. Their success is based on seizing the opportunities offered by the digital world and the internet. Traditional business models are thereby transformed, and market structures that have grown over decades are partly called into question.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11-0.24 and 0.09-0.16 kcal/mol. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
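CBS extrapolation of the kind referenced above is often done with a two-point X^-3 scheme; the sketch below uses that common recipe as an assumption (the paper's new nomenclature exists precisely because such recipes vary):

```python
# Common two-point X^-3 extrapolation of correlation energies (an assumption,
# not necessarily the scheme used in the paper): the model E(X) = E_CBS + A/X^3
# is solved from results at two cardinal numbers x < y.
def cbs_two_point(e_x, x, e_y, y):
    """Return E_CBS from energies e_x, e_y at cardinal numbers x, y."""
    return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

# Toy check: if the energies follow the model exactly with E_CBS = -1.0 and
# A = 0.05, the TZ (X=3) and QZ (X=4) values recover E_CBS.
e3 = -1.0 + 0.05 / 3 ** 3
e4 = -1.0 + 0.05 / 4 ** 3
e_cbs = cbs_two_point(e3, 3, e4, 4)
```

Different pairs of cardinal numbers (and different exponents) give different limits, which is exactly the ambiguity an explicit extrapolation nomenclature removes.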
Movement sequences such as pushing, bracing against resistance, lifting, carrying, or jumping from a squat are loads on the tendon that are not controlled at will, and are therefore to be regarded as incidental occasions rather than accident events. Such sequences cannot endanger the tendon; here the physiological-scientific causality is already lacking.
Currently, a variety of methods exist for creating different types of spatio-temporal world models. Despite the numerous methods for this type of modeling, there exists no methodology for comparing the different approaches or their suitability for a given application, e.g. logistics robots. In order to establish a means for comparing and selecting the best-fitting spatio-temporal world modeling technique, a methodology and standard set of criteria must be established. To that end, state-of-the-art methods for this type of modeling will be collected, listed, and described. Existing methods used for evaluation will also be collected where possible.
Using the collected methods, new criteria and techniques will be devised to enable the comparison of various methods in a qualitative manner. Experiments will be proposed to further narrow and ultimately select a spatio-temporal model for a given purpose. An example network of autonomous logistic robots, ROPOD, will serve as a case study used to demonstrate the use of the new criteria. This will also serve to guide the design of future experiments that aim to select a spatio-temporal world modeling technique for a given task. ROPOD was specifically selected as it operates in a real-world, human shared environment. This type of environment is desirable for experiments as it provides a unique combination of common and novel problems that arise when selecting an appropriate spatio-temporal world model. Using the developed criteria, a qualitative analysis will be applied to the selected methods to remove unfit options.
Then, experiments will be run on the remaining methods to provide comparative benchmarks. Finally, the results will be analyzed and recommendations to ROPOD will be made.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but only a few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include off-line and semi on-line allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, allowing them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
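The auction-based family of allocators named above (SSI and its variants) can be sketched in a few lines. This is a minimal generic SSI round loop under stated assumptions: marginal cost is modeled as plain Euclidean travel from the robot's last assigned location, whereas TeSSI-style allocators additionally use temporal constraints:

```python
# Minimal sequential single-item (SSI) auction sketch (illustrative only;
# cost model and task format are assumptions, not the ROPOD implementation).
from math import dist

def ssi_allocate(robot_positions, tasks):
    """Auction tasks one per round; the lowest marginal bid wins each round."""
    schedules = {r: [] for r in robot_positions}
    location = dict(robot_positions)  # each robot's end-of-schedule location
    remaining = list(tasks)
    while remaining:
        # One round: every robot bids its marginal travel cost for every task.
        cost, robot, task = min(
            (dist(location[r], t), r, t) for r in schedules for t in remaining
        )
        schedules[robot].append(task)
        location[robot] = task
        remaining.remove(task)
    return schedules

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = [(1.0, 0.0), (9.0, 0.0)]
allocation = ssi_allocate(robots, tasks)
```

Each round greedily commits the globally cheapest robot-task pair, which is why SSI runs in polynomial time but does not guarantee an optimal global schedule.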
CSR-Erfolgssteuerung
(2019)
This textbook addresses the CSR reform process, which calls on companies to exercise global due diligence. The CSR reporting obligation, the procurement law reform, and the call to implement risk management systems affect not only large companies but, in particular, small and medium-sized enterprises (SMEs) as well. The book therefore aims to make the relevance of CSR transparent for companies of all sizes and to dismantle blockades and obstacles to implementation.
The task and goal of this publication is to illuminate the different facets of the digital future and to relate them to one another, be it the active shaping role of politics, the ethical and moral adjustments brought about by digitalization in society, or technical and economic responsibility. In the course of the digital transformation, there is moreover a call for a new culture for society, politics, and business.
The Learning Culture Survey (LCS) is a questionnaire-based study investigating students' perceptions of and expectations towards Higher Education (HE). The aim of this survey is to improve our understanding of the sources of cultural conflicts in educational scenarios. This understanding should help us to predict potential conflict situations and develop supportive measures.
After three years of development, the LCS was launched in 2010 in South Korea and Germany. During the following years, the investigations were extended to further countries. The results provided insights into the cultural context of HE in general on the one hand and into specific (national/regional) characteristics of learners in HE on the other. Most issues targeted by the questionnaire were directly linked to value systems. Thus, we expected from the beginning that the collected data would remain valid over longer periods of time. However, we had no evidence regarding the actual persistence of learning culture. For a study designed to be implemented on a global scale and to provide input for further applications, persistence is a basic condition justifying the related investigations.
To answer the question of persistence, we repeated the LCS at our university every four years between 2010 and 2018/19. Apart from a small number of slight changes, explainable by their situational context, the overall results remained consistent over the investigated years. In this paper, after an introduction of the LCS's concept, setting, and general results from the past years, we present the insights from our most recently completed longitudinal study on learning culture.
Digital transformation in Higher Education and Science is a mission-critical demand to prepare educational institutions for their future competition on the international market. In many cases, digitization goes along with the search for and acquisition of new software. For easily exchangeable software, wrong product decisions lead, in the worst case, to calculable financial losses. However, if a planned software system requires a lot of technological adjustment and is to be applied as a central component of a business- and/or security-critical environment, wrong decisions during the software acquisition process might lead to hardly calculable damage. The questions that arise are how to decide on a product and how many resources should be invested in the acquisition process.
We planned to introduce a commercial Business Support System to replace the currently used in-house developed software. Our goals were to increase our university's level of data security, to ease the interaction between stakeholders, to eliminate media discontinuities, to improve process management and transparency, and to reduce the execution time of automated processes. Along with the introduction of the electronic case file, our agenda stipulates the digitization (and automation) of administrative university processes, especially, but not limited to, the student self-service and the administrative student life cycle. The usual tools and practices commonly applied to (simple) software acquisition failed in our scenario.
With the case study introduced in this paper, we address all persons involved in software acquisition processes: from our experience, we strongly recommend placing greater value on a thoroughly completed acquisition process than on short-term economic advantages.
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting - in terms of visualization and interaction - for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details; here, users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space and therefore to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task and work on it in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also detected and described multiple user roles, as well as a new coupling style that fits neither of the established categories, loosely or tightly coupled. Moreover, we observed a territory type that has not previously been mentioned in research. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for long-time work.
The Peren-Clement Index (PCI) is a methodology for analyzing country-specific risk for businesses engaged in international trade and direct investment. This index, established in 1998, provides a guideline for deciding which foreign markets offer the potential for additional business engagement and investment, and to what extent existing engagement or investment can be increased or should be reduced.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data in most cases shows useful results when the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of former training history can be analyzed separately (we call those terms preloads), it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; the prediction error is around 54% of the error of the FF-Model without preloads.
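For readers unfamiliar with the FF-Model, the classical Banister-style formulation with its two antagonistic terms can be sketched in a few lines. The recursive form below, the same-day load convention, and the interpretation of `g0`/`h0` as preload-like initial states are illustrative assumptions for this sketch, not the authors' exact preload extension.

```python
import math

def ff_performance(loads, p0, k1, tau1, k2, tau2, g0=0.0, h0=0.0):
    """Fitness-fatigue model in recursive form.

    g and h are exponentially decaying fitness and fatigue states with time
    constants tau1 and tau2; daily performance is p0 + k1*g - k2*h. The
    initial states g0/h0 act as 'preload' terms carrying the effect of
    training done before day 0 (convention: a day's load takes effect on
    that same day)."""
    g, h = g0, h0
    performance = []
    for w in loads:
        g = g * math.exp(-1.0 / tau1) + w
        h = h * math.exp(-1.0 / tau2) + w
        performance.append(p0 + k1 * g - k2 * h)
    return performance
```

With typical parameter choices (fatigue gain larger but decaying faster, e.g. tau1 > tau2 and k2 > k1), performance first drops below the basic level right after loading and only later rises above it.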
This work presents preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. Even more so when considering distributed robotic systems, where the task of fault detection and diagnosis becomes exponentially more difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed, and with distributed robotic systems generating large amounts of data, machine learning becomes an attractive modelling method, especially because of its high performance. However, with current methods such as artificial neural networks (ANNs), the issue of explainability arises: learnt models lack the ability to give explainable reasons for their decisions.
This paper presents current trends in methods for data collection from distributed systems, in inductive logic programming (ILP), an explainable machine learning method, and in fault detection and diagnosis.
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on specific faults which arise from the interaction of a robot with its real-world environment due to insufficient knowledge for action execution.
In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model.
In this work, we investigate such a symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation, together with the most suitable learning paradigm, is identified.
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled to detect faults in the sensors if the defined dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method with the Pearson correlation on simulated signals and show that it elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but is also prone to false positives. This suggests that it can be used as a weak detector of faults, but that other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
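The core of a Granger causality test, as used above for pairs of sensor streams, is a comparison of two autoregressions: does adding lagged values of signal x reduce the prediction error for signal y? A minimal NumPy sketch of that F-statistic follows (a full treatment, including p-values and lag selection, would use a statistics package such as statsmodels; the lag count and test data here are illustrative).

```python
import numpy as np

def _lag_matrix(s, lags, n):
    # column k holds s shifted by k samples, aligned with s[lags:]
    return np.column_stack([s[lags - k : n - k] for k in range(1, lags + 1)])

def granger_f_stat(x, y, lags=2):
    """F-statistic for H0: past values of x do not improve the prediction
    of y beyond y's own past -- the core idea of a Granger causality test."""
    n = len(y)
    target = y[lags:]
    ones = np.ones((n - lags, 1))
    X_restricted = np.hstack([ones, _lag_matrix(y, lags, n)])
    X_unrestricted = np.hstack([X_restricted, _lag_matrix(x, lags, n)])

    def rss(X):
        beta = np.linalg.lstsq(X, target, rcond=None)[0]
        return np.sum((target - X @ beta) ** 2)

    rss_r, rss_u = rss(X_restricted), rss(X_unrestricted)
    dof = n - lags - X_unrestricted.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / dof)
```

A large F-statistic in one direction but not the other indicates a directed dependency, which is exactly the kind of relationship whose disappearance can flag a sensor fault.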
This paper proposes an approach to ANN-based temperature controller design for a plastic injection moulding system. The design approach is applied to the development of a controller based on a combination of a classical ANN and an integrator. The controller provides a fast temperature response and zero steady-state error for three typical heaters (bar, nozzle, and cartridge) of a plastic moulding system. Simulation results in Matlab Simulink, compared with an industrial PID regulator, have shown the advantages of the controller, such as significantly less overshoot and faster transients (compared to a PID with autotuning) for all examined heaters. In order to verify the proposed approach, the designed ANN controller was implemented and tested using an experimental setup based on an STM32 board.
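The key design point, adding an integrator to the network output to obtain zero steady-state error, can be illustrated on a toy first-order heater model. In this sketch the trained ANN is replaced by a simple static error map, and all plant constants are invented; it only demonstrates why the integral term removes the offset.

```python
def simulate_heater(ki, setpoint=60.0, dt=0.05, steps=20000):
    """Toy first-order thermal plant driven by a static error map
    (stand-in for the trained ANN) plus an integral term. Returns the
    temperature after the simulation horizon; the integrator is what
    removes the steady-state error."""
    T, integ = 20.0, 0.0                  # initial temperature, integrator state
    tau, gain, ambient = 50.0, 2.0, 20.0  # invented plant constants
    for _ in range(steps):
        e = setpoint - T
        integ += ki * e * dt              # integral of the error
        u = 0.5 * e + integ               # "ANN" term + integral term
        u = max(0.0, min(u, 100.0))       # actuator saturation
        T += dt * ((ambient - T) + gain * u) / tau  # explicit Euler step
    return T
```

Without the integrator (ki = 0) the proportional-only map settles well below the setpoint; with it, the integrator accumulates exactly the heater input needed to hold the setpoint, so the error converges to zero.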
Quantifying Interference in WiLD Networks using Topography Data and Realistic Antenna Patterns
(2019)
Avoiding possible interference is key to maximizing the performance of Wi-Fi-based Long Distance (WiLD) networks. In this paper we quantify self-induced interference based on data derived from our testbed and match the findings against simulations. By enhancing current simulation models with two key elements, the use of detailed antenna patterns instead of the cone model and propagation modeling based on license-free topography data, we significantly reduce the deviation between testbed and simulation. Based on the gathered data, we discuss several possible optimization approaches, such as physically separating local radios, tuning the sensitivity of the transmitter, and using centralized rather than distributed channel assignment algorithms. While our testbed is based on 5 GHz Wi-Fi, we briefly discuss the possible impact of our results on other frequency bands.
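To make the "antenna pattern vs. cone model" point concrete, consider a small link-budget sketch. The free-space path loss formula is standard; the parabolic main-lobe approximation with a sidelobe floor (similar in shape to common sector-antenna models) and all concrete gains, beamwidths, and angles are illustrative assumptions, not the models used in the paper.

```python
import math

def fspl_db(dist_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55

def gain_cone(theta_deg, g0_dbi=16.0, beamwidth_deg=30.0):
    """Cone model: full gain inside the beamwidth, no radiation outside."""
    return g0_dbi if abs(theta_deg) <= beamwidth_deg / 2 else float("-inf")

def gain_pattern(theta_deg, g0_dbi=16.0, beamwidth_deg=30.0, floor_db=25.0):
    """Parabolic main-lobe approximation with a sidelobe floor:
    off-axis attenuation grows quadratically but is capped at floor_db."""
    return g0_dbi - min(12.0 * (theta_deg / beamwidth_deg) ** 2, floor_db)
```

Off-axis, the cone model predicts zero interference, while a realistic pattern still leaks power through the sidelobes; summing such residual contributions over all links is what lets a simulation reproduce the self-induced interference measured in a testbed.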
Ein Prüf- und Zertifizierungsstandard zur gesellschaftlichen Verantwortung von Organisationen
(2019)
According to a broad consensus in society, politics, and science, the current way of doing business cannot continue as it is. A globally valid certification standard, aiming at a uniform and politically neutral regulatory framework for the social responsibility of organizations towards people and nature, can help to measure the dynamically changing requirements, to demonstrate compliance with them (also comparatively), and to define any necessary corrections. The international standard DIN EN ISO/IEC 26000:2010 offers a fundamentally suitable frame of orientation for socially responsible action by organizations.
BWL-Klausuren für Dummies
(2019)
Synthesis of Substituted Hydroxyapatite for Application in Bone Tissue Engineering and Drug Delivery
(2019)
Gas Chromatography
(2019)
Gas chromatography (GC) is one of the most important types of chromatography used in analytical chemistry for separating and analyzing organic chemical compounds. Today, it is one of the most widespread investigation methods of instrumental analysis. The technique is used in the laboratories of the chemical, petrochemical, and pharmaceutical industries, in research institutes, and in clinical, environmental, and food and beverage analysis. This book is the outcome of contributions by experts in the field of gas chromatography and includes a short history of gas chromatography, an overview of derivatization methods and sample preparation techniques, a comprehensive study on pyrazole mass spectrometric fragmentation, and a GC/MS/MS method for the determination and quantification of pesticide residues in grape samples.
It is shown that the electrochemical kinetics of alkaline methanol oxidation can be reduced by setting certain fast reactions contained in it to a steady state. As a result, the underlying system of Ordinary Differential Equations (ODEs) is transformed into a system of Differential-Algebraic Equations (DAEs). We measure the precision characteristics of this transformation and discuss the consequences of the obtained model reduction.
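The reduction idea, replacing a fast reaction's differential equation by an algebraic steady-state condition, can be illustrated on a generic two-species toy system: a fast reversible exchange x <-> y coupled to a slow drain of y. The linear reactions and all rate constants below are invented for the sketch and have nothing to do with the actual methanol-oxidation mechanism.

```python
import math

def full_ode(kf=100.0, kr=100.0, ks=1.0, dt=1e-4, t_end=2.0):
    """Full stiff ODE: fast exchange x <-> y (rates kf, kr) plus a slow
    drain y -> out (rate ks), integrated with a tiny explicit Euler step
    forced by the fast time scale. Returns the remaining total x + y."""
    x, y = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        dx = -kf * x + kr * y
        dy = kf * x - kr * y - ks * y
        x, y = x + dt * dx, y + dt * dy
    return x + y

def reduced_dae(kf=100.0, kr=100.0, ks=1.0, dt=1e-3, t_end=2.0):
    """DAE after the steady-state reduction: the fast ODE is replaced by
    the algebraic condition kf*x = kr*y, so only the slow total
    z = x + y is integrated, with a 10x larger step."""
    z = 1.0
    for _ in range(int(t_end / dt)):
        y = z * kf / (kf + kr)   # algebraic equation replaces the fast ODE
        z += dt * (-ks * y)      # slow drain acting on the total
    return z
```

Both models give (almost) the same slow dynamics, but the DAE no longer contains the fast time scale, which is exactly the benefit and the source of the (small) approximation error such a reduction trades in.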
The paper presents a topological reduction method applied to gas transport networks, using contraction of series, parallel, and tree-like subgraphs. The contraction operations are implemented for pipe elements described by a quadratic friction law. This allows a significant reduction of the graphs and an acceleration of the solution procedure for stationary network problems. The algorithm has been tested on several realistic network examples. Possible extensions of the method to different friction laws and other elements are discussed.
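For pipes with the quadratic friction law dp = c * q|q|, the series and parallel contractions mentioned above reduce to simple coefficient formulas, which this sketch derives and checks (the concrete coefficients are made up; tree-like contraction and other element types are not covered here).

```python
import math

def series(coeffs):
    """Pipes in series carry the same flow q, and pressure drops add:
    dp = sum(c_i * q|q|), so c_eq = sum(c_i)."""
    return sum(coeffs)

def parallel(coeffs):
    """Parallel pipes see the same dp, and flows add. For dp > 0 each
    branch carries q_i = sqrt(dp / c_i), hence
    c_eq = (sum(1 / sqrt(c_i)))**-2."""
    return sum(1.0 / math.sqrt(c) for c in coeffs) ** -2
```

Repeatedly applying such rules collapses chains and bundles of pipes into single equivalent elements, which is what shrinks the graph before the stationary network problem is solved.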