H-BRS Bibliography
English-language publications from 2019 (169 entries)
The media is considered to be the fourth pillar of a democratic country. It acts as an effective control mechanism to check the other branches of government. But this is only consequential when the media functions in an independent and transparent fashion, with trained and neutral professionals who are aware of the accountability and consequences of their work. All these factors together further the country as a democratic institution. Traditionally, legacy media were responsible for a one-to-many communication process; their goal was to provide information to citizens. This changed with developments in technology and the use of social media in daily life. The internet brought with it new media formats that are easily accessible but also unstructured. These lowered barriers of entry into the media enabled citizens to become active participants in the communication process. As a result, citizens developed a different relationship with the existing media, in which they were not only receivers of information but also co-producers. Real-time information allows users to communicate with each other and in turn widely generate public opinion on internet platforms. A many-to-many communication style emerged. While on the one hand this type of discourse can be an opportunity for citizens to exercise their fundamental freedom of speech and expression, on the other hand it is proving to have a detrimental effect on two fronts: a lack of neutrality, polarized views, and pre-existing misconceptions on the part of citizens, as well as algorithms and the formation of echo chambers on the part of technology. Some questions arise in this scenario about the capability of citizen journalists, the duties they should adhere to along with the enjoyment of their rights and freedoms, the risks involved in an unchecked method of communication, and the effect of citizen journalism on the democratic process.
Background & Objective: Due to policy goals for sustainable energy production, renewable energy plants such as photovoltaics are increasingly in use. Energy production from solar radiation depends strongly on atmospheric conditions. As the weather constantly changes, electrical power generation fluctuates, making technical planning and control of power grids a complex problem. Owing to the materials used (semiconductors, e.g. silicon, gallium arsenide, cadmium telluride), photovoltaic cells are spectrally selective: only radiation of certain wavelengths is converted into electrical energy. A material property called the spectral response characterizes the degree of conversion of solar radiation into electric current at each wavelength of solar light.
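To make the role of the spectral response concrete, the cell current follows from integrating the spectral response SR(λ) against the spectral irradiance E(λ). A minimal numpy sketch, in which all curves and values are illustrative assumptions rather than data from the study:

```python
import numpy as np

# Illustrative wavelength grid (nm), spectral irradiance E (W m^-2 nm^-1)
# and spectral response SR (A/W), roughly shaped like a silicon cell's.
wavelengths = np.linspace(300.0, 1200.0, 500)
irradiance = np.interp(wavelengths, [300, 500, 1200], [0.2, 1.6, 0.4])
spectral_response = np.interp(wavelengths, [300, 900, 1200], [0.1, 0.65, 0.0])

# Short-circuit current density: J_sc = integral of SR(lambda) * E(lambda).
j_sc = np.trapz(spectral_response * irradiance, wavelengths)  # A m^-2
print(f"short-circuit current density: {j_sc:.1f} A/m^2")
```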
Energy Profiles of the Ring Puckering of Cyclopentane, Methylcyclopentane and Ethylcyclopentane
(2019)
Incoming solar radiation is an important driver of our climate and weather. Several studies (see for instance Frank et al. 2018) have revealed discrepancies between ground-based irradiance measurements and the predictions of regional weather models. In the realm of electricity generation, accurate forecasts of solar photovoltaic (PV) energy yield are becoming indispensable for cost-effective grid operation: in Germany there are 1.6 million PV systems installed, with a nominal power of 46 GW (Bundesverband Solarwirtschaft 2019). The proliferation of PV systems provides a unique opportunity to characterise global irradiance with unprecedented spatiotemporal resolution, which in turn will allow for highly resolved PV power forecasts.
Renewable energies play an increasingly important role for energy production in Europe. Unlike coal or gas power plants, solar energy production is highly variable in space and time. This is due to the strong variability of clouds and their influence on the surface solar irradiance. Especially in regions with a large contribution from photovoltaic power production, the intermittent energy feed-in to the power grid can be a risk for grid stability. Therefore, good forecasts of the temporal and spatial variability of surface irradiance are necessary to be able to properly regulate the power supply.
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
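The abstract does not detail the convolutional modules, but the released repository is known for mini-Xception-style networks built from depthwise-separable convolutions; the Keras sketch below illustrates that kind of parameter-lean block (layer counts and sizes are our illustrative assumptions, not the published architecture):

```python
from tensorflow.keras import layers, models

def separable_block(x, filters):
    # Depthwise-separable convolutions factor a standard convolution into a
    # per-channel spatial filter plus a 1x1 mix, cutting the parameter count.
    residual = layers.Conv2D(filters, 1, strides=2, padding="same")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same")(x)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    return layers.add([x, residual])

inputs = layers.Input((64, 64, 1))            # grayscale face crop
x = layers.Conv2D(8, 3, activation="relu")(inputs)
for f in (16, 32, 64):
    x = separable_block(x, f)
x = layers.Conv2D(7, 3, padding="same")(x)    # 7 emotion classes (FER-2013)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Activation("softmax")(x)
model = models.Model(inputs, outputs)
print(model.count_params())                   # stays in the tens of thousands
```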
The need for innovation around the control functions of inverters is great. PV inverters were initially expected to be passive followers of the grid and to disconnect as soon as abnormal conditions occurred. Since future power systems will be dominated by generation and storage resources interfaced through inverters, these converters must move from following to forming and sustaining the grid. As “digital natives”, PV inverters can also play an important role in the digitalisation of distribution networks. In this short review we identified a large potential to make the PV inverter the smart local hub in a distributed energy system. At the micro level, costs and coordination can be improved with bidirectional inverters between the AC grid and PV production, stationary storage, car chargers and DC loads. At the macro level, the distributed nature of PV generation means that the same devices will support both the local distribution network and the global stability of the grid. Much success has been obtained in the former; the latter remains a challenge, in particular in terms of scaling. Yet there is some urgency in researching and demonstrating such solutions. And while digitalisation offers promise in all control aspects, it also raises significant cybersecurity concerns.
This work introduces a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows (with or without discontinuities). It makes use of a cell-wise representation of the simulation domain and utilizes interpolation polynomials up to fourth order to conduct the streaming step. The SLLBM solver allows for an independent time step size due to the absence of a time integrator and for the use of unusual velocity sets, like a D2Q25, which is constructed by the roots of the fifth-order Hermite polynomial. The properties of the proposed model are shown in diverse example simulations of a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
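As a concrete illustration of the velocity-set construction, the 25 discrete velocities of a D2Q25 lattice can be taken as the tensor product of the five roots of the fifth-order Hermite polynomial, i.e. the one-dimensional Gauss-Hermite abscissae. A minimal numpy sketch; the lattice-unit rescaling of abscissae and weights depends on the chosen convention and is omitted here:

```python
import numpy as np

# 1D Gauss-Hermite abscissae/weights: roots of the 5th-order Hermite polynomial.
xi, wi = np.polynomial.hermite.hermgauss(5)
wi = wi / wi.sum()                      # normalize weights to sum to one

# Tensor product of the five abscissae yields the 25 discrete 2D velocities.
velocities = np.array([(cx, cy) for cx in xi for cy in xi])
weights = np.array([wx * wy for wx in wi for wy in wi])

print(velocities.shape)                 # (25, 2)
print(round(weights.sum(), 12))         # 1.0
```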
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which contains many low-power devices often powered by a battery. These devices mainly communicate with the manufacturer's back-end and deliver personal data and secrets like passwords.
In this thesis, unique administrative data, a relevant follow-up period, and advanced statistical measures to handle confounding have been utilized in order to provide new and informative evidence on the effects of vocational rehabilitation programs on work participation outcomes in Germany. While re-affirming the important role of micro-level determinants, the present study provides an extensive example of the individual and fiscal effects that are possible through meaningful vocational rehabilitation measures. The analysis showed that the principal objective, namely to improve participation in employment, was generally achieved. Contrary to the common misconception that “off-the-job training” is relatively ineffective, this thesis has provided an empirical example of the positive impact of the programs.
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. Dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems enable a larger visual field to be addressed by covering a wider field-of-view, due to either their size or in the form of head-mounted devices. Currently, research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses, up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye’s optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time-frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality but also to reduce workload alike - a concept we call perception-driven rendering. This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the methods introduced are visually superior to or on the same qualitative level as ground truth, whilst having a significantly reduced computational complexity.
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature influence the viscoelastic behavior significantly in certain temperature ranges. Flow induced orientations due to higher draw ratio and higher mold temperature lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected by local crystallinities and local complex moduli. Additionally, a model to calculate three-dimensional anisotropic coefficients of thermal expansion as a function of the process dependent crystallinity was developed based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
The aim of this study was to investigate whether beneficial vacation effects can be strengthened and prolonged with a smartphone-based intervention. In a four-week longitudinal study among 79 Finnish teachers, we investigated the development of recovery, well-being, and job performance before, during, and after a one-week vacation in three groups: non-users (n = 51), passive users (n = 18) and active users (n = 10). Participants were instructed to actively use a recovery app (called Holidaily) and complete five digital questionnaires. Most recovery experiences and well-being indicators increased during the vacation. Job performance and concentration capacity showed no significant time effects. Among active app users, creativity at work increased from baseline to after the vacation, whereas among non-users it decreased, and among passive users it decreased a few days after the vacation but increased again one and a half weeks after the vacation. The fading of beneficial vacation effects on negative affect seems to have been slower among active app users. Only a few participants used the app actively. Still, the results suggest that a smartphone-based recovery intervention may support beneficial vacation effects.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2019)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. 244 patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first two years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% of patients (157 of 204) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder, often with a favourable outcome when compared with many other organic acidurias.
Background 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method We performed a systematic literature search to identify all published cases. 211 patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable with 62.6% of patients showing normal development.
Conclusion This comprehensive data analysis provides a systematic overview of all published cases with HMGCLD, including a list of all known HMGCL mutations.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
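The constant-time lookup of a linkless octree can be pictured as a hash map keyed by Morton codes, which address cells without child pointers. The sketch below is an illustrative stand-in for the idea, not the paper's implementation:

```python
def morton_key(x, y, z, level):
    """Interleave the bits of quantized cell coordinates into a Morton code;
    a leading sentinel bit encodes the octree depth of the cell."""
    key = 1
    for i in reversed(range(level)):
        key = (key << 3) | (((x >> i) & 1) << 2) | (((y >> i) & 1) << 1) | ((z >> i) & 1)
    return key

# Cache: hash map from cell key to accumulated diffuse radiance (RGB),
# giving constant-time insertion and lookup per hitpoint.
cache = {}
cache[morton_key(5, 3, 7, level=4)] = (0.8, 0.7, 0.6)
print(cache.get(morton_key(5, 3, 7, level=4)))
```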
The complex nature of multifactorial diseases, such as Morbus Alzheimer, has produced a strong need to design multitarget-directed ligands to address the involved complementary pathways. We performed a purposive structural modification of a tetratarget small molecule, namely contilisant, and generated a combinatorial library of 28 substituted chromen-4-ones. The compounds comprise a basic moiety which is linker-connected to the 6-position of the heterocyclic chromenone core. The syntheses were accomplished by Mitsunobu- or Williamson-type ether formations. The resulting library members were evaluated on a panel of seven human enzymes, all of which are involved in the pathophysiology of neurodegeneration. A concomitant inhibition of human acetylcholinesterase and human monoamine oxidase B, with IC50 values of 5.58 and 7.20 μM, respectively, was achieved with the dual-target 6-(4-(piperidin-1-yl)butoxy)-4H-chromen-4-one (7).
Bone tissue engineering is an ever-changing, rapidly evolving, and highly interdisciplinary field of study, where scientists try to mimic natural bone structure as closely as possible in order to facilitate bone healing. New insights from cell biology, specifically from mesenchymal stem cell differentiation and signaling, lead to new approaches in bone regeneration. Novel scaffold and drug release materials based on polysaccharides are gaining increasing attention due to their wide availability and good biocompatibility, and can be used as hydrogels and/or hybrid components for drug release and tissue engineering. This article reviews the current state of the art, recent developments, and future perspectives in polysaccharide-based systems used for bone regeneration.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11–0.24 and 0.09–0.16 kcal/mol. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
Currently, a variety of methods exist for creating different types of spatio-temporal world models. Despite the numerous methods for this type of modeling, there exists no methodology for comparing the different approaches or their suitability for a given application, e.g. logistics robots. In order to establish a means for comparing and selecting the best-fitting spatio-temporal world modeling technique, a methodology and standard set of criteria must be established. To that end, state-of-the-art methods for this type of modeling will be collected, listed, and described. Existing methods used for evaluation will also be collected where possible.
Using the collected methods, new criteria and techniques will be devised to enable the comparison of various methods in a qualitative manner. Experiments will be proposed to further narrow and ultimately select a spatio-temporal model for a given purpose. An example network of autonomous logistic robots, ROPOD, will serve as a case study used to demonstrate the use of the new criteria. This will also serve to guide the design of future experiments that aim to select a spatio-temporal world modeling technique for a given task. ROPOD was specifically selected as it operates in a real-world, human shared environment. This type of environment is desirable for experiments as it provides a unique combination of common and novel problems that arise when selecting an appropriate spatio-temporal world model. Using the developed criteria, a qualitative analysis will be applied to the selected methods to remove unfit options.
Then, experiments will be run on the remaining methods to provide comparative benchmarks. Finally, the results will be analyzed and recommendations to ROPOD will be made.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but only a few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include off-line and semi on-line allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
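For intuition, auction-based allocators in the SSI family assign tasks round by round to the cheapest bidder. A generic, minimal sketch, in which the cost function and the toy data are illustrative assumptions and not the ROPOD implementation:

```python
import itertools

def ssi_allocate(tasks, robots, cost):
    """Sequential single-item auction: in each round every robot bids its
    marginal cost for every open task; the cheapest (robot, task) bid wins."""
    schedules = {r: [] for r in robots}
    unallocated = set(tasks)
    while unallocated:
        robot, task = min(
            itertools.product(robots, unallocated),
            key=lambda rt: cost(rt[0], rt[1], schedules[rt[0]]),
        )
        schedules[robot].append(task)
        unallocated.remove(task)
    return schedules

# Toy usage: cost = distance from robot to task plus a load penalty.
positions = {"r1": 0.0, "r2": 10.0}
task_pos = {"t1": 2.0, "t2": 9.0, "t3": 5.0}
marginal = lambda r, t, sched: abs(positions[r] - task_pos[t]) + 3.0 * len(sched)
print(ssi_allocate(list(task_pos), list(positions), marginal))
```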
The Learning Culture Survey (LCS) is a questionnaire-based study investigating students’ perceptions of and expectations towards Higher Education (HE). The aim of this survey is to improve our understanding of the sources of cultural conflicts in educational scenarios. This understanding should help us to predict potential conflict situations and develop supportive measures.
After three years of development, the LCS was initiated in 2010 in South Korea and Germany. During the following years, the investigations were extended to further countries. The results provided insights about the cultural context of HE in general on the one hand, and about specific (national/regional) characteristics of learners in HE on the other. Most issues targeted with the questionnaire were directly linked to value systems. Thus, we expected from the beginning that the collected data would remain valid over longer periods of time. However, we had no evidence regarding the actual persistence of learning culture. For a study designed to be implemented at a global scale and to provide input for further applications, persistence is a basic condition to justify related investigations.
To answer the question of persistence, we repeated the LCS at our university every four years between 2010 and 2018/19. Apart from a small number of slight changes, explainable by their situational context, the overall results remained consistent over the investigated years. In this paper, after an introduction of the LCS’ concept, setting and general results from the past years, we present the insights from our most recently finalized longitudinal study on learning culture.
Digital transformation in Higher Education and Science is a mission-critical demand to prepare educational institutions for future competition on the international market. In many cases, digitization goes along with the search for and acquisition of new software. For easily exchangeable software, wrong product decisions lead, in the worst case, to calculable financial losses. However, if a planned software system requires a lot of technological adjustments and is to be applied as a central component of a business- and/or security-critical environment, wrong decisions during the software acquisition process might lead to hardly calculable damage. The questions that arise are how to decide on a product and how many resources should be invested in the acquisition process.
We planned to apply a commercial Business Support System to replace the currently used in-house developed software. Our goals were to increase our university’s level of data security, to ease the interaction between stakeholders, to eliminate media discontinuities, to improve process management and transparency, and to reduce the execution time of automated processes. Along with the introduction of the electronic case file, our agenda stipulates the digitization (and automation) of administrative university processes, especially, but not limited to, the student self-service and the administrative student life cycle. Usual tools and practices, commonly applied to (simple) software acquisition, failed in our scenario.
With the case study introduced in this paper, we address all persons involved in software acquisition processes: from our experiences, we strongly recommend placing greater value on an exhaustively completed acquisition process than on short-term economic advantages.
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting - in terms of visualization and interaction - for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details. Here, the users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space. Therefore, the user has to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task in order to approach it in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also detected and describe multiple user roles, as well as a new coupling style that does not fit either established category (loosely or tightly coupled). Moreover, we observed a territory type that has not been mentioned previously in research. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for long-time work.
The Peren-Clement index (PCI) is a methodology to analyze country-specific risk for businesses engaged in international trade and direct investment. This index, established in 1998, provides a guideline when deciding which foreign markets offer the possibility for additional business engagement and investment, and to what extent existing engagement or investment can be increased or should be reduced.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the relationship between training and performance. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject’s individual physical response to training load. Although the simulation of the recorded training data in most cases shows useful results when the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of former training history can be analyzed separately – we call these terms preload – it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of that of the FF-Model without preloads; its prediction error is around 54% of that of the FF-Model without preloads.
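For reference, a hedged sketch of the classic FF-Model structure, with the preload idea rendered schematically as exponentially decaying contributions of pre-study training (the paper's exact formulation is not reproduced here, and all parameter values are illustrative):

```python
import numpy as np

def ff_model(loads, p0, k1, tau1, k2, tau2, g0=0.0, h0=0.0):
    """Fitness-fatigue model: performance = basic level + fitness - fatigue.

    loads: daily training loads w(s).  g0/h0 are schematic 'preload' terms,
    i.e. decaying fitness/fatigue after-effects of training done before day 0.
    """
    perf = np.empty(len(loads))
    for t in range(1, len(loads) + 1):
        past = np.arange(1, t)                 # training days s < t
        w = loads[:t - 1]
        fitness = k1 * np.sum(w * np.exp(-(t - past) / tau1))
        fatigue = k2 * np.sum(w * np.exp(-(t - past) / tau2))
        preload = g0 * np.exp(-t / tau1) - h0 * np.exp(-t / tau2)
        perf[t - 1] = p0 + preload + fitness - fatigue
    return perf

# Toy usage: four weeks of constant load, illustrative parameters.
loads = np.full(28, 100.0)
print(ff_model(loads, p0=500, k1=1.0, tau1=42, k2=2.0, tau2=7).round(1)[:7])
```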
This work presents the preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems, using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. Even more so, when considering distributed robotic systems, the task of fault detection and diagnosis becomes exponentially difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed, and with distributed robotic systems generating large amounts of data, machine learning becomes an attractive method of modelling, especially because of its high performance. However, with current-day methods such as artificial neural networks (ANNs), the issue of explainability arises, where learnt models lack the ability to give explainable reasons behind their decisions.
This paper presents current trends in methods for data collection from distributed systems, inductive logic programming (ILP) as an explainable machine learning method, and fault detection and diagnosis.
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on some specific faults which arise from the interaction of a robot with its real world environment due to insufficient knowledge for action execution.
In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model.
In this work we investigate such symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation together with the most suitable learning paradigm are identified.
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled to detect faults in the sensors, if the defined dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method on simulated signals with the Pearson correlation, and show that the method elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but is also prone to detecting false positives. This suggests that this method can be used as a weak detection of faults, but other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
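As a minimal illustration of the pairwise test, statsmodels' Granger causality test can be run on two synthetic streams in which one lags the other; the signals and the fault scenario are invented for illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.roll(x, 2) + 0.3 * rng.normal(size=n)   # y follows x with a 2-sample lag

# Column order matters: the test asks whether the SECOND column helps
# predict the FIRST beyond the first column's own history.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=4)

# p-value of the F-test at lag 2; a small value signals a dependency,
# whose later disappearance would indicate a fault in this scheme.
print(results[2][0]["ssr_ftest"][1])
```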
This paper proposes an approach to ANN-based temperature controller design for a plastic injection moulding system. The design approach is applied to the development of a controller based on a combination of a classical ANN and an integrator. The controller provides a fast temperature response and zero steady-state error for three typical heaters (bar, nozzle, and cartridge) of a plastic moulding system. Simulation results in Matlab Simulink, compared against an industrial PID regulator, have shown the advantages of the controller, such as significantly less overshoot and faster transients (compared to PID with autotuning) for all examined heaters. In order to verify the proposed approach, the designed ANN controller was implemented and tested using an experimental setup based on an STM32 board.
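Schematically, the controller combines an ANN feedforward term driven by the tracking error with an integrator that removes the steady-state error. The sketch below shows only this structure: the network is an untrained, random-weight stand-in, and the plant, gains, and sizes are illustrative assumptions, not the paper's design:

```python
import numpy as np

class AnnIntegratorController:
    """ANN term plus integral term; the tiny random-weight MLP stands in
    for a trained network purely to show the controller structure."""
    def __init__(self, ki=0.5, dt=0.1, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(1, hidden))
        self.w2 = rng.normal(size=(hidden, 1))
        self.ki, self.dt, self.integral = ki, dt, 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        ann_out = float(np.tanh(np.array([[error]]) @ self.w1) @ self.w2)
        self.integral += self.ki * error * self.dt   # removes steady-state error
        return ann_out + self.integral

# Toy closed loop with a first-order thermal plant (illustrative numbers).
ctrl, temp = AnnIntegratorController(), 20.0
for _ in range(2000):
    power = ctrl.step(setpoint=80.0, measurement=temp)
    temp += 0.1 * (-0.5 * (temp - 20.0) + 0.2 * power)
print(round(temp, 1))   # settles at the 80 degree setpoint
```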
Quantifying Interference in WiLD Networks using Topography Data and Realistic Antenna Patterns
(2019)
Avoiding possible interference is a key aspect to maximize the performance in Wi-Fi based Long Distance networks. In this paper we quantify self-induced interference based on data derived from our testbed and match the findings against simulations. By enhancing current simulation models with two key elements we significantly reduce the deviation between testbed and simulation: the usage of detailed antenna patterns compared to the cone model and propagation modeling enhanced by license-free topography data. Based on the gathered data we discuss several possible optimization approaches such as physical separation of local radios, tuning the sensitivity of the transmitter and using centralized compared to distributed channel assignment algorithms. While our testbed is based on 5 GHz Wi-Fi, we briefly discuss the possible impact of our results to other frequency bands.
Synthesis of Substituted Hydroxyapatite for Application in Bone Tissue Engineering and Drug Delivery
(2019)
Gas Chromatography
(2019)
Gas chromatography (GC) is one of the most important types of chromatography used in analytical chemistry for separating and analyzing organic compounds. Today, gas chromatography is one of the most widespread investigation methods in instrumental analysis. This technique is used in the laboratories of the chemical, petrochemical, and pharmaceutical industries, in research institutes, and also in clinical, environmental, and food and beverage analysis. This book is the outcome of contributions by experts in the field of gas chromatography and includes a short history of gas chromatography, an overview of derivatization methods and sample preparation techniques, a comprehensive study on pyrazole mass spectrometric fragmentation, and a GC/MS/MS method for the determination and quantification of pesticide residues in grape samples.
It is shown that the electrochemical kinetics of alkaline methanol oxidation can be reduced by setting certain fast reactions contained in it to a steady state. As a result, the underlying system of Ordinary Differential Equations (ODEs) is transformed into a system of Differential-Algebraic Equations (DAEs). We measure the precision characteristics of this transformation and discuss the consequences of the obtained model reduction.
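Generically, the reduction can be written as a quasi-steady-state limit: for a kinetic system with slow species x and fast species y,

\[
\dot{x} = f(x, y), \qquad \varepsilon\,\dot{y} = g(x, y), \qquad 0 < \varepsilon \ll 1,
\]

setting the fast reactions to steady state (\(\varepsilon \to 0\)) replaces the fast differential equation by an algebraic constraint, yielding the DAE

\[
\dot{x} = f(x, y), \qquad 0 = g(x, y).
\]

This is a schematic illustration of the ODE-to-DAE transformation, not the specific methanol-oxidation kinetics of the paper.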
The paper presents the topological reduction method applied to gas transport networks, using contraction of series, parallel and tree-like subgraphs. The contraction operations are implemented for pipe elements, described by quadratic friction law. This allows significant reduction of the graphs and acceleration of solution procedure for stationary network problems. The algorithm has been tested on several realistic network examples. The possible extensions of the method to different friction laws and other elements are discussed.
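For intuition, assume a quadratic friction law of the form \(p_{\mathrm{in}}^2 - p_{\mathrm{out}}^2 = r\,q\,|q|\) with pipe resistance \(r\) and flow \(q\) (the paper's exact formulation may differ). Two pipes in series then contract to an equivalent pipe with

\[
r_{\mathrm{series}} = r_1 + r_2,
\]

while two parallel pipes sharing the same pressure drop combine as

\[
\frac{1}{\sqrt{r_{\mathrm{parallel}}}} = \frac{1}{\sqrt{r_1}} + \frac{1}{\sqrt{r_2}},
\]

since each branch carries \(q_i = \sqrt{\Delta/r_i}\) for the common \(\Delta = p_{\mathrm{in}}^2 - p_{\mathrm{out}}^2\).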
Survival of patients with pediatric acute lymphoblastic leukemia (ALL) after allogeneic hematopoietic stem cell transplantation (allo-SCT) is mainly compromised by leukemia relapse, which carries a dismal prognosis. As novel individualized therapeutic approaches are urgently needed, we performed whole-exome sequencing of leukemic blasts of 10 children with post–allo-SCT relapses with the aim of thoroughly characterizing the mutational landscape and identifying druggable mutations. We found that post–allo-SCT ALL relapses display highly diverse and mostly patient-individual genetic lesions. Moreover, mutational cluster analysis showed substantial clonal dynamics during leukemia progression from initial diagnosis to relapse after allo-SCT. Only very few alterations stayed constant over time. This dynamic clonality was exemplified by the detection of thiopurine resistance-mediating mutations in the nucleotidase NT5C2 in 3 patients’ first relapses, which disappeared in the post–allo-SCT relapses on relief of the selective pressure of maintenance chemotherapy. Moreover, we identified TP53 mutations in 4 of 10 patients after allo-SCT, reflecting acquired chemoresistance associated with the selective pressure of prior antineoplastic treatment. Finally, in 9 of 10 children’s post–allo-SCT relapses, we found alterations in genes for which targeted therapies with novel agents are readily available. We could show efficient targeting of leukemic blasts by APR-246 in 2 patients carrying TP53 mutations. Our findings shed light on the genetic basis of post–allo-SCT relapse and may pave the way for unraveling novel therapeutic strategies in this challenging situation.
Scratch assays enable the study of the migration process of an injured adherent cell layer in vitro. An apparatus for the reproducible performance of scratch assays and cell harvesting has been developed that meets the requirements for reproducibility in tests as well as easy handling. The entirely autoclavable setup is divided into a sample translation and a scratching system. The translational system is compatible with standard culture dishes and can be modified to adapt to different cell culture systems, while the scratching system can be adjusted according to angle, normal force, shape, and material to adapt to specific questions and demanding substrates. As a result, a fully functional prototype can be presented. This system enables the creation of reproducible and clear scratch edges with a low scratch border roughness within a monolayer of cells. Moreover, the apparatus allows the collection of the migrated cells after scratching for further molecular biological investigations without the need for a second processing step. For comparison, the mechanical properties of manually performed scratch assays are evaluated.
The number of studies on work breaks is growing rapidly, as is the importance of this subject, with research showing that work breaks increase employees’ wellbeing, performance, and workplace safety. However, comparing the results of work break research is difficult, since the study designs and methods are heterogeneous and there is no standard theoretical model for work breaks. Based on a systematic literature search, this scoping review included a total of 93 studies on experimental work break research conducted over the last 30 years. This scoping review provides a first structured evaluation regarding the underlying theoretical framework, the variables investigated, and the measurement methods applied. Studies using a combination of measurement methods from the categories “self-report measures,” “performance measures,” and “physiological measures” are most common and to be preferred in work break research. This overview supplies important information for ergonomics researchers, allowing them to design work break studies with a more structured and stronger theory-based approach. A standard theoretical model for work breaks is needed in order to further increase the comparability of studies in the field of experimental work break research in the future.
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, high frequencies of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of continuous experience of positive events. Our study adds a temporal component and informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
Application developers constitute an important part of a digital platform’s ecosystem. Knowledge about psychological processes that drive developer behavior in platform ecosystems is scarce. We build on the lead userness construct, which comprises two dimensions, trend leadership and high expected benefits from a solution, to explain how developers’ innovative work behavior (IWB) is stimulated. We employ an efficiency-oriented and a social-political perspective to investigate the relationship between lead userness and IWB. The efficiency-oriented view resonates well with the expected benefit dimension of lead userness, while the social-political view might be interpreted as a reflection of trend leadership. Using structural equation modeling, we test our model with a sample of over 400 developers from three platform ecosystems. We find that lead userness is indirectly associated with IWB and the performance-enhancing view to be the stronger predictor of IWB. Finally, we unravel differences between paid and unpaid app developers in platform ecosystems.
Data-Driven Robot Fault Detection and Diagnosis Using Generative Models: A Modified SFDD Algorithm
(2019)
This paper presents a modification of the data-driven sensor-based fault detection and diagnosis (SFDD) algorithm for online robot monitoring. Our version of the algorithm uses a collection of generative models, in particular restricted Boltzmann machines, each of which represents the distribution of sliding window correlations between a pair of correlated measurements. We use such models in a residual generation scheme, where high residuals generate conflict sets that are then used in a subsequent diagnosis step. As a proof of concept, the framework is evaluated on a mobile logistics robot for the problem of recognising disconnected wheels, such that the evaluation demonstrates the feasibility of the framework (on the faulty data set, the models obtained 88.6% precision and 75.6% recall rates), but also shows that the monitoring results are influenced by the choice of distribution model and the model parameters as a whole.
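The sliding-window correlation features at the heart of the method can be sketched as follows; here an empirical threshold stands in for the restricted Boltzmann machines that the paper uses to model the correlation distribution, and the data and injected fault are synthetic:

```python
import numpy as np

def sliding_corr(a, b, win):
    """Pearson correlation of two measurement streams over sliding windows."""
    return np.array([
        np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
        for i in range(len(a) - win + 1)
    ])

rng = np.random.default_rng(1)
t = np.arange(1000)
wheel_cmd = np.sin(t / 20.0) + 0.05 * rng.normal(size=t.size)
wheel_enc = wheel_cmd + 0.05 * rng.normal(size=t.size)
wheel_enc[600:] = 0.05 * rng.normal(size=400)   # injected fault: disconnected wheel

corr = sliding_corr(wheel_cmd, wheel_enc, win=50)

# Residual-generation stand-in: flag windows whose correlation leaves the
# range observed during nominal operation; flagged pairs would form the
# conflict sets used in the subsequent diagnosis step.
nominal = corr[:400]
flags = (corr < nominal.min()) | (corr > nominal.max())
print("first flagged window:", int(np.argmax(flags)))
```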
Atmospheric aerosols affect the power production of solar energy systems. Their impact depends on both the atmospheric conditions and the solar technology employed. As a region lacking power production capacity and prone to high solar insolation, West Africa shows high potential for the application of solar power systems. However, dust outbreaks, containing high aerosol loads, occur especially in the Sahel, located between the Saharan desert in the north and the Sudanian Savanna in the south. They might affect the whole region for several days with significant effects on power generation. This study investigates the impact of atmospheric aerosols on solar energy production for the example year 2006, making use of six well-instrumented sites in West Africa. Two different solar power technologies, a photovoltaic (PV) and a parabolic trough (PT) power plant, are considered. The daily reduction of solar power due to aerosols is determined over mostly clear-sky days in 2006 with a model chain combining radiative transfer and technology-specific power generation. For mostly clear days, the local daily reduction of PV power (at alternating current) (PVAC) and PT power (PTP) due to the presence of aerosols lies between 13% and 22% and between 22% and 37%, respectively. In March 2006 a major dust outbreak occurred, which serves as an example to investigate the impact of an aerosol extreme event on solar power. During the dust outbreak, daily reductions of PVAC and PTP of up to 79% and 100% occur, with a mean reduction of 20% to 40% for PVAC and of 32% to 71% for PTP during the 12 days of the event.
The design of self-driving cars is one of the most exciting and ambitious challenges of our days, and new research work is published every day. In order to give an orientation, this article presents an overview of various methods used to study the human side of autonomous driving. Simplifying roughly, one can distinguish between design-science-oriented methods (such as Research through Design, Wizard of Oz, or driving simulators) and behavioral science methods (such as surveys, interviews, and observation). We show how these methods are adopted in the context of autonomous driving research and discuss their strengths and weaknesses. Due to the complexity of the topic, we show that mixed-method approaches are suitable to explore the impact of autonomous driving on different levels: the individual, social interaction, and society.
In the literature on occupational stress and recovery from work, several facets of thinking about work during off-job time have been conceptualized. However, research on the focal concepts is currently rather diffuse. In this study we take a closer look at the five most well-established concepts: (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence for (1) five correlated, yet empirically distinct facets of work-related rumination. (2) Each facet yields a unique pattern of association with the eight aspects of employee well-being. For instance, detachment is strongly linked to satisfaction with life and flourishing. Affective rumination is linked particularly to burnout. Problem-solving pondering and positive work reflection yield the strongest links to work engagement. (3) The five facets of work-related rumination are distinct from related concepts, although there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study contributes to clarifying the structure of work-related rumination and extends the nomological network around different types of thinking about work during off-job time and employee well-being.
PosturePairsDB19
(2019)
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of the European and American populations suffer from back pain at some point in their lives. One successful approach to address lower back pain is postural training, which can be supported by wearable devices that provide real-time feedback about the user’s posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes, which are meaningful to professional posture trainers.
Trust is the lubricant of the sharing economy, especially in peer-to-peer carsharing, where you leave a valuable good to a stranger in the hope of getting it back unscathed. Central mechanisms for handling this information gap nowadays are ratings and reviews of other users. The rise of connected car technology opens new possibilities to increase trust by collecting and providing e.g. driving behavior data. At the same time, this means an intrusion into the privacy of the user. Therefore, in this work we explore technological approaches that allow building trust without violating the privacy of individuals. We evaluate to what extent blockchain technology and smart contracts are suitable technologies to meet these challenges by setting up a prototype implementation of a blockchain-based carsharing approach. In this context, we present our research approach and evaluate the prototype in terms of trust and privacy.
The initially large number of variants is reduced by applying custom variant annotation and filtering procedures. This requires complex software toolchains to be set up and data sources to be integrated. Furthermore, increasing study sizes subsequently require higher efforts to manage datasets in a multi-user and multi-institution environment. When the cause of a disease or phenotype is unknown, it is common practice to expect numerous iterations of continued respecification and refinement of filter strategies. Data analysis support during this phase is fundamental, because handling the large volume of data is impossible, or inadequate, for users with limited computer literacy. Constant feedback and communication are necessary when filter parameters are adjusted or the study grows with additional samples. Consequently, variant filtering and interpretation become time-consuming and hinder a dynamic and explorative data analysis by experts.
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming, since the sensor’s sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor’s integrated heater is stable, regulated and error-free (or error-tolerant). Especially the error-free requirement can only be achieved if the power supply implements failure-avoidance and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
Due to global ecological and economic challenges associated with the transition from fossil-based to renewable resources, fundamental studies are being performed worldwide to replace fossil raw materials in plastic production. One aspect of current research is the development of lignin-derived polyols to substitute expensive fossil-based polyol components in polyurethane and polyester production. This article describes the synthesis of bioactive lignin-based polyurethane coatings using unmodified and demethylated Kraft lignins. Demethylation was performed to enhance the reaction selectivity toward polyurethane formation. The antimicrobial activity was tested according to a slightly modified standard test (JIS Z 2801:2010). Besides effects caused by the lignins themselves, triphenylmethane derivatives (brilliant green and crystal violet) were used as additional antimicrobial substances. Results showed increased antimicrobial capacity against Staphylococcus aureus. Furthermore, the coating color could be varied from dark brown to green and blue, respectively.
This work aims to create a natural language generation (NLG) basis for the further development of systems for automatic examination question generation and automatic summarization at Hochschule Bonn-Rhein-Sieg and Fraunhofer IAIS, respectively. Nowadays both tasks are very relevant. The first can significantly simplify university teachers' work, and the second can assist in faster retrieval of knowledge from the excessively large amounts of information that people often work with. We focus on the search for an efficient and robust approach to the controlled NLG problem. Therefore, though the initial idea of the project was the use of generative adversarial networks (GANs), we switched our attention to more robust and easily controllable autoencoders. Thus, in this work we implement an autoencoder for unsupervised discovery of latent space representations of text, and show the ability of the system to generate new sentences based on this latent space. Apart from that, we apply Gaussian mixture techniques in order to obtain meaningful text clusters and thereby try to create a tool that would allow us to generate sentences relevant to the semantics of the Gaussian clusters, e.g. positive or negative reviews or examination questions on a certain topic. The developed system is tested on several datasets and compared to GANs' performance.
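The cluster-conditioned generation step can be pictured as follows: latent codes from the autoencoder's encoder are modeled with a Gaussian mixture, and sampling from the fitted mixture yields codes whose decodings should share the semantics of their cluster. In this sketch the latent vectors are random stand-ins and the decoder call is omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for encoder outputs: two loose clusters in a 64-dim latent space,
# e.g. encodings of negative and positive reviews (purely illustrative).
rng = np.random.default_rng(0)
latents = np.vstack([rng.normal(-1.0, 0.5, (200, 64)),
                     rng.normal(+1.0, 0.5, (200, 64))])

# Fit a Gaussian mixture over the latent space to obtain semantic clusters.
gmm = GaussianMixture(n_components=2, random_state=0).fit(latents)

# Sample latent codes from the mixture; decoding them (decoder not shown)
# would generate sentences matching the sampled component's semantics.
z, component = gmm.sample(5)
print(component)          # which cluster each sampled code belongs to
```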
Traffic sign recognition is an important component of many advanced driver assistance systems, and it is required for fully autonomous driving. Computational performance is usually the bottleneck in using large-scale neural networks for this purpose. SqueezeNet is a good candidate for efficient image classification of traffic signs, but in our experiments it does not reach high accuracy; we believe this is due to a lack of data, which calls for data augmentation. Generative adversarial networks can learn the high-dimensional distribution of empirical data, allowing the generation of new data points. In this paper we apply the pix2pix GAN architecture to generate new traffic sign images and evaluate the use of these images in data augmentation. We were motivated to use pix2pix to translate symbolic sign images into real ones because of the mode collapse observed in conditional GANs. Through our experiments we found that data augmentation using GANs can increase classification accuracy for circular traffic signs from 92.1% to 94.0%, and for triangular traffic signs from 93.8% to 95.3%, an overall improvement of about 2%. However, some traditional augmentation techniques can outperform GAN data augmentation, for example contrast variation on circular traffic signs (95.5%) and displacement on triangular traffic signs (96.7%). Our negative results show that while GANs can be naively used for data augmentation, they are not always the best choice, depending on the problem and the variability in the data.
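The augmentation scheme reduces to translating symbolic sign images with a trained pix2pix model and mixing the outputs into the training set. A hedged sketch, with the generator and datasets as placeholders rather than the paper's code:

```python
# Hedged sketch of GAN-based augmentation: generated sign images are mixed
# into the real training set before the classifier is trained. The generator
# and all arrays are placeholders, not the paper's models or data.
import numpy as np

rng = np.random.default_rng(42)
real_images = rng.random((1000, 32, 32, 3))   # stand-in for real sign crops
real_labels = rng.integers(0, 10, size=1000)

def generator(symbolic_image):
    # stand-in for a trained pix2pix model translating a symbolic
    # (pictogram) sign image into a realistic-looking one
    return symbolic_image + 0.05 * rng.standard_normal(symbolic_image.shape)

symbolic = rng.random((200, 32, 32, 3))       # pictograms of known classes
fake_labels = rng.integers(0, 10, size=200)   # labels carry over unchanged
fake_images = np.stack([generator(s) for s in symbolic])

# Augmented training set = real samples + GAN-generated samples
train_x = np.concatenate([real_images, fake_images])
train_y = np.concatenate([real_labels, fake_labels])
print(train_x.shape, train_y.shape)
```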
Intact Transition Epitope Mapping - Targeted High-Energy Rupture of Extracted Epitopes (ITEM-THREE)
(2019)
Epitope mapping, the identification of antigenic determinants, is essential for the design of novel antibody-based therapeutics and diagnostic tools. ITEM-THREE is a mass spectrometry-based epitope mapping method that can identify epitopes on antigens by generating an immune complex in electrospray-compatible solutions, adding an antibody of interest to a mixture of peptides of which at least one holds the antibody's epitope. This mixture is nano-electrosprayed without purification. Identification of the epitope peptide is performed within a mass spectrometer that provides an ion mobility cell sandwiched between two collision cells, with this ion manipulation setup flanked by a quadrupole mass analyzer on one side and a time-of-flight mass analyzer on the other. In a stepwise fashion, immune-complex ions are separated from unbound peptide ions and dissociated to release epitope peptide ions. Immune complex-released peptide ions are separated from antibody ions and fragmented by collision-induced dissociation. Epitope-containing peptide fragment ions are recorded, and mass lists are submitted to an unsupervised database search, thereby retrieving both the amino acid sequence of the epitope peptide and the originating antigen. ITEM-THREE was developed with antiTRIM21 and antiRA33 antibodies, whose epitopes were known, subjecting them to mixtures of synthetic peptides of which one contained the respective epitope. ITEM-THREE was then successfully tested with an enzymatic digest of His-tagged recombinant human β-actin and an antiHis-tag antibody, as well as with an enzymatic digest of recombinant human TNFα and an antiTNFα antibody whose epitope was previously unknown.
Healing of large bone defects requires implants or scaffolds that provide structural guidance for cell growth, differentiation, and vascularization. In the present work, an agarose-hydroxyapatite composite scaffold was developed that acts not only as a 3D matrix but also as a release system. Hydroxyapatite (HA) was incorporated into the agarose gels in situ in various ratios by a simple procedure consisting of precipitation, cooling, washing, and drying. The resulting gels were characterized regarding composition, porosity, mechanical properties, and biocompatibility. A pure phase of carbonated HA was identified in the scaffolds, which had pore sizes of up to several hundred micrometers. Mechanical testing revealed elastic moduli of up to 2.8 MPa for lyophilized composites. MTT testing on human mesenchymal stem cells (hMSCs) and osteosarcoma MG-63 cells proved the biocompatibility of the scaffolds. Furthermore, scaffolds were loaded with model drug compounds for guided hMSC differentiation. Different release kinetic models were evaluated for adenosine 5′-triphosphate (ATP) and suramin, and the data showed sustained release behavior over four days.
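The abstract does not state which release kinetic models were evaluated; models commonly fitted in such release studies include the following, where $M_t/M_\infty$ is the fraction of drug released at time $t$:

```latex
% Commonly fitted drug-release models (illustrative; the specific models
% evaluated in the study are not named in the abstract):
\begin{align*}
  \frac{M_t}{M_\infty} &= k_0\, t          && \text{zero order}\\
  \frac{M_t}{M_\infty} &= 1 - e^{-k_1 t}   && \text{first order}\\
  \frac{M_t}{M_\infty} &= k_H \sqrt{t}     && \text{Higuchi}\\
  \frac{M_t}{M_\infty} &= k\, t^{\,n}      && \text{Korsmeyer--Peppas}
\end{align*}
```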
Change - shaping reality
(2019)
Surrogate models are used to reduce the burden of expensive-to-evaluate objective functions in optimization. By creating models which map genomes to objective values, these models can estimate the performance of unknown inputs, and so be used in place of expensive objective functions. Evolutionary techniques such as genetic programming or neuroevolution commonly alter the structure of the genome itself. A lack of consistency in the genotype is a fatal blow to data-driven modeling techniques: interpolation between points is impossible without a common input space. However, while the dimensionality of genotypes may differ across individuals, in many domains, such as controllers or classifiers, the dimensionality of the input and output remains constant. In this work we leverage this insight to embed differing neural networks into the same input space. To judge the difference between the behavior of two neural networks, we give them both the same input sequence and examine the difference in output. This difference, the phenotypic distance, can then be used to situate these networks in a common input space, allowing us to produce surrogate models which can predict the performance of neural networks regardless of topology. In a robotic navigation task, we show that models trained using this phenotypic embedding perform as well as or better than those trained on the weight values of a fixed-topology neural network. We establish such phenotypic surrogate models as a promising and flexible approach which enables surrogate modeling even for representations that undergo structural changes.
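A minimal sketch of the phenotypic distance idea: two networks of different topology but identical input/output dimensionality receive the same probe inputs, and the distance between their outputs situates them in a common space. The network construction and distance metric below are illustrative choices, not the paper's implementation.

```python
# Sketch of the phenotypic distance: feed the same input sequence to two
# networks (here arbitrary callables with differing hidden sizes) and
# measure how much their outputs differ on that shared probe set.
import numpy as np

rng = np.random.default_rng(1)
probe_inputs = rng.normal(size=(64, 8))   # shared probe input sequence

def make_network(hidden):
    # stand-in for an evolved network; topologies may differ freely as
    # long as input (8) and output (2) dimensionality stay fixed
    w1 = rng.normal(size=(8, hidden))
    w2 = rng.normal(size=(hidden, 2))
    return lambda x: np.tanh(x @ w1) @ w2

net_a = make_network(hidden=5)    # different topologies,
net_b = make_network(hidden=17)   # same input/output dimensions

def phenotypic_distance(f, g, inputs):
    # mean Euclidean distance between the two networks' outputs
    return float(np.mean(np.linalg.norm(f(inputs) - g(inputs), axis=1)))

print(phenotypic_distance(net_a, net_b, probe_inputs))
```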
Are quality diversity algorithms better at generating stepping stones than objective-based search?
(2019)
The route to the solution of complex design problems often lies through intermediate "stepping stones" which bear little resemblance to the final solution. By greedily following the path of greatest fitness improvement, objective-based search overlooks and discards stepping stones which might be critical to solving the problem. Here, we hypothesize that Quality Diversity (QD) algorithms are a better way to generate stepping stones than objective-based search: by maintaining a large set of solutions which are of high quality but phenotypically different, these algorithms collect promising stepping stones while protecting them in their own "ecological niche". To demonstrate the capabilities of QD, we revisit the challenge of recreating images produced by user-driven evolution, a classic challenge which spurred work in novelty search and illustrated the limits of objective-based search. We show that QD far outperforms objective-based search in matching user-evolved images. Further, our results suggest some intriguing possibilities for leveraging the diversity of solutions created by QD.
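For readers unfamiliar with QD, a minimal MAP-Elites-style loop (one common QD algorithm) illustrates how niching preserves stepping stones: a candidate competes only against the elite of its own behavioral niche, so phenotypically different solutions survive even when their fitness is low. The objective, behavior descriptor, and parameters below are toy assumptions, not the paper's setup.

```python
# Minimal MAP-Elites-style loop: solutions compete only within their
# behavioral niche, so diverse stepping stones are preserved.
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    # toy objective to be maximized
    return -np.sum((x - 0.7) ** 2)

def behavior(x):
    # toy 1-D behavior descriptor in [0, 1]
    return float(np.clip(np.mean(x), 0.0, 1.0))

n_niches = 20
archive = {}        # niche index -> (fitness, genome)

for _ in range(5000):
    if archive and rng.random() < 0.9:
        # mutate a randomly chosen elite
        parent = archive[rng.choice(list(archive))][1]
        child = parent + 0.1 * rng.standard_normal(parent.shape)
    else:
        child = rng.random(4)
    niche = min(int(behavior(child) * n_niches), n_niches - 1)
    f = fitness(child)
    if niche not in archive or f > archive[niche][0]:
        archive[niche] = (f, child)   # replace only within the same niche

print(f"{len(archive)} niches filled; best fitness "
      f"{max(v[0] for v in archive.values()):.3f}")
```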
In this paper, we provide a participatory design study of a mobile health platform for older adults that provides an integrative perspective on health data collected from different devices and apps. We illustrate the diversity and complexity of older adults’ perspectives in the context of health and technology use, the challenges that follow for the design of mobile health platforms supporting active and healthy ageing (AHA), and our approach to addressing these challenges through a participatory design (PD) process. Interviews were conducted with older adults aged 65+ in a two-month study with the goal of understanding their perspectives on health and on technologies for AHA support. We identified challenges and derived design ideas for a mobile health platform called “My-AHA”. For researchers in this field, the structured documentation of our procedures and results, as well as the implications derived, provides valuable insights for the design of mobile health platforms for older adults.
This work addresses the issue of finding an optimal flight zone for a side-by-side tracking and following Unmanned Aerial Vehicle (UAV) that adheres to the space-restricting factors imposed by a dynamic Vector Field Extraction (VFE) algorithm. The VFE algorithm demands a roughly perpendicular field of view from the UAV to the tracked vehicle, thereby enforcing the space-restricting factors of distance, angle, and altitude. The objective of the UAV is to perform side-by-side tracking and following of a lightweight ground vehicle while acquiring high-quality video of tufts attached to the side of the tracked vehicle. The recorded video is supplied to the VFE algorithm, which produces the positions and deformations of the tufts over time as they interact with the surrounding air, resulting in an airflow model of the tracked vehicle. The present limitations of wind tunnel tests and computational fluid dynamics simulations suggest the use of a UAV for real-world evaluation of the aerodynamic properties of the vehicle’s exterior. The novelty of the proposed approach lies in defining the specific flight-zone-restricting factors while adhering to the VFE algorithm; as a result, we were able to formalize a locally static and globally dynamic geofence that is attached to the tracked vehicle and encloses the UAV.
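The three space-restricting factors can be checked with simple geometry relative to the tracked vehicle. The sketch below is illustrative; all bounds and the angle convention are assumed values, not those derived in the paper.

```python
# Illustrative check of the three space-restricting factors (distance,
# angle, altitude) relative to the tracked vehicle; all bounds are
# assumptions, not values derived in the paper.
import math

def inside_geofence(uav, vehicle,
                    dist_range=(3.0, 8.0),    # assumed metres
                    angle_max_deg=15.0,       # assumed max deviation from
                                              # the perpendicular view
                    alt_range=(1.0, 3.0)):    # assumed metres
    dx, dy = uav["x"] - vehicle["x"], uav["y"] - vehicle["y"]
    dist = math.hypot(dx, dy)

    # deviation of the UAV's bearing from the vehicle's side normal
    side_normal = vehicle["heading_deg"] + 90.0
    bearing = math.degrees(math.atan2(dy, dx))
    angle_dev = abs((bearing - side_normal + 180.0) % 360.0 - 180.0)

    return (dist_range[0] <= dist <= dist_range[1]
            and angle_dev <= angle_max_deg
            and alt_range[0] <= uav["z"] <= alt_range[1])

uav = {"x": -5.0, "y": 0.5, "z": 2.0}
vehicle = {"x": 0.0, "y": 0.0, "heading_deg": 90.0}
print(inside_geofence(uav, vehicle))
```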
In the literature on occupational stress and recovery from work, several facets of thinking about work in off-job time have been conceptualized. However, research on the focal concepts is currently rather disintegrated. In this study we take a closer look at the five most established concepts, namely (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence (1) that the five facets of work-related rumination are highly related, yet empirically distinct, (2) that each facet contributes uniquely to explaining variance in certain aspects of employee well-being, and (3) that they are distinct from related concepts, although there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study contributes to clarifying the structure of work-related rumination and extends the nomological network around different types of thinking about work in off-job time and employee well-being.
Estimating the impact of successful completion of vocational education on employment outcomes
(2019)
The initial phase in real-world engineering optimization and design is a process of discovery in which requirements cannot all be specified in advance or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique chance to support engineers and designers in the search for what is possible and high-performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection it can be determined whether the optimization is drifting away from the user's preferences; the optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that can take the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
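The drift-penalty idea can be sketched as a soft constraint: a simple model of the user's past selections scores candidates, and the distance to that model is subtracted from the objective. The preference model (a mean of selected genomes) and the penalty weight below are illustrative assumptions, not the paper's formulation.

```python
# Sketch of the user-selection drift penalty: candidates far from the
# modeled user preference receive a lower effective objective value.
import numpy as np

rng = np.random.default_rng(3)
selected = rng.random((10, 4))            # genomes the user selected earlier
preference_center = selected.mean(axis=0)  # crude preference model

def objective(x):
    # toy task performance to be maximized
    return -np.sum((x - 0.5) ** 2)

def drift_penalty(x, weight=2.0):
    # distance from the modeled preference; larger drift, larger penalty
    return weight * float(np.linalg.norm(x - preference_center))

def constrained_objective(x):
    # penalized objective used by the interactive QD loop
    return objective(x) - drift_penalty(x)

candidate = rng.random(4)
print(objective(candidate), constrained_objective(candidate))
```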