H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (50)
- Fachbereich Ingenieurwissenschaften und Kommunikation (22)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (17)
- Fachbereich Angewandte Naturwissenschaften (9)
- Fachbereich Wirtschaftswissenschaften (6)
- Institut für Cyber Security & Privacy (ICSP) (4)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (2)
- Fachbereich Sozialpolitik und Soziale Sicherung (1)
- Institut für Medienentwicklung und -analyse (IMEA) (1)
- Institut für funktionale Gen-Analytik (IFGA) (1)
Document Type
- Preprint (88)
Year of publication
Keywords
- Evolutionary Computation (2)
- FOS: Computer and information sciences (2)
- burnout (2)
- inborn error of metabolism (2)
- ketone body (2)
- lignin (2)
- metabolic acidosis (2)
- metabolic decompensation (2)
- organic aciduria (2)
- psychological detachment (2)
- unsupervised learning (2)
- work engagement (2)
- ACAT1 (1)
- AML (1)
- ATR-FTIR (1)
- Air Pollution Monitoring (1)
- Artificial Intelligence (cs.AI) (1)
- Authentication features (1)
- Autoencoder (1)
- Automatic Differentiation (1)
- Automatic Short Answer Grading (1)
- BERT (1)
- Ball Tracking (1)
- Battery Packs (1)
- Battery degradation (1)
- Bayesian Deep Learning (1)
- Bayesian optimization (1)
- Big Data Analysis (1)
- Bioinformatics (1)
- Black-Box Optimization (1)
- COVID-19 (1)
- Calendar ageing (1)
- Capacity fade (1)
- Cell-to-cell variations (1)
- Compositional Pattern Producing Networks (1)
- Computational Fluid Dynamics (1)
- Computer Science - Computer Vision and Pattern Recognition (1)
- Computer Science - Learning (1)
- Cutting sticks problem (1)
- Cyclic Ageing (1)
- Cyclic testing (1)
- Deep Learning (1)
- Dimensionality reduction (1)
- Drosophila (1)
- Drug (1)
- ELMo (1)
- Electric Vehicles (1)
- Facial Emotion Recognition (1)
- Feature Model (1)
- Filtering (1)
- Folin-Ciocalteu assay (1)
- GPT (1)
- GPT-2 (1)
- Gender-based violence (1)
- HEB mixer (1)
- HMGCL (1)
- HSP90 (1)
- Heat Shock Protein (1)
- Hydrogen storage (1)
- Hyper-parameter Tuning (1)
- IoT (1)
- Java grid engine (1)
- Ketogenesis (1)
- Ketolysis (1)
- Knowledge Graphs (1)
- LOTUS Sensor Node (1)
- Lattice Boltzmann Method (1)
- Lattice Boltzmann Method Code (1)
- Lennard-Jones parameters (1)
- Leukemia (1)
- Level-of-Detail (1)
- Lithium-ion (1)
- LoRa (1)
- LoRaWAN (1)
- Low-Power Wide Area Network (LP-WAN) (1)
- MESD (1)
- Machine Learning (1)
- Machine Learning (cs.LG) (1)
- Machine learning (1)
- Measurement (1)
- Mebendazole (1)
- Metal hydride (1)
- Molecular dynamics (1)
- Multi-Solution Optimization (1)
- Multidimensional Z-transforms (1)
- Natural Language Processing (1)
- Navigation (1)
- Neural networks (1)
- Neural representations (1)
- Nonlinear sampled-data system (1)
- OH-number (1)
- Optical Flow (1)
- Path Loss (1)
- Pulse-width modulation (1)
- Pytorch (1)
- Quality Diversity (1)
- Quality diversity (1)
- Quantum mechanics (1)
- Random distribution (1)
- Range variability (1)
- Real-Time Image Processing (1)
- Rendering (1)
- Risk factors (1)
- Risk-based Authentication (RBA) (1)
- Robot Perception (1)
- Robotics (1)
- Robotics (cs.RO) (1)
- Rural women (1)
- SEC (1)
- SEMA (1)
- SIS mixer (1)
- Scale Tuning (1)
- Set partition problem (1)
- Sexual violence (1)
- Side Channel Analysis (1)
- Simulation (1)
- Spherical Treadmill (1)
- Standard deviation (1)
- Tautomers (1)
- TinyECC 2.0 (1)
- Transfer Learning (1)
- Transformers (1)
- UV-VIS (1)
- Uganda (1)
- Uncertainty Quantification (1)
- Urban (1)
- Usable Security (1)
- Virtual Reality (1)
- Volterra-Wiener series (1)
- Wireless Sensor Network (1)
- XRD (1)
- actinometry (1)
- active packaging (1)
- adhesion (1)
- affective events (1)
- affective rumination (1)
- airborne astronomy (1)
- anomaly detection (1)
- antioxidant activity (1)
- basic human needs, evolution of behavior (1)
- beta-ketothiolase (1)
- bio-based polymers (1)
- bioeconomy (1)
- biomass (1)
- biomaterial (1)
- bone regeneration (1)
- caching (1)
- cardiac magnetic resonance (1)
- cellular automata (1)
- computer vision (1)
- convolutional neural networks (1)
- dc electric drive (1)
- design-of-experiments (1)
- designing air flow (1)
- distributed services (1)
- diversity (1)
- domain adaptation (1)
- drug release (1)
- elite athletes (1)
- employee well-being (1)
- essential oil (1)
- experience sampling (1)
- far-infrared astronomy (1)
- feature discovery (1)
- food waste (1)
- force field (1)
- force-retraction displacement-curve (1)
- genetic neutrality (1)
- global illumination (1)
- growth curve modeling (1)
- heterodyne spectroscopy (1)
- human behavior (1)
- hydrogel (1)
- hyperammonemia (1)
- hypoglycemia (1)
- irritation (1)
- isoleucine (1)
- job demands-resources model (1)
- kraft lignin (1)
- leucine (1)
- lignocellulose feedstock (1)
- local optimization (1)
- low-cost air sensor (1)
- mental health (1)
- multi-objective optimization (1)
- multimodal optimization (1)
- multiscale parameterization (1)
- multivariate data processing (1)
- natural additives (1)
- negative work reflection (1)
- non-linear projection (1)
- object detection (1)
- objective function (1)
- organosolv (1)
- osteogenesis (1)
- overcommitment (1)
- parameter optimisation (1)
- parametric (1)
- path tracing (1)
- permeability (1)
- perseverative cognition (1)
- phenotypic diversity (1)
- phenotypic feature (1)
- phenotypic niching (1)
- photocatalysis (1)
- photolysis (1)
- plant extracts (1)
- positive work reflection (1)
- pressure sensitive adhesive (1)
- problem-solving pondering (1)
- psychological needs (1)
- rds encoding (1)
- real-time (1)
- receivers (1)
- representation (1)
- rumination (1)
- satisfaction with life (1)
- scaffolds (1)
- sensitization-satiation effects (1)
- shelf life (1)
- stem cells (1)
- subjective well-being (1)
- submillimeter-wave technology (1)
- superconducting devices (1)
- surrogate assisted phenotypic niching (1)
- surrogate models (1)
- sustainable packaging (1)
- tack (1)
- thriving (1)
- tissue engineering (1)
- total phenol content (1)
- traffic surveillance (1)
- transdermal therapeutic systems (1)
- transfer learning (1)
- unsupervised clustering (1)
- variational recurrent autoencoder (1)
- vitality (1)
- weighting factors (1)
- wind nuisance threshold (1)
- wind turbines time series (1)
- work reflection (1)
- work-related rumination (1)
- workflow automation (1)
We present the prototype of a workflow system for the submission of content to a digital object repository. It is based entirely on open-source standard components and features a service-oriented architecture. The front-end consists of Java Business Process Management (jBPM), Java Server Faces (JSF), and Java Server Pages (JSP). A Fedora Repository and a MySQL database management system serve as the back-end. The communication between front-end and back-end uses a SOAP minimal binding stub. We describe the design principles and the construction of the prototype and discuss the possibilities and limitations of workflow creation by administrators. The code of the prototype is open-source and can be retrieved from the escipub project at http://sourceforge.net/ .
In this article we introduce the concept and the first implementation of a lightweight client-server-framework as middleware for distributed computing. On the client side an installation without administrative rights or privileged ports can turn any computer into a worker node. Only a Java runtime environment and the JAR files comprising the workflow client are needed. To connect all clients to the engine one open server port is sufficient. The engine submits data to the clients and orchestrates their work by workflow descriptions from a central database. Clients request new task descriptions periodically, thus the system is robust against network failures. In the basic set-up, data up- and downloads are handled via HTTP communication with the server. The performance of the modular system could additionally be improved using dedicated file servers or distributed network file systems. We demonstrate the design features of the proposed engine in real-world applications from mechanical engineering. We have used this system on a compute cluster in design-of-experiment studies, parameter optimisations and robustness validations of finite element structures.
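The periodic polling scheme described above can be sketched in a few lines. The following is a language-agnostic illustration in Python, not the Java framework itself; the `/next-task` endpoint name, the JSON task format, and the poll interval are illustrative assumptions.

```python
import json
import time
import urllib.request

SERVER = "http://localhost:8080"  # hypothetical engine endpoint

def poll_once(server=SERVER):
    """Request the next task description from the engine (single open port)."""
    try:
        with urllib.request.urlopen(f"{server}/next-task", timeout=5) as resp:
            return json.loads(resp.read())
    except OSError:
        # Network failure: return nothing; the client simply retries on the
        # next cycle, which is what makes the system robust against outages.
        return None

def worker_loop(run_task, interval=10.0):
    """Periodically poll for tasks and execute them on this worker node."""
    while True:
        task = poll_once()
        if task is not None:
            result = run_task(task)
            # In the basic set-up, results would be uploaded back via HTTP.
        time.sleep(interval)
```

Because the client initiates every exchange, no privileged ports or administrative rights are needed on the worker side, matching the design described above.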
The development of robot control programs is a complex task. Many robots differ in their electrical and mechanical structure, which is also reflected in the software. Specific robot software environments support program development, but they are mainly text-based and usually applied by experts in the field with profound knowledge of the target robot. This paper presents a graphical programming environment that aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The environment allows independent robot actions to be scheduled on parallel execution lines and provides mechanisms to avoid side effects of parallel actions. It is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by applying the sequencer to a simulated service robot and a robot for educational purposes.
Suppose we have n keys, n access probabilities for the keys, and n+1 access probabilities for the gaps between the keys. Let h_min(n) be the minimal height of a binary search tree for n keys. We consider the problem of constructing an optimal binary search tree with near-minimal height, i.e. with height h <= h_min(n) + Delta for some fixed Delta. It is shown that for any fixed Delta, optimal binary search trees with near-minimal height can be constructed in time O(n^2). This is as fast as in the unrestricted case. So far, the best known algorithms for the construction of height-restricted optimal binary search trees have running time O(L n^2), where L is the maximal permitted height. Compared to these algorithms, ours is faster by at least a factor of log n, because L is lower-bounded by log n.
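For contrast with the height-restricted setting above, here is a minimal sketch of the classic unrestricted dynamic program (textbook CLRS-style, O(n^3) as written) over key probabilities p and gap probabilities q; the paper's O(n^2) height-restricted construction is not reproduced here.

```python
def optimal_bst(p, q):
    """Expected search cost of an optimal BST (classic dynamic program).

    p[i] : access probability of key i  (i = 0..n-1)
    q[i] : access probability of gap i  (i = 0..n, so len(q) == n + 1)
    """
    n = len(p)
    # e[i][j]: cost of an optimal subtree over keys i..j-1 (empty if i == j)
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    w = [[0.0] * (n + 1) for _ in range(n + 1)]  # total probability mass
    for i in range(n + 1):
        e[i][i] = q[i]
        w[i][i] = q[i]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j - 1] + q[j]
            # try every key r in i..j-1 as the root of this subtree
            e[i][j] = min(e[i][r] + e[r + 1][j] for r in range(i, j)) + w[i][j]
    return e[0][n]
```

The height-restricted variants discussed in the abstract add the permitted height L as a third dimension to this table, which is where the O(L n^2) running time of prior algorithms comes from.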
In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e. a symbol sequence representing an action) in terms of the relations between changes in the observations and the action instances. We present a method for inducing a theory (i.e. a semantic description) of the meaning of a command from a minimal set of background knowledge. All we have is a sequence of observations, from which we extract the kinds of effects that were caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.
We derive rates of convergence for limit theorems that reveal the intricate structure of the phase transitions in a mean-field version of the Blume-Emery-Griffiths model. The theorems consist of scaling limits for the total spin. The model depends on the inverse temperature β and the interaction strength K. The rates of convergence are obtained as (β,K) converges along appropriate sequences (βn,Kn) to points belonging to various subsets of the phase diagram, which include a curve of second-order points and a tricritical point. We apply Stein's method for normal and non-normal approximation, avoiding the use of transforms and supplying bounds, such as those of Berry-Esseen quality, on the approximation error. We observe an additional phase transition phenomenon: depending on how fast Kn and βn converge to points in the various subsets of the phase diagram, different rates of convergence to one and the same limiting distribution occur.
Humans exhibit flexible and robust behavior in achieving their goals. We make suitable substitutions for objects, actions, or tools to get the job done. When opportunities that would allow us to reach our goals with less effort arise, we often take advantage of them. Robots are not nearly as robust in handling such situations. Enabling a domestic service robot to find ways to get a job done by making substitutions is the goal of our work. In this paper, we highlight the challenges faced in our approach to combine Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. We present open questions in modeling the necessary knowledge, creating planning problems, and enabling the system to handle cases where plan generation fails due to missing/unavailable objects.
Since being introduced in the sixties and seventies, semi-implicit Rosenbrock-Wanner (ROW) methods have become an important tool for the time integration of ODE and DAE problems. Over the years, these methods have been further developed in order to save computational effort by allowing approximations of the given Jacobian [5], to reduce effects of order reduction by introducing additional conditions [2, 4], or to exploit the advantages of partial explicit integration by considering underlying Runge-Kutta formulations [1]. As a consequence, a large number of different ROW-type schemes with characteristic properties for solving various problem formulations is available in the literature today.
TinyECC 2.0 is an open-source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side-channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication using 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, but to different degrees. The observed degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. It is extremely critical that in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private-key operation is sufficient to simply read out the secret scalar. MESD requires additional adversary capabilities, and it affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private-key operation. These findings give evidence that in security applications a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement and that, beyond this, appropriate randomizing countermeasures should be added.
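A toy model (not TinyECC code) of why an unprotected scalar multiplication admits single-trace SEMA: in a left-to-right double-and-add, the sequence of doublings and additions visible in the EM trace encodes the secret scalar bit by bit.

```python
def double_and_add_trace(scalar):
    """Operation sequence of an unprotected left-to-right double-and-add.

    Assumes scalar >= 1 (the leading 1-bit is handled implicitly). An
    adversary who can distinguish doublings ('D') from additions ('A') in
    the EM emanation sees this sequence directly.
    """
    trace = []
    for bit in bin(scalar)[3:]:  # skip '0b' and the leading 1-bit
        trace.append("D")        # always double
        if bit == "1":
            trace.append("A")    # add only when the key bit is 1
    return trace

def sema_recover(trace):
    """Read the scalar back off the D/A pattern of a single trace."""
    bits = "1"                   # implicit leading bit
    i = 0
    while i < len(trace):
        assert trace[i] == "D"
        if i + 1 < len(trace) and trace[i + 1] == "A":
            bits += "1"
            i += 2
        else:
            bits += "0"
            i += 1
    return int(bits, 2)
```

Randomizing countermeasures (e.g. scalar blinding or unified add/double formulas) break exactly this bit-to-operation correspondence, which is why the abstract recommends them in addition to a SEMA-resistant configuration.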
Current robot platforms are being employed to collaborate with humans in a wide range of domestic and industrial tasks. These environments require autonomous systems that are able to classify and communicate anomalous situations such as fires, injured persons, and car accidents, or, more generally, any situation that is potentially dangerous for humans. In this paper we introduce an anomaly detection dataset for robot applications, as well as the design and implementation of a deep learning architecture that classifies and describes dangerous situations using only a single image as input. We report a classification accuracy of 97% and a METEOR score of 16.2. We will make the dataset publicly available after this paper is accepted.
In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure, we evaluate on standard benchmark sets. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset. We also apply the recent real-time-capable guided back-propagation visualization technique, which uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of current regularization methods and the visualization of previously hidden features are necessary to reduce the gap between slow-performing and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos and pre-trained architectures have been released under an open-source license in our public repository.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique has the potential to be a powerful tool for design space exploration, but it is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination algorithm (SAIL), introduced here, integrates approximate models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
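A minimal sketch of the underlying MAP-Elites loop that SAIL accelerates. The fitness function, feature descriptor, bin count and mutation scale below are illustrative choices, and the surrogate model and intelligent sampling that distinguish SAIL are deliberately omitted.

```python
import random

def map_elites(fitness, feature, n_bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites: keep the best solution found in each feature bin.

    fitness : scalar objective to maximise
    feature : maps a solution to a value in [0, 1], used to pick its bin
    Returns the archive {bin index: (fitness, solution)}.
    """
    rng = random.Random(seed)
    archive = {}
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # select a random elite and mutate it
            _, parent = archive[rng.choice(list(archive))]
            x = [xi + rng.gauss(0.0, 0.1) for xi in parent]
        else:
            # occasional random restart keeps all bins reachable
            x = [rng.uniform(-1.0, 1.0) for _ in range(2)]
        b = min(int(feature(x) * n_bins), n_bins - 1)
        f = fitness(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)  # illuminate: best-per-bin, not best overall
    return archive
```

For example, `map_elites(lambda x: -sum(xi * xi for xi in x), lambda x: min(max((x[0] + 1) / 2, 0.0), 1.0))` fills one elite per bin of the first coordinate. SAIL's contribution is to answer most of the `fitness(x)` calls in this loop with a cheap surrogate prediction rather than a true (expensive) evaluation.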
Today, more than 70 million tons of lignin are produced by the pulp and paper industry every year. However, the utilization of lignin as a source for chemical synthesis is still limited due to its complex and heterogeneous structure. The purpose of this study was the selective photodegradation of industrially available kraft lignin in order to obtain suitable fragments and building-block chemicals for further utilization, e.g. polymerization. Kraft lignin obtained from softwood black liquor by acidification was dissolved in sodium hydroxide and irradiated at a wavelength of 254 nm, with and without the presence of titanium dioxide in various concentrations. SEC analyses of the irradiated products showed decreasing molar masses and decreasing polydispersity indices over time. At the end of the irradiation period the lignin was depolymerised into fragments as small as the lignin monomers. TOC analyses showed minimal mineralisation due to the depolymerisation process.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2018)
This article gives a summarizing view of differential-algebraic equations (DAEs) and analyzes how new application fields and the corresponding mathematical models lead to innovations both in theory and in numerical analysis for this problem class. Recent numerical methods for nonsmooth dynamical systems subject to unilateral contact and friction illustrate the topicality of this development.
Renewable resources are gaining increasing interest as a source of environmentally benign biomaterials, such as drug encapsulation/release compounds and scaffolds for tissue engineering in regenerative medicine. As lignin is the second most abundant natural polymer, interest in its valorization for biomedical applications is growing rapidly. Depending on the resource and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release, as well as on lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Antioxidant activity is an essential feature required for oxygen-sensitive merchandise and goods, such as food and the corresponding packaging, as well as materials used in cosmetics and biomedicine. For example, vanillin, one of the most prominent antioxidants, is fabricated from lignin, the second most abundant natural polymer in the world. Antioxidant potential is primarily related to the termination of oxidation propagation reactions through hydrogen transfer. The application of technical lignin as a natural antioxidant has not yet been implemented in the industrial sector, mainly due to the complex heterogeneous structure and polydispersity of lignin. Thus, current research focuses on various isolation and purification strategies to improve the compatibility of lignin material with substrates and to enhance its stabilizing effect.
Traffic sign recognition is an important component of many advanced driving assistance systems, and it is required for fully autonomous driving. Computational performance is usually the bottleneck in using large-scale neural networks for this purpose. SqueezeNet is a good candidate for efficient image classification of traffic signs, but in our experiments it does not reach high accuracy, and we believe this is due to lack of data, requiring data augmentation. Generative adversarial networks can learn the high-dimensional distribution of empirical data, allowing the generation of new data points. In this paper we apply the pix2pix GAN architecture to generate new traffic sign images and evaluate the use of these images for data augmentation. We were motivated to use pix2pix to translate symbolic sign images into real ones because of the mode collapse we observed in conditional GANs. Through our experiments we found that data augmentation using GANs can increase classification accuracy for circular traffic signs from 92.1% to 94.0% and for triangular traffic signs from 93.8% to 95.3%, producing an overall improvement of 2%. However, some traditional augmentation techniques can outperform GAN-based augmentation, for example contrast variation on circular traffic signs (95.5%) and displacement on triangular traffic signs (96.7%). Our negative results show that while GANs can be naively used for data augmentation, they are not always the best choice, depending on the problem and the variability of the data.
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open-source game engine for virtual reality projection. The projection was updated at 120 Hz, with a latency of 30 ± 8 ms with respect to ball motion.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is built from off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions – such as ball surface texture, lighting conditions, or ball size – as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost against desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. To do so, we introduce a new nomenclature to indicate unambiguously how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results within 0.11-0.24 and 0.09-0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices for balancing speed and accuracy.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Methods: We performed a systematic literature search to identify all published cases. 211 patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% of them already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable, with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview of all published cases with HMGCLD, including a list of all known HMGCL mutations.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2019)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. 244 patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first two years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% of patients (157 of 204) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome compared with many other organic acidurias.
This work introduces a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows (with or without discontinuities). It makes use of a cell-wise representation of the simulation domain and utilizes interpolation polynomials up to fourth order to conduct the streaming step. The SLLBM solver allows for an independent time step size due to the absence of a time integrator and for the use of unusual velocity sets, like a D2Q25, which is constructed by the roots of the fifth-order Hermite polynomial. The properties of the proposed model are shown in diverse example simulations of a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
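The interpolation-based streaming step can be illustrated on a 1-D periodic advection analogue (not the authors' D2Q25 solver): each grid value is fetched from its departure point by Lagrange interpolation, so the time step need not align with the grid. Stencil placement and the default cubic order below are illustrative choices.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def semi_lagrangian_stream(f, dx, c, dt, order=3):
    """One semi-Lagrangian streaming step on a periodic 1-D grid.

    Each node takes its new value from the departure point x - c*dt via
    polynomial interpolation, so c*dt need not be a multiple of dx --
    the analogue of decoupling the LBM time step from the lattice.
    """
    n = len(f)
    shift = c * dt / dx                  # departure offset in grid units
    out = []
    for i in range(n):
        x = i - shift
        base = int(x // 1) - order // 2  # start of the interpolation stencil
        xs = [base + k for k in range(order + 1)]
        ys = [f[(base + k) % n] for k in range(order + 1)]
        out.append(lagrange_interp(xs, ys, x))
    return out
```

When the departure points happen to fall exactly on grid nodes, the interpolation reproduces the classic exact streaming of a standard lattice Boltzmann step.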
During the dawn of chemistry, when the temperature of the young Universe had fallen below ∼4000 K, the ions of the light elements produced in Big Bang nucleosynthesis recombined in reverse order of their ionization potential. With their higher ionization potentials, He++ (54.5 eV) and He+ (24.6 eV) combined first with free electrons to form the first neutral atom, prior to the recombination of hydrogen (13.6 eV). At that time, in this metal-free and low-density environment, neutral helium atoms formed the Universe's first molecular bond in the helium hydride ion HeH+, by radiative association with protons (He + H+ → HeH+ + hν). As recombination progressed, the destruction of HeH+ (HeH+ + H → He + H2+) created a first path to the formation of molecular hydrogen, marking the beginning of the Molecular Age. Despite its unquestioned importance for the evolution of the early Universe, the HeH+ molecule has so far escaped unequivocal detection in interstellar space. In the laboratory, the ion was discovered as long ago as 1925, but only in the late seventies was the possibility discussed that HeH+ might exist in local astrophysical plasmas. In particular, the conditions in planetary nebulae were shown to be suitable for the production of potentially detectable HeH+ column densities: the hard radiation field from the central hot white dwarf creates overlapping Strömgren spheres, where HeH+ is predicted to form, primarily by radiative association of He+ and H. With the GREAT spectrometer onboard SOFIA, the HeH+ rotational ground-state transition at λ149.1 μm is now accessible. We report here its detection towards the planetary nebula NGC 7027.
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, high frequencies of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of continuous experience of positive events. Our study adds a temporal component and informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
When competing for qualified employees, start-ups face strong competition from established companies and corporations. The demand for skilled professionals (such as software developers) is greater than ever [1]. How do start-ups present themselves as employers in order to attract staff? This question was pursued in the study "Start-ups als Arbeitgeber" (start-ups as employers).
The images from Silicon Valley are familiar: the open-plan office with seating corners to retreat to; swings, foosball tables, and video games for relaxing during work breaks; food and drink available everywhere, and free of charge, of course. Many people have these notions in mind. Are these images reflected in how German start-ups present themselves as employers?
The study presented here does not claim to deliver generally valid results; rather, it is exploratory in design and is intended to encourage further engagement with this research field in both academia and practice.
In the literature on occupational stress and recovery from work, several facets of thinking about work in off-job time have been conceptualized. However, research on the focal concepts is currently rather disintegrated. In this study we take a closer look at the five most established concepts, namely (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence (1) that the five facets of work-related rumination are highly related, yet empirically distinct, (2) that each facet contributes uniquely to explaining variance in certain aspects of employee well-being, and (3) that they are distinct from related concepts, although there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study helps to clarify the structure of work-related rumination and extends the nomological network around different types of thinking about work in off-job time and employee well-being.
Reinforcement learning (RL) algorithms should learn as much as possible about the environment, but not the properties of the physics engines that generate the environment. Multiple algorithms can solve the task in a physics-engine-based environment, but no work has so far examined whether RL algorithms can generalize across physics engines. In this work, we compare the generalization performance of various deep reinforcement learning algorithms on a variety of control tasks. Our results show that MuJoCo is the best engine from which to transfer the learning to other engines. On the other hand, none of the algorithms generalize when trained on PyBullet. We also found that several algorithms show promising generalizability if the effect of random seeds on their performance can be minimized.
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics exhibit unexpected behavior on this task, because multiple classes can be considered correct, which motivates future work. We believe our work will motivate other researchers to move from classical to Bayesian Neural Networks.
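The MC-Dropout approximation mentioned above can be sketched in a few lines: dropout is kept active at inference time, the prediction is averaged over several stochastic forward passes, and the spread across passes serves as an uncertainty estimate. The toy weights and layer sizes below are illustrative placeholders, not the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "network": fixed random weights stand in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))  # 3 emotion classes (illustrative)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active at inference time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; the per-class spread estimates uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=8)
mean_probs, std_probs = mc_dropout_predict(x)
```

High `std_probs` values flag inputs where the stochastic passes disagree, which is exactly the ambiguity the abstract argues accuracy alone cannot capture.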
In this paper we introduce the Perception for Autonomous Systems (PAZ) software library. PAZ is a hierarchical perception library that allows users to manipulate multiple levels of abstraction in accordance with their requirements or skill level. More specifically, PAZ is divided into three hierarchical levels, which we refer to as pipelines, processors, and backends. These abstractions allow users to compose functions in a hierarchical modular scheme that can be applied for preprocessing, data augmentation, prediction, and postprocessing of inputs and outputs of machine learning (ML) models. PAZ uses these abstractions to build reusable training and prediction pipelines for multiple robot perception tasks such as 2D keypoint estimation, 2D object detection, 3D keypoint discovery, 6D pose estimation, emotion classification, face recognition, instance segmentation, and attention mechanisms.
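The pipeline/processor idea of composing functions across abstraction levels can be illustrated with a minimal sketch; the class and function names below are invented for illustration and are not the actual PAZ API.

```python
# Minimal sketch of hierarchical function composition in the spirit of
# pipelines/processors/backends; names here are illustrative, not PAZ's.

class SequentialProcessor:
    """Chains processors: the output of one becomes the input of the next."""
    def __init__(self, processors):
        self.processors = processors

    def __call__(self, x):
        for process in self.processors:
            x = process(x)
        return x

# Backend-level functions (the lowest abstraction level).
def resize(image):
    return [row[:2] for row in image[:2]]  # toy 2x2 crop standing in for resize

def normalize(image):
    return [[v / 255.0 for v in row] for row in image]

def predict(image):
    # Stand-in for an ML model: mean pixel intensity as a "score".
    values = [v for row in image for v in row]
    return sum(values) / len(values)

# A pipeline composed from processors (the highest abstraction level).
pipeline = SequentialProcessor([resize, normalize, predict])
score = pipeline([[255, 0, 128], [64, 32, 16], [8, 4, 2]])
```

The same composition scheme covers preprocessing, augmentation, prediction, and postprocessing: each stage is just another callable in the chain.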
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied owing to its low cost, good volumetric hydrogen density, and easy tailoring of hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale applications of solid-state hydrogen storage. On the other hand, activation and kinetic issues should be amended, and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by the variation of Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by the substitution of Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2- or β-Ti-type and Ti4Fe2O-type phases at the Ti-poor and Ti-rich sides of the TiFe phase domain, respectively. Thermodynamic data agree with the available literature but offer here a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities, as long as a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter, possibly by substituting for Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of hydrogen storage properties on composition is discussed, together with some observed correlations.
Deep learning models are extensively used in various safety-critical applications. Hence these models, along with being accurate, need to be highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout, and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in an ensemble or the number of forward passes, and the drop probability values, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, by a margin of 2.4% in terms of mIoU and 1.3% in terms of accuracy, while providing reliable uncertainty for decision making.
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are and whether they can be consolidated into a model or framework that can align previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined but is refined enough to do justice to the complexities of human behaviour. This also raises some issues of how subjective well-being is and should be measured. Further avenues of research and ways to continue building this model and framework are proposed.
Graph drawing with spring embedders employs a V x V computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices in a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
Long-term variability of solar irradiance and its implications for photovoltaic power in West Africa
(2020)
This paper addresses long-term changes in solar irradiance for West Africa (3° N to 20° N and 20° W to 16° E) and its implications for photovoltaic power systems. Here we use satellite irradiance (Surface Solar Radiation Data Set-Heliosat, Edition 2.1, SARAH-2.1) to derive photovoltaic yields. Based on 35 years of data (1983–2017) the temporal and regional variability as well as long-term trends of global and direct horizontal irradiance are analyzed. Furthermore, at four locations a detailed time series analysis is undertaken. The dry and the wet season are considered separately.
Object detectors have improved considerably in recent years by using advanced CNN architectures. However, many detector hyper-parameters are generally tuned manually, or are used with the values set by the detector authors. Automatic hyper-parameter optimization has not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose the use of black-box optimization methods (Bayesian Optimization, SMAC, and CMA-ES) to tune the prior/default box scales in Faster R-CNN and SSD. We show that by tuning the input image size and prior box anchor scale, mAP increases by 2% on PASCAL VOC 2007 with Faster R-CNN, and by 3% with SSD. On the COCO dataset with SSD, there are mAP improvements for medium and large objects, but mAP decreases by 1% for small objects. We also perform a regression analysis to find the significant hyper-parameters to tune.
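The black-box setting above, where the objective (detector mAP) is only observable by evaluation, can be sketched with the simplest such optimizer, random search, against a stand-in objective. The toy surrogate below is purely illustrative; it is not a real detector's mAP surface, and the scale range is an assumed example.

```python
import random

random.seed(0)

def toy_map(anchor_scale):
    """Stand-in objective: a smooth surrogate for detector mAP with a
    peak at scale 0.3 (illustrative only, not a real detector)."""
    return 0.75 - (anchor_scale - 0.3) ** 2

def random_search(objective, low, high, budget=50):
    """Simplest black-box optimizer: sample, evaluate, keep the best.
    BO, SMAC, and CMA-ES replace the uniform sampling with smarter proposals."""
    best_x, best_y = None, float("-inf")
    for _ in range(budget):
        x = random.uniform(low, high)
        y = objective(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_scale, best_map = random_search(toy_map, 0.05, 1.0)
```

In the actual setting each `objective` call would be a full train-and-evaluate run, which is why sample-efficient optimizers such as Bayesian Optimization are preferred over plain random search.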
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
Turbulent compressible flows are traditionally simulated using explicit Eulerian time integration applied to the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which elegantly circumvents this restriction. Previous lattice Boltzmann methods for compressible flows were mostly restricted to two dimensions due to the enormous number of discrete velocities needed in three dimensions. In contrast, this Rapid Communication demonstrates how cubature rules enhance the SLLBM to yield a three-dimensional velocity set with only 45 discrete velocities. Based on simulations of a compressible Taylor-Green vortex we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques, even when the time step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers for the first time to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time step sizes is only dictated by physics, while being decoupled from the spatial discretization.
In complex, expensive optimization domains we often narrowly focus on finding high-performing solutions instead of expanding our understanding of the domain itself. But what if we could instead quickly understand the complex behaviors that can emerge in such domains? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm that enables the discovery of a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve the feature models. Our method can reduce the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering a diversity of behaviors helps engineers to better understand expensive domains and their solutions.
The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing between exploiting the DDE to generalize the knowledge contained in the current archive of elites and exploring new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems, and provides a low-dimensional representation which can then be reused. We evaluate the DDE approach by evolving solutions for inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
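The underlying MAP-Elites scheme, which keeps only the best solution found so far in each phenotypic niche, can be sketched as follows. The fitness function, feature descriptor, and archive size below are toy choices for illustration, not the paper's DDE setup.

```python
import random

random.seed(1)

# Minimal MAP-Elites sketch: the archive maps each phenotypic niche to the
# highest-fitness solution ("elite") seen in that niche so far.

def fitness(x):
    return -sum(v * v for v in x)   # maximize => push x toward the origin

def feature(x):
    return x[0]                      # toy 1-D phenotypic descriptor

def niche(feat, n_bins=10, lo=-1.0, hi=1.0):
    b = int((feat - lo) / (hi - lo) * n_bins)
    return min(max(b, 0), n_bins - 1)

archive = {}                         # niche index -> (fitness, solution)
for _ in range(2000):
    if archive and random.random() < 0.5:
        # Variation: mutate a randomly chosen elite from the archive.
        _, parent = random.choice(list(archive.values()))
        x = [v + random.gauss(0, 0.1) for v in parent]
    else:
        # Exploration: sample a fresh random solution.
        x = [random.uniform(-1, 1) for _ in range(3)]
    key = niche(feature(x))
    f = fitness(x)
    if key not in archive or f > archive[key][0]:
        archive[key] = (f, x)        # replace the elite only if improved
```

The DDE variant described above would additionally train a VAE on the archived elites and draw some offspring from the learned low-dimensional encoding instead of mutating in the raw genetic space.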
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which seeks diversity in objective space; multimodal optimization, which tries to spread solutions out in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility in analyzing diversity, and a simple domain in which to compare the paradigms and gain insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
Grasp verification is advantageous for autonomous manipulation robots, as it provides the feedback required by higher-level planning components about successful task completion. However, a major obstacle in grasp verification is sensor selection. In this paper, we propose a vision-based grasp verification system using machine vision cameras, with the verification problem formulated as an image classification task. Machine vision cameras consist of a camera and a processing unit capable of on-board deep learning inference. Inference on this low-power hardware is done near the data source, reducing the robot's dependence on a centralized server, which leads to reduced latency and improved reliability. Machine vision cameras provide deep learning inference capabilities using different neural accelerators. However, it is not clear from the documentation of these cameras what effect these neural accelerators have on performance metrics such as latency and throughput. To systematically benchmark these machine vision cameras, we propose a parameterized model generator that generates end-to-end models of Convolutional Neural Networks (CNNs). Using these generated models, we benchmark the latency and throughput of two machine vision cameras, the JeVois A33 and the Sipeed Maix Bit. Our experiments demonstrate that the selected machine vision cameras and deep learning models can robustly verify grasps with 97% per-frame accuracy.
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading
(2020)
Automatic Short Answer Grading (ASAG) is the process of grading student answers by computational approaches, given a question and the desired answer. Previous works implemented methods of concept mapping and facet mapping, and some used conventional word embeddings to extract semantic features; they extracted multiple features manually to train on the corresponding datasets. We use pretrained embeddings of the transfer learning models ELMo, BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a single feature, cosine similarity, extracted from the embeddings of these models. We compare the RMSE scores and correlation measurements of the four models with previous works on the Mohler dataset. Our work demonstrates that ELMo outperformed the other three models. We also briefly describe the four transfer learning models and conclude with the possible causes of the poor results of transfer learning models.
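The single cosine-similarity feature can be sketched directly. The embedding vectors below are illustrative stand-ins for ELMo/BERT/GPT sentence embeddings (real embeddings have hundreds of dimensions), and the mapping onto a 0-5 grade is a hypothetical choice, not the paper's trained regressor.

```python
import math

def cosine_similarity(u, v):
    """The single grading feature: the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-D sentence embeddings standing in for transfer-learning model outputs
# (illustrative values, not real ELMo/BERT embeddings).
reference_answer = [0.2, 0.8, 0.1, 0.5]
student_answer = [0.25, 0.7, 0.05, 0.55]

similarity = cosine_similarity(reference_answer, student_answer)
grade = round(similarity * 5, 2)  # hypothetical mapping onto a 0-5 scale
```

In the actual setup, the similarity score is the input feature of a model trained against human grades rather than being scaled directly.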
It has been well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, it is not always the case that they can generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four different domain adaptation techniques for image classification tasks: DeepCORAL, DeepDomainConfusion, CDAN, and CDAN+E. These techniques are unsupervised, given that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. A link to the github repository of this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
Ice accumulation on the blades of wind turbines can cause them to exhibit anomalous rotations or no rotation at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection of multivariate time series. Our approach focuses on two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We have evaluated our approach on a custom wind turbine time series dataset; for the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multiple-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such a problem. The code to reproduce this work can be found here: https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
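The second stage, unsupervised clustering of the learned latent codes, can be sketched with a minimal 2-means. The 2-D latent vectors below are synthetic stand-ins for VRAE encodings, not the paper's data, and the cluster count of two matches only the two-class setting.

```python
import random

random.seed(0)

# Synthetic 2-D "latent codes" standing in for VRAE encodings: one cluster
# for normal operation, one for iced blades (illustrative data only).
normal = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)]
iced = [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(50)]
latents = normal + iced

def kmeans2(points, iters=20):
    """Minimal 2-means in the latent space (unsupervised labeling)."""
    centers = [points[0], points[-1]]  # crude init: one point from each end
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)  # assign to nearest center
        centers = [
            tuple(sum(coord) / len(g) for coord in zip(*g)) if g else c
            for g, c in zip(groups, centers)   # recompute centroids
        ]
    return centers, groups

centers, groups = kmeans2(latents)
labels = [len(g) for g in groups]
```

Because the clustering is unsupervised, which cluster corresponds to "iced" would be decided afterwards, e.g. from a handful of labeled reference weeks.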
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and produce pre-trained neural networks trained on this dataset, meant to fill the gap of missing pre-trained models for sonar images. We train Resnet 20, MobileNets, DenseNet121, SqueezeNet, MiniXception, and an Autoencoder, over several input image sizes from 32 × 32 to 96 × 96, on the Marine Debris Turntable dataset. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and another dataset captured using a Gemini 720i sonar. Our results show that in both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset validates that the features transfer to other kinds of sonar sensors. We expect that the community will benefit from the public release of our pre-trained models and the turntable dataset.
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.