Refine
Departments, institutes and facilities
- Fachbereich Informatik (50)
- Fachbereich Ingenieurwissenschaften und Kommunikation (20)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (16)
- Fachbereich Angewandte Naturwissenschaften (9)
- Fachbereich Wirtschaftswissenschaften (6)
- Institut für Cyber Security & Privacy (ICSP) (4)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (2)
- Fachbereich Sozialpolitik und Soziale Sicherung (1)
- Institut für funktionale Gen-Analytik (IFGA) (1)
- Institute of Visual Computing (IVC) (1)
Document Type
- Preprint (88)
Language
- English (88)
Keywords
- Evolutionary Computation (2)
- FOS: Computer and information sciences (2)
- burnout (2)
- inborn error of metabolism (2)
- ketone body (2)
- lignin (2)
- metabolic acidosis (2)
- metabolic decompensation (2)
- organic aciduria (2)
- psychological detachment (2)
- unsupervised learning (2)
- work engagement (2)
- ACAT1 (1)
- AML (1)
- ATR-FTIR (1)
- Air Pollution Monitoring (1)
- Artificial Intelligence (cs.AI) (1)
- Authentication features (1)
- Autoencoder (1)
- Automatic Differentiation (1)
- Automatic Short Answer Grading (1)
- BERT (1)
- Ball Tracking (1)
- Battery Packs (1)
- Battery degradation (1)
- Bayesian Deep Learning (1)
- Bayesian optimization (1)
- Big Data Analysis (1)
- Bioinformatics (1)
- Black-Box Optimization (1)
- COVID-19 (1)
- Calendar ageing (1)
- Capacity fade (1)
- Cell-to-cell variations (1)
- Compositional Pattern Producing Networks (1)
- Computational Fluid Dynamics (1)
- Computer Science - Computer Vision and Pattern Recognition (1)
- Computer Science - Learning (1)
- Cutting sticks problem (1)
- Cyclic Ageing (1)
- Cyclic testing (1)
- Deep Learning (1)
- Dimensionality reduction (1)
- Drosophila (1)
- Drug (1)
- ELMo (1)
- Electric Vehicles (1)
- Facial Emotion Recognition (1)
- Feature Model (1)
- Filtering (1)
- Folin-Ciocalteu assay (1)
- GPT (1)
- GPT-2 (1)
- Gender-based violence (1)
- HEB mixer (1)
- HMGCL (1)
- HSP90 (1)
- Heat Shock Protein (1)
- Hydrogen storage (1)
- Hyper-parameter Tuning (1)
- IoT (1)
- Java grid engine (1)
- Ketogenesis (1)
- Ketolysis (1)
- Knowledge Graphs (1)
- LOTUS Sensor Node (1)
- Lattice Boltzmann Method (1)
- Lattice Boltzmann Method Code (1)
- Lennard-Jones parameters (1)
- Leukemia (1)
- Level-of-Detail (1)
- Lithium-ion (1)
- LoRa (1)
- LoRaWAN (1)
- Low-Power Wide Area Network (LP-WAN) (1)
- MESD (1)
- Machine Learning (1)
- Machine Learning (cs.LG) (1)
- Machine learning (1)
- Measurement (1)
- Mebendazole (1)
- Metal hydride (1)
- Molecular dynamics (1)
- Multi-Solution Optimization (1)
- Multidimensional Z-transforms (1)
- Natural Language Processing (1)
- Navigation (1)
- Neural networks (1)
- Neural representations (1)
- Nonlinear sampled-data system (1)
- OH-number (1)
- Optical Flow (1)
- Path Loss (1)
- Pulse-width modulation (1)
- Pytorch (1)
- Quality Diversity (1)
- Quality diversity (1)
- Quantum mechanics (1)
- Random distribution (1)
- Range variability (1)
- Real-Time Image Processing (1)
- Rendering (1)
- Risk factors (1)
- Risk-based Authentication (RBA) (1)
- Robot Perception (1)
- Robotics (1)
- Robotics (cs.RO) (1)
- Rural women (1)
- SEC (1)
- SEMA (1)
- SIS mixer (1)
- Scale Tuning (1)
- Set partition problem (1)
- Sexual violence (1)
- Side Channel Analysis (1)
- Simulation (1)
- Spherical Treadmill (1)
- Standard deviation (1)
- Tautomers (1)
- TinyECC 2.0 (1)
- Transfer Learning (1)
- Transformers (1)
- UV-VIS (1)
- Uganda (1)
- Uncertainty Quantification (1)
- Urban (1)
- Usable Security (1)
- Virtual Reality (1)
- Volterra-Wiener series (1)
- Wireless Sensor Network (1)
- XRD (1)
- actinometry (1)
- active packaging (1)
- adhesion (1)
- affective events (1)
- affective rumination (1)
- airborne astronomy (1)
- anomaly detection (1)
- antioxidant activity (1)
- basic human needs, evolution of behavior (1)
- beta-ketothiolase (1)
- bio-based polymers (1)
- bioeconomy (1)
- biomass (1)
- biomaterial (1)
- bone regeneration (1)
- caching (1)
- cardiac magnetic resonance (1)
- cellular automata (1)
- computer vision (1)
- convolutional neural networks (1)
- dc electric drive (1)
- design-of-experiments (1)
- designing air flow (1)
- distributed services (1)
- diversity (1)
- domain adaptation (1)
- drug release (1)
- elite athletes (1)
- employee well-being (1)
- essential oil (1)
- experience sampling (1)
- far-infrared astronomy (1)
- feature discovery (1)
- food waste (1)
- force field (1)
- force-retraction displacement-curve (1)
- genetic neutrality (1)
- global illumination (1)
- growth curve modeling (1)
- heterodyne spectroscopy (1)
- human behavior (1)
- hydrogel (1)
- hyperammonemia (1)
- hypoglycemia (1)
- irritation (1)
- isoleucine (1)
- job demands-resources model (1)
- kraft lignin (1)
- leucine (1)
- lignocellulose feedstock (1)
- local optimization (1)
- low-cost air sensor (1)
- mental health (1)
- multi-objective optimization (1)
- multimodal optimization (1)
- multiscale parameterization (1)
- multivariate data processing (1)
- natural additives (1)
- negative work reflection (1)
- non-linear projection (1)
- object detection (1)
- objective function (1)
- organosolv (1)
- osteogenesis (1)
- overcommitment (1)
- parameter optimisation (1)
- parametric (1)
- path tracing (1)
- permeability (1)
- perseverative cognition (1)
- phenotypic diversity (1)
- phenotypic feature (1)
- phenotypic niching (1)
- photocatalysis (1)
- photolysis (1)
- plant extracts (1)
- positive work reflection (1)
- pressure sensitive adhesive (1)
- problem-solving pondering (1)
- psychological needs (1)
- rds encoding (1)
- real-time (1)
- receivers (1)
- representation (1)
- rumination (1)
- satisfaction with life (1)
- scaffolds (1)
- sensitization-satiation effects (1)
- shelf life (1)
- stem cells (1)
- subjective well-being (1)
- submillimeter-wave technology (1)
- superconducting devices (1)
- surrogate assisted phenotypic niching (1)
- surrogate models (1)
- sustainable packaging (1)
- tack (1)
- thriving (1)
- tissue engineering (1)
- total phenol content (1)
- traffic surveillance (1)
- transdermal therapeutic systems (1)
- transfer learning (1)
- unsupervised clustering (1)
- variational recurrent autoencoder (1)
- vitality (1)
- weighting factors (1)
- wind nuisance threshold (1)
- wind turbines time series (1)
- work reflection (1)
- work-related rumination (1)
- workflow automation (1)
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2019)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. 244 patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first two years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% of patients (157 of 204) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases of MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome when compared with many other organic acidurias.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Methods: We performed a systematic literature search to identify all published cases. 211 patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% of them already in the neonatal period. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable, with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview of all published cases of HMGCLD, including a list of all known HMGCL mutations.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
Safety-critical applications like autonomous driving use Deep Neural Networks (DNNs) for object detection and segmentation. These DNNs fail to predict reliably when they observe an Out-of-Distribution (OOD) input, which can lead to catastrophic consequences. Existing OOD detection methods have been studied extensively for image inputs but have not been explored much for LiDAR inputs. In this study, we therefore propose two datasets for benchmarking OOD detection in 3D semantic segmentation. We use Maximum Softmax Probability and Entropy scores generated by Deep Ensembles and Flipout versions of RandLA-Net as OOD scores. We observe that Deep Ensembles outperform the Flipout model in OOD detection, with greater AUROC scores for both datasets.
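The two OOD scores named above can be illustrated with a minimal sketch that averages ensemble softmax outputs; the array shapes and toy data are assumptions for illustration, not the authors' code.

```python
import numpy as np

def ood_scores(member_probs):
    """Compute per-point OOD scores from per-member softmax outputs.

    member_probs: shape (n_members, n_points, n_classes), each row a softmax
    distribution over semantic classes. Low Maximum Softmax Probability or
    high predictive entropy suggests an out-of-distribution input.
    """
    mean_probs = member_probs.mean(axis=0)                              # ensemble average
    msp = mean_probs.max(axis=-1)                                       # Maximum Softmax Probability
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)   # predictive entropy
    return msp, entropy

# Toy example: a 3-member ensemble, 5 points, 4 classes
probs = np.random.dirichlet(np.ones(4), size=(3, 5))
msp, ent = ood_scores(probs)
```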
The development of robot control programs is a complex task. Many robots are different in their electrical and mechanical structure which is also reflected in the software. Specific robot software environments support the program development, but are mainly text-based and usually applied by experts in the field with profound knowledge of the target robot. This paper presents a graphical programming environment which aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The developed environment allows to schedule independent robot actions on parallel execution lines and provides mechanism to avoid side-effects of parallel actions. The developed environment is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by the application of the sequencer to a simulated service robot and a robot for educational purpose.
Graph drawing with spring embedders employs a V x V computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices in a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
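For intuition, the radius-limited repulsion step described above can be sketched on the CPU with a k-d tree standing in for the hardware tree traversal; the cutoff radius, force law, and toy layout are illustrative assumptions, not the paper's RT-core implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def repulsive_forces(pos, radius=0.2, strength=0.01):
    """Accumulate pairwise repulsion only from vertices within `radius`,
    mirroring the limited influence radius that makes a search structure pay off."""
    tree = cKDTree(pos)
    forces = np.zeros_like(pos)
    for i, neighbours in enumerate(tree.query_ball_point(pos, r=radius)):
        for j in neighbours:
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            forces[i] += strength * d / (dist * dist)   # inverse-distance repulsion
    return forces

layout = np.random.rand(100, 2)          # random initial 2D layout
layout += repulsive_forces(layout)       # one force-directed update step
```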
Loading of shipping containers for dairy products often includes a press-fit task, which involves manually stacking milk cartons in a container without using pallets or packaging. Automating this task with a mobile manipulator can reduce worker strain, and also enhance the efficiency and safety of the container loading process. This paper proposes an approach called Adaptive Compliant Control with Integrated Failure Recovery (ACCIFR), which enables a mobile manipulator to reliably perform the press-fit task. We base the approach on a demonstration-learning-based compliant control framework and integrate a monitoring and failure recovery mechanism into it for successful task execution. Concretely, we monitor the execution through distance and force feedback, detect collisions while the robot is performing the press-fit task, and use wrench measurements to classify the direction of collision; this information informs the subsequent recovery process. We evaluate the method on a miniature container setup, considering variations in the (i) starting position of the end effector, (ii) goal configuration, and (iii) object grasping position. The results demonstrate that the proposed approach outperforms the baseline demonstration-based learning framework regarding adaptability to environmental variations and the ability to recover from collision failures, making it a promising solution for practical press-fit applications.
The clear-sky radiative effect of aerosol-radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. We consider measurements of the shortwave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are MMAC, MRMv6.1, METSTAT, ESRA, Heliosat-1, CEM and the simplified Solis model. The definitions of the aerosol and atmospheric characteristics of the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the CAMS reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03, or about 30%, by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated at −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the radiative effect of aerosol-radiation interactions (REari) at the surface for the 25 DWD stations is −12.8 ± 5 W m−2 averaged over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterisation, the annual cycle of the aerosol radiative effect cannot be reproduced. Out of this set of clear-sky models, the highest level of agreement is shown by the ESRA and MRMv6.1 models.
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which searches for diversity in objective space; multimodal optimization, which tries to spread the solutions out in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility in the analysis of diversity, along with a simple domain to compare the paradigms and provide insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
Ice accumulation on the blades of wind turbines can cause them to describe anomalous rotations or no rotations at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection of multivariate time series. Our approach focuses on two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We have evaluated our approach on a custom wind turbine time series dataset; for the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multiple-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such problems. The code to reproduce this work can be found at https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
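The second stage described in the abstract (unsupervised clustering of the learned representations) reduces to a few lines; the random latent vectors below are placeholders for the VRAE encodings, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for VRAE encodings: one latent vector per wind-turbine time-series window
latents = np.random.randn(500, 16)

# Two-class problem: one "normal" and one "abnormal" (iced) cluster
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(latents)
labels = kmeans.labels_   # cluster indices, later mapped to normal/iced using a few labelled windows
```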
The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing between exploiting the DDE to generalize the knowledge contained in the current archive of elites and exploring new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems, and provides a low-dimensional representation which can then be re-used. We evaluate the DDE approach by evolving solutions for inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach to grasp vertical objects, placement of objects by considering the empty space on a workstation, and the process of porting our code to ROS2.
Object detectors have improved considerably in recent years by using advanced CNN architectures. However, many detector hyper-parameters are generally tuned manually, or they are used with the values set by the detector authors. Automatic hyper-parameter optimization has not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose the use of black-box optimization methods to tune the prior/default box scales in Faster R-CNN and SSD, using Bayesian Optimization, SMAC, and CMA-ES. We show that by tuning the input image size and prior box anchor scale, mAP increases by 2% for Faster R-CNN on PASCAL VOC 2007, and by 3% with SSD. On the COCO dataset with SSD there are mAP improvements for medium and large objects, but mAP decreases by 1% for small objects. We also perform a regression analysis to find the significant hyper-parameters to tune.
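To make the tuning loop concrete, here is a hedged sketch using scikit-optimize's Gaussian-process optimizer over two of the hyper-parameters mentioned above; the objective function is a stand-in for an actual detector training and mAP evaluation, and the value ranges are assumptions.

```python
from skopt import gp_minimize

def neg_map(params):
    """Stand-in objective: train/evaluate a detector with the given input image
    size and prior-box anchor scale and return the negative mAP (gp_minimize
    minimizes). Replaced here by a smooth synthetic surrogate."""
    image_size, anchor_scale = params
    return -(0.70 - 1e-7 * (image_size - 600) ** 2 - (anchor_scale - 0.10) ** 2)

result = gp_minimize(
    neg_map,
    dimensions=[(300, 1000), (0.05, 0.30)],  # image size (px), prior-box base scale
    n_calls=25,                              # number of detector trainings we can afford
    random_state=0,
)
print("best (image size, anchor scale):", result.x, "best mAP:", -result.fun)
```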
Reinforcement learning (RL) algorithms should learn as much as possible about the environment but not the properties of the physics engines that generate the environment. There are multiple algorithms that solve tasks in physics-engine-based environments, but no work has been done so far to understand whether RL algorithms can generalize across physics engines. In this work, we compare the generalization performance of various deep reinforcement learning algorithms on a variety of control tasks. Our results show that MuJoCo is the best engine for transferring learning to other engines. On the other hand, none of the algorithms generalize when trained on PyBullet. We also found that various algorithms generalize promisingly if the effect of random seeds on their performance can be minimized.
The promotion of sustainable packaging is part of the European Green Deal and plays a key role in the EU’s social and political strategy. One option is the use of renewable resources and biomass waste as raw materials for polymer production. Lignocellulose biomass from annual and perennial industrial crops and agricultural residues is a major source of polysaccharides, proteins, and lignin, and can also be used to obtain plant-based extracts and essential oils. Therefore, these biomasses are considered potential substitutes for fossil-based resources. Here, the status quo of bio-based polymers is discussed and evaluated in terms of properties related to packaging applications, such as gas and water vapor permeability as well as mechanical properties. So far, their practical use is still restricted due to lower performance in fundamental packaging functions that directly influence food quality and safety, the length of shelf life and thus the amount of food waste. Besides bio-based polymers, this review focuses on plant extracts as active packaging agents. Incorporating extracts of herbs, flowers, trees, and their fruits is essential to achieve the desired material properties that are capable of prolonging food shelf life. Finally, the adoption potential of packaging based on polymers from renewable resources is discussed from a bioeconomy perspective.
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model-compression by truncating the parameter sums.
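A small numpy sketch of the re-parametrization idea for a 2D weight (a rank-R sum of outer vector products with per-mode scales and a global scale); applying this to 4D convolution kernels and training the factors, as the abstract describes, is not reproduced here, and all sizes are illustrative.

```python
import numpy as np

rank, m, n = 4, 8, 8
u = np.random.randn(rank, m)          # mode-1 factor vectors
v = np.random.randn(rank, n)          # mode-2 factor vectors
s_u = np.ones(rank)                   # per-mode scale weights (optimized separately in training)
s_v = np.ones(rank)
g = 1.0                               # global scaling parameter, as in weight normalization

# Canonical form: the weight matrix as a scaled sum of outer vector products
W = g * sum(s_u[r] * s_v[r] * np.outer(u[r], v[r]) for r in range(rank))

# Compression after training: truncate the parameter sum to the first few terms
keep = 2
W_compressed = g * sum(s_u[r] * s_v[r] * np.outer(u[r], v[r]) for r in range(keep))
```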
Humans exhibit flexible and robust behavior in achieving their goals. We make suitable substitutions for objects, actions, or tools to get the job done. When opportunities that would allow us to reach our goals with less effort arise, we often take advantage of them. Robots are not nearly as robust in handling such situations. Enabling a domestic service robot to find ways to get a job done by making substitutions is the goal of our work. In this paper, we highlight the challenges faced in our approach to combine Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. We present open questions in modeling the necessary knowledge, creating planning problems, and enabling the system to handle cases where plan generation fails due to missing/unavailable objects.
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading
(2020)
Automatic Short Answer Grading (ASAG) is the process of grading student answers using computational approaches, given a question and the desired answer. Previous works implemented methods based on concept mapping and facet mapping, and some used conventional word embeddings to extract semantic features. They extracted multiple features manually to train on the corresponding datasets. We use pretrained embeddings of the transfer learning models ELMo, BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a single feature, cosine similarity, extracted from the embeddings of these models. We compare the RMSE scores and correlation measurements of the four models with previous works on the Mohler dataset. Our work demonstrates that ELMo outperformed the other three models. We also briefly describe the four transfer learning models and conclude with the possible causes of the poor results of transfer learning models.
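The single grading feature described above boils down to a cosine similarity between two sentence embeddings; the random vectors below are placeholders for ELMo/BERT/GPT/GPT-2 outputs, and the dimensionality is an assumption.

```python
import numpy as np

def cosine_similarity(a, b):
    """Single grading feature: cosine similarity between two sentence embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholders for pretrained embeddings of the desired answer and a student answer
reference_emb = np.random.randn(768)
student_emb = np.random.randn(768)

feature = cosine_similarity(reference_emb, student_emb)
# `feature` would then be regressed against human-assigned grades on the Mohler dataset
```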
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding the surroundings of the vehicle is a basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real time. Generally, the perception system involves various subsystems such as localization, obstacle (static and dynamic) detection and avoidance, mapping systems, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, LiDARs, and others. These systems are equipped with deep learning techniques that transform the huge amount of data from the sensors into semantic information on which the object detection and localization tasks are performed. For numerous driving tasks, the location and depth information of a particular object is necessary to provide accurate results. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, we conduct a comparative study of the effect of using LiDAR data for object detection frameworks and of the performance improvement achieved with sensor fusion techniques, discuss various state-of-the-art methods for both cases, perform experimental analysis, and provide future research directions.
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which have not been produced by the Gauss product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma = {0.5; 1.0; 1.5; 2.0}, on-lattice simulations with velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
The Covid-19 pandemic has challenged educators across the world to move their teaching and mentoring from in-person to remote. During non-pandemic semesters at their institutes (e.g. universities), educators can directly provide students with the software environment needed to support their learning - either in specialized computer laboratories (e.g. computational chemistry labs) or shared computer spaces. These labs are often supported by staff who maintain the operating systems (OS) and software. But how does one provide a specialized software environment for remote teaching? One solution is to provide students with a customized operating system (e.g., Linux) that includes open-source software for supporting the teaching goals. However, such a solution should not require students to install the OS alongside their existing one (i.e. dual/multi-booting) or be used as a complete replacement. Such approaches are risky because of a) the students' possible lack of software expertise, b) the possible disruption of an existing software workflow that is needed in other classes or by other family members, and c) the importance of maintaining a working computer when isolated (e.g. under societal restrictions). To illustrate possible solutions, we discuss our approach, which used a customized Linux OS and a Docker container in a course that teaches computational chemistry and Python3.
Traffic sign recognition is an important component of many advanced driving assistance systems, and it is required for full autonomous driving. Computational performance is usually the bottleneck in using large-scale neural networks for this purpose. SqueezeNet is a good candidate for efficient image classification of traffic signs, but in our experiments it does not reach high accuracy, and we believe this is due to lack of data, requiring data augmentation. Generative adversarial networks can learn the high-dimensional distribution of empirical data, allowing the generation of new data points. In this paper we apply the pix2pix GAN architecture to generate new traffic sign images and evaluate the use of these images in data augmentation. We were motivated to use pix2pix to translate symbolic sign images to real ones due to the mode collapse in Conditional GANs. Through our experiments we found that data augmentation using GANs can increase classification accuracy for circular traffic signs from 92.1% to 94.0%, and for triangular traffic signs from 93.8% to 95.3%, producing an overall improvement of 2%. However, some traditional augmentation techniques can outperform GAN data augmentation, for example contrast variation on circular traffic signs (95.5%) and displacement on triangular traffic signs (96.7%). Our negative results show that while GANs can be naively used for data augmentation, they are not always the best choice, depending on the problem and the variability in the data.
Electric vehicles (EVs) are rapidly growing in popularity, but range variability has become an important research area with significant implications for EV performance, usability, and overall market adoption. This study aims to unravel the complexities of range variability by examining the contributing factors and offering innovative strategies to mitigate these differences during pack design. Through a detailed analysis of cell parameter deviation, cell connections, battery configuration, battery pack size, and driving behavior, the research illuminates their impact on extractable energy and driving range. The study employed a comprehensive approach and conducted systematic simulation-based experimentation to identify the optimal battery pack configuration based on maximum extractable energy, minimal variability and maximum range. The results reveal insights into the relationship between discharge rate and battery pack performance, and the impact of cell parameter variations on pack energy output. This research advances the understanding of EV performance optimisation, reduces pack-to-pack variability, and extends battery pack lifespan.
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are and whether they can be consolidated into a model or framework that can align previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined but is refined enough to do justice to the complexities of human behaviour. This also raises some issues of how subjective well-being is and should be measured. Further avenues of research and ways to continue building this model and framework are proposed.
Design and characterization of geopolymer foams reinforced with Miscanthus x giganteus fibers
(2024)
This paper presents the effects of different amounts of fibers and foaming agent, as well as different fiber sizes, on the mechanical and thermal properties of fly-ash-based geopolymer foams reinforced with Miscanthus x giganteus fibers. The mechanical properties of the geopolymer foams were measured through compressive strength, and their thermal properties were characterized by thermal conductivity and X-ray micro-computed tomography. Furthermore, a design-of-experiments (DoE) approach was used to optimize the thermal conductivity and compressive strength of Miscanthus x giganteus reinforced geopolymer foams. In addition, the microstructure was studied using X-ray diffraction (XRD), field emission scanning electron microscopy (SEM) and Fourier-transform infrared spectroscopy (FTIR). Mixtures with a low thermal conductivity of 0.056 W (m K)−1 and a porosity of 79 vol% achieved a compressive strength of only 0.02 MPa. In comparison, mixtures with a thermal conductivity of 0.087 W (m K)−1 and a porosity of 58 vol% achieved a compressive strength of 0.45 MPa.
In complex, expensive optimization domains we often narrowly focus on finding high-performing solutions, instead of expanding our understanding of the domain itself. But what if we could quickly understand the complex behaviors that can emerge in such domains instead? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm which makes it possible to discover a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve the feature models. Our method can reduce the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering a diversity of behaviors helps engineers to better understand expensive domains and their solutions.
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify SmoothGrad with Guided Backpropagation (GBP) as providing the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient for very expensive problems, needing hundreds of thousands of evaluations. Even with the assistance of surrogate models, quality diversity needs hundreds or even thousands of evaluations, which can make its use infeasible. In this study we tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to a higher-dimensional case. For a use case to design buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. For a diverse set of building designs, by sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than when trained on a set of footprints selected with a space-filling algorithm like the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by producing training data with quality diversity instead of using common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allow engineers to sweep the design space, understanding wind nuisance in early design phases.
Deep learning models are extensively used in various safety-critical applications. Hence, these models, along with being accurate, need to be highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images but have been less explored for 3D modalities such as point clouds, often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in an ensemble or the number of forward passes, and the drop probability, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, by a margin of 2.4% in terms of mIOU and 1.3% in terms of accuracy, while providing reliable uncertainty for decision making.
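One of the three methods compared above, MC-Dropout, amounts to keeping dropout active at test time and averaging several stochastic forward passes; the toy PyTorch model below is an assumption used only to show the mechanics, not the DarkNet21Seg network.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a 3D segmentation network such as DarkNet21Seg
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3))

def mc_dropout_predict(model, x, passes=20):
    """Keep dropout stochastic at inference and average the softmax of several
    forward passes; the spread across passes serves as an uncertainty estimate."""
    model.train()  # simple way to keep Dropout layers active (illustrative shortcut)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(4, 16)
mean_probs, std_probs = mc_dropout_predict(model, x)
```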
It is well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, it is not always the case that they can generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four different domain adaptation techniques for image classification tasks: DeepCORAL, DeepDomainConfusion, CDAN and CDAN+E. These techniques are unsupervised given that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. A link to the github repository of this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, high frequencies of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of continuous experience of positive events. Our study adds a temporal component and informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique has the potential to be a powerful tool for design space exploration, but is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination algorithm (SAIL), introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
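For readers unfamiliar with MAP-Elites, the archive-filling loop that SAIL builds on can be sketched in a few lines; the toy objective, the single scalar feature, and the mutation scheme are assumptions, and the surrogate model that SAIL adds is omitted.

```python
import numpy as np

def fitness(x):                         # toy objective to maximise
    return -float(np.sum(x ** 2))

def feature(x):                         # toy phenotypic feature defining the archive bins
    return float(np.clip(x[0], -1.0, 1.0))

n_bins, dim = 20, 5
archive = {}                            # bin index -> (fitness, solution)
rng = np.random.default_rng(0)

for _ in range(5000):
    if archive:                         # mutate a random elite, otherwise sample randomly
        parent = archive[int(rng.choice(list(archive)))][1]
        child = parent + 0.1 * rng.standard_normal(dim)
    else:
        child = rng.uniform(-1.0, 1.0, dim)
    b = int((feature(child) + 1.0) / 2.0 * (n_bins - 1))
    f = fitness(child)
    if b not in archive or f > archive[b][0]:
        archive[b] = (f, child)         # keep only the best solution per feature bin
```

SAIL replaces most of these objective evaluations with surrogate predictions and only runs the expensive evaluation on the most promising candidates, which is how it reduces the evaluation count reported above.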
Force field (FF) based molecular modeling is an often used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of various properties’ errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
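The weighted, projected loss described above can be written down schematically; the two error terms below are cheap stand-ins for the simulated density and relative-conformational-energy errors, and log1p is just one possible non-linear projection, all assumptions for illustration.

```python
import numpy as np

def density_error(params):   # stand-in for the bulk-phase density error from simulation
    return (params[0] - 0.70) ** 2

def rce_error(params):       # stand-in for the relative conformational energy error
    return (params[1] - 1.20) ** 2

def weighted_loss(params, w_density=1.0, w_rce=5.0):
    """Weighted sum of property-specific errors; log1p acts as a simple
    non-linear projection to keep one term from dominating the other."""
    return w_density * np.log1p(density_error(params)) + w_rce * np.log1p(rce_error(params))

# Crude grid search over two force-field-like parameters to see the trade-off
grid = np.linspace(0.0, 2.0, 101)
best = min(((weighted_loss((a, b)), (a, b)) for a in grid for b in grid), key=lambda t: t[0])
```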
During the dawn of chemistry, when the temperature of the young Universe had fallen below ∼4000 K, the ions of the light elements produced in Big Bang nucleosynthesis recombined in reverse order of their ionization potential. With their higher ionization potentials, He++ (54.5 eV) and He+ (24.6 eV) combined first with free electrons to form the first neutral atom, prior to the recombination of hydrogen (13.6 eV). At that time, in this metal-free and low-density environment, neutral helium atoms formed the Universe's first molecular bond in the helium hydride ion HeH+, by radiative association with protons (He + H+ → HeH+ + hν). As recombination progressed, the destruction of HeH+ (HeH+ + H → He + H2+) created a first path to the formation of molecular hydrogen, marking the beginning of the Molecular Age. Despite its unquestioned importance for the evolution of the early Universe, the HeH+ molecule has so far escaped unequivocal detection in interstellar space. In the laboratory, the ion was discovered as long ago as 1925, but only in the late seventies was the possibility that HeH+ might exist in local astrophysical plasmas discussed. In particular, the conditions in planetary nebulae were shown to be suitable for the production of potentially detectable HeH+ column densities: the hard radiation field from the central hot white dwarf creates overlapping Strömgren spheres, where HeH+ is predicted to form, primarily by radiative association of He+ and H. With the GREAT spectrometer onboard SOFIA, the HeH+ rotational ground-state transition at λ149.1 μm is now accessible. We report here its detection towards the planetary nebula NGC 7027.
Representing 3D surfaces as level sets of continuous functions over R^3 is the common denominator of neural implicit representations, which recently enabled remarkable progress in geometric deep learning and computer vision tasks. In order to represent 3D motion within this framework, it is often assumed (either explicitly or implicitly) that the transformations which a surface may undergo are homeomorphic: this is not necessarily true, for instance, in the case of fluid dynamics. In order to represent more general classes of deformations, we propose to apply this theoretical framework as regularizers for the optimization of simple 4D implicit functions (such as signed distance fields). We show that our representation is capable of capturing both homeomorphic and topology-changing deformations, while also defining correspondences over the continuously-reconstructed surfaces.
Applications of underwater robots are on the rise; most of them depend on sonar for underwater vision, but the lack of strong perception capabilities limits them in this task. An important issue in sonar perception is matching image patches, which can enable other techniques like localization, change detection, and mapping. There is a rich literature for this problem in color images, but for acoustic images it is lacking, due to the physics that produce these images. In this paper we improve on our previous results for this problem (Valdenegro-Toro et al., 2017): instead of modeling features manually, a Convolutional Neural Network (CNN) learns a similarity function and predicts whether two input sonar images are similar or not. With the objective of further improving the sonar image matching problem, three state-of-the-art CNN architectures are evaluated on the Marine Debris dataset, namely DenseNet and VGG, with a siamese or two-channel architecture, and contrastive loss. To ensure a fair evaluation of each network, thorough hyper-parameter optimization is executed. We find that the best performing models are the DenseNet two-channel network with 0.955 AUC, VGG-Siamese with contrastive loss at 0.949 AUC and DenseNet-Siamese with 0.921 AUC. By ensembling the top-performing DenseNet two-channel and DenseNet-Siamese models, the overall highest prediction accuracy obtained is 0.978 AUC, showing a large improvement over the 0.91 AUC in the state of the art.
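To make the siamese/contrastive setup concrete, here is a compact PyTorch sketch with a toy encoder standing in for the DenseNet/VGG branches evaluated in the paper; patch size, embedding size, and margin are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy stand-in for the shared DenseNet/VGG branch applied to each sonar patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull matching patch embeddings together, push non-matching ones apart."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

encoder = TinyEncoder()
a = torch.randn(4, 1, 96, 96)                     # batch of sonar patches
b = torch.randn(4, 1, 96, 96)                     # batch of candidate matches
same = torch.tensor([1.0, 0.0, 1.0, 0.0])         # 1 = patches show the same object
loss = contrastive_loss(encoder(a), encoder(b), same)
loss.backward()
```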
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied, owing to its low cost, good volumetric hydrogen density, and easy tailoring of hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale applications of solid-state hydrogen storage. On the other hand, activation and kinetic issues should be amended and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by the variation of Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by the substitution of Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2 or β-Ti-type and Ti4Fe2O-type at the Ti-poor and Ti-rich side of the TiFe phase domain, respectively. Thermodynamic data agree with the available literature but offer here a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities, as long as a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter by possibly substituting Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of the hydrogen storage properties on the composition will be discussed, together with some observed correlations.
Since being introduced in the sixties and seventies, semi-implicit Rosenbrock-Wanner (ROW) methods have become an important tool for the time integration of ODE and DAE problems. Over the years, these methods have been further developed in order to save computational effort by regarding approximations with respect to the given Jacobian [5], reduce effects of order reduction by introducing additional conditions [2, 4] or use advantages of partial explicit integration by considering underlying Runge-Kutta formulations [1]. As a consequence, there is a large number of different ROW-type schemes with characteristic properties for solving various problem formulations given in the literature today.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an Ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics show strange behaviors for this task, due to the multiple classes that can be considered correct, which motivates future work. We believe our work will motivate other researchers to move away from classical and towards Bayesian Neural Networks.
Turbulent compressible flows are traditionally simulated using explicit Eulerian time integration applied to the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which elegantly circumvents this restriction. Previous lattice Boltzmann methods for compressible flows were mostly restricted to two dimensions due to the enormous number of discrete velocities needed in three dimensions. In contrast, this Rapid Communication demonstrates how cubature rules enhance the SLLBM to yield a three-dimensional velocity set with only 45 discrete velocities. Based on simulations of a compressible Taylor-Green vortex we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques, even when the time step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers for the first time to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time step sizes is only dictated by physics, while being decoupled from the spatial discretization.
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size up to 191×191 to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
Current robot platforms are being employed to collaborate with humans in a wide range of domestic and industrial tasks. These environments require autonomous systems that are able to classify and communicate anomalous situations such as fires, injured persons, or car accidents, or, generally, any potentially dangerous situation for humans. In this paper we introduce an anomaly detection dataset intended for robot applications, as well as the design and implementation of a deep learning architecture that classifies and describes dangerous situations using only a single image as input. We report a classification accuracy of 97% and a METEOR score of 16.2. We will make the dataset publicly available after this paper is accepted.
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that there are few data sets available, which leads to limited availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are being trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches, when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation for this task in Arabic, as well as the first application of multi-task pre-training in this context.
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 11.45 W m−2, averaged over the two campaigns, whereas for the retrieval using coarser 15 minute power data the mean bias error is 16.39 W m−2.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals as well as data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and are properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per single frame. These limitations have recently been overcome individually, but managing to accomplish this simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real time with plenty of room for future optimizations and extensions.
Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions performed by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable that the robot has the ability to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of a robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, by performing a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also that the reproduction quality is affected by noise in the skeleton observations.
In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e. a symbol sequence representing an action) in terms of the relations between changes in the observations and the action instances. We present a method for inducing a theory (i.e. a semantic description) of the meaning of a command in terms of a minimal set of background knowledge. The only input is a sequence of observations, from which we extract what kinds of effects were caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.