Following nearly identical resolutions passed in 2011 by the district council of the Rhein-Sieg-Kreis (RSK) and the main committee of the City of Bonn, the two administrations were commissioned to develop a start-up concept for electromobility together with the region's energy providers. As a consequence of this resolution, a working group was constituted at the end of 2011, comprising the administrations of the Rhein-Sieg-Kreis and the City of Bonn as well as the energy providers SWB Energie und Wasser, Rhenag, Stadtwerke Troisdorf, Rheinenergie and RWE. The main topics, which are now handled in three working groups, are the expansion of the charging infrastructure, public relations, and the provision of electricity from renewable sources through the construction of corresponding generation plants in the region. While public relations measures and the provision of green electricity can be handled and advanced directly within the working groups, this is not possible for the expansion of the charging infrastructure because of the complexity of the topic and the large number of influencing factors. This gave rise to the idea of cooperating with Hochschule Bonn-Rhein-Sieg.
AErOmAt Abschlussbericht
(2020)
The goal of the AErOmAt project was to develop new methods that avoid a substantial share of the aerodynamic simulations in computationally expensive optimization domains. In doing so, Hochschule Bonn-Rhein-Sieg (H-BRS) made a societally relevant and at the same time commercially exploitable contribution to energy efficiency research. The project also led to a faster integration of the newly appointed applicants into the university's existing research structures.
In complex, expensive optimization domains we often narrowly focus on finding high-performing solutions instead of expanding our understanding of the domain itself. But what if we could quickly understand the complex behaviors that can emerge in such domains instead? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm that allows us to discover a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve the feature models. Our method reduces the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering diverse behaviors helps engineers to better understand expensive domains and their solutions.
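To make the idea concrete, here is a minimal sketch of surrogate-assisted phenotypic niching under toy assumptions: a four-dimensional genome, a single phenotypic feature, and Gaussian-process surrogates standing in for the expensive fluid solver. The function `expensive_sim` is hypothetical and merely plays the role of the costly simulation.

```python
# Minimal sketch of surrogate-assisted phenotypic niching (toy problem;
# `expensive_sim` is a hypothetical stand-in for the fluid solver).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_sim(x):
    """Stand-in for the costly solver: returns (fitness, phenotypic feature)."""
    return -np.sum((x - 0.5) ** 2), float(np.mean(x))

rng = np.random.default_rng(0)
archive = {}                      # niche index -> (genome, fitness)
X, F, B = [], [], []              # evaluated genomes, fitnesses, features

# Seed the surrogates with a few true evaluations.
for _ in range(10):
    x = rng.random(4)
    f, b = expensive_sim(x)
    X.append(x); F.append(f); B.append(b)

fit_model = GaussianProcessRegressor().fit(X, F)
feat_model = GaussianProcessRegressor().fit(X, B)

for gen in range(50):
    # Mutate a random elite (or a random genome while the archive is empty).
    parent = archive[rng.choice(list(archive))][0] if archive else rng.random(4)
    child = np.clip(parent + rng.normal(0, 0.1, 4), 0, 1)
    # Place the child using *predicted* fitness and feature (cheap).
    f_hat = fit_model.predict([child])[0]
    b_hat = feat_model.predict([child])[0]
    niche = int(np.clip(b_hat, 0, 1) * 9)         # 10 feature bins
    if niche not in archive or f_hat > archive[niche][1]:
        # Only promising candidates receive a true (expensive) evaluation.
        f, b = expensive_sim(child)
        X.append(child); F.append(f); B.append(b)
        fit_model.fit(X, F); feat_model.fit(X, B)
        archive[int(np.clip(b, 0, 1) * 9)] = (child, f)
```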
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which searches for diversity in objective space; multimodal optimization, which tries to spread the solutions out in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility in the analysis of diversity, and a simple domain in which to compare the paradigms and gain insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient in very expensive problems, needing hundreds of thousands of evaluations. Even with the assistance of surrogate models, quality diversity needs hundreds or even thousands of evaluations, which can make its use infeasible. In this study we tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to the higher-dimensional case. For a use case of designing buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. By sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than one trained on a set of footprints selected with a space-filling method such as the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by generating training data with quality diversity instead of using common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allows engineers to sweep the design space, understanding wind nuisance in early design phases.
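A rough sketch of the 2D-to-3D bootstrapping pipeline follows, under strong simplifications: `features_2d` and `nuisance_3d` are hypothetical stand-ins for the 2D and 3D simulations, and a Sobol sequence generates the candidate footprints to keep the sketch self-contained (the paper would sample them with a quality diversity algorithm).

```python
# Sketch of the 2D->3D bootstrapping pipeline (all domain functions are
# hypothetical toy stand-ins for the actual flow simulations).
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import Ridge

def features_2d(fp):   # cheap 2D flow features of a footprint
    return np.c_[np.sin(fp @ [3, 1]), np.cos(fp @ [1, 2])]

def nuisance_3d(fp):   # expensive 3D wind-nuisance score (toy stand-in)
    return np.sin(fp @ [3, 1]) + 0.5 * np.cos(fp @ [1, 2])

# Candidate footprints; here a Sobol sequence, QD-sampled in the paper.
footprints = qmc.Sobol(d=2, seed=0).random(1024)

# Simulate only a handful of designs in 3D ...
train_idx = np.arange(16)
model = Ridge().fit(features_2d(footprints[train_idx]),
                    nuisance_3d(footprints[train_idx]))

# ... then rank all 1024 designs by predicted wind nuisance.
pred = model.predict(features_2d(footprints))
best = footprints[np.argsort(pred)[:10]]
print(best.round(2))
```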
Neuroevolution methods evolve the weights of a neural network, and in some cases the topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, an important factor when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network. The size and performance of networks produced using NEAT with a uniform activation function in all nodes (homogeneous networks) are compared to networks that contain a mixture of activation functions (heterogeneous networks). For a number of regression and classification benchmarks it is shown that (1) qualitatively different activation functions lead to different results in homogeneous networks, (2) the heterogeneous version of NEAT is able to select well-performing activation functions, and (3) the produced heterogeneous networks are significantly smaller than homogeneous networks.
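The sketch below illustrates per-node activation evolution on a fixed-topology toy layer rather than full NEAT; the activation pool and mutation probability are illustrative choices, not the paper's.

```python
# Minimal sketch of evolving per-node activation functions (fixed-topology
# toy layer; in NEAT this mutation would sit next to weight/topology ops).
import math, random

ACTIVATIONS = {
    "sigmoid": lambda x: 1 / (1 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    "sin": math.sin,
}

class Node:
    def __init__(self):
        self.act = random.choice(list(ACTIVATIONS))
        self.bias = random.gauss(0, 1)

    def mutate(self, p_act=0.1):
        # With small probability, swap the node's activation function.
        if random.random() < p_act:
            self.act = random.choice(list(ACTIVATIONS))
        self.bias += random.gauss(0, 0.1)

    def forward(self, x):
        return ACTIVATIONS[self.act](x + self.bias)

random.seed(0)
layer = [Node() for _ in range(4)]          # a heterogeneous layer
for node in layer:
    node.mutate()
print([(n.act, round(n.forward(0.5), 3)) for n in layer])
```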
The initial phase in real-world engineering optimization and design is a process of discovery in which not all requirements can be specified in advance, or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique opportunity to support engineers and designers in the search for what is possible and high-performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection it can be determined whether the optimization is drifting away from the user's preferences. The optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that can take the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
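A minimal sketch of such a drift penalty, assuming the user's preference is modeled as the centroid of their selected solutions and the penalty weight `lam` is an illustrative choice (the paper's preference model is more involved):

```python
# Sketch of constraining search by a modeled user preference.
import numpy as np

def fitness(x):
    return -np.sum((x - 0.7) ** 2)

user_selected = np.array([[0.2, 0.3], [0.25, 0.35], [0.3, 0.3]])
preference_center = user_selected.mean(axis=0)   # crude preference model

def penalized_fitness(x, lam=2.0):
    # Penalize drift away from what the user has been selecting.
    drift = np.linalg.norm(x - preference_center)
    return fitness(x) - lam * drift

for x in np.random.default_rng(0).random((3, 2)):
    print(x.round(2), round(penalized_fitness(x), 3))
```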
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
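A small sketch of this comparison, with a hand-written parametric encoding and a fixed linear map standing in for a trained VAE decoder; mean pairwise distance serves as a crude diversity proxy here, not the paper's measure.

```python
# Sketch: searching a parametric encoding vs. a learned latent space.
import numpy as np

def render(params):                 # parametric encoding -> artifact
    return np.array([params[0], params[1], params[0] * params[1]])

W = np.array([[1.0, 0.2], [0.1, 0.9], [0.5, 0.5]])
def decode(z):                      # stand-in for vae.decoder(z)
    return W @ z

rng = np.random.default_rng(0)
param_artifacts = np.array([render(p) for p in rng.random((256, 2))])
latent_artifacts = np.array([decode(z) for z in rng.normal(size=(256, 2))])

def diversity(a):
    # Crude diversity proxy: mean pairwise Euclidean distance.
    d = np.linalg.norm(a[:, None] - a[None, :], axis=-1)
    return d.mean()

print("parametric:", round(diversity(param_artifacts), 3))
print("latent:   ", round(diversity(latent_artifacts), 3))
```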
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model: they are by nature vague, depend on the user's background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user's intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user's preferences, discovering the preference hypervolume within large search spaces.
Artificial intelligence (AI) has become an integral part of today's society. In sports, too, AI methods have increasingly found their way into practice in recent years. However, whether and to what extent the current potential of AI is actually being exploited has not yet been investigated. The benefit of AI methods in sports is undisputed, but their transfer into practice faces serious problems regarding access to resources, the availability of experts, and the handling of the methods and data. According to the authors' hypothesis, the slow adoption of AI methods in elite sports, compared with other application areas, can be traced back to several mismatches between the application field and the AI methods. These mismatches are of a methodological, structural, and communicative nature. The present expert report derives proposals that can resolve these mismatches and at the same time reveal new transfer and synergy opportunities. In addition, three use cases on training control, performance diagnostics, and competition diagnostics were implemented as examples, in the form of corresponding project descriptions. The report shows in what way the problems that still exist in connecting AI and sports can be resolved. An empirical implementation of the training-control use case was carried out in cycling, which is why it is presented in more detail.
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. First, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of machine perception, machine learning and modeling, planning and optimization, as well as interaction and intervention, that hold potential for improving training and competition. Second, we survey the present status of AI use in elite sports. To this end, in addition to another literature review, we interviewed leading sports scientists who are closely connected to the main national service institutes for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing. However, projects in the field of modeling & planning have become increasingly popular in recent years. Based on these two perspectives, we extract deficits, issues and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by the practitioners, and explainability of AI results.
Theoretische Informatik
(2002)
A vivid introduction to the classical topics of theoretical computer science for students of computer science as a major or minor subject. The authors choose an approach that, through numerous worked examples, gives even readers with only elementary knowledge of mathematics access to computability, complexity theory, and formal languages. The mathematical concepts are introduced formally as well as explained informally, and are illustrated with graphical representations. The book covers the material of introductory lectures on theoretical computer science and offers numerous exercises for each chapter. (Publisher's description)
This thesis presents methods suitable for adapting models of the human cardiovascular system to individual circulatory responses. General circulatory models of the human cardiovascular system are typically nonlinear systems of differential equations for which no efficient optimization methods exist. By restricting attention to the aspects relevant to the individualization task, such a model is projected onto models of simpler structure that admit approximation by function approximators, for which in turn efficient optimization algorithms exist. Depending on the structure of the individualization task, a modified BFGS method that uses approximations of such model aspects is additionally employed to improve the convergence of the model individualization.
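As a toy illustration of such model individualization, the sketch below fits a simplified exponential-recovery model to a synthetic individual response using SciPy's stock BFGS (not the modified BFGS variant described in the thesis):

```python
# Sketch of model individualization by gradient-based fitting
# (toy exponential-recovery model instead of the full nonlinear ODE system).
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 50)
observed = 60 + 30 * np.exp(-t / 2.5)      # synthetic individual response

def model(params, t):
    rest_hr, amplitude, tau = params
    return rest_hr + amplitude * np.exp(-t / tau)

def loss(params):
    return np.mean((model(params, t) - observed) ** 2)

fit = minimize(loss, x0=[70, 20, 1.0], method="BFGS")
print(fit.x.round(2))   # should approach [60, 30, 2.5]
```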
A detailed analysis of autonomic cardiovascular control (ACVC) may provide a key to a better understanding of the mechanisms underlying postflight orthostatic hypotension. The central substrate of human ACVC is not directly accessible to measurement and observation in space research. Modelling, supporting inference and physiological reasoning, is a valuable tool to disclose its involvement. We are currently determining the suitability of artificial neural networks (ANNs) as a model of the central substrate of ACVC. Having conducted a number of experiments with simulated tilt test data to clarify the choice of input coding and of architectural biases in network training, we now report on the approximation of data obtained from human subjects during preparation of the German MIR'97 and D-2 missions.
We present a model checking algorithm for ∀CTL (and full CTL) which uses an iterative abstraction refinement strategy. It terminates at least for all transition systems M that have a finite simulation or bisimulation quotient. In contrast to other abstraction refinement algorithms, we always work with abstract models whose sizes depend only on the length of the formula θ (but not on the size of the system, which might be infinite).
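The sketch below shows the general shape of such an abstraction refinement loop, reduced to a simple safety property AG safe on an explicit finite transition system; the actual algorithm for ∀CTL over possibly infinite systems requires symbolic machinery and a formula-driven abstraction, which this toy omits.

```python
# Minimal sketch of an abstraction refinement loop for AG safe
# (explicit finite toy system; not the paper's ∀CTL algorithm).
from itertools import product

states = range(6)
trans = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 4), (2, 5), (5, 5)}
unsafe = {5}
init = {0}

# Coarsest abstraction: one block per safety label.
blocks = [set(states) - unsafe, set(unsafe)]

def abstract_edges(blocks):
    return {(i, j) for (i, b1), (j, b2) in product(enumerate(blocks), repeat=2)
            if any((s, t) in trans for s in b1 for t in b2)}

while True:
    edges = abstract_edges(blocks)
    bad = {i for i, b in enumerate(blocks) if b & unsafe}
    # Abstract reachability from the initial block(s).
    reach = {i for i, b in enumerate(blocks) if b & init}
    frontier = set(reach)
    while frontier:
        frontier = {j for (i, j) in edges if i in frontier} - reach
        reach |= frontier
    if not (reach & bad):
        print("property AG safe holds"); break
    # A bad abstract state is reachable: refine by splitting a mixed block
    # whose members disagree on whether they can step into a bad block.
    split = None
    bad_states = set().union(*(blocks[j] for j in bad))
    for i in reach:
        can_step = {s for s in blocks[i] if any((s, t) in trans for t in bad_states)}
        if can_step and can_step != blocks[i]:
            split = (i, can_step); break
    if split is None:
        print("concrete counterexample found"); break
    i, part = split
    blocks[i:i + 1] = [part, blocks[i] - part]
```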
With the increasing average age of the population in many developed countries, afflictions like cardiovascular diseases have also increased. Exercise has a proven therapeutic effect on the cardiovascular system and can counteract this development. To avoid overstrain, determining an optimal training dose is crucial. In previous research, heart rate has been shown to be a good measure of cardiovascular behavior. Hence, prediction of the heart rate from work load information is an essential part of models used for training control. Most heart-rate-based models are described in the context of specific scenarios and have each been evaluated on a single dataset only. In this paper, we conduct a joint evaluation of existing approaches to modeling the cardiovascular system under strain, and compare their predictive performance. For this purpose, we investigated several analytical models as well as machine learning approaches in two scenarios: prediction over a certain time horizon into the future, and estimation of the relation between work load and heart rate over a whole training session.
This paper describes the development of a Pedelec controller whose performance level (PL) conforms to the European standard on safety of machinery [9] and whose software is verified to conform to the EPAC standard [6] by means of a software verification technique called model checking. In compliance with the standard [9], the hardware needs to implement the required properties corresponding to categories "C" and "D". The latter is used if the brakes are not able to bring the velomobile with a broken motor controller to a full stop. Therefore the controller needs to implement a test unit, which verifies the functionality of the components and, in case of an emergency, shuts the whole hardware down to prevent injury to the cyclist. The MTTFd can be determined through a failure graph, which is the result of an FMEA analysis, and can be used to prove that the Pedelec controller meets the regulations of the system specification. The analysis of the system in compliance with [9] usually treats the software as a black box, ignoring its inner workings and validating its correctness by means of testing. In this paper we present a temporal logic specification according to [6], based on which the software for the Pedelec controller is implemented, and verify, instead of only testing, its functionality. By means of model checking [1] we prove that the software fulfills all requirements regulated by its specification.
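As an illustration, a safety requirement of this kind might take the following shape in CTL; the atomic propositions are hypothetical examples and are not taken from the EPAC standard:

```latex
% "On all paths, globally: whenever the self-test detects a component
% fault, on all continuations the motor is eventually switched off."
\[
  \mathit{AG}\,\bigl(\mathit{fault\_detected} \rightarrow \mathit{AF}\,\mathit{motor\_off}\bigr)
\]
```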
The use of wearable devices or "wearables" in the physical activity domain has been increasing in recent years. These devices are used as training tools, providing the user with detailed information about individual physiological responses and feedback on the physical training process. Advances in sensor technology, miniaturization, energy consumption and processing power have increased the usability of these wearables. Furthermore, the sensor technologies used must be reliable, valid, and usable; considering the variety of existing sensors, not all of them are suitable for integration in wearables. The application and development of wearables have to consider the characteristics of the physical training process to improve their effectiveness and efficiency as training tools. During physical training, it is essential to elicit individually optimal strain to evoke the desired adjustments to training. One important goal is to neither overstrain nor underchallenge the user. Many wearables use heart rate as an indicator of this individual strain. However, due to a variety of internal and external influencing factors, heart rate kinetics are highly variable, making it difficult to control the stress eliciting individually optimal strain. For optimal training control it is essential to model and predict individual responses and adapt the external stress if necessary. The basis for this modeling is the valid and reliable recording of these individual responses. Depending on the heart rate kinetics and the obtained physiological data, different models and techniques are available that can be used for strain or training control. The aim of this review is to give an overview of the measurement, prediction, and control of individual heart rate responses. To this end, available sensor technologies measuring individual heart rate responses are analyzed, approaches to model and predict these individual responses are discussed, and their feasibility for wearables is assessed.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training–performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although simulation of the recorded training data in most cases shows useful results once the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model so that the effects of former training history can be analyzed separately (we call those terms preload), it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting the other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; its prediction error is around 54% of the error of the FF-Model without preloads.
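A compact sketch of the preload idea, with synthetic daily loads and illustrative (not fitted) parameter values:

```python
# Sketch of the Fitness-Fatigue model with explicit preload terms.
import numpy as np

def ff_model(loads, p0, k1, tau1, k2, tau2, preload_fit=0.0, preload_fat=0.0):
    """Performance p(t) = p0 + fitness(t) - fatigue(t).

    The preload terms carry the decayed effect of training done *before*
    the recorded data starts, so p0 can stay at a realistic basic level.
    """
    t = np.arange(1, len(loads) + 1)
    p = np.empty(len(loads))
    for n, tn in enumerate(t):
        past = loads[:n]                      # strictly earlier sessions
        days = tn - t[:n]
        fitness = (k1 * np.sum(past * np.exp(-days / tau1))
                   + preload_fit * np.exp(-tn / tau1))
        fatigue = (k2 * np.sum(past * np.exp(-days / tau2))
                   + preload_fat * np.exp(-tn / tau2))
        p[n] = p0 + fitness - fatigue
    return p

loads = np.random.default_rng(0).uniform(0, 100, 60)
print(ff_model(loads, p0=250, k1=1.0, tau1=45, k2=2.0, tau2=15,
               preload_fit=80, preload_fat=20)[:5].round(1))
```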
During exercise, heart rate has proven to be a good measure for planning workouts. It is not only simple to measure but also well understood, and it has been used for many years for workout planning. To use heart rate to control physical exercise, a model that predicts future heart rate as a function of a given strain can be utilized. In this paper, we present a mathematical model based on convolution for predicting the heart rate response to strain, with four physiologically explainable parameters. The model is based on the general idea of the Fitness-Fatigue model for performance analysis, but is revised here for heart rate analysis. Comparisons show that the Convolution model can compete with other known heart rate models. Furthermore, the model can be improved by reducing the number of parameters; the remaining parameter appears to be a promising indicator of the subject's actual fitness.
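A minimal sketch of such a convolution model, reduced here to a single exponential impulse response (the paper's four-parameter form is not reproduced):

```python
# Sketch of a convolution-based heart-rate response model.
import numpy as np

def predict_hr(strain, hr_rest, gain, tau):
    """HR(t) = hr_rest + gain * (strain * kernel)(t), a discrete convolution."""
    t = np.arange(len(strain))
    kernel = np.exp(-t / tau)
    return hr_rest + gain * np.convolve(strain, kernel)[:len(strain)]

strain = np.r_[np.zeros(10), np.full(30, 1.0), np.zeros(20)]  # step workload
hr = predict_hr(strain, hr_rest=60, gain=4.0, tau=8.0)
print(hr.round(1)[::5])   # rise during the step, decay afterwards
```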
Analyzing training performance in sport is usually based on standardized test protocols and needs laboratory equipment, e.g., for measuring blood lactate concentration or other physiological parameters. Avoiding special equipment and standardized test protocols, we show that it is possible to reach a quality of performance simulation comparable to the results of laboratory studies using nothing but training data. For this purpose, we introduce a fitting concept for a performance model that takes into account the peculiarities of using training data for performance diagnostics. With a specific way of preprocessing the data, the accuracy of laboratory studies can be achieved for about 50% of the tested subjects, while the lower correlations for the other 50% can be explained.
The Fitness-Fatigue model (Calvert et al. 1976) is widely used for performance analysis. This antagonistic model is based on a fitness term, a fatigue term, and an initial basic level of performance. Instead of using generic parameter values, individualizing the model requires fitting its parameters. With fitted parameters, the model adapts to account for individual responses to strain. Even though fitting to recorded training data shows useful results in most cases, the model cannot simply be used for prediction without modification.
The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing between exploiting the DDE to generalize the knowledge contained in the current archive of elites and exploring new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems and provides a low-dimensional representation that can then be re-used. We evaluate the DDE approach by evolving solutions for the inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but also produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
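To keep a sketch of the DDE idea short, PCA stands in below for the VAE and the problem and archive are toys; the point being illustrated is the balance between exploiting the learned low-dimensional encoding and exploring directly in the high-dimensional genome space.

```python
# Sketch of the Data-Driven Encoding idea (PCA as a stand-in for the VAE).
import numpy as np
from sklearn.decomposition import PCA

DIM, BINS = 20, 10
rng = np.random.default_rng(0)

def fitness(x):
    return -np.sum((x - 0.5) ** 2)

def feature(x):                       # 1D behavior descriptor
    return float(np.mean(x))

archive = {}                          # feature bin -> (genome, fitness)
for _ in range(200):
    if archive and len(archive) > 3 and rng.random() < 0.5:
        elites = np.array([e for e, _ in archive.values()])
        # Exploit: mutate in the low-dimensional learned encoding.
        dde = PCA(n_components=3).fit(elites)
        z = dde.transform(elites[rng.integers(len(elites))][None])
        child = dde.inverse_transform(z + rng.normal(0, 0.1, z.shape))[0]
    else:
        # Explore: mutate directly in the high-dimensional genome space.
        parent = (archive[rng.choice(list(archive))][0]
                  if archive else rng.random(DIM))
        child = parent + rng.normal(0, 0.1, DIM)
    child = np.clip(child, 0, 1)
    b = int(np.clip(feature(child), 0, 0.999) * BINS)
    f = fitness(child)
    if b not in archive or f > archive[b][1]:
        archive[b] = (child, f)

print(len(archive), "niches filled")
```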