H-BRS Bibliography
Document Type
- Conference Object (15)
- Article (5)
- Preprint (5)
- Report (3)
- Part of a Book (1)
- Doctoral Thesis (1)
Following nearly identical resolutions passed in 2011 by the district council of the Rhein-Sieg-Kreis (RSK) and the main committee of the City of Bonn, the respective administrations were commissioned to develop, together with the region's energy providers, a start-up support concept for electric mobility. As a result of this resolution, a working group was constituted at the end of 2011, consisting of the administrations of the Rhein-Sieg-Kreis and the City of Bonn and the energy providers SWB Energie und Wasser, Rhenag, Stadtwerke Troisdorf, Rheinenergie and RWE. The main topics, which are now handled in three working groups, comprise the expansion of the charging infrastructure, public relations, and the provision of electricity from renewable sources through the construction of corresponding plants in the region. While public relations measures and the provision of green electricity can be handled and advanced directly by the working groups, this is not possible for the expansion of the charging infrastructure due to the complexity of the topic and the large number of influencing factors. This gave rise to the idea of a cooperation with the Hochschule Bonn-Rhein-Sieg.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited by the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: (1) a predefined parameterized space and (2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation, as it produces a higher diversity of solutions.
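A minimal sketch of how such a comparison could be set up, assuming a MAP-Elites-style archive: the only part that changes between the two setups is the genome-to-artifact decoding, either a direct parametric mapping or a stand-in for a trained VAE decoder. The toy objective, feature descriptors and the linear "decoder" are illustrative assumptions, not the paper's actual domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained VAE decoder: a fixed linear map from a 2D latent
# space to the 8D artifact parameters (assumption, not a real trained model).
LATENT_DECODER = np.random.default_rng(1).normal(size=(2, 8))

def decode_parametric(genome):
    return genome                          # direct encoding: genome == parameters

def decode_latent(genome):
    return genome @ LATENT_DECODER         # learned encoding: latent -> parameters

def fitness(artifact):                     # toy objective
    return -np.sum(artifact ** 2)

def features(artifact):                    # toy 2D phenotypic descriptor in (0, 1)
    return 0.5 + 0.5 * np.tanh([artifact.mean(), artifact.std()])

def map_elites(decode, dim, bins=10, iters=5000, sigma=0.1):
    archive = {}                           # feature cell -> (fitness, genome)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            keys = list(archive)
            parent = archive[keys[rng.integers(len(keys))]][1]
            genome = parent + rng.normal(0.0, sigma, dim)   # mutate an elite
        else:
            genome = rng.normal(0.0, 1.0, dim)              # random restart
        artifact = decode(genome)
        cell = tuple((np.asarray(features(artifact)) * bins).astype(int))
        fit = fitness(artifact)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, genome)
    return archive

# Output diversity is compared via the number of filled feature cells.
print("parametric:", len(map_elites(decode_parametric, dim=8)))
print("latent:    ", len(map_elites(decode_latent, dim=2)))
```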
Neuroevolution methods evolve the weights of a neural network, and in some cases the topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, an important factor when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network. The size and performance of networks produced using NEAT with uniform activation in all nodes, or homogeneous networks, are compared to networks which contain a mixture of activation functions, or heterogeneous networks. For a number of regression and classification benchmarks it is shown that (1) qualitatively different activation functions lead to different results in homogeneous networks, (2) the heterogeneous version of NEAT is able to select well-performing activation functions, and (3) the produced heterogeneous networks are significantly smaller than homogeneous networks.
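The extension described above boils down to letting each node gene carry its own activation function and allowing mutation to swap it. The fragment below is a sketch of just that element, not a full NEAT implementation; the activation set, gene layout and mutation rate are assumptions for illustration.

```python
import math, random
from dataclasses import dataclass

# A small pool of candidate activation functions (illustrative assumption).
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    "sin": math.sin,
    "gauss": lambda x: math.exp(-x * x),
}

@dataclass
class NodeGene:
    node_id: int
    activation: str = "sigmoid"   # homogeneous NEAT fixes this for every node

def mutate_activation(node: NodeGene, rate: float = 0.1) -> NodeGene:
    """With probability `rate`, replace the node's activation function."""
    if random.random() < rate:
        node.activation = random.choice(list(ACTIVATIONS))
    return node

# A heterogeneous network simply lets each node hold its own choice:
nodes = [mutate_activation(NodeGene(i)) for i in range(5)]
print([n.activation for n in nodes])
```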
In complex, expensive optimization domains we often narrowly focus on finding high-performing solutions instead of expanding our understanding of the domain itself. But what if we could quickly understand the complex behaviors that can emerge in such domains instead? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm which allows us to discover a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve the feature models. Our method can reduce the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering a diversity of behaviors helps engineers to better understand expensive domains and their solutions.
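A rough sketch of the surrogate-assisted idea, assuming a toy stand-in for the fluid dynamics solver: surrogates are trained on the shapes already simulated to predict both fitness and the expensive phenotypic features, so most candidates can be placed in feature space without calling the solver. The solver placeholder, sample sizes and Gaussian process surrogates are assumptions, not the method's exact components.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_simulation(shape):
    # Placeholder for the CFD solver: returns (fitness, phenotypic features).
    fit = -np.sum(shape ** 2)
    feat = np.array([np.tanh(shape.sum()), np.tanh(np.abs(shape).sum())])
    return fit, feat

# 1) Simulate a small initial sample of shapes.
X = rng.normal(size=(20, 6))
sims = [expensive_simulation(x) for x in X]
y_fit = np.array([s[0] for s in sims])
y_feat = np.array([s[1] for s in sims])

# 2) Train surrogates for the fitness and for each phenotypic feature.
fit_model = GaussianProcessRegressor().fit(X, y_fit)
feat_models = [GaussianProcessRegressor().fit(X, y_feat[:, j]) for j in range(2)]

# 3) Predict where new candidates land in feature space; only the most
#    promising or novel candidates would be sent back to the simulator.
candidates = rng.normal(size=(1000, 6))
pred_fit = fit_model.predict(candidates)
pred_feat = np.stack([m.predict(candidates) for m in feat_models], axis=1)
print(pred_fit.shape, pred_feat.shape)
```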
The initial phase in real-world engineering optimization and design is a process of discovery in which not all requirements can be specified in advance, or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique opportunity to support engineers and designers in the search for what is possible and high performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection, it can be determined whether the optimization is drifting away from the user's preferences. The optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that can take the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
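A small sketch of how the selection-based constraint could look: candidates far from the solutions the user selected receive a penalty that is subtracted from the objective, keeping the search close to the user's preferences. The distance-based penalty form and its weight are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def selection_penalty(x, selected, weight=1.0):
    """Penalty grows with the distance to the nearest user-selected solution."""
    selected = np.atleast_2d(selected)
    d = np.min(np.linalg.norm(selected - x, axis=1))
    return weight * d

def penalized_objective(x, objective, selected, weight=1.0):
    # The raw objective is discounted for candidates that drift away from
    # the user's selection.
    return objective(x) - selection_penalty(x, selected, weight)

# Usage: the user has selected two solutions; candidates near them keep
# most of their raw objective value, distant candidates are discounted.
objective = lambda x: -np.sum(x ** 2)
user_selected = np.array([[0.5, 0.5], [0.8, 0.2]])
candidate = np.array([0.6, 0.4])
print(penalized_objective(candidate, objective, user_selected))
```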
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which seeks diversity in objective space; multimodal optimization, which spreads out the solutions in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility for the analysis of diversity, as well as a simple domain to compare the paradigms and provide insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
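A brief sketch of the automatic feature discovery step, under the assumption that a simple one-hidden-layer reconstruction network can stand in for the autoencoder: the bottleneck activations of a network trained to reconstruct raw phenotypes are used as learned phenotypic feature coordinates. The synthetic phenotype data and the scikit-learn MLPRegressor stand-in are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
phenotypes = rng.normal(size=(500, 16))          # raw phenotype descriptions (toy)

# Train an input -> input reconstruction with a 2-unit bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(phenotypes, phenotypes)

# The bottleneck activations serve as learned 2D phenotypic features.
latent = np.tanh(phenotypes @ ae.coefs_[0] + ae.intercepts_[0])
print(latent.shape)   # (500, 2) -> usable as feature coordinates for niching
```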
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient in very expensive problems, needing hundreds of thousands of evaluations. Even with the assistance of surrogate models, quality diversity needs hundreds or even thousands of evaluations, which can make its use infeasible. In this study we tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to the higher-dimensional case. For a use case to design buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. For a diverse set of building designs, obtained by sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than one trained on a set of footprints selected with a space-filling algorithm like the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by producing training data with quality diversity instead of using common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allow engineers to sweep the design space, understanding wind nuisance in early design phases.
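The bootstrapping idea can be sketched as follows, with toy stand-ins for both the 2D and the 3D simulations: a predictive model maps cheap 2D footprint flow features to expensive 3D flow features and is trained on only 16 footprints, and the choice of those 16 footprints (a QD archive versus a space-filling Sobol set) is what the study compares. All functions and data below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def features_2d(footprint):            # cheap 2D flow features of a footprint (toy)
    return np.array([footprint.sum(), np.abs(footprint).max()])

def features_3d(footprint):            # expensive 3D flow feature (toy stand-in)
    return np.sin(footprint.sum()) + 0.1 * np.abs(footprint).max()

def train_predictor(footprints):
    X = np.array([features_2d(f) for f in footprints])
    y = np.array([features_3d(f) for f in footprints])   # the 16 "3D simulations"
    return GaussianProcessRegressor().fit(X, y)

# Option A: 16 footprints from a space-filling Sobol sequence.
sobol_footprints = qmc.Sobol(d=4, scramble=True, seed=0).random(16) * 2 - 1
# Option B: 16 footprints taken from a QD archive (stand-in: diverse random set).
qd_footprints = rng.uniform(-1, 1, size=(16, 4))

model = train_predictor(qd_footprints)
# The trained model can then rank 1024 candidate designs by predicted
# 3D wind behaviour without running further 3D simulations.
candidates = rng.uniform(-1, 1, size=(1024, 4))
pred = model.predict(np.array([features_2d(c) for c in candidates]))
print(pred.shape)
```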
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. Firstly, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of Machine Perception, Machine Learning and Modeling, Planning and Optimization, as well as Interaction and Intervention, holding potential for improving training and competition. Secondly, we assess the present status of AI use in elite sports. To this end, in addition to another literature review, we interviewed leading sports scientists who are closely connected to the main national service institute for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodological categories of signal and image processing. However, projects in the field of modeling & planning have become increasingly popular in recent years. Based on these two perspectives, we extract deficits, issues and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by the practitioners and explainability of AI results.
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model as they are by nature vague, depend on the user’s background and are not always deterministic, changing depending on the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
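One hedged way to picture the preference hypervolume: solutions the user selected are embedded in a learned latent space, a density model is fitted to those points, and new candidates from the divergent search are kept if they fall in a sufficiently dense region. A Gaussian kernel density estimate stands in for the VAE-based preference model here; the latent coordinates are assumed to come from a trained encoder.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Latent coordinates of solutions the user selected in previous iterations (toy).
selected_latent = rng.normal(loc=[1.0, -0.5], scale=0.3, size=(30, 2))

kde = gaussian_kde(selected_latent.T)        # preference density in latent space
threshold = np.quantile(kde(selected_latent.T), 0.1)

# Candidates produced by the divergent (QD) search, also encoded to latent space.
candidates_latent = rng.normal(size=(200, 2))
inside = kde(candidates_latent.T) >= threshold
print(f"{inside.sum()} of {len(inside)} candidates lie in the preference region")
```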
Artificial intelligence (AI) has become an integral part of today's society. In sports, too, AI methods have increasingly found their way into practice in recent years. However, whether and to what extent the current potential of AI is actually being exploited has not yet been investigated. The benefit of AI methods in sports is undisputed, but serious problems arise when putting them into practice, concerning access to resources, the availability of experts, and the handling of methods and data. According to the authors' hypothesis, the reason for the slow adoption of AI methods in elite sports, compared with other fields of application, lies in several mismatches between the field of application and the AI methods. These mismatches are of a methodological, structural and also communicative nature. In this expert report, proposals are derived that can help resolve these mismatches and at the same time reveal new transfer and synergy opportunities. In addition, three use cases on training control, performance diagnostics and competition diagnostics were implemented as examples, in the form of corresponding project descriptions. The report shows in which ways the problems that still exist today in connecting AI and sports can be resolved as far as possible. An empirical implementation of the training control use case was carried out in cycling, which is why it is presented in more detail.
AErOmAt Final Report
(2020)
The AErOmAt project aimed to develop new methods to save a substantial share of the aerodynamic simulations in computationally expensive optimization domains. In doing so, the Hochschule Bonn-Rhein-Sieg (H-BRS) has made a contribution to energy efficiency research that is both socially relevant and economically exploitable. The project also led to a faster integration of the newly appointed applicants into the existing research structures.
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
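A compact sketch of the multi-encoding archive, assuming two toy encodings (a direct voxel vector and a simple pattern-producing function in place of a CPPN): candidates from both encodings compete for the same phenotypic cells purely on fitness, so no encoding is privileged. The voxel grid, features and objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 8                                           # 8x8 voxel slice (toy)

def decode_direct(genome):                         # direct encoding
    return (genome.reshape(GRID, GRID) > 0).astype(float)

def decode_pattern(genome):                        # indirect, CPPN-like stand-in
    xs, ys = np.meshgrid(np.linspace(-1, 1, GRID), np.linspace(-1, 1, GRID))
    return (np.sin(genome[0] * xs + genome[1]) * np.cos(genome[2] * ys) > 0).astype(float)

def phenotype_features(voxels):                    # e.g. fill ratio, asymmetry
    return (voxels.mean(), np.abs(voxels - voxels.T).mean())

def fitness(voxels):
    return voxels.sum()                            # toy objective

archive = {}                                       # feature cell -> (fit, encoding, genome)
for _ in range(2000):
    if rng.random() < 0.5:
        enc, genome = "direct", rng.normal(size=GRID * GRID)
    else:
        enc, genome = "pattern", rng.normal(size=3)
    voxels = decode_direct(genome) if enc == "direct" else decode_pattern(genome)
    cell = tuple((np.array(phenotype_features(voxels)) * 10).astype(int))
    fit = fitness(voxels)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, enc, genome)         # encodings compete per cell

print({enc: sum(1 for v in archive.values() if v[1] == enc) for enc in ("direct", "pattern")})
```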
In this thesis it is posited that the central object of preference discovery is a co-creative process in which the Other can be represented by a machine. It explores efficient methods to enhance introverted intuition using extraverted intuition's communication lines. Possible implementations of such processes are presented using novel algorithms that perform divergent search to feed the users' intuition with many examples of high-quality solutions, allowing them to exert influence interactively. The machine feeds and reflects upon human intuition, combining both what is possible and what is preferred. The machine model and the divergent optimization algorithms are the motor behind this co-creative process, in which machine and users co-create and interactively choose branches of an ad hoc hierarchical decomposition of the solution space.
The proposed co-creative process consists of several elements: a formal model for interactive co-creative processes, evolutionary divergent search, diversity and similarity, data-driven methods to discover diversity, limitations of artificial creative agents, matters of efficiency in behavioral and morphological modeling, visualization, a connection to prototype theory, and methods to allow users to influence artificial creative agents. This thesis helps put the human back into the design loop in generative AI and optimization.
Evolutionary illumination is a recent technique that produces many diverse, optimal solutions in a map of manually defined features. To cope with the large number of objective function evaluations, surrogate model assistance was recently introduced. Illumination models need to represent many more diverse optimal regions than classical surrogate models. In this PhD thesis, we propose to decompose the sample set, decreasing model complexity, by hierarchically segmenting the training set according to the samples' coordinates in feature space. An ensemble of diverse models can then be trained to serve as a surrogate to illumination.
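The proposed decomposition could be sketched as follows, assuming Gaussian process surrogates and a simple recursive median split: the training samples are segmented by their feature-space coordinates and one local model is trained per segment, yielding an ensemble of simpler surrogates. The split depth, the toy data and the model choice are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(200, 4))              # genomes (toy)
feat = np.stack([X[:, 0], X[:, 1] ** 2], axis=1)   # feature coordinates (toy)
y = np.sin(X).sum(axis=1)                          # objective values (toy)

def segment(features, depth=2):
    """Recursively halve the sample set along alternating feature axes."""
    segments = [np.arange(len(features))]
    for d in range(depth):
        axis, new_segments = d % features.shape[1], []
        for seg in segments:
            median = np.median(features[seg, axis])
            new_segments += [seg[features[seg, axis] <= median],
                             seg[features[seg, axis] > median]]
        segments = new_segments
    return segments

# Train one (simpler) surrogate per feature-space segment.
segments = segment(feat)
ensemble = [GaussianProcessRegressor().fit(X[s], y[s]) for s in segments]
print(len(ensemble), [len(s) for s in segments])
```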
Surrogate models are used to reduce the burden of expensive-to-evaluate objective functions in optimization. By creating models which map genomes to objective values, these models can estimate the performance of unknown inputs, and so be used in place of expensive objective functions. Evolutionary techniques such as genetic programming or neuroevolution commonly alter the structure of the genome itself. A lack of consistency in the genotype is a fatal blow to data-driven modeling techniques: interpolation between points is impossible without a common input space. However, while the dimensionality of genotypes may differ across individuals, in many domains, such as controllers or classifiers, the dimensionality of the input and output remains constant. In this work we leverage this insight to embed differing neural networks into the same input space. To judge the difference between the behavior of two neural networks, we give them both the same input sequence and examine the difference in output. This difference, the phenotypic distance, can then be used to situate these networks in a common input space, allowing us to produce surrogate models which can predict the performance of neural networks regardless of topology. In a robotic navigation task, we show that models trained using this phenotypic embedding perform as well as or better than those trained on the weight values of a fixed-topology neural network. We establish such phenotypic surrogate models as a promising and flexible approach which enables surrogate modeling even for representations that undergo structural changes.
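A minimal sketch of the phenotypic embedding, assuming a fixed probe input sequence and toy feed-forward networks in place of evolved topologies: each network's concatenated responses to the shared inputs become its coordinates in a common space, and the distance between two such response vectors is the phenotypic distance.

```python
import numpy as np

rng = np.random.default_rng(0)
PROBE_INPUTS = rng.uniform(-1, 1, size=(32, 3))    # shared input sequence (toy)

def random_network(hidden):
    """Toy feed-forward net with a variable hidden layer size (3 in, 2 out)."""
    W1, W2 = rng.normal(size=(3, hidden)), rng.normal(size=(hidden, 2))
    return lambda x: np.tanh(np.tanh(x @ W1) @ W2)

def phenotypic_embedding(net):
    # Concatenated responses to the probe inputs = the network's coordinates.
    return net(PROBE_INPUTS).ravel()

def phenotypic_distance(net_a, net_b):
    return np.linalg.norm(phenotypic_embedding(net_a) - phenotypic_embedding(net_b))

# Networks with different topologies still embed into the same 64-dimensional
# space, so a single surrogate model can be trained on these embeddings.
small, large = random_network(hidden=4), random_network(hidden=16)
print(phenotypic_embedding(small).shape, phenotypic_distance(small, large))
```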
Force field (FF) based molecular modeling is an often used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of various properties’ errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
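A small sketch of the weighted loss described above, with placeholder error terms for the bulk-phase density and the relative conformational energies: weighting factors balance the two property errors, and an optional non-linear projection (here a log1p, as an assumption) compresses large individual errors before they are combined. The error functions and target values are purely illustrative.

```python
import numpy as np

def density_error(params):                 # placeholder property error terms
    return (params[0] - 0.70) ** 2

def rce_error(params):
    return (params[1] - 1.25) ** 2

def weighted_loss(params, w_density=1.0, w_rce=1.0, project=False):
    e_d, e_r = density_error(params), rce_error(params)
    if project:                            # non-linear projection of each term
        e_d, e_r = np.log1p(e_d), np.log1p(e_r)
    return w_density * e_d + w_rce * e_r

# Shifting the weights trades density accuracy against RCE accuracy.
params = np.array([0.68, 1.40])
print(weighted_loss(params, w_density=2.0, w_rce=0.5),
      weighted_loss(params, w_density=0.5, w_rce=2.0, project=True))
```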
The field of computational chemistry has seen a significant increase in the integration of machine learning concepts and algorithms. In this Perspective, we surveyed 179 open-source software projects, with corresponding peer-reviewed papers published within the last 5 years, to better understand the topics within the field being investigated by machine learning approaches. For each project, we provide a short description, the link to the code, the accompanying license type, and whether the training data and resulting models are made publicly available. For the projects deposited in GitHub repositories, the most frequently employed Python libraries are identified. We hope that this survey will serve as a resource to learn about machine learning or specific architectures thereof by identifying accessible codes with accompanying papers on a topic basis. To this end, we also include computational chemistry open-source software for generating training data and fundamental Python libraries for machine learning. Based on our observations and considering the three pillars of collaborative machine learning work, open data, open source (code), and open models, we provide some suggestions to the community.