H-BRS Bibliography
Document Type
- Conference Object (3)
- Article (1)
Year of publication
- 2017 (4)
Language
- English (4)
Has Fulltext
- no (4)
Keywords
- Domestic robotics (1)
- Electric mobility (1)
- Fusion (1)
- Genetic algorithm (1)
- Maximal covering location problem (1)
- Modalities (1)
- Multi-objective (1)
- Multi-stage (1)
- Multimodal (1)
- Object recognition (1)
- Optimization (1)
- Single-objective (1)
- Transparency (1)
- activation function (1)
- bagging (1)
- bloat (1)
- evolutionary illumination (1)
- heterogeneous networks (1)
- neuroevolution (1)
- regression (1)
- surrogate modeling (1)
Evolutionary illumination is a recent technique that produces many diverse, optimal solutions across a map of manually defined features. To support the large number of objective function evaluations it requires, surrogate model assistance was recently introduced. Surrogate models for illumination must represent many more diverse optimal regions than classical surrogate models. In this PhD thesis, we propose to decompose the sample set, thereby decreasing model complexity, by hierarchically segmenting the training samples according to their coordinates in feature space. An ensemble of diverse models can then be trained to serve as a surrogate for illumination.
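The proposed decomposition can be illustrated with a minimal sketch: samples are recursively bisected along their feature-space coordinates, and one local model is trained per segment. All names here are hypothetical, the feature map is assumed to be 2-D, and each "surrogate" is a toy mean-fitness predictor rather than a real regression model:

```python
def split_samples(samples, depth=0, max_depth=2):
    """Recursively bisect the sample set along alternating feature axes.

    Each sample is ((f1, f2), fitness): coordinates in an (assumed 2-D)
    feature map plus the measured objective value.
    """
    if depth == max_depth or len(samples) <= 2:
        return [samples]  # one leaf = one local surrogate's training set
    axis = depth % 2
    samples = sorted(samples, key=lambda s: s[0][axis])
    mid = len(samples) // 2
    return (split_samples(samples[:mid], depth + 1, max_depth)
            + split_samples(samples[mid:], depth + 1, max_depth))


def fit_leaf(samples):
    """Toy stand-in for a surrogate: predict the leaf's mean fitness."""
    mean = sum(fitness for _, fitness in samples) / len(samples)
    return lambda coords: mean


class SegmentedSurrogate:
    """Ensemble of local models, one per feature-space segment."""

    def __init__(self, samples, max_depth=2):
        self.leaves = split_samples(samples, max_depth=max_depth)
        self.models = [fit_leaf(leaf) for leaf in self.leaves]

    def predict(self, coords):
        # Route a query to the leaf whose samples lie closest in feature space.
        def dist(leaf):
            return min((c[0] - coords[0]) ** 2 + (c[1] - coords[1]) ** 2
                       for c, _ in leaf)
        i = min(range(len(self.leaves)), key=lambda i: dist(self.leaves[i]))
        return self.models[i](coords)
```

A real instantiation would replace `fit_leaf` with, for example, a Gaussian process per segment; the point of the sketch is only that each model sees a smaller, more homogeneous training set than a single global surrogate would.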
Neuroevolution methods evolve the weights of a neural network, and in some cases its topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, an important factor when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of each neuron in addition to the topology and weights of the network. The size and performance of networks produced by NEAT with a uniform activation function in all nodes (homogeneous networks) are compared to networks containing a mixture of activation functions (heterogeneous networks). On a number of regression and classification benchmarks it is shown that (1) qualitatively different activation functions lead to different results in homogeneous networks, (2) the heterogeneous version of NEAT is able to select well-performing activation functions, and (3) the resulting heterogeneous networks are significantly smaller than homogeneous networks.