H-BRS Bibliography
The need for innovation around the control functions of inverters is great. PV inverters were initially expected to be passive followers of the grid and to disconnect as soon as abnormal conditions occurred. Since future power systems will be dominated by generation and storage resources interfaced through inverters, these converters must move from following to forming and sustaining the grid. As "digital natives", PV inverters can also play an important role in the digitalisation of distribution networks. In this short review we identified a large potential to make the PV inverter the smart local hub in a distributed energy system. At the micro level, costs and coordination can be improved with bidirectional inverters between the AC grid and PV production, stationary storage, car chargers and DC loads. At the macro level, the distributed nature of PV generation means that the same devices will support both the local distribution network and the global stability of the grid. Much success has been achieved in the former; the latter remains a challenge, in particular in terms of scaling. Yet there is some urgency in researching and demonstrating such solutions. And while digitalisation offers promise in all control aspects, it also raises significant cybersecurity concerns.
Introduction: After cellulose, lignin is the most abundant biopolymer on earth, accounting for 18-35 % by weight of lignocellulosic biomass. Today, it is a by-product of the paper and pulping industry. Although lignin is available in huge amounts, mainly in the form of so-called black liquor produced via Kraft pulping, processes for the valorization of lignin are still limited [1]. Due to its hyperbranched, polyphenol-like structure, lignin has gained increasing interest as a biobased building block for polymer synthesis [2]. The present work focuses on the extraction and purification of lignin from industrial black liquor and the synthesis of lignin-based polyurethanes.
The transport of carbon dioxide through pipelines is one of the important components of Carbon dioxide Capture and Storage (CCS) systems currently being developed. If high flow rates are desired, transport in the liquid or supercritical phase is preferred. For technical reasons, the fluid must stay in that phase, without transitioning to the gaseous state. In this paper, a numerical simulation of the stationary process of carbon dioxide transport with impurities and phase transitions is considered. We use the Homogeneous Equilibrium Model (HEM) and the GERG-2008 thermodynamic equation of state to describe the transport parameters. The algorithms used make it possible to solve scenarios of carbon dioxide transport in the liquid or supercritical phase, with detection of approach to the phase-transition region. Convergence of the solution algorithms is analyzed in connection with fast and abrupt changes in the equation of state and the enthalpy function in the region of phase transitions.
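To make the marching approach concrete, here is a minimal Python sketch of stationary pipe transport with a phase-boundary check. The `density` and `saturation_pressure` functions are crude stand-ins for the full GERG-2008 equation of state, and all names and constants beyond the CO2 critical point (304.13 K, 7.38 MPa) are illustrative assumptions:

```python
import math

# Hypothetical stand-ins for the GERG-2008 equation of state; a real solver
# would evaluate the full multi-parameter EOS for the CO2/impurity mixture.
def density(p, T):          # kg/m^3, liquid/supercritical branch (assumed)
    return 800.0 + 1.5e-5 * (p - 8e6) - 2.0 * (T - 300.0)

def saturation_pressure(T): # Pa, rough CO2 saturation line (assumed)
    return 7.38e6 * math.exp(11.6 * (1.0 - 304.13 / T))

def march_pipeline(p_in, T, q, D, L, f=0.015, n=1000):
    """March the stationary momentum balance along the pipe and stop
    when the state approaches the phase-transition region."""
    dx = L / n
    A = math.pi * D**2 / 4.0
    p = p_in
    for i in range(n):
        rho = density(p, T)
        v = q / (rho * A)                          # velocity from mass flow
        p -= f * rho * v * abs(v) / (2 * D) * dx   # Darcy-Weisbach friction
        if T < 304.13 and p < 1.05 * saturation_pressure(T):
            print(f"warning: near phase boundary at x = {i * dx:.0f} m")
            break
    return p
```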
In this paper, piston and generic-type gas compressors are modeled for a globally convergent algorithm for solving stationary gas transport problems. A theoretical analysis of the simulation stability is presented, together with its practical implementation and a verification of convergence on a realistic gas network. The relevance of the paper for the topics of the conference lies in the significance of gas transport networks as an advanced application of simulation and modeling, including the development of novel mathematical and numerical algorithms and methods.
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations composed of a piecewise linear 'free' model for the control logic and a non-linear 'advanced' model for the calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of the system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a nearly 100% convergence rate of the solver is achieved with this approach.
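As an illustration of the kind of criterion involved, here is a small sketch of a numerical check that an element characteristic Δp(q) is strictly monotone (fold-free) over its working range; the function names and tolerance are assumptions, not the paper's implementation:

```python
def is_resistive(dp_model, q_grid, eps=1e-6, h=1e-4):
    """Check a monotonicity (resistivity-type) criterion for an element
    characteristic dp = dp_model(q): the numerical derivative d(dp)/dq
    must stay above eps everywhere on the working range."""
    return all((dp_model(q + h) - dp_model(q - h)) / (2 * h) >= eps
               for q in q_grid)
```

The warping described above can then be understood as constructing the element equations so that such a check passes over the whole range of transport variables.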
In this paper, an analysis of the error ellipsoid in the space of solutions of stationary gas transport problems is carried out. For this purpose, a Principal Component Analysis of the solution set has been performed. It reveals unstable directions associated with the marginal fulfillment of the resistivity conditions for the equations of compressors and other control elements in gas networks. In practice, the instabilities occur when multiple compressors or regulators try to control pressures or flows in the same part of the network. Such problems can occur, in particular, when the compressors or regulators reach their working limits. Possible ways of resolving the instabilities are considered.
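A hedged sketch of such an analysis in Python: PCA via the singular value decomposition of a set of solution vectors, flagging principal directions with anomalously large variance as candidates for instability (threshold and names are illustrative):

```python
import numpy as np

def unstable_directions(solutions, ratio=1e3):
    """PCA of solution vectors (one solution per row); principal directions
    whose singular value exceeds the median by `ratio` indicate an
    elongated error ellipsoid, i.e. potentially unstable directions."""
    X = np.asarray(solutions, dtype=float)
    Xc = X - X.mean(axis=0)                       # center the solution set
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[s > ratio * np.median(s)]
```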
The paper presents a topological reduction method for gas transport networks, using contraction of series, parallel and tree-like subgraphs. The contraction operations are implemented for pipe elements described by a quadratic friction law. This allows a significant reduction of the graphs and accelerates the solution procedure for stationary network problems. The algorithm has been tested on several realistic network examples. Possible extensions of the method to different friction laws and other elements are discussed.
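For pipes obeying a quadratic friction law, the contraction rules take a simple closed form. A minimal sketch, writing the law as Δp = R·q|q| (in gas networks it is often posed in squared pressures, with the same algebra):

```python
def series(resistances):
    """Series contraction under dp = R * q|q|: the same flow passes through
    every pipe, so the pressure drops and hence the R values add."""
    return sum(resistances)

def parallel(resistances):
    """Parallel contraction: at equal dp the branch flows q_i = sqrt(dp/R_i)
    add, which yields R_eq = (sum_i R_i**-0.5)**-2."""
    return sum(r ** -0.5 for r in resistances) ** -2
```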
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
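The core of such an inversion can be illustrated with a deliberately simple PV performance model; everything below (names, inverter efficiency, temperature coefficient) is an assumption for illustration, not the algorithm of the paper:

```python
def infer_irradiance(p_ac, p_rated, t_cell, g_stc=1000.0,
                     gamma=-0.004, eta_inv=0.96):
    """Invert P_ac = eta_inv * P_rated * (G/G_stc) * (1 + gamma*(T_cell-25))
    for the effective plane-of-array irradiance G in W/m^2."""
    return g_stc * p_ac / (eta_inv * p_rated * (1.0 + gamma * (t_cell - 25.0)))
```

A transposition model would then be inverted in turn to relate the plane-of-array value to global horizontal irradiance, and a radiative transfer model to atmospheric optical depths.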
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular so that self-sufficiency and battery lifetime are maximised. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models is employed. The latter are however often inaccurate, both due to deficiencies in model physics as well as an insufficient description of irradiance variability.
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
Reliable and regionally differentiated power forecasts are required to guarantee an efficient and economic energy transition towards renewable energies. Among renewable energy technologies, alongside e.g. wind turbines, photovoltaic (PV) systems are an essential component of this transition, being cost-efficient and simple to install. Reliable power forecasts are, however, required for the grid integration of photovoltaic systems, and these in turn require, among other data, high-resolution spatio-temporal global irradiance data. Hence the generation of robust, reviewed global irradiance data is an essential contribution to the energy transition.
Synthesis of Substituted Hydroxyapatite for Application in Bone Tissue Engineering and Drug Delivery
(2019)
The accurate forecasting of solar radiation plays an important role in predictive control applications for energy systems with a high share of photovoltaic (PV) energy. Especially off-grid microgrid applications using predictive control can benefit from forecasts with a high temporal resolution to address sudden fluctuations of PV power. However, cloud formation processes and movements are subject to ongoing research. For now-casting applications, all-sky imagers (ASI) are used to provide appropriate forecasts for the aforementioned applications. Recent research aims to achieve these forecasts via deep learning approaches, either as an image segmentation task that generates a DNI forecast through a cloud-vectoring approach and translates the DNI to a GHI with ground-based measurements (Fabel et al., 2022; Nouri et al., 2021), or as an end-to-end regression task that generates a GHI forecast directly from the images (Paletta et al., 2021; Yang et al., 2021). While end-to-end regression might be the more attractive approach for off-grid scenarios, the literature reports improved performance compared to smart persistence but does not show satisfactory forecasting patterns (Paletta et al., 2021). This work takes a step back and investigates the possibility of translating ASI images to the current GHI in order to deploy the neural network as a feature extractor. An ImageNet-pretrained deep learning model is used to achieve this translation on an openly available dataset from the University of California San Diego (Pedro et al., 2019). The images and measurements were collected in Folsom, California. Results show that the neural network can successfully translate ASI images to GHI for a variety of cloud situations without the need for any external variables. Extending the neural network to a forecasting task also shows promising forecasting patterns, which indicates that the neural network extracts both temporal and instantaneous features from the images to generate GHI forecasts.
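A minimal PyTorch sketch of the kind of model described: an ImageNet-pretrained backbone whose classification head is replaced by a single regression output for GHI. The choice of ResNet-18 and the training details are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone reused as a feature extractor, new 1-unit head for GHI
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

def train_step(model, images, ghi, optimizer, loss_fn=nn.MSELoss()):
    """One supervised step mapping ASI images to measured GHI (W/m^2)."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)          # (batch, 1) -> (batch,)
    loss = loss_fn(pred, ghi)
    loss.backward()
    optimizer.step()
    return loss.item()
```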
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant for the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants of this process from the experimentally measured polarization curve is presented. The approach combines statistical and principal component analysis with the determination of a trust region for a linearized model. It is shown that this experiment does not allow one to determine the reaction constants accurately, but only some of their linear combinations. Possibilities for extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentration of the main reagents, are discussed.
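The identifiability statement can be made concrete with a linear-algebra sketch: the SVD of the Jacobian of the linearized model separates well-determined parameter combinations from the practically unidentifiable ones (names and tolerance are illustrative):

```python
import numpy as np

def identifiable_combinations(J, tol=1e-8):
    """J: Jacobian of model residuals w.r.t. the reaction constants.
    Right singular vectors with large singular values are the linear
    parameter combinations the experiment constrains; the remaining
    vectors span the unidentifiable subspace."""
    _, s, Vt = np.linalg.svd(J, full_matrices=False)
    well_determined = s > tol * s.max()
    return Vt[well_determined], Vt[~well_determined]
```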
It is shown that the electrochemical kinetics of alkaline methanol oxidation can be reduced by setting certain fast reactions contained in it to a steady state. As a result, the underlying system of Ordinary Differential Equations (ODE) is transformed into a system of Differential-Algebraic Equations (DAE). We measure the precision characteristics of this transformation and discuss the consequences of the obtained model reduction.
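Schematically (a generic singular-perturbation illustration, not the specific methanol-oxidation mechanism), splitting the concentrations into slow species c_s and fast species c_f gives:

```latex
\text{ODE:}\quad \dot{c}_s = f(c_s, c_f),\qquad \varepsilon\,\dot{c}_f = g(c_s, c_f)
\quad\xrightarrow{\;\varepsilon \to 0\;}\quad
\text{DAE:}\quad \dot{c}_s = f(c_s, c_f),\qquad 0 = g(c_s, c_f)
```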
Synthesis of serving policies for objects flow in the system with refillable storage component
(2017)
A novel approach to producing 2D designs by adapting the HyperNEAT algorithm to evolve non-uniform rational basis splines (NURBS) is presented. This representation is proposed as an alternative to previous pixel-based approaches, which were primarily motivated by aesthetic interests and not designed for optimization tasks. The spline representation outperforms previous pixel-based approaches on target matching tasks, performing well even in matching irregular target shapes. In addition to improved evolvability in the face of a well-defined fitness metric, a NURBS representation has the added virtues of being continuous rather than discrete, as well as being intuitive and easily modified by graphic and industrial designers.
Surrogate-assistance approaches have long been used in computationally expensive domains to improve the data-efficiency of optimization algorithms. Neuroevolution, however, has so far resisted the application of these techniques because it requires the surrogate model to make fitness predictions based on variable topologies, instead of a vector of parameters. Our main insight is that we can sidestep this problem by using kernel-based surrogate models, which require only the definition of a distance measure between individuals. Our second insight is that the well-established Neuroevolution of Augmenting Topologies (NEAT) algorithm provides a computationally efficient distance measure between dissimilar networks in the form of "compatibility distance", initially designed to maintain topological diversity. Combining these two ideas, we introduce a surrogate-assisted neuroevolution algorithm that combines NEAT and a surrogate model built using a compatibility distance kernel. We demonstrate the data-efficiency of this new algorithm on the low dimensional cart-pole swing-up problem, as well as the higher dimensional half-cheetah running task. In both tasks the surrogate-assisted variant achieves the same or better results with several times fewer function evaluations than the original NEAT.
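A sketch of the two ingredients combined: NEAT's compatibility distance over genomes modelled here as `{innovation_number: weight}` dicts (an assumed simplification), plugged into an RBF kernel for a Gaussian-process-style surrogate:

```python
import numpy as np

def compatibility_distance(g1, g2, c1=1.0, c2=1.0, c3=0.4):
    """NEAT compatibility distance d = (c1*E + c2*D)/N + c3*Wbar, where E/D
    count excess/disjoint genes, Wbar is the mean weight difference of
    matching genes, and N is the size of the larger genome."""
    keys1, keys2 = set(g1), set(g2)
    matching = keys1 & keys2
    cutoff = min(max(keys1), max(keys2))      # boundary: disjoint vs. excess
    non_matching = keys1 ^ keys2
    excess = sum(1 for k in non_matching if k > cutoff)
    disjoint = len(non_matching) - excess
    wbar = np.mean([abs(g1[k] - g2[k]) for k in matching]) if matching else 0.0
    n = max(len(g1), len(g2))
    return (c1 * excess + c2 * disjoint) / n + c3 * wbar

def kernel(g1, g2, length_scale=1.0):
    """RBF kernel over the compatibility distance, usable in a kernel-based
    surrogate that predicts fitness directly from genomes."""
    return np.exp(-0.5 * (compatibility_distance(g1, g2) / length_scale) ** 2)
```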
A new method for design space exploration and optimization, Surrogate-Assisted Illumination (SAIL), is presented. Inspired by robotics techniques designed to produce diverse repertoires of behaviors for use in damage recovery, SAIL produces diverse designs that vary according to features specified by the designer. By producing high-performing designs with varied combinations of user-defined features a map of the design space is created. This map illuminates the relationship between the chosen features and performance, and can aid designers in identifying promising design concepts. SAIL is designed for use with computationally expensive design problems, such as fluid or structural dynamics, and integrates approximative models and intelligent sampling of the objective function to minimize the number of function evaluations required. On a 2D airfoil optimization problem SAIL is shown to produce hundreds of diverse designs which perform competitively with those found by state-of-the-art black box optimization. Its capabilities are further illustrated in a more expensive 3D aerodynamic optimization task.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique to 'illuminate' the problem space through the lens of chosen features has the potential to be a powerful tool for exploring design spaces, but is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination (SAIL) algorithm, introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high-performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
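The overall loop can be sketched as follows; `map_elites`, `fit_gp`, and `evaluate` are placeholders for the illumination routine, the Gaussian-process trainer, and the expensive objective (e.g. a CFD solver), so this is a sketch under stated assumptions rather than the exact implementation:

```python
import random

def sail(evaluate, map_elites, fit_gp, x_init, iterations=10, batch=10):
    """Minimal sketch of the SAIL loop. `map_elites(f)` returns the elites
    of an illumination run on objective f; `fit_gp(X, y)` returns
    mean/std predictor functions for a Gaussian-process model."""
    X = list(x_init)
    y = [evaluate(x) for x in X]
    for _ in range(iterations):
        mean, std = fit_gp(X, y)
        # Acquisition map: illuminate an optimistic (UCB) estimate, so both
        # promising and uncertain regions of the feature map get sampled.
        elites = map_elites(lambda x: mean(x) + 2.0 * std(x))
        for x in random.sample(elites, batch):
            X.append(x)
            y.append(evaluate(x))
    mean, _ = fit_gp(X, y)
    return map_elites(mean)   # prediction map of high-performing designs
```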
The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge: between exploring a wide variety of solutions and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions, but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a thousand-joint planar arm. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel, but similar, tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but show that black-box optimization encodings can be automatically learned rather than hand-designed.
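The "careful mix" of variation operators can be sketched as follows; `encoder`/`decoder` stand for the trained VAE, parents are numpy arrays, and all parameter values are illustrative assumptions:

```python
import numpy as np

def mixed_variation(parent, encoder, decoder, p_latent=0.5, sigma=0.1):
    """Mix the two variation operators: with probability p_latent, perturb
    the parent in the learned latent space of the VAE; otherwise apply a
    traditional Gaussian mutation in the original search space."""
    if np.random.rand() < p_latent:
        z = encoder(parent)                        # map into latent space
        return decoder(z + sigma * np.random.randn(*z.shape))
    return parent + sigma * np.random.randn(*parent.shape)
```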
Are quality diversity algorithms better at generating stepping stones than objective-based search?
(2019)
The route to the solution of complex design problems often lies through intermediate "stepping stones" which bear little resemblance to the final solution. By greedily following the path of greatest fitness improvement, objective-based search overlooks and discards stepping stones which might be critical to solving the problem. Here, we hypothesize that Quality Diversity (QD) algorithms are a better way to generate stepping stones than objective-based search: by maintaining a large set of solutions which are high-quality but phenotypically different, these algorithms collect promising stepping stones while protecting them in their own "ecological niche". To demonstrate the capabilities of QD we revisit the challenge of recreating images produced by user-driven evolution, a classic challenge which spurred work in novelty search and illustrated the limits of objective-based search. We show that QD far outperforms objective-based search in matching user-evolved images. Further, our results suggest some intriguing possibilities for leveraging the diversity of solutions created by QD.
Ressourceneffiziente Optimierung von Hohlkörpern aus Kunststoff mittels Multiskalensimulation
(2017)
The mechanical properties of extrusion blow-molded plastic hollow bodies depend substantially on material properties influenced by the processing conditions. The aim of the study presented here is to account for process-dependent material parameters in simulation programs and thereby increase their predictive accuracy. This requires the creation of an interface between process and component simulation. In addition, it is shown how simulations at the micro level (molecular dynamics simulations) can be used to determine material parameters without carrying out a physical experiment.
At H-BRS, a university of applied sciences with around 9,000 students, an OER culture was deliberately established in three steps as part of the strategy for digitalising teaching: (1) joint strategy building as part of a participatively developed university development plan, anchoring OER in the digitalisation strategy; (2) building on the networking of experts, the successful acquisition of OER projects, which are presented as examples; (3) permanent strategic anchoring, based on continuous internal and external networking, the establishment of digital exchange platforms for teaching staff, the transfer of the OER idea (cooperation, exchange, multiple use) to higher-education didactics, and regular calls for funding measures.
The Potential of Sustainable Antimicrobial Additives for Food Packaging from Native Plants in Benin
(2019)
Evolutionary illumination is a recent technique that produces many diverse, optimal solutions in a map of manually defined features. To cope with the large number of objective function evaluations, surrogate model assistance was recently introduced. Illumination models need to represent many more diverse optimal regions than classical surrogate models. In this PhD thesis, we propose to decompose the sample set, decreasing model complexity, by hierarchically segmenting the training set according to the samples' coordinates in feature space. An ensemble of diverse models can then be trained to serve as a surrogate to illumination.
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model as they are by nature vague, depend on the user's background and are not always deterministic, changing depending on the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user's intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many, diverse solutions, with variational autoencoders to both model that diversity as well as the user's preferences, discovering the preference hypervolume within large search spaces.
The initial phase in real-world engineering optimization and design is a process of discovery in which not all requirements can be stated in advance, or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique chance to support engineers and designers in the search for what is possible and high-performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection it can be determined whether the optimization is drifting away from the user's preferences. The optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that can take the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
Neuroevolution methods evolve the weights of a neural network, and in some cases the topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, an important factor when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network. The size and performance of networks produced using NEAT with uniform activation in all nodes (homogeneous networks) are compared to networks which contain a mixture of activation functions (heterogeneous networks). For a number of regression and classification benchmarks it is shown that (1) qualitatively different activation functions lead to different results in homogeneous networks, (2) the heterogeneous version of NEAT is able to select well-performing activation functions, and (3) the produced heterogeneous networks are significantly smaller than homogeneous networks.
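A minimal sketch of the added mutation operator, assuming nodes are represented as dicts and using an illustrative set of candidate activations (not the exact set from the paper):

```python
import math
import random

# Candidate activation functions for heterogeneous networks (assumed set)
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    "sin": math.sin,
}

def mutate_activation(node, rate=0.1):
    """Extension of NEAT mutation: with small probability, reassign a
    node's activation function, allowing networks to become heterogeneous."""
    if random.random() < rate:
        node["activation"] = random.choice(list(ACTIVATIONS))
    return node
```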
Surrogate models are used to reduce the burden of expensive-to-evaluate objective functions in optimization. By creating models which map genomes to objective values, these models can estimate the performance of unknown inputs, and so be used in place of expensive objective functions. Evolutionary techniques such as genetic programming or neuroevolution commonly alter the structure of the genome itself. A lack of consistency in the genotype is a fatal blow to data-driven modeling techniques: interpolation between points is impossible without a common input space. However, while the dimensionality of genotypes may differ across individuals, in many domains, such as controllers or classifiers, the dimensionality of the input and output remains constant. In this work we leverage this insight to embed differing neural networks into the same input space. To judge the difference between the behavior of two neural networks, we give them both the same input sequence and examine the difference in output. This difference, the phenotypic distance, can then be used to situate these networks in a common input space, allowing us to produce surrogate models which can predict the performance of neural networks regardless of topology. In a robotic navigation task, we show that models trained using this phenotypic embedding perform as well as or better than those trained on the weight values of a fixed-topology neural network. We establish such phenotypic surrogate models as a promising and flexible approach which enables surrogate modeling even for representations that undergo structural changes.
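A sketch of the phenotypic embedding distance, with networks treated as callables returning output vectors (an assumption of this sketch):

```python
import numpy as np

def phenotypic_distance(net_a, net_b, probe_inputs):
    """Behavioral distance between two networks of arbitrary topology:
    evaluate both on the same probe input sequence and compare outputs.
    A small distance means similar behavior, regardless of structure."""
    out_a = np.array([net_a(x) for x in probe_inputs])
    out_b = np.array([net_b(x) for x in probe_inputs])
    return float(np.linalg.norm(out_a - out_b)) / len(probe_inputs)
```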
The simulation of fluid flows is of importance to many fields of application, especially in industry and infrastructure. The governing equations form a coupled system of non-linear, hyperbolic partial differential equations, the one-dimensional shallow water equations, which enable a consistent treatment of free-surface flows in open channels as well as pressurised flows in closed pipes. The numerical realisation of these equations remains complicated and challenging due to their characteristic properties, which can give rise to discontinuous solutions.
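For reference, a standard conservative form of the one-dimensional shallow water (Saint-Venant) equations; the paper's exact source terms and its pressurised-pipe extension may differ:

```latex
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t}
+ \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A} + g\,I_{1}\right)
= g\,A\,\left(S_{0} - S_{f}\right),
```

with wetted cross-section A, discharge Q, hydrostatic pressure integral I_1, bed slope S_0 and friction slope S_f.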
The proper use of protective hoods on panel saws should reliably prevent severe injuries from (hand) contact with the blade or from material kickbacks. It should also minimize long-term lung damage from fine-particle pollution. To achieve both purposes, the hood must be adjusted by the operator to fit the height of each workpiece. After a work process is finished, the hood must be lowered completely to the bench. Unfortunately, in practice the protective hood is fixed at a high position for most of the work time and thereby loses its safety function. A system for automatic height adjustment of the hood would increase comfort and safety. If the system can reliably distinguish between workpieces and skin, it will furthermore reduce occupational hazards for panel saw users. A functional demonstrator of such a system has been designed and implemented to show the feasibility of this approach. A specific optical sensor system observes a point on the extended cut axis in front of the blade. The sensor determines the surface material reliably and simultaneously measures the distance to the workpiece surface. If the distance changes because a workpiece is fed into the machine, the control unit sets the motor-adjusted hood to the correct height. If the sensor detects skin, the hood is not moved. In addition, a camera observes the area under the hood. If no workpieces or offcuts are left under the hood, it is lowered back to the default position.
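The control logic described can be summarised in a short sketch; the sensor, camera and hood interfaces (`read_surface`, `read_distance`, `objects_under_hood`, `move_to`) are hypothetical names, not the demonstrator's actual API:

```python
def control_step(sensor, camera, hood, bench_height=0.0):
    """One cycle of the hood height controller described above (a sketch)."""
    material = sensor.read_surface()        # 'skin', 'wood', or 'none'
    if material == "skin":
        return                              # never move the hood toward skin
    if material == "wood":
        # Workpiece approaching on the cut axis: match the hood to its height
        hood.move_to(sensor.read_distance())
    elif not camera.objects_under_hood():
        hood.move_to(bench_height)          # nothing left: lower to the bench
```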