H-BRS Bibliography
Departments, institutes and facilities
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (53)
Document Type
- Article (17)
- Conference Object (17)
- Part of a Book (3)
- Contribution to a Periodical (3)
- Preprint (3)
- Report (3)
- Research Data (2)
- Book (monograph, edited volume) (1)
- Doctoral Thesis (1)
- Lecture (1)
Year of publication
- 2021 (53)
Keywords
- AOD (2)
- Automatic Differentiation (2)
- COD (2)
- Distribution grid management (2)
- Energiemeteorologie (2)
- Erzeugungsprognose (2)
- Generative Models (2)
- Hydrogen storage (2)
- Inversion (2)
- Lattice Boltzmann Method (2)
TREE Jahresbericht 2019/2020
(2021)
In both its breadth and its depth, the annual report is intended to demonstrate the strengths of our joint efforts in the research field of sustainable technologies: interdisciplinary, strong in research, supportive of early-career researchers, and engaged with society.
Over the past year, the pandemic was a challenge for the Institute TREE as well. How the members coped with the switch to predominantly online communication, and how university life changed as a result, is recorded in the annual report under "See you online". The change in the institute's board of directors is also a topic of this year's report. Under the main themes "Wissenschaftstransfer" (science transfer), "TREE und Wirtschaft" (TREE and industry) and "Transfer Öffentlichkeit" (transfer to the public), you can read about the most important events for the institute in 2019 and 2020.
Do you work in quality management and have been tasked with investigating a problem systematically and solving it methodically? Do you have too many tasks and not know how to prioritize them? Are your resources too limited to handle all complaints at the same time? Or do you not know how to improve a particular process, within its boundaries, in a goal-oriented way?
The national bioeconomy policy and research strategy envisages a transformation of the economy in which the use of fossil resources is increasingly replaced by renewable raw materials. The use of bio-based plastics is to be promoted in this context. Initial analyses of media coverage of bioplastics in a pilot study showed that the basic idea of biodegradable plastics meets with broad approval in public discourse. Beyond the socio-political level of discourse, however, a media-driven debate is developing around substantial problems with these materials in waste management. There is now a risk that this attitude, spread by the mass media, will rub off on public opinion. A lack of public acceptance could jeopardize the success of innovative bioplastic products.
In this contribution, we perform computer simulations to expedite the development of hydrogen storage systems based on metal hydrides. These simulations enable an in-depth analysis of the processes within the systems that could otherwise not be achieved, because determining crucial process properties would require measurement instruments that are currently not available in the setup. We therefore investigate the reliability of reaction values that are determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms needed to gain insight into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram iron-titanium-manganese (FeTiMn) sample to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used van 't Hoff equation with a mathematical expression for the pressure-composition isotherms (PCI) to calculate the equilibrium pressure.
Finally, the parameters' accuracy is checked against yet another existing metal hydride system. The simulated results demonstrate high concordance with the experimental data, which supports the use of kinetic reaction properties approximated by a design of experiments in further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
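For context, the van 't Hoff relation referred to above is the standard way to link the equilibrium pressure of a metal hydride to the reaction enthalpy and entropy; a textbook form (not quoted from the paper) is

```latex
\ln\!\left(\frac{p_\mathrm{eq}}{p_0}\right) = -\frac{\Delta H}{R\,T} + \frac{\Delta S}{R}
```

with reference pressure p_0, gas constant R and temperature T. It treats the plateau pressure as composition-independent, which is precisely what a fitted pressure-composition-isotherm expression p_eq(T, x) relaxes.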
The clear-sky radiative effect of aerosol-radiation interactions (REari) is relevant for our understanding of the climate system, and the influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated, in particular with respect to seasonal and regional variations, for the region of Germany and the year 2015 at the surface and the top of the atmosphere using two complementary approaches.
First, an ensemble of clear-sky models that explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. Shortwave broadband irradiance measurements in the absence of clouds serve as the basis, with a clear-sky detection algorithm identifying cloud-free observations. The measurements considered are the shortwave broadband global and diffuse horizontal irradiances from shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are MMAC, MRMv6.1, METSTAT, ESRA, Heliosat-1, CEM and the simplified Solis model. The aerosol and atmospheric characterisations defined in the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the CAMS reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated against AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03, or about 30 %, by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated at −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean REari at the surface for the 25 DWD stations is −12.8 ± 5 W m−2 averaged over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterisation, the annual cycle of the aerosol radiative effect cannot be reproduced. Of this set of clear-sky models, the highest level of agreement is shown by the ESRA and MRMv6.1 models.
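REari as used above is conventionally defined as the difference in net irradiance between an atmosphere with and without aerosol; a standard formulation (consistent with, but not quoted from, the abstract) is

```latex
\mathrm{REari} = F_\mathrm{net}^{\mathrm{aerosol}} - F_\mathrm{net}^{\mathrm{pristine}},
\qquad
F_\mathrm{net} = F^{\downarrow} - F^{\uparrow},
```

evaluated at the surface or the top of the atmosphere; negative surface values, as reported here, indicate that aerosols reduce the energy available at the ground.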
The genetic basis of brain tumor development is poorly understood. Here, leukocyte DNA of 21 patients from 15 families with ≥ 2 glioma cases each was analyzed by whole-genome or targeted sequencing. As a result, we identified two families with rare germline variants, p.(A592T) or p.(A817V), in the E-cadherin gene CDH1 that co-segregate with the tumor phenotype, consisting primarily of oligodendrogliomas, WHO grade II/III, IDH-mutant, 1p/19q-codeleted (ODs). Rare CDH1 variants, previously shown to predispose to gastric and breast cancer, were significantly overrepresented in these glioma families (13.3%) versus controls (1.7%). In 68 individuals from 28 gastric cancer families with pathogenic CDH1 germline variants, brain tumors, including a pituitary adenoma, were observed in three cases (4.4%), a significantly higher prevalence than in the general population (0.2%). Furthermore, rare CDH1 variants were identified in the tumor DNA of 6/99 (6%) ODs. CDH1 expression was detected in undifferentiated and differentiating oligodendroglial cells isolated from rat brain. Functional studies using CRISPR/Cas9-mediated knock-in or stably transfected cell models demonstrated that the identified CDH1 germline variants affect cell membrane expression, cell migration and aggregation. The E-cadherin ectodomain containing variant p.(A592T) showed increased intramolecular flexibility in a molecular dynamics simulation model. E-cadherin harboring the intracellular variant p.(A817V) showed reduced β-catenin binding, resulting in increased cytosolic and nuclear β-catenin levels, which were reverted by treatment with the MAPK-interacting serine/threonine kinase 1 inhibitor CGP 57380. Our data provide evidence for a role of deactivating CDH1 variants in the risk and tumorigenesis of neuroepithelial and epithelial brain tumors, particularly ODs, possibly via WNT/β-catenin signaling.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited by the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation, as it produces a higher diversity of solutions.
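To make the comparison concrete, here is a minimal sketch of a quality-diversity search in the spirit of MAP-Elites, one common QD algorithm; the paper's actual algorithm, encodings and operators are assumptions here. The candidate can be either an explicit parameter vector or a latent vector that is decoded by a generative model inside `evaluate`.

```python
import random

def map_elites(evaluate, mutate, random_solution, bins, iterations=10_000):
    """Minimal MAP-Elites loop: keep the best solution per behavior bin.

    evaluate(x)      -> (fitness, behavior_descriptor)
    bins(descriptor) -> hashable archive cell index
    """
    archive = {}  # cell -> (fitness, solution)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            _, parent = random.choice(list(archive.values()))
            candidate = mutate(parent)      # vary an existing elite
        else:
            candidate = random_solution()   # occasional fresh sample
        fitness, descriptor = evaluate(candidate)
        cell = bins(descriptor)
        # a candidate enters the archive only if its cell is empty
        # or it beats the current elite of that cell
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, candidate)
    return archive
```

The archive's coverage (how many cells end up filled) is one way to quantify the output diversity that the abstract compares between the two search spaces.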
The actomyosin system generates mechanical work through the execution of the power stroke, an ATP-driven, two-step rotational swing of the myosin neck that occurs after ATP hydrolysis, during the transition from weakly to strongly actin-bound myosin states, concomitant with Pi release and prior to ADP dissociation. The activating role of actin in product release and force generation is well documented; however, the communication paths associated with the weak-to-strong transitions are poorly characterized. With the aid of mutant analyses based on kinetic investigations and simulations, we identified the W-helix as an important hub coupling the structural changes of the switch elements during ATP hydrolysis to temporally controlled interactions with actin that are passed on to the central transducer and converter. Disturbing the W-helix/transducer pathway increased actin-activated ATP turnover and reduced motor performance as a consequence of the prolonged duration of the strongly actin-attached states. Actin-triggered Pi release was accelerated, while ADP release was considerably decelerated; both limit the maximum ATPase rate, thus transforming myosin-2 into a high-duty-ratio motor. This kinetic signature of the mutant allowed us to define the fractional occupancies of intermediate states during the ATPase cycle, providing evidence that myosin populates a cleft-closure state of strong actin interaction, with the hydrolysis products still bound, during the weak-to-strong transition and before accomplishing the power stroke.
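For reference, the duty ratio invoked above is conventionally defined as the fraction of the ATPase cycle that a motor spends strongly bound to its track (a textbook definition, not taken from the paper):

```latex
r = \frac{\tau_{\mathrm{strong}}}{\tau_{\mathrm{strong}} + \tau_{\mathrm{weak}}}
```

so decelerating ADP release, which prolongs the strongly bound states, directly raises r and pushes myosin-2 towards high-duty-ratio behavior.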
This study investigates the effects of four multifunctional chain-extending cross-linkers (CECL) on the processability, mechanical performance, and structure of polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends produced using film blowing technology. The newly developed reference compound (M·VERA® B5029) and the CECL-modified blends are characterized with respect to their initial properties and the corresponding properties after aging at 50 °C for 1 and 2 months. The tensile strength, seal strength, and melt volume rate (MVR) change markedly after thermal aging, whereas the storage modulus, elongation at break, and tear resistance remain constant. The degradation of the polymer chains (indicated by increased MVR) and cross-linking (indicated by decreased MVR) are examined thoroughly with differential scanning calorimetry (DSC); the results indicate that the CECL-modified blends do not generally endure thermo-oxidation over time. Further, DSC measurements of 25 µm and 100 µm films reveal that film blowing pronouncedly changes the structures of the compounds. These findings are also confirmed by dynamic mechanical analysis, which leads to the conclusion that tris(2,4-di-tert-butylphenyl)phosphite barely affects the glass transition temperature, while changes are seen with the other CECLs. Cross-linking is found for the aromatic polycarbodiimide and poly(4,4-dicyclohexylmethanecarbodiimide) CECLs after melting of granules and films, although overall the most synergistic effect is shown by 1,3-phenylenebisoxazoline.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM) that circumvents this restriction. While many lattice Boltzmann methods for compressible flows have been restricted to two dimensions due to the enormous number of discrete velocities required in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations, we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without additional filtering or stabilizing techniques beyond the filtering introduced by the interpolation, even for time-step sizes up to two orders of magnitude larger than in simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows with a fully explicit scheme whose range of admissible time-step sizes is dictated by physics rather than by the spatial discretization.
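A minimal illustration of the semi-Lagrangian streaming idea, reduced to one dimension with linear interpolation (the actual SLLBM works on 3D grids with 45 velocities and higher-order interpolation, all simplified away here):

```python
import numpy as np

def semi_lagrangian_stream(f, velocity, dt, dx):
    """Stream one distribution function along its characteristic.

    Instead of hopping to a fixed neighboring node (classical streaming),
    each node pulls its value from the departure point x - c*dt, which
    may lie between nodes and is recovered by interpolation. This is
    what decouples the admissible time step from the grid spacing.
    """
    n = f.shape[0]
    x = np.arange(n) * dx
    departure = (x - velocity * dt) % (n * dx)   # periodic domain
    idx = np.floor(departure / dx).astype(int)
    w = departure / dx - idx                     # linear weight in [0, 1)
    return (1.0 - w) * f[idx] + w * f[(idx + 1) % n]
```

Because the departure point is found exactly along the characteristic, the scheme remains stable for displacements of several cells per step, which is the mechanism behind the large time-step sizes reported above.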
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant to the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants of this process from the experimentally measured polarization curve is presented. The approach combines statistical and principal component analysis with the determination of a trust region for a linearized model. It is shown that this experiment does not allow the reaction constants themselves to be determined accurately, but only some of their linear combinations. Possibilities for extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentrations of the main reagents, are discussed.
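One common way to expose which linear combinations of parameters are identifiable, in the spirit of the principal-component step described above (the paper's exact procedure may differ), is a singular value decomposition of the model's sensitivity matrix:

```python
import numpy as np

def identifiable_combinations(model, params, rel_step=1e-4, tol=1e-6):
    """Rank parameter directions by their influence on the model output.

    model(params) -> simulated polarization curve as a 1D array.
    Directions (rows of Vt) with tiny singular values barely change
    the curve and are therefore practically unidentifiable.
    """
    params = np.asarray(params, dtype=float)
    y0 = model(params)
    J = np.empty((y0.size, params.size))
    for j in range(params.size):              # finite-difference Jacobian
        p = params.copy()
        h = rel_step * max(abs(p[j]), 1.0)
        p[j] += h
        J[:, j] = (model(p) - y0) / h
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    well_determined = s > tol * s[0]          # relative cutoff
    return s, Vt, well_determined
```

The well-determined directions correspond to the linear combinations of reaction constants that the polarization-curve experiment can actually pin down.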
Animal models are often needed in cancer research, but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We therefore developed a simple eight-step process to fabricate a 3D replica from computed tomography (CT) data using solely open-access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles, as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and the use of open-access software allow for a wide range of possible uses.
In this study, we investigate the thermo-mechanical relaxation and crystallization behavior of polyethylene using mesoscale molecular dynamics simulations. Our models specifically mimic constraints that occur in real-life polymer processing: after strong uniaxial stretching of the melt, we quench and release the polymer chains under different loading conditions, which allow for free or hindered shrinkage, respectively. We present the shrinkage and swelling behavior as well as the crystallization kinetics over simulation times of up to 600 ns. We are able to precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influences the crystallization and relaxation behavior. From our models, we determine the temperature-dependent crystallization rate of polyethylene, including the crystallization onset temperature.
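Crystallization kinetics of this kind are often summarized with the Avrami equation (a standard analysis for such data; whether the authors use it is an assumption):

```latex
X(t) = 1 - \exp\!\left(-k\,t^{\,n}\right)
```

where X(t) is the relative crystallinity, k a temperature-dependent rate constant and n the Avrami exponent reflecting the nucleation and growth geometry; fitting k across temperatures yields a temperature-dependent crystallization rate of the kind reported here.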
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations composed of a piecewise linear ‘free’ model for the control logic and a non-linear ‘advanced’ model for the calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of the system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a nearly 100% convergence rate of the solver is achieved with this approach.
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and the LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and allows LBM simulations to be integrated with PyTorch's deep learning and automatic differentiation facilities. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer and then transferred to a different flow, a decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization: the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
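To illustrate why the stream-and-collide structure maps so naturally onto tensor frameworks, here is a minimal D2Q9 BGK step in PyTorch; this is a generic sketch, not Lettuce's actual API:

```python
import torch

# D2Q9 lattice: 9 discrete velocities and their weights
c = torch.tensor([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=torch.float32)
w = torch.tensor([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium (lattice units, c_s^2 = 1/3)."""
    cu = torch.einsum('qd,dxy->qxy', c, u)       # c_i . u
    uu = (u * u).sum(dim=0)                      # |u|^2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

def step(f, tau):
    """One LBM time step: BGK collision, then streaming."""
    rho = f.sum(dim=0)                            # density
    u = torch.einsum('qd,qxy->dxy', c, f) / rho   # velocity
    f = f - (f - equilibrium(rho, u)) / tau       # relax towards equilibrium
    for q in range(9):                            # periodic shift per velocity
        f[q] = torch.roll(f[q], shifts=(int(c[q, 0]), int(c[q, 1])), dims=(0, 1))
    return f
```

Since every operation here is a differentiable tensor op, gradients can flow through entire simulations, which is what makes learned collision models and adjoint-based flow optimization of the kind described above possible; replacing the BGK relaxation line with a neural network yields a (naive) neural collision model.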
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular to maximise self-sufficiency and battery lifetime. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models is employed. The latter are, however, often inaccurate, both due to deficiencies in the model physics and due to an insufficient description of irradiance variability.
This book discusses the development of Rosenbrock-Wanner methods from the origins of the idea to current research, with the stable and efficient numerical solution of differential-algebraic systems of equations still in focus. The reader gains a comprehensive insight into the classical methods as well as into the development and properties of novel W-methods and of two-step and exponential Rosenbrock methods. In addition, illustrative applications from the fields of water and hydrogen network simulation and visual computing are presented. (Publisher's description)
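For orientation, an s-stage Rosenbrock(-Wanner) method for y' = f(y) has the general form below (standard textbook notation in the style of Hairer and Wanner; not quoted from the book):

```latex
(I - h\gamma J)\,k_i
  = h\,f\!\Big(y_n + \sum_{j=1}^{i-1} \alpha_{ij}\,k_j\Big)
    + h\,J \sum_{j=1}^{i-1} \gamma_{ij}\,k_j,
\qquad
y_{n+1} = y_n + \sum_{i=1}^{s} b_i\,k_i,
```

with J ≈ f'(y_n). In contrast to fully implicit schemes, each stage requires only one linear solve with the same matrix I − hγJ, which is what makes these methods attractive for stiff and differential-algebraic problems.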
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
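As a toy illustration of the inversion idea: with a simple PVWatts-style performance model, irradiance can be backed out of a measured power value. The model, the parameter values and the function name below are assumptions for illustration; the actual retrieval described here is more sophisticated and additionally recovers optical properties such as aerosol or cloud optical depth.

```python
def infer_irradiance(p_ac, t_cell, p_dc0, gamma=-0.004, eta_inv=0.96):
    """Invert a simple PV performance model for plane-of-array irradiance.

    Assumed forward model (PVWatts-like):
        P_ac = eta_inv * p_dc0 * (G / 1000) * (1 + gamma * (T_cell - 25))
    with G in W/m^2 and temperatures in degrees Celsius.
    """
    temperature_factor = 1.0 + gamma * (t_cell - 25.0)
    return 1000.0 * p_ac / (eta_inv * p_dc0 * temperature_factor)

# Example: a 5 kW array producing 3.2 kW at 40 degC cell temperature
g_poa = infer_irradiance(p_ac=3200.0, t_cell=40.0, p_dc0=5000.0)
print(f"Estimated plane-of-array irradiance: {g_poa:.0f} W/m^2")
```

Note that this recovers plane-of-array irradiance; mapping it to the global horizontal irradiance targeted above additionally requires a transposition model for the array's tilt and orientation.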
In the research project "MetPVNet", both the forecast-based operation management in distribution grids and the forecasts of the feed-in of PV power from decentralized plants were improved on the basis of satellite data and numerical weather forecasts. Based on a detailed network analysis for a real medium-voltage grid area, it was shown that both the integration of forecast data based on satellite and weather data and the improvement of day-ahead forecasts based on numerical weather models provide significant added value for forecast-based congestion management (redispatch) and reactive power management in the distribution grid. Furthermore, improvements to the forecast model of the German Weather Service were achieved by assimilating visible satellite imagery, and cloud and radiation products from satellites were improved, thus improving the database for short-term forecasting as well as for assimilation. In addition, several methods were developed that will enable forecast improvements in the future, especially for weather situations with high cloud-induced variability and large forecast errors. This article summarizes the most important project results.
In this thesis it is posited that the central object of preference discovery is a co-creative process in which the Other can be represented by a machine. It explores efficient methods to enhance introverted intuition using extraverted intuition's communication lines. Possible implementations of such processes are presented using novel algorithms that perform divergent search to feed the users' intuition with many examples of high-quality solutions, allowing them to exert influence interactively. The machine feeds and reflects upon human intuition, combining what is possible with what is preferred. The machine model and the divergent optimization algorithms are the motor behind this co-creative process, in which machine and users co-create and interactively choose branches of an ad hoc hierarchical decomposition of the solution space.
The proposed co-creative process consists of several elements: a formal model for interactive co-creative processes, evolutionary divergent search, diversity and similarity, data-driven methods to discover diversity, limitations of artificial creative agents, matters of efficiency in behavioral and morphological modeling, visualization, a connection to prototype theory, and methods to allow users to influence artificial creative agents. This thesis helps put the human back into the design loop in generative AI and optimization.