Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution, it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work, several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear-sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1-minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error, compared to concurrent pyranometer measurements, of 11.45 W m⁻², averaged over the two campaigns, whereas for the retrieval using coarser 15-minute power data the mean bias error is 16.39 W m⁻².
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals as well as data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and are properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
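The lookup-table inversion described above can be sketched in a few lines: a 1D radiative transfer model is run offline to tabulate surface irradiance as a function of cloud optical depth, and a measurement is then mapped back through the table. The grid values and function names below are illustrative placeholders, not the study's actual table.

```python
import numpy as np

# Hypothetical pre-computed lookup table: surface irradiance (W/m^2) for a
# grid of cloud optical depths, as produced by a 1D radiative transfer
# simulation at a fixed solar zenith angle (values are illustrative only).
cloud_od_grid = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
irradiance_grid = np.array([800.0, 650.0, 420.0, 260.0, 140.0, 55.0, 25.0])

def retrieve_cloud_od(measured_irradiance):
    """Invert the table: map a measured irradiance to a cloud optical depth.

    Irradiance decreases monotonically with optical depth, so we can
    interpolate on the reversed (ascending-irradiance) arrays.
    """
    return np.interp(measured_irradiance,
                     irradiance_grid[::-1], cloud_od_grid[::-1])

print(retrieve_cloud_od(420.0))  # recovers the tabulated grid point: 5.0
```

In practice the table would carry additional dimensions (solar zenith angle, surface albedo), but the inversion step stays this simple because irradiance is monotonic in optical depth under overcast conditions.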
In this contribution, we perform computer simulations to expedite the development of metal hydride based hydrogen storage systems. These simulations enable in-depth analysis of the processes within the systems which could not be achieved otherwise, because determining the crucial process properties would require measurement instruments in the setup that are currently not available. Therefore, we investigate the reliability of reaction values that are determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample consisting of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used van 't Hoff equation with a mathematical expression of the pressure-composition isotherms (PCI) to calculate the equilibrium pressure.
Finally, the parameters' accuracy is checked in yet another comparison with an existing metal hydride system. The simulated results demonstrate high concordance with the experimental data, which advocates the use of kinetic reaction properties approximated by a design of experiments for further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
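The van 't Hoff relation that the abstract mentions as the baseline for the equilibrium pressure can be sketched as follows. The numerical enthalpy and entropy values below are generic textbook-scale placeholders, not fitted FeTiMn parameters.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def vant_hoff_pressure(T, dH, dS):
    """Equilibrium plateau pressure (in units of p0 = 1 bar) from the
    van 't Hoff relation  ln(p_eq / p0) = dH / (R*T) - dS / R.

    dH: formation enthalpy [J/mol H2], dS: formation entropy [J/(mol H2 K)];
    both are negative for exothermic hydride formation.
    """
    return math.exp(dH / (R * T) - dS / R)

# Illustrative values only (not the paper's fitted FeTiMn parameters):
p_eq = vant_hoff_pressure(300.0, -30000.0, -110.0)
print(round(p_eq, 2))  # a few bar at room temperature
```

Replacing this single-plateau expression with a fitted PCI curve, as the paper does, lets the model capture the sloping plateaus and hysteresis that real isotherms show.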
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and enables integrating LBM simulations with PyTorch's deep learning and automatic differentiation facilities. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer and then transferred to a different flow, decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
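The stream-and-collide algorithm at the core of the LBM is compact enough to sketch directly. The following is a minimal D2Q9 BGK step in NumPy, not Lettuce's actual API (Lettuce implements the same idea on PyTorch tensors, which is what enables GPU execution and autograd):

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann step (NumPy sketch).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])   # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('id,dxy->ixy', c, u)          # c_i . u
    uu = np.einsum('dxy,dxy->xy', u, u)          # |u|^2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

def stream_and_collide(f, tau=0.8):
    # streaming: shift each population along its lattice velocity
    f = np.stack([np.roll(f[i], c[i], axis=(0, 1)) for i in range(9)])
    # macroscopic moments
    rho = f.sum(axis=0)
    u = np.einsum('id,ixy->dxy', c, f) / rho
    # BGK collision: relax toward the local equilibrium
    return f - (f - equilibrium(rho, u)) / tau

# sanity check: a uniform fluid at rest is a fixed point of the update
f0 = equilibrium(np.ones((16, 16)), np.zeros((2, 16, 16)))
f1 = stream_and_collide(f0)
print(np.allclose(f0, f1))  # True
```

Because every operation here is a shift, reduction, or elementwise expression, porting it to PyTorch tensors makes the whole time loop differentiable, which is the property the neural collision model and the spectrum-control experiments build on.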
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied, owing to its low cost, good volumetric hydrogen density, and easy tailoring of hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale applications of solid-state hydrogen storage. On the other hand, activation and kinetic issues should be amended and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by the variation of Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by the substitution of Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2 or β-Ti-type and Ti4Fe2O-type at the Ti-poor and Ti-rich sides of the TiFe phase domain, respectively. Thermodynamic data agree with the available literature but offer here a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities, as long as a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter by possibly substituting Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of hydrogen storage properties on the composition is discussed, together with some observed correlations.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
We derive rates of convergence for limit theorems that reveal the intricate structure of the phase transitions in a mean-field version of the Blume-Emery-Griffiths model. The theorems consist of scaling limits for the total spin. The model depends on the inverse temperature β and the interaction strength K. The rate-of-convergence results are obtained as (β,K) converges along appropriate sequences (βn,Kn) to points belonging to various subsets of the phase diagram, which include a curve of second-order points and a tricritical point. We apply Stein's method for normal and non-normal approximation, avoiding the use of transforms and supplying bounds, such as those of Berry-Esseen quality, on the approximation error. We observe an additional phase transition phenomenon in the sense that, depending on how fast Kn and βn converge to points in various subsets of the phase diagram, different rates of convergence to one and the same limiting distribution occur.
During the dawn of chemistry, when the temperature of the young Universe had fallen below ∼4000 K, the ions of the light elements produced in Big Bang nucleosynthesis recombined in reverse order of their ionization potential. With its higher ionization potentials, He++ (54.5 eV) and He+ (24.6 eV) combined first with free electrons to form the first neutral atom, prior to the recombination of hydrogen (13.6 eV). At that time, in this metal-free and low-density environment, neutral helium atoms formed the Universe's first molecular bond in the helium hydride ion HeH+, by radiative association with protons (He + H+ → HeH+ + hν). As recombination progressed, the destruction of HeH+ (HeH+ + H → He + H2+) created a first path to the formation of molecular hydrogen, marking the beginning of the Molecular Age. Despite its unquestioned importance for the evolution of the early Universe, the HeH+ molecule has so far escaped unequivocal detection in interstellar space. In the laboratory, the ion was discovered as long ago as 1925, but only in the late seventies was the possibility that HeH+ might exist in local astrophysical plasmas discussed. In particular, the conditions in planetary nebulae were shown to be suitable for the production of potentially detectable HeH+ column densities: the hard radiation field from the central hot white dwarf creates overlapping Strömgren spheres, where HeH+ is predicted to form, primarily by radiative association of He+ and H. With the GREAT spectrometer onboard SOFIA, the HeH+ rotational ground-state transition at λ149.1 μm is now accessible. We report here its detection towards the planetary nebula NGC 7027.
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient in very expensive problems, needing 100,000s of evaluations. Even with the assistance of surrogate models, quality diversity needs 100s or even 1000s of evaluations, which can make its use infeasible. In this study we try to tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to a higher-dimensional case. For a use case of designing buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. For a diverse set of building designs, by sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than when trained on a set of footprints that were selected with a space-filling algorithm like the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by producing training data with quality diversity instead of using common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allow engineers to sweep the design space, understanding wind nuisance in early design phases.
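The quality diversity sampling idea above can be illustrated with a toy MAP-Elites loop, a standard quality diversity algorithm: an archive holds the best solution found per behaviour niche, so the result is simultaneously diverse (many niches filled) and high-performing (each niche holds its elite). The 1D problem, niche binning, and parameters below are purely illustrative, not the paper's setup.

```python
import random

def fitness(x):      # illustrative objective to maximise
    return -(x - 0.3) ** 2

def descriptor(x):   # behaviour descriptor used to bin solutions
    return min(int(x * 10), 9)   # 10 niches covering [0, 1]

random.seed(0)
archive = {}  # niche index -> (fitness, solution)
for _ in range(2000):
    if archive:
        # select a random elite and mutate it
        _, parent = random.choice(list(archive.values()))
        x = min(max(parent + random.gauss(0.0, 0.1), 0.0), 1.0)
    else:
        x = random.random()
    niche = descriptor(x)
    # keep the new solution only if its niche is empty or it beats the elite
    if niche not in archive or fitness(x) > archive[niche][0]:
        archive[niche] = (fitness(x), x)

print(len(archive))  # number of distinct niches filled
```

A space-filling design such as a Sobol sequence would cover the descriptor space too, but without the selection pressure toward high-fitness representatives in each niche, which is what makes the QD archive better training data in the study's use case.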
Since being introduced in the sixties and seventies, semi-implicit Rosenbrock-Wanner (ROW) methods have become an important tool for the time integration of ODE and DAE problems. Over the years, these methods have been further developed in order to save computational effort by regarding approximations with respect to the given Jacobian [5], to reduce effects of order reduction by introducing additional conditions [2, 4], or to exploit the advantages of partial explicit integration by considering underlying Runge-Kutta formulations [1]. As a consequence, there is a large number of different ROW-type schemes with characteristic properties for solving various problem formulations in the literature today.
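The defining feature of ROW methods is that the Jacobian appears only in linear systems, so no Newton iteration is needed per step. A minimal sketch of the simplest one-stage scheme (the linearly implicit Euler method) on a stiff scalar test problem, with illustrative values throughout:

```python
import numpy as np

# One-stage Rosenbrock-Wanner step (linearly implicit Euler): solve
# (I - h*gamma*J) k = h*f(y), then advance y <- y + k. The Jacobian J
# enters only through a linear solve, avoiding Newton iterations.
def rosenbrock_step(f, J, y, h, gamma=1.0):
    n = y.size
    A = np.eye(n) - h * gamma * J(y)
    k = np.linalg.solve(A, h * f(y))
    return y + k

# stiff test problem y' = -50*y with exact solution exp(-50*t)
f = lambda y: -50.0 * y
J = lambda y: np.array([[-50.0]])

y = np.array([1.0])
for _ in range(10):
    y = rosenbrock_step(f, J, y, h=0.1)
print(y[0])  # decays toward 0, well beyond the explicit stability limit
```

With h = 0.1 the explicit Euler method would diverge on this problem (its amplification factor is 1 - 50h = -4), whereas the linearly implicit step contracts by 1/6 per step; higher-order ROW schemes combine several such stages with shared linear solves.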
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11-0.24 and 0.09-0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
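A common form of the CBS extrapolation that the abstract's nomenclature disambiguates is the two-point X⁻³ scheme for correlation energies. The sketch below shows that generic scheme with made-up input energies; it is one widely used variant, not necessarily the extrapolation protocol of the study itself.

```python
# Two-point complete-basis-set (CBS) extrapolation assuming the correlation
# energy converges as E(X) = E_CBS + A * X**-3 with cardinal number X.
def cbs_two_point(E_X, X, E_Y, Y):
    """Eliminate A between the two points and solve for E_CBS:
    E_CBS = (X^3 * E_X - Y^3 * E_Y) / (X^3 - Y^3)."""
    return (X**3 * E_X - Y**3 * E_Y) / (X**3 - Y**3)

# Illustrative MP2 correlation energies (hartree) at cardinal numbers 3 and 4:
E_cbs = cbs_two_point(-0.30, 3, -0.31, 4)
print(E_cbs)  # lies below both finite-basis values, as expected
```

Because several such formulas (and choices of which two cardinal numbers to pair) coexist in the literature, an unambiguous label for how a given CBS value was obtained, as the paper proposes, directly improves reproducibility.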