Herein we report an update to ACPYPE, a Python 3 tool that now properly converts AMBER to GROMACS topologies for force fields that use nondefault and nonuniform 1–4 electrostatic and nonbonded scaling factors or negative dihedral force constants. Prior to this work, ACPYPE only converted AMBER topologies that used uniform, default 1–4 scaling factors and positive dihedral force constants. We demonstrate that the updated ACPYPE accurately transfers the GLYCAM06 force field, which employs nonuniform 1–4 scaling factors as well as negative dihedral force constants, from AMBER to GROMACS topology files. Validation was performed using β-d-GlcNAc through gas-phase analysis of dihedral energy curves and probability density functions. The updated ACPYPE retains all of its original functionality but now allows the simulation of complex glycomolecular systems in GROMACS using AMBER-originated force fields. ACPYPE is available for download at https://github.com/alanwilter/acpype.
Where laboratory experiments are too elaborate, too expensive, too slow, or too dangerous, or where material properties are not experimentally accessible at all, computer simulations of atoms and molecules can replace or complement them. They thereby reduce cost, development time, and material consumption. The molecular models required for these simulations contain numerous parameters that the practitioner must set or select. A suitable parameterization is only possible with adequate knowledge of how the parameters affect the quantities and properties to be computed. One group of standard parameters in molecular simulations are the partial charges of the individual atoms within a molecule. The spatial charge distribution within the molecule is approximated by point charges at the atomic centers. Various approaches to this approximation exist for different molecule classes and applications. In this subproject of the doctoral project, the influence of the choice of partial charge set on potential energies and selected macroscopic properties from molecular dynamics simulations was evaluated systematically. It was shown that, especially for strongly polar molecules, the selection of a suitable partial charge set has a decisive influence on the simulation results and must therefore be made deliberately rather than naively.
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases mostly handle other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing the data structures with other domains remains a challenge, as current standards such as ISO 10303 STEP ("Standard for the Exchange of Product model data") fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink the integration of simulation results. This research describes a new software architecture and application methodology based on the industrial standard "Virtual Material Modelling in Manufacturing" (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It provides data permeability from the global digital twin level down to the detailed numerical values of data entries, and even to new key indicators, in a three-step approach: it represents a file as an instance in a knowledge graph, queries the file's metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which were not produced by the Gauss product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma={0.5;1.0;1.5;2.0}, on-lattice simulations with velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy provided by the literature.
In this contribution, we perform computer simulations to expedite the development of hydrogen storage systems based on metal hydrides. These simulations enable an in-depth analysis of the processes within the systems that could not be achieved otherwise, because determining crucial process properties would require measurement instruments that are currently not available in the setup. Therefore, we investigate the reliability of reaction values determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms needed to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used van 't Hoff equation with a mathematical expression for the pressure-composition isotherms (PCI) to calculate the equilibrium pressure.
Finally, the parameters' accuracy is checked in yet another comparison with an existing metal hydride system. The simulated results show high concordance with the experimental data, which supports the use of kinetic reaction properties approximated by a design of experiments in further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
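The van 't Hoff relation mentioned above links the equilibrium pressure of a metal hydride to temperature via the reaction enthalpy and entropy. A minimal sketch, assuming illustrative FeTi-like magnitudes of ΔH ≈ −28 kJ/mol and ΔS ≈ −106 J/(mol K) (not values from this work):

```python
import math

R = 8.314462618  # universal gas constant, J mol^-1 K^-1

def vant_hoff_pressure(T, dH, dS, p0=1.0):
    """Equilibrium pressure (in units of p0) from the van 't Hoff
    relation ln(p_eq / p0) = dH / (R * T) - dS / R.

    dH (J/mol) and dS (J/(mol K)) are the reaction enthalpy and
    entropy of hydride formation; both are negative for absorption.
    """
    return p0 * math.exp(dH / (R * T) - dS / R)
```

At T = ΔH/ΔS the exponent vanishes and p_eq = p0; above that temperature the equilibrium pressure rises. A PCI-based expression refines this picture by adding the composition dependence that the plain van 't Hoff plateau ignores.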
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11–0.24 and 0.09–0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
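A cost-accuracy ranking of this kind can be pictured as a weighted sum of normalized error and normalized timing. The scoring function below is a generic sketch of that idea, not the specific equation introduced in the work:

```python
def rank_basis_sets(data, w=0.5):
    """Rank basis sets by a weighted balance of accuracy and cost.

    data: dict name -> (mean_error, cpu_time); w in [0, 1] weights
    accuracy (w=1: accuracy only, w=0: cost only). Both criteria are
    normalized by their maximum so they are comparable; lower score
    is better. This is an illustrative stand-in, not the published
    ranking equation.
    """
    max_err = max(e for e, _ in data.values())
    max_cost = max(c for _, c in data.values())
    score = {name: w * e / max_err + (1.0 - w) * c / max_cost
             for name, (e, c) in data.items()}
    return sorted(score, key=score.get)
```

With w=1 the cheapest-but-least-accurate sets fall to the bottom; with w=0 the ordering reduces to pure cost, mirroring the user-defined balance described above.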
Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications. This is because computer simulations on a molecular level are a valuable instrument to study the impact of microscopic on macroscopic phenomena. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Molecular interactions are described mathematically by force fields. The mathematical description includes parameters for both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, an empirical procedure, based on the minimization of a loss function between simulated and experimental physical properties, was published by the authors. In that work, efficient gradient-based numerical optimization algorithms were used. However, empirical force field optimization is inhibited by two central issues of molecular simulations: first, they are extremely time-consuming, even on modern and high-performance computer clusters, and second, simulation data is affected by statistical noise. The latter means that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat there. Therefore, the question arises of whether to apply a derivative-free method approximating the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW) is presented, which accomplishes this task robustly and, at the same time, keeps the number of time-consuming simulations relatively small.
This is achieved by an efficient sampling procedure for the approximation based on sparse grids, which is described in full detail: in order to counteract the fact that sparse grids are fully occupied on their boundaries, a mathematical transformation is applied to generate homogeneous Dirichlet boundary conditions. As the main drawback of sparse grid methods is the assumption that the function to be modeled exhibits certain smoothness properties, the loss function has to be approximated by smooth functions first. Radial basis functions turned out to be very suitable for this task. The smoothing procedure and the subsequent interpolation on sparse grids are performed within sufficiently large compact trust regions of the parameter space. It is shown and explained how the combination of the three ingredients leads to a new efficient derivative-free algorithm, which has the additional advantage that it is capable of reducing the overall number of simulations by a factor of about two in comparison to gradient-based optimization methods. At the same time, the robustness with respect to statistical noise is maintained. This assertion is proven by both theoretical considerations and practical evaluations for molecular simulations of chemical example substances.
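The radial-basis-function smoothing step can be illustrated in one dimension: fit Gaussian radial basis functions through loss evaluations and interpolate the resulting smooth surrogate. A self-contained sketch using plain Gaussian elimination (an illustration of the technique, not the SpaGrOW implementation):

```python
import math

def gaussian_rbf_interpolate(xs, ys, eps=1.0):
    """Fit an interpolant f(x) = sum_j w_j * exp(-(eps*(x - x_j))^2)
    through the points (xs[i], ys[i]) and return it as a callable.

    Smoothing noisy simulation data with radial basis functions is one
    way to obtain the smooth surrogate that sparse-grid interpolation
    requires; this 1D version solves the small collocation system
    directly.
    """
    n = len(xs)
    # Symmetric RBF collocation matrix A[i][j] = phi(|x_i - x_j|).
    a = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(ys)
    # Solve A w = y by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= fac * a[col][c]
            b[r] -= fac * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (b[r] - s) / a[r][r]
    return lambda x: sum(w[j] * math.exp(-(eps * (x - xs[j])) ** 2)
                         for j in range(n))
```

The returned surrogate is infinitely differentiable, so it satisfies the smoothness assumption that the raw, noisy loss function violates.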
Automated parameterization of intermolecular pair potentials using global optimization techniques
(2014)
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and tabu search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow that models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to the other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and the real time for the optimization task.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
This work thoroughly investigates a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows. In contrast to other LBMs for compressible flows, the vertices are organized in cells, and interpolation polynomials up to fourth order are used to attain the off-vertex distribution function values. Differing from the recently introduced Particles on Demand (PoD) method, the present method operates in a static, non-moving reference frame. Yet the SLLBM in the present formulation supports supersonic flows and exhibits a high degree of Galilean invariance. The SLLBM solver allows for an independent time step size due to the integration along characteristics and for the use of unusual velocity sets, like the D2Q25, which is constructed from the roots of the fifth-order Hermite polynomial. The properties of the present model are shown in diverse example simulations of a two-dimensional Taylor-Green vortex, a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choice of the collision model and of the stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
This study investigates the initial stage of the thermo-mechanical crystallization behavior for uni- and biaxially stretched polyethylene. The models are based on a mesoscale molecular dynamics approach. We take constraints that occur in real-life polymer processing into account, especially with respect to the blowing stage of the extrusion blow-molding process. For this purpose, we deform our systems using a wide range of stretching levels before they are quenched. We discuss the effects of the stretching procedures on the micro-mechanical state of the systems, characterized by entanglement behavior and nematic ordering of chain segments. For the cooling stage, we use two different approaches which allow for free or hindered shrinkage, respectively. During cooling, crystallization kinetics are monitored: We precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influence crystallization behavior. Our models reveal that the main stretching direction dominates microscopic states of the different systems. We are able to show that crystallization mainly depends on the (dis-)entanglement behavior. Nematic ordering plays a secondary role.
Structural and Dynamical Properties of Polystyrene Determined by Coarse-Graining MD Simulations
(2007)
We present results from a detailed study of a new, optimized coarse-grained (CG) model of polystyrene (PS) and compare it with a recently published one (Harmandaris et al., Macromolecules 2006, 39, 6708). We explain in detail what led us to a different mapping scheme and place it in a general framework, with special emphasis on the aspect of time mapping. The new model is tested against the structural and dynamic properties of PS resulting from atomistic simulations.
The Fraunhofer Institute for Algorithms and Scientific Computing (SCAI) has developed a software tool for the automated parameterization of force fields for molecular simulations using efficient gradient-based algorithms. This tool, combined with well-established simulation techniques, can quantitatively determine many physicochemical properties for given compounds.
Comparison Between Coarse-Graining Models for Polymer Systems: Two Mapping Schemes for Polystyrene
(2007)
Resource-efficient optimization of hollow plastic parts by means of multiscale simulation
(2017)
This work introduces a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows (with or without discontinuities). It makes use of a cell-wise representation of the simulation domain and utilizes interpolation polynomials up to fourth order to conduct the streaming step. The SLLBM solver allows for an independent time step size due to the absence of a time integrator and for the use of unusual velocity sets, like a D2Q25, which is constructed from the roots of the fifth-order Hermite polynomial. The properties of the proposed model are shown in diverse example simulations of a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
Turbulent compressible flows are traditionally simulated using explicit Eulerian time integration applied to the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which elegantly circumvents this restriction. Previous lattice Boltzmann methods for compressible flows were mostly restricted to two dimensions due to the enormous number of discrete velocities needed in three dimensions. In contrast, this Rapid Communication demonstrates how cubature rules enhance the SLLBM to yield a three-dimensional velocity set with only 45 discrete velocities. Based on simulations of a compressible Taylor-Green vortex we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques, even when the time step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers for the first time to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time step sizes is only dictated by physics, while being decoupled from the spatial discretization.
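The cubature idea behind such velocity sets can be checked in one dimension, where the degree-five Gauss-Hermite rule yields the classic D1Q3 velocities and weights. The snippet below verifies that the three discrete velocities reproduce the moments of the Maxwellian (standard normal) weight exactly up to degree 5 and fail at degree 6; the higher-degree 3D sets discussed above push this trade-off further:

```python
import math

# 1D Gauss-Hermite quadrature of degree 5 (the D1Q3 building block):
# abscissae are the roots of the third Hermite polynomial He_3(x) = x^3 - 3x.
xi = [0.0, math.sqrt(3.0), -math.sqrt(3.0)]
wi = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]

def quad_moment(n):
    """Approximate the n-th moment of the standard normal weight."""
    return sum(w * x ** n for w, x in zip(wi, xi))

def exact_moment(n):
    """Exact n-th moment of N(0,1): 0 for odd n, (n-1)!! for even n."""
    if n % 2:
        return 0.0
    return math.prod(range(n - 1, 0, -2)) if n else 1.0
```

A degree-nine cubature such as the D3Q45 plays the same role in three dimensions: enough exact Maxwellian moments for compressible dynamics, with far fewer nodes than a Gauss product rule.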
Animal models are often needed in cancer research, but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We therefore developed a simple eight-step process to fabricate a 3D replica from computed tomography (CT) data using solely open-access software, and we describe the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we describe a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles, as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and the use of open-access software allow for a wide range of possible uses.
In this study, we investigate the thermo-mechanical relaxation and crystallization behavior of polyethylene using mesoscale molecular dynamics simulations. Our models specifically mimic constraints that occur in real-life polymer processing: After strong uniaxial stretching of the melt, we quench and release the polymer chains at different loading conditions. These conditions allow for free or hindered shrinkage, respectively. We present the shrinkage and swelling behavior as well as the crystallization kinetics over up to 600 ns simulation time. We are able to precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influences crystallization and relaxation behavior. From our models, we determine the temperature dependent crystallization rate of polyethylene, including crystallization onset temperature.
Automated force field optimisation of small molecules using a gradient-based workflow package
(2010)
In this study, the recently developed gradient-based optimisation workflow for the automated development of molecular models is for the first time applied to the parameterisation of force fields for molecular dynamics simulations. As a proof of concept, two small molecules (benzene and phosgene) are considered. In order to optimise the underlying intermolecular force field (described by the (12,6)-Lennard-Jones and the Coulomb potential), the energetic and diameter parameters ε and σ are fitted to experimental physical properties by gradient-based numerical optimisation techniques. Thereby, a quadratic loss function between experimental and simulated target properties is minimised with respect to the force field parameters. In this proof of concept, the considered physical target properties are chosen to be diverse: density, enthalpy of vapourisation and self-diffusion coefficient are optimised simultaneously at different temperatures. We found that in both cases the optimisation could be successfully concluded by fulfilment of a pre-defined stopping criterion. Since a fairly small number of iterations was needed to do so, this study will serve as a good starting point for more complex systems and further improvements of the parameterisation task.
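The core loop of such a gradient-based workflow is a finite-difference descent on the quadratic loss between simulated and experimental properties. In the sketch below the expensive molecular simulation is replaced by a cheap analytic stand-in; the function names and the toy model are illustrative only, not the published workflow:

```python
def loss(params, simulate, targets):
    """Quadratic relative loss between simulated and experimental
    target properties (e.g. density, enthalpy of vapourisation)."""
    sims = simulate(params)
    return sum(((s - t) / t) ** 2 for s, t in zip(sims, targets))

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient; simulations provide no
    analytic derivatives, so the workflow must estimate them."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def descend(f, x, lr=0.05, steps=200):
    """Plain gradient descent on the loss over the parameters."""
    for _ in range(steps):
        g = fd_gradient(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

In the real setting each call to `simulate` is a full MD run, which is why the number of iterations until the stopping criterion matters so much.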
Liquid–liquid equilibria of dipropylene glycol dimethyl ether and water by molecular dynamics
(2011)
In this dissertation we present a new approach to modeling polymer systems. On the methodological side, two automated iteration schemes are introduced to systematically optimize force field parameters of mesoscopic polymer systems: the simplex scheme and the structural-differences scheme. In this way, those degrees of freedom that require a high resolution are eliminated from the polymer systems, which enables the modeling of larger systems. After tests on simple liquids, coarse-grained models of three prototypical polymers (polyacrylic acid, polyvinyl alcohol and polyisoprene) are developed in different environments (good solvent and melt) and their behavior on the mesoscale is examined extensively. On the physical side, designing the corresponding mapping such that it transfers the distinctive characteristics of each system to the mesoscopic length scale is a crucial requirement for the automated schemes.
The influence of interaction details on the thermal diffusion in binary Lennard-Jones liquids
(2001)
Computational chemistry began with the birth of computers in the mid-1900s, and its growth has been directly coupled to the technological advances made in computer science and high-performance computing. A popular goal within the field, whether using Newtonian or quantum-based methods, is the accurate modelling of physical forces and energetics through mathematics and algorithm design. Through reliable modelling of the underlying forces, molecular simulations frequently provide atomistic insights into macroscopic experimental observations.
GROW: A gradient-based optimization workflow for the automated development of molecular models
(2010)
Resource-efficient optimization of hollow plastic parts by means of multiscale simulation
(2017)
The mechanical properties of extrusion blow-molded hollow plastic parts depend substantially on material properties influenced by the processing conditions. The aim of the presented study is to account for process-dependent material parameters in simulation programs and thereby increase their predictive accuracy. This requires creating an interface between process and component simulation. In addition, it is shown how simulations on the microscale (molecular dynamics simulations) can be used to determine material parameters without carrying out a real experiment.
Constant-velocity joints, as part of the drive shafts (side shafts and propeller shafts), are placed in the direct power flow of all major drivetrain configurations. Their main function is to transmit drive power while permitting articulation and axial displacement. This contribution gives an overview of the main joint designs on today's market and their respective fields of application. Particular attention is paid to new joint concepts whose special design yields significantly higher efficiencies. To quantify the influence on energy consumption, a novel calculation approach is presented that allows a simple estimate of the effect of efficiency improvements on the energy consumption of different drive concepts (ICE / hybrid / electric vehicles).
Final report on the BMBF-funded project Enabling Infrastructure for HPC-Applications (EI-HPC)
(2020)
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and enables integrating LBM simulations with PyTorch's deep learning and automatic differentiation facility. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer and then transferred to a different flow, a decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
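The stream-and-collide algorithm referred to above fits in a few lines. The following is a generic pure-Python D1Q3 BGK step for diffusion with periodic boundaries, meant only to show the structure of the algorithm; it is not the Lettuce API, which operates on PyTorch tensors:

```python
# Weights and velocities of the 1D three-velocity (D1Q3) lattice.
W = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]
C = [0, 1, -1]

def lbm_diffusion_step(f, tau=0.8):
    """One stream-and-collide step of a D1Q3 BGK lattice Boltzmann
    scheme for pure diffusion (zero mean flow), periodic boundaries.

    f holds three populations per node: f[i][x].
    """
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # Collision: relax toward the zero-velocity equilibrium w_i * rho.
    post = [[f[i][x] + (W[i] * rho[x] - f[i][x]) / tau for x in range(n)]
            for i in range(3)]
    # Streaming: shift each population along its lattice velocity.
    return [[post[i][(x - C[i]) % n] for x in range(n)] for i in range(3)]
```

Both sub-steps conserve mass exactly, while an initial density peak diffuses away. Because each step is a pair of element-wise operations and index shifts, the same structure maps directly onto GPU tensor operations, which is what makes the PyTorch formulation natural.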
Energy Profiles of the Ring Puckering of Cyclopentane, Methylcyclopentane and Ethylcyclopentane
(2019)
This paper presents a novel approach to address noise, vibration, and harshness (NVH) issues in electrically assisted bicycles (e-bikes) caused by the drive unit. By investigating and optimising the structural dynamics during early product development, NVH can be decisively improved and valuable resources can be saved, emphasising its significance for enhancing riding performance. The paper offers a comprehensive analysis of the mechanical interactions among the relevant components of the e-bike drive unit, culminating, to the best of our knowledge, in the development of the first high-fidelity model of an entire e-bike drive unit. The proposed model uses the principles of elastic multibody dynamics (eMBD) to elucidate the structural dynamics in dynamic-transient calculations. Comparing power spectra between measured and simulated motion variables validates the chosen model assumptions. The measurements of physical samples utilise accelerometers, contactless laser Doppler vibrometry (LDV) and various test arrangements, which are replicated in simulations and provide access to vibration measurements on rotating shafts and stationary structures. In summary, this integrated system-level approach can serve as a viable starting point for comprehending and managing the NVH behaviour of e-bikes.
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient for very expensive problems, needing 100,000s of evaluations. Even with the assistance of surrogate models, quality diversity needs 100s or even 1000s of evaluations, which can make its use infeasible. In this study we tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to a higher-dimensional case. For a use case of designing buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. For a diverse set of building designs, by sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than one trained on a set of footprints selected with a space-filling algorithm like the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by producing training data with quality diversity instead of common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allows engineers to sweep the design space, understanding wind nuisance in early design phases.