620 Engineering and allied operations
Departments, institutes and facilities
- Fachbereich Ingenieurwissenschaften und Kommunikation (180)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (118)
- Fachbereich Informatik (26)
- Fachbereich Angewandte Naturwissenschaften (24)
- Institute of Visual Computing (IVC) (9)
- Institut für Sicherheitsforschung (ISF) (8)
- Graduierteninstitut (6)
- Institut für Detektionstechnologien (IDT) (3)
- Zentrum für Innovation und Entwicklung in der Lehre (ZIEL) (3)
- Fachbereich Wirtschaftswissenschaften (1)
Document Type
- Conference Object (173)
- Article (119)
- Report (14)
- Part of a Book (11)
- Book (monograph, edited volume) (10)
- Doctoral Thesis (8)
- Patent (7)
- Bachelor Thesis (6)
- Master's Thesis (6)
- Diploma Thesis (5)
Year of publication
- 2024 (4)
- 2023 (18)
- 2022 (15)
- 2021 (17)
- 2020 (14)
- 2019 (18)
- 2018 (20)
- 2017 (23)
- 2016 (23)
- 2015 (19)
- 2014 (19)
- 2013 (13)
- 2012 (6)
- 2011 (13)
- 2010 (19)
- 2009 (15)
- 2008 (6)
- 2007 (3)
- 2006 (4)
- 2005 (7)
- 2004 (4)
- 2003 (4)
- 2002 (4)
- 2001 (4)
- 2000 (5)
- 1999 (8)
- 1998 (2)
- 1997 (3)
- 1996 (4)
- 1995 (3)
- 1994 (4)
- 1993 (13)
- 1992 (15)
- 1991 (7)
- 1990 (6)
- 1989 (6)
- 1988 (3)
- 1986 (3)
- 1985 (1)
Keywords
- modeling of complex systems (6)
- advanced applications (5)
- gas transport networks (4)
- FPGA (3)
- Hydrogen storage (3)
- applications (3)
- efficiency (3)
- globally convergent solvers (3)
- mathematical chemistry (3)
- observational data and simulations (3)
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming step and a local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation-time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, no universal scheme is optimally suited for a wide range of different flows. The reason lies in the static nature of these methods: they cannot adapt to local or global flow features. Adaptive filtering using a shear sensor constitutes an exception. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physics-informed neural network. In this study, the training data consist of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case, our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator achieves a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
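The space- and time-variant relaxation described above can be illustrated with a minimal D2Q9 BGK collision step in which the relaxation time is a per-cell field; in the study this field is predicted by a neural network, whereas here it is simply passed in (the second-order equilibrium and all names are a generic textbook sketch, not the authors' code):

```python
import numpy as np

# D2Q9 lattice velocities and weights (lattice units, cs^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order polynomial equilibrium distribution."""
    cu = np.einsum('qd,dxy->qxy', c, u)      # c_i . u per cell
    usq = np.einsum('dxy,dxy->xy', u, u)     # |u|^2 per cell
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def collide(f, tau_field):
    """BGK collision with a space-variant relaxation time tau(x, y).

    In the paper, rates tied to the bulk viscosity are predicted per
    cell by a neural network; here tau_field is simply a given array
    standing in for that prediction.
    """
    rho = f.sum(axis=0)                       # density moment
    u = np.einsum('qd,qxy->dxy', c, f) / rho  # velocity moment
    return f - (f - equilibrium(rho, u)) / tau_field
```

At equilibrium the operator leaves the populations unchanged, which gives a quick sanity check regardless of the local relaxation time.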
Process-induced changes in the thermo-mechanical viscoelastic properties and the corresponding morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blown film blends modified with four multifunctional chain-extending cross-linkers (CECL) were investigated. The introduction of CECL modified the properties of the reference PBAT/PLA blend significantly. Thermal analysis showed that the chemical reactions were incomplete after compounding and that film blowing extended them. SEM investigations of the fracture surfaces of blown extrusion films revealed the significant effect of CECL on the morphology formed during processing. The anisotropic morphology introduced during film blowing proved to affect the degradation processes as well. Furthermore, the reactions of CECL with PBAT/PLA induced by processing depend on the deformation directions. The blow-up ratio was varied to investigate further process-induced changes, revealing interactions with mechanical and morphological features. In blown film extrusion, elongational behavior is a very important characteristic, but its evaluation is often problematic; with the SER Universal Testing Platform, it was possible to determine changes in the duration of the time intervals corresponding to the rupture of elongated samples.
Force field (FF) based molecular modeling is a widely used method to investigate structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted, and this optimization can be a tedious task that always involves a trade-off among the errors in the targeted properties. To better control this balance, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and strongly dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be an important means of lowering the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
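As a toy illustration of the weighted objectives, a scalar loss can combine the property-specific errors through weights and a non-linear projection; the `log1p` projection and the argument names are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def weighted_loss(err_density, err_rce, w_density, w_rce, projection=np.log1p):
    """Combine property-specific errors into a single objective.

    err_* are deviations from the target bulk-phase density and the
    target relative conformational energies; the weights and the log1p
    projection are illustrative assumptions, not the paper's exact loss.
    """
    # the non-linear projection damps large errors so no single property dominates
    return w_density * projection(err_density) + w_rce * projection(err_rce)
```

Sweeping `w_density`/`w_rce` then traces out the trade-off between the two properties that the study explores.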
The transport of carbon dioxide through pipelines is one of the key components of carbon dioxide capture and storage (CCS) systems currently being developed. If high flow rates are desired, transport in the liquid or supercritical phase is preferred. For technical reasons, the fluid must remain in that phase without transitioning to the gaseous state. In this paper, a numerical simulation of the stationary process of carbon dioxide transport with impurities and phase transitions is considered. We use the homogeneous equilibrium model (HEM) and the GERG-2008 thermodynamic equation of state to describe the transport parameters. The algorithms used allow solving scenarios of carbon dioxide transport in the liquid or supercritical phase, with detection of approach to the phase transition region. Convergence of the solution algorithms is analyzed in connection with fast and abrupt changes of the equation of state and the enthalpy function in the region of phase transitions.
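A minimal sketch of such a stationary computation is a spatial march of the 1-D momentum balance with friction; the callable `rho_of_p` stands in for a real equation of state such as GERG-2008, and the constant Darcy friction factor is an assumed value:

```python
import numpy as np

def march_pipeline(p0, mdot, D, L, rho_of_p, n=1000, f=0.015):
    """March the steady 1-D momentum balance dp/dx = -f*rho*v^2/(2*D)
    along an isothermal pipeline segment of length L.

    rho_of_p stands in for a real equation of state such as GERG-2008;
    the constant Darcy friction factor f is an illustrative assumption.
    """
    A = np.pi * D**2 / 4                     # pipe cross-section
    dx = L / n
    p = p0
    for _ in range(n):
        rho = rho_of_p(p)
        v = mdot / (rho * A)                 # mass conservation
        p -= f * rho * v**2 / (2 * D) * dx   # Darcy-Weisbach friction loss
        if p <= 0.0:
            raise RuntimeError("pressure fell below zero; single-phase model invalid")
    return p
```

A real solver would additionally track temperature and check each state against the phase envelope to detect the approach to the two-phase region.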
Modern engineering relies heavily on computer technologies. This is especially true for thermoplastic manufacturing processes such as blow molding. A crucial milestone for digitalization is the continuous integration of data in unified or interoperable systems. While new simulation technologies are constantly being developed, data management standards such as STEP fail to integrate them. Industrial standards such as "VMAP", on the other hand, improve interoperability for small and medium-sized enterprises but do not provide Simulation Process and Data Management (SPDM) technologies. For SPDM integration of VMAP data, Ontology-Based Data Access (OBDA) is used here to continue the digital thread in custom semantic-based open-source solutions. An ontology of the database format (VMAP) was generated alongside an expandable knowledge graph of data access methods. A Python-based software architecture was developed that automatically uses the semantic representations of the database format and data access methods to query data and metadata within a VMAP file. The result is a software architecture template that can be adapted to other data standards and integrated into semantic data management systems. It allows semantic queries on simulation data down to element-wise resolution without integrating the whole model information. The architecture can instantiate a file in a knowledge graph, query a file's metadatum, and, if it is not yet available, find a semantically represented process that allows the required metadatum to be created and instantiated. See Figure 1. The results of this thesis can be expected to form a basis for semantic SPDM tools.
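The last step, finding a process that can create a missing metadatum, can be mimicked with a few triples in plain Python; all names (predicates, file and process identifiers) are invented for illustration and are not VMAP or ontology terms:

```python
# Minimal stand-in for the knowledge-graph lookup: given a requested
# metadatum, find a process able to create it.
triples = {
    ("File1", "hasMetadatum", "elementCount"),
    ("computeElementCount", "produces", "elementCount"),
}

def find_producer(graph, metadatum):
    """Return the first process whose 'produces' edge targets metadatum."""
    return next((s for s, p, o in graph
                 if p == "produces" and o == metadatum), None)
```

In the actual architecture this lookup would be a SPARQL query against the semantic representation rather than a set comprehension.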
Stably stratified Taylor–Green vortex simulations are performed with lattice Boltzmann methods (LBM) and compared to recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. While the LBM forcing scheme is not decisive for the quality of the solution, the choice of collision model and stencil is crucial for adequate solutions under underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion with decreasing initial Froude number. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
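The double-distribution idea, one distribution for the flow and one for the stratifying scalar, can be sketched as a single coupled BGK collision; the buoyancy coupling shown is only the leading term of a forcing scheme, not any of the specific forcing schemes assessed in the paper:

```python
import numpy as np

# D2Q9 weights and vertical lattice velocity components for the forcing term
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c_y = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

def collide_double_distribution(f, g, feq, geq, tau_f, tau_g, gravity):
    """One BGK collision of a double-distribution LBM: f carries the flow,
    g carries the stratifying scalar b = sum_i g_i. The buoyancy coupling
    is reduced to a leading-order forcing term (illustrative sketch).
    """
    b = g.sum(axis=0)                                     # scalar per cell
    force = 3 * w[:, None, None] * c_y[:, None, None] * (gravity * b)
    f_post = f - (f - feq) / tau_f + force                # flow collision
    g_post = g - (g - geq) / tau_g                        # scalar collision
    return f_post, g_post
```

With a vanishing scalar field the forcing disappears and both populations relax independently, which is the expected limiting behavior.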
In this thesis, a compressible semi-Lagrangian lattice Boltzmann method is newly developed and tested. The lattice Boltzmann method is a numerical flow simulation technique based on modeling particle densities and their mutual interactions. In its original form, however, the method is restricted to weakly compressible flows at low Mach numbers. The main drawbacks of previous attempts to extend it to supersonic flows are either insufficient stability, impractically large velocity sets, or a restriction to small time steps. As an alternative to previous approaches, this thesis employs a semi-Lagrangian streaming step. Semi-Lagrangian methods use interpolation to decouple the spatial, temporal, and velocity discretizations of the original lattice Boltzmann method. Following the introduction, the second and third chapters cover the fundamentals and principles of the lattice Boltzmann method and survey previous approaches to simulating compressible flows. Subsequently, the compressible semi-Lagrangian lattice Boltzmann method is developed and described; the extension essentially consists of combining the method with suitable equilibrium functions and velocity sets. In the fourth chapter, new cubature-based velocity sets are developed and tested, including a D3Q45 velocity set for computing compressible flows that considerably reduces the computational cost compared to conventional velocity discretizations. In the fifth chapter, simulations of one-dimensional shock tubes, two-dimensional Riemann problems, and shock–vortex interactions are carried out for validation.
Subsequently, simulations of three-dimensional compressible Taylor–Green vortices and of wall-bounded test cases demonstrate the advantages of the method for compressible flow simulations. To this end, the supersonic flow around a two-dimensional NACA-0012 airfoil and around a three-dimensional sphere, as well as a supersonic channel flow, are investigated. The simulation part is followed by an extensive discussion of the semi-Lagrangian lattice Boltzmann method in comparison with other methods, highlighting its advantages, such as comparatively large time steps, body-fitted meshes, and stability.
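The semi-Lagrangian streaming step that decouples the discretizations can be sketched in one dimension: the distribution is read off at the departure points by interpolation, so the time step need not match the grid spacing (a periodic toy setup, not the thesis implementation):

```python
import numpy as np

def stream_semi_lagrangian(f, x, c, dt, L):
    """Semi-Lagrangian streaming on a periodic 1-D grid of length L:
    each population is read off at its departure point x - c*dt by
    interpolation, so c*dt need not be a multiple of the grid spacing.
    """
    departure = (x - c * dt) % L
    return np.interp(departure, x, f, period=L)
```

The thesis applies this idea with higher-order interpolation on body-fitted meshes; linear interpolation on a uniform grid is the simplest instance.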
This paper presents a novel approach to address noise, vibration, and harshness (NVH) issues in electrically assisted bicycles (e-bikes) caused by the drive unit. By investigating and optimising the structural dynamics during early product development, NVH can be decisively improved and valuable resources can be saved, emphasising its significance for enhancing riding performance. The paper offers a comprehensive analysis of the mechanical interactions among the relevant components of the e-bike drive unit, culminating in what is, to the best of our knowledge, the first high-fidelity model of an entire e-bike drive unit. The proposed model uses the principles of elastic multibody dynamics (eMBD) to elucidate the structural dynamics in dynamic-transient calculations. Comparing power spectra between measured and simulated motion variables validates the chosen model assumptions. The measurements of physical samples utilise accelerometers, contactless laser Doppler vibrometry (LDV), and various test arrangements, which are replicated in the simulations and make vibrations on rotating shafts and stationary structures accessible to measurement. In summary, this integrated system-level approach can serve as a viable starting point for comprehending and managing the NVH behaviour of e-bikes.
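The validation step, comparing power spectra of measured and simulated motion variables, can be sketched with a single-segment periodogram (a minimal stand-in for a Welch-type estimate; the window and normalization are illustrative choices):

```python
import numpy as np

def periodogram(x, fs):
    """Hann-windowed single-segment periodogram: a minimal stand-in for
    the power spectra compared between measurement and simulation."""
    n = len(x)
    win = np.hanning(n)
    X = np.fft.rfft(x * win)
    psd = np.abs(X)**2 / (fs * np.sum(win**2))  # window-power normalization
    f = np.fft.rfftfreq(n, d=1.0/fs)
    return f, psd
```

Applied to a measured and a simulated signal, matching peak locations and levels in the two spectra indicate that the model assumptions are adequate.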
Alternative mobility solutions are becoming increasingly important today, and e-bikes have long since proven their potential. The corresponding market has grown enormously over the last ten years, and so have expectations of the product, such as a ride free of disturbing vibrations and noise. The motor freewheel has a decisive influence on the dynamic behaviour. This contribution therefore presents a methodical approach for determining, by means of experiment and simulation, the influence of the motor freewheel on the dynamic behaviour of the e-bike drive unit.
Trends of environmental awareness, combined with a focus on personal fitness and health, motivate many people to switch from cars and public transport to micromobility solutions, namely bicycles, electric bicycles, cargo bikes, and scooters. To adapt urban planning to these changes, cities and communities need to know how many micromobility vehicles are on the road. In a previous work, we proposed a concept for a compact, mobile, and energy-efficient system to classify and count micromobility vehicles utilizing uncooled long-wave infrared (LWIR) image sensors and a neuromorphic co-processor. In this work, we elaborate on this concept by focusing on the feature extraction process with the goal of increasing the classification accuracy. We demonstrate that even with a reduced feature list compared with our early concept, we increase the detection precision to more than 90%. This is achieved by reducing the 160 × 120-pixel images to only 12 × 18 pixels and combining them with contour moments into a feature vector of only 247 bytes.
A Fourier scatterometry setup is evaluated to recover the key parameters of optical phase gratings. Based on these parameters, systematic errors in the printing process of two-photon polymerization (TPP) gray-scale lithography three-dimensional printers, namely tilt and curvature deviations, can be compensated. The proposed setup is significantly cheaper than a confocal microscope, which is usually used to determine calibration parameters for compensation of the TPP printing process. The grating parameters recovered this way are compared to those obtained with a confocal microscope. A clear correlation between confocal and scatterometric measurements is first shown for structures containing either tilt or curvature, and also for structures containing a mixture of tilt and curvature errors (squared Pearson coefficient r² = 0.92). The compensation method is demonstrated on a TPP printer: a diffractive optical element printed with correction parameters obtained from Fourier scatterometry shows a significant reduction in noise compared to the uncompensated system, verifying the successful reduction of tilt and curvature errors. Further improvements of the method are proposed, which may enable the measurements to become more precise than confocal measurements in the future, since scatterometry is not affected by the diffraction limit.
In this thesis, carried out within FFE+, an internal project of the Deutsches Zentrum für Luft- und Raumfahrt (German Aerospace Center), a decision-based manufacturing strategy for producing a micro gas turbine blisk from oxide ceramic matrix composite is developed. The vacuum-based infusion process of the Structural and Functional Ceramics department of the Institut für Werkstoffforschung (Institute of Materials Research) is to be used for this purpose. First, the theoretical background of the material and its established processing are reviewed. On this basis, the system and functions of the oxide ceramic blisk can be determined in the sense of methodical process development. The requirements and evaluation criteria formulated there permit a reduced-effort conceptual design phase for concepts and solution principles, with the fiber architecture being the decisive factor in finding a solution. After evaluating, validating, and adapting the results, the manufacturing strategy is designed on the basis of the best-rated concept and the department's previous projects. In addition, a feasibility study of a previously unexplored process for manufacturing oxide ceramic fiber preforms was conducted at the Institut für Flugzeugbau of the University of Stuttgart. Since knowledge of the material properties is necessary to reliably guarantee the component's function, material tests at room and high temperature are planned. The concluding goal, a process-chain basis for projects using the vacuum-based infusion process of the Institut für Werkstoffforschung, summarizes the results of this thesis and of other project reports.
In (dynamic) adaptive mesh refinement (AMR), an input mesh is refined or coarsened according to the needs of the numerical application. This refinement happens without respect to the originally meshed domain and is therefore limited to the geometrical accuracy of the original input mesh. We previously presented a novel approach that equips the input mesh with additional geometry information to allow refinement and high-order cells based on the geometry of the original domain, together with a limited implementation of the algorithm. Here we evaluate this prototype with a numerical application and assess its influence on the accuracy of certain numerical results. To be as practical as possible, we implement the ability to import meshes generated by Gmsh and equip them with the needed geometry information. Furthermore, we improve the mapping algorithm, which maps the geometry information of the boundary of a cell into the cell's volume. With these preliminary steps done, we use our new approach in a simulation of the advection of a concentration along the boundary of a spherical shell and past the boundary of a rotating cylinder. We evaluate the accuracy of our approach in comparison to the conventional refinement of cells to answer our research question: How do the performance and accuracy of the hexahedral curved-domain AMR algorithm compare to linear AMR when solving the advection equation with a linear finite volume method? We show the influence of curved AMR on our simulation results and find that it can even outperform far finer linear meshes in terms of accuracy. We also find that the current implementation is too slow for practical usage. We can therefore demonstrate the benefits of curved AMR in certain geometry-related application scenarios and show possible improvements to make it more feasible and practical in the future.
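The mapping of boundary geometry into a cell's volume can be illustrated with a 2-D transfinite (Coons-patch) interpolation, a standard construction that blends four edge curves into a volume mapping; this is a generic sketch, not the authors' hexahedral implementation:

```python
import numpy as np

def transfinite_map(r, s, bottom, top, left, right):
    """Map reference coordinates (r, s) in [0, 1]^2 into a cell whose
    four edges are given as curves, blending the boundary geometry into
    the cell's interior (2-D Coons patch)."""
    # corner correction removes the doubly counted bilinear part
    corners = ((1 - r) * (1 - s) * bottom(0) + r * (1 - s) * bottom(1)
               + (1 - r) * s * top(0) + r * s * top(1))
    return ((1 - s) * bottom(r) + s * top(r)
            + (1 - r) * left(s) + r * right(s) - corners)
```

For straight edges the map reduces to the usual bilinear cell mapping; curved edges, e.g. arcs of a sphere shell, bend the interior accordingly.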
This edited volume on "Recent Advances in Renewable Energy" presents a selection of refereed papers from the 1st International Conference on Electrical Systems and Automation. The book provides rigorous discussions, the state of the art, and recent developments in the field of renewable energy sources, supported by examples and case studies, making it an educational tool for relevant undergraduate and graduate courses. It will be a valuable reference for beginners, researchers, and professionals interested in renewable energy.
This book, the second of two volumes on "Control of Electrical and Electronic Systems", presents a compilation of selected contributions to the 1st International Conference on Electrical Systems & Automation. It provides rigorous discussions, the state of the art, and recent developments in the modelling, simulation, and control of power electronics, industrial systems, and embedded systems, and will be a valuable reference for beginners, researchers, and professionals interested in the control of electrical and electronic systems.