Fachbereich Ingenieurwissenschaften und Kommunikation
H-BRS Bibliography
Trends of environmental awareness, combined with a focus on personal fitness and health, motivate many people to switch from cars and public transport to micromobility solutions, namely bicycles, electric bicycles, cargo bikes, or scooters. To accommodate urban planning for these changes, cities and communities need to know how many micromobility vehicles are on the road. In a previous work, we proposed a concept for a compact, mobile, and energy-efficient system to classify and count micromobility vehicles utilizing uncooled long-wave infrared (LWIR) image sensors and a neuromorphic co-processor. In this work, we elaborate on this concept by focusing on the feature extraction process with the goal of increasing the classification accuracy. We demonstrate that even with a reduced feature list compared with our early concept, we manage to increase the detection precision to more than 90%. This is achieved by reducing the images of 160 × 120 pixels to only 12 × 18 pixels and combining them with contour moments to a feature vector of only 247 bytes.
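The feature-extraction idea above (a block-averaged thumbnail combined with contour moments) can be sketched in plain Python; the block sizes, the moment set, and the byte layout below are illustrative assumptions, not the authors' exact pipeline:

```python
def downsample(img, h, w, th, tw):
    """Block-average an h x w grayscale image (flat list, row-major)
    to a th x tw thumbnail; th must divide h and tw must divide w."""
    bh, bw = h // th, w // tw
    out = []
    for i in range(th):
        for j in range(tw):
            block = [img[(i * bh + y) * w + (j * bw + x)]
                     for y in range(bh) for x in range(bw)]
            out.append(sum(block) // len(block))  # one byte per cell
    return out

def raw_moments(img, h, w):
    """Raw image moments m00, m10, m01 - the basis from which contour
    (central) moments are derived to complete the feature vector."""
    m00 = m10 = m01 = 0
    for y in range(h):
        for x in range(w):
            v = img[y * w + x]
            m00 += v
            m10 += v * x
            m01 += v * y
    return m00, m10, m01
```

Concatenating the thumbnail bytes with a handful of moment values then yields a compact feature vector on the order of a few hundred bytes, in line with the 247-byte figure reported above.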
A Fourier scatterometry setup is evaluated to recover the key parameters of optical phase gratings. Based on these parameters, systematic errors in the printing process of two-photon polymerization (TPP) gray-scale lithography three-dimensional printers can be compensated, namely tilt and curvature deviations. The proposed setup is significantly cheaper than a confocal microscope, which is usually used to determine calibration parameters for compensation of the TPP printing process. The grating parameters recovered this way are compared to those obtained with a confocal microscope. A clear correlation between confocal and scatterometric measurements is first shown for structures containing either tilt or curvature. The correlation is also shown for structures containing a mixture of tilt and curvature errors (squared Pearson coefficient r² = 0.92). This compensation method is demonstrated on a TPP printer: a diffractive optical element printed with correction parameters obtained from Fourier scatterometry shows a significant reduction in noise as compared to the uncompensated system. This verifies the successful reduction of tilt and curvature errors. Further improvements of the method are proposed, which may enable the measurements to become more precise than confocal measurements in the future, since scatterometry is not affected by the diffraction limit.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
In her recent article, Bender discusses several aspects of research–practice collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
Novel methods for contingency analysis of gas transport networks are presented. They are motivated by the transition of our energy system where hydrogen plays a growing role. The novel methods are based on a specific method for topological reduction and so-called supernodes. Stationary Euler equations with advanced compressor thermodynamics and a gas law allowing for gas compositions with up to 100% hydrogen are used. Several measures and plots support an intuitive comparison and analysis of the results. In particular, it is shown that the newly developed methods can estimate locations and magnitudes of additional capacities (injection, buffering, storage etc.) with a reasonable performance for networks of relevant composition and size.
The cube-in-cube approach was used by Paul and Ishai-Cohen to model and derive formulas for the filler-content-dependent Young's moduli of particle-filled composites, assuming perfect filler-matrix adhesion. Their formulas were chosen because of their simplicity and recalculated using an elementary volume (EV) approach, which transforms spherical inclusions into cubic inclusions. The EV approach leads to expressions for the composite moduli that allow introducing an adhesion factor k_adh, ranging from 0 to 1, to take reduced filler-matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable k_adh values for PA66, PBT, PP, PE-LD and BR, which are in line with their surface energies. Further analysis showed that stiffening only occurs if k_adh exceeds √(E_M/E_F) and depends on the ratio of matrix modulus to filler modulus. The modified model allows a quick calculation for any particle-filled composite with known matrix modulus E_M, filler modulus E_F, filler volume content v_F and adhesion factor k_adh. Thus, finite element analysis (FEA) simulations of any particle-filled polymer parts, as well as materials selection, are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses if one deviates from the cubic arrangement. At high filler contents, the assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 weight %). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied.
Furthermore, this indicates that the cube-in-cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of EV arrangements or the spatial filler distribution from cubic arrangements, and (ii) increasing interaction between particles with the formation of a particle network within the matrix, violating the EV assumption of homogeneous dispersion.
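The stiffening criterion stated above (the adhesion factor must exceed the square root of the matrix-to-filler modulus ratio) can be checked directly; a minimal sketch with illustrative moduli (the example values are not from the original study):

```python
import math

def stiffens(k_adh: float, E_M: float, E_F: float) -> bool:
    """Return True if the filler stiffens the composite.

    According to the modified Paul model, stiffening only occurs when
    the adhesion factor k_adh exceeds sqrt(E_M / E_F), i.e. the square
    root of the matrix-to-filler modulus ratio.
    """
    return k_adh > math.sqrt(E_M / E_F)

# Illustrative case: a stiff filler (E_F ~ 70 GPa) in a polymer
# matrix (E_M ~ 3 GPa) gives a threshold of roughly 0.21.
threshold = math.sqrt(3.0 / 70.0)
```

Below the threshold, the reduced stress transfer area outweighs the stiff inclusion, so adding filler does not raise the composite modulus.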
This study investigates the initial stage of the thermo-mechanical crystallization behavior for uni- and biaxially stretched polyethylene. The models are based on a mesoscale molecular dynamics approach. We take constraints that occur in real-life polymer processing into account, especially with respect to the blowing stage of the extrusion blow-molding process. For this purpose, we deform our systems using a wide range of stretching levels before they are quenched. We discuss the effects of the stretching procedures on the micro-mechanical state of the systems, characterized by entanglement behavior and nematic ordering of chain segments. For the cooling stage, we use two different approaches which allow for free or hindered shrinkage, respectively. During cooling, crystallization kinetics are monitored: We precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influence crystallization behavior. Our models reveal that the main stretching direction dominates microscopic states of the different systems. We are able to show that crystallization mainly depends on the (dis-)entanglement behavior. Nematic ordering plays a secondary role.
In this paper, modeling of piston and generic-type gas compressors for a globally convergent algorithm for solving stationary gas transport problems is presented. A theoretical analysis of the simulation stability, its practical implementation and a verification of convergence on a realistic gas network have been carried out. The relevance of the paper to the topics of the conference lies in the significance of gas transport networks as an advanced application of simulation and modeling, including the development of novel mathematical and numerical algorithms and methods.
In this paper, an analysis of the error ellipsoid in the space of solutions of stationary gas transport problems is carried out. For this purpose, a Principal Component Analysis of the solution set has been performed. Unstable directions are shown to be associated with the marginal fulfillment of the resistivity conditions for the equations of compressors and other control elements in gas networks. In practice, the instabilities occur when multiple compressors or regulators try to control pressures or flows in the same part of the network. Such problems can occur, in particular, when the compressors or regulators reach their working limits. Possible ways of resolving the instabilities are considered.
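The role of the principal component analysis can be illustrated with a toy two-dimensional solution ensemble; this is a generic sketch on synthetic data, not the paper's implementation, which operates in the full solution space:

```python
import math

def principal_components_2d(points):
    """PCA of a 2D point cloud: eigenvalues of the covariance matrix,
    returned in descending order. A strongly elongated error ellipse
    (large ratio of the eigenvalues) signals an unstable direction
    along which the solution is poorly constrained."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + root, tr / 2.0 - root
```

For an ensemble spread along a line (e.g. two compressors trading off control of the same pressure), the small eigenvalue collapses to zero while the large one stays finite, which is exactly the error-ellipsoid signature described above.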
The utilization of simulation procedures is gaining increasing attention in the product development of extrusion blow molded parts. However, some simulation steps, like the simulation of shrinkage and warpage, are still associated with uncertainties. The reason for this is, on the one hand, a lack of standardized interfaces for the transfer of simulation data between different simulation tools, and on the other hand the complex time-, temperature- and process-dependent material behavior of the semi-crystalline polymers used. Using a new vendor-neutral interface standard for the data transfer, the shrinkage analysis of a simple blow molded part is investigated and compared to experimental data. A linear viscoelastic material model in combination with an orthotropic process- and temperature-dependent thermal expansion coefficient is used for the shrinkage prediction. A good agreement is observed. Finally, critical parameters in the simulation models that strongly influence the shrinkage analysis are identified by a sensitivity study.
In this paper, a gas-to-power (GtoP) system for power outages is digitally modeled and experimentally developed. The design includes a solid-state hydrogen storage system composed of TiFeMn as a hydride-forming alloy (6.7 kg of alloy in five tanks) and an air-cooled fuel cell (maximum power: 1.6 kW). The hydrogen storage system is charged at room temperature under 40 bar of hydrogen pressure, reaching about 110 g of hydrogen capacity. In an emergency use case of the system, hydrogen is supplied to the fuel cell, and the waste heat in the exhaust air of the fuel cell is used for the endothermic dehydrogenation reaction of the metal hydride. This GtoP system demonstrates fast, stable, and reliable responses, providing between 149 W and 596 W under different constant as well as dynamic load conditions. A comprehensive and novel simulation approach based on a network model is also applied. The developed model is validated under static and dynamic power load scenarios, demonstrating excellent agreement with the experimental results.
West Africa has great potential for the use of solar energy systems, as it has both a high solar radiation rate and a lack of energy production. It is also a very aerosol-rich region; the impact of aerosols on photovoltaic (PV) use depends both on atmospheric conditions and on the solar technology deployed. This study reports the variability of aerosol optical properties in the city of Koforidua, Ghana over the period 2016 to 2020, and their impact on the radiation intensity and efficiency of a PV cell. The study used AERONET ground data (Giles et al., 2019) and satellite data produced by CAMS (Gschwind et al., 2019), which both provide aerosol optical depth (AOD) and meteorological parameters used for radiative transfer calculations with libRadtran (Emde et al., 2016). A spectrally resolved PV model (Herman-Czezuch et al., 2022) is then used to calculate the PV yield of two PV technologies: polycrystalline and amorphous silicon. For both data sets, it is observed that the aerosol is mainly composed of dust and organic matter, with a strongly increased AOD load during the harmattan period (December-February), partly due to the fires observed during this period.
The design of a fully superconducting wind power generator is influenced by several factors. Among them, a low number of pole pairs is desirable to achieve low AC losses in the superconducting stator winding, which greatly influences the cooling system design and, consequently, the efficiency of the entire wind power plant. However, it has been identified that a low number of pole pairs in a superconducting generator tends to greatly increase its output voltage, which in turn creates challenging conditions for the necessary power electronic converter. This study highlights the interdependencies between the design of a fully superconducting 10 MW wind power generator and the corresponding design of its power electronic converter.
Comparing Armature Windings for a 10 MW Fully Superconducting Synchronous Wind Turbine Generator
(2022)
How self-reliant Peer Teaching can be set up to augment learning outcomes for university learners
(2022)
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
This paper investigates the effect of voltage sensors on the measurement of transient voltages for power semiconductors in a Double Pulse Test (DPT) environment. We adapt previously published models that were developed for current sensors and apply them to voltage sensors to evaluate their suitability for DPT applications. Similarities and differences between transient current and voltage sensors are investigated, and the resulting methodology is applied to commercially available and experimental voltage sensors. Finally, a selection aid for given measurement tasks is derived that focuses on the measurement of fast-switching power semiconductors.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected; this poster presents an overview of their availability.
Context: In Ghana, the total electricity consumed almost doubled between 2008 and 2018, according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. The blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
From Conclusion to Coda
(2022)
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, that are not produced by the Gauss-product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma = {0.5; 1.0; 1.5; 2.0}, on-lattice simulations with the velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
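For context, the Gauss-product rule mentioned above builds multidimensional velocity sets as tensor products of the three one-dimensional Gauss-Hermite nodes, so the number of discrete velocities grows as 3^d (D1Q3 → D2Q9 → D3Q27). A minimal sketch of that product construction:

```python
import itertools
import math

# 1D Gauss-Hermite abscissae and weights of the standard D1Q3 set
NODES_1D = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS_1D = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def gauss_product_set(dim):
    """Tensor-product velocity set: 3**dim velocities whose weights
    are products of the 1D Gauss-Hermite weights."""
    velocities, weights = [], []
    for combo in itertools.product(range(3), repeat=dim):
        velocities.append(tuple(NODES_1D[i] for i in combo))
        weights.append(math.prod(WEIGHTS_1D[i] for i in combo))
    return velocities, weights

v2, w2 = gauss_product_set(2)   # D2Q9
v3, w3 = gauss_product_set(3)   # D3Q27
```

Cubature-derived sets such as the degree-nine D2Q19 and D3Q45 named above avoid exactly this exponential growth by reaching the same algebraic degree with fewer, non-product nodes.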
In this contribution, we perform computer simulations to expedite the development of hydrogen storage systems based on metal hydrides. These simulations enable an in-depth analysis of the processes within the systems which could not be achieved otherwise, because determining crucial process properties would require measurement instruments that are currently not available for the setup. Therefore, we investigate the reliability of reaction values that are determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample consisting of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used Van’t-Hoff equation by a mathematical expression of the pressure-composition-isotherms (PCI) to calculate the equilibrium pressure.
Finally, the accuracy of the parameters is checked in yet another comparison, with an existing metal hydride system. The simulated results demonstrate high concordance with the experimental data, which supports the use of kinetic reaction properties approximated by a design of experiments for further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
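The Van't-Hoff relation mentioned above can be stated compactly; the enthalpy and entropy defaults below are illustrative order-of-magnitude values for a TiFe-type hydride, not parameters fitted in this work:

```python
import math

R = 8.314   # J/(mol K), universal gas constant
P0 = 1.0    # bar, reference pressure

def vant_hoff_pressure(T, dH=-28e3, dS=-107.0):
    """Equilibrium plateau pressure from the Van't-Hoff equation,

        ln(p_eq / p0) = dH / (R T) - dS / R,

    with dH [J/mol] and dS [J/(mol K)] of hydride formation (both
    negative for an exothermic metal hydride). Heating the hydride
    raises p_eq, which drives desorption."""
    return P0 * math.exp(dH / (R * T) - dS / R)
```

Replacing this single-plateau expression with a fitted pressure-composition-isotherm curve, as done above, can capture the concentration dependence of the equilibrium pressure that the Van't-Hoff form ignores.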
The clear-sky radiative effect of aerosol-radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, shortwave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. The measurements considered are the shortwave broadband global and diffuse horizontal irradiance, recorded with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are MMAC, MRMv6.1, METSTAT, ESRA, Heliosat-1, CEM and the simplified Solis model. The definitions of the aerosol and atmospheric characteristics in the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the CAMS reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated against AERONET direct-sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03, or about 30%, by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated at −1.5 ± 7.7 and 0.6 ± 3.5 W m⁻² at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m⁻², respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m⁻² and a correlation of 0.75. The annual mean of the REari (radiative effect of aerosol-radiation interactions) at the surface for the 25 DWD stations is −12.8 ± 5 W m⁻² averaged over the clear-sky models, compared to −11 W m⁻² from the radiative transfer simulations. Since all models assume a fixed aerosol characterisation, the annual cycle of the aerosol radiative effect cannot be reproduced. Of this set of clear-sky models, the largest level of agreement is shown by the ESRA and MRMv6.1 models.
Atomic oxygen in the mesosphere and lower thermosphere measured by terahertz heterodyne spectroscopy
(2021)
Atomic oxygen is a main component of the mesosphere and lower thermosphere (MLT). The photochemistry and the energy balance of the MLT are governed by atomic oxygen; in addition, it is a tracer for dynamical motions in the MLT. However, atomic oxygen is difficult to measure with remote sensing techniques. Concentrations can be inferred indirectly from the oxygen airglow or from observations of OH, which is involved in photochemical processes related to atomic oxygen. Such measurements have been performed with several satellite instruments such as SCIAMACHY, SABER, WINDII and OSIRIS. However, the methods are indirect and rely on photochemical models and assumptions such as quenching rates, radiative lifetimes, and reaction coefficients. The results are not always in agreement, particularly when obtained with different instruments.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
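The semi-Lagrangian update at the core of such a method traces each grid node back along its characteristic and interpolates the distribution function at the departure point, which is what frees the time step from the CFL restriction. A one-dimensional sketch with linear interpolation on a periodic grid (the SLLBM itself uses higher-order interpolation in three dimensions):

```python
import math

def semi_lagrangian_step(f, xi, dt, dx=1.0):
    """Advect distribution values f (periodic 1D grid) with discrete
    velocity xi: f_new(x) = f(x - xi*dt), linearly interpolated at the
    departure point, so the time step is not CFL-limited."""
    n = len(f)
    s = xi * dt / dx                  # displacement in grid cells
    out = []
    for i in range(n):
        x = i - s                     # departure point of node i
        j = math.floor(x)             # left neighbor on the grid
        a = x - j                     # linear interpolation weight
        out.append((1 - a) * f[j % n] + a * f[(j + 1) % n])
    return out
```

Linear interpolation on a uniform periodic grid conserves the total mass exactly, since each departure value is split between two nodes with weights summing to one; the interpolation itself acts as the only filtering in the scheme, as noted above.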
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant for the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants for this process from the experimentally measured polarization curve is presented. The approach combines statistical and principal component analysis and the determination of the trust region for a linearized model. It is shown that this experiment does not allow the reaction constants themselves to be determined accurately, but only some of their linear combinations. The possibilities of extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentration of the main reagents, are discussed.
Animal models are often needed in cancer research, but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We therefore developed a simple eight-step process to fabricate a 3D replica from computed tomography (CT) data using solely open-access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles, as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and the use of open-access software allow for a wide range of possible uses.