Fachbereich Ingenieurwissenschaften und Kommunikation
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases are mostly designed around other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing the data structures with other domains is still a challenge, as current standards such as the ISO standard STEP (ISO 10303 “Standard for the Exchange of Product model data”) fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink the integration of simulation results. This research describes a new software architecture and application methodology based on the industrial standard “Virtual Material Modelling in Manufacturing” (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It is capable of providing data permeability from the global digital twin level down to the detailed numerical values of data entries and even new key indicators in a three-step approach: it represents a file as an instance in a knowledge graph, queries the file’s metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
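The three-step approach (file as instance, metadata query, derived key indicator) can be sketched with a toy triple store. This is a minimal illustration, not the VMAP vocabulary or the paper's architecture; all predicate names (`vmap:solver`, `vmap:maxStress_MPa`, the 500 MPa yield limit) are hypothetical.

```python
graph = set()  # a toy triple store: {(subject, predicate, object)}

# Step 1: represent a simulation result file as an instance in the knowledge graph
graph.add(("file:result_001", "rdf:type", "vmap:SimulationFile"))
graph.add(("file:result_001", "vmap:solver", "SolverX"))          # hypothetical metadata
graph.add(("file:result_001", "vmap:maxStress_MPa", 412.0))

def query(g, s=None, p=None, o=None):
    """Step 2: pattern-match triples; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in g
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

def derive_key_indicator(g, subject, yield_MPa=500.0):
    """Step 3: a (here hard-coded) process derives and instantiates new metadata."""
    (_, _, stress), = query(g, s=subject, p="vmap:maxStress_MPa")
    g.add((subject, "vmap:utilisation", stress / yield_MPa))

derive_key_indicator(graph, "file:result_001")
```

In a real system the triple store would be an RDF graph with SPARQL queries; the point here is only the data flow from file instance to new key indicator.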
Accurate global horizontal irradiance (GHI) forecasting is critical for integrating solar energy into the power grid and operating solar power plants. The Weather Research and Forecasting model with its solar radiation extension (WRF-Solar) has been used to forecast solar irradiance in different regions around the world. However, the application of the WRF-Solar model to the prediction of GHI in West Africa, particularly Ghana, has not yet been investigated. The aim of this study is to evaluate the performance of the WRF-Solar model for predicting GHI in Ghana, focusing on three automatic weather stations (Akwatia, Kumasi and Kologo) for the year 2021. We used two one-way nested domains (D1 = 15 km and D2 = 3 km) to investigate the ability of the fully coupled WRF-Solar model to forecast GHI up to 72 hours ahead under different atmospheric conditions. The initial and lateral boundary conditions were taken from the ECMWF high-resolution operational forecasts. Our findings reveal that the WRF-Solar model performs better under clear skies than cloudy skies. Under clear skies, Kologo performed best in predicting 72-hour GHI, with a first-day nRMSE of 9.62 %. However, forecasting GHI under cloudy skies at all three sites had significant uncertainties. Additionally, the WRF-Solar model is able to reproduce the observed GHI diurnal cycle under high aerosol optical depth (AOD) conditions on most of the selected days. This study enhances the understanding of the WRF-Solar model’s capabilities and limitations for GHI forecasting in West Africa, particularly in Ghana. The findings provide valuable information for stakeholders involved in solar energy generation and grid integration towards optimized management in the region.
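The nRMSE metric used above can be computed as follows. This sketch normalises by the mean observed irradiance, one common convention; the paper's exact normalisation is not stated in the abstract, and the toy numbers are illustrative.

```python
import numpy as np

def nrmse_percent(forecast, observed):
    """Normalised RMSE in %, normalised by the mean observed value
    (one common convention; other normalisations use the range or capacity)."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# toy hourly GHI values in W/m^2
obs = [100.0, 400.0, 700.0, 400.0]
fc  = [110.0, 380.0, 720.0, 390.0]
score = nrmse_percent(fc, obs)
```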
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming and local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, there is no universal scheme optimally suited for a wide range of different flows. The reason for this lies in the static nature of these methods; they cannot adapt to local or global flow features. Still, adaptive filtering using a shear sensor constitutes an exception to this. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physically informed neural net. In this study, the training data consists of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator demonstrates a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
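The collision step at the heart of all these variants relaxes the particle distributions toward a local equilibrium at a rate omega. A minimal sketch of the plain BGK form is below; the operator proposed above makes such rates space- and time-variant and learns them with a neural net, which is not reproduced here.

```python
import numpy as np

def bgk_collide(f, f_eq, omega):
    """Single BGK collision step: f_post = f + omega * (f_eq - f).
    omega may be a scalar or a per-node array (the adaptive case)."""
    return f + omega * (f_eq - f)

# toy example: one distribution value relaxing toward its equilibrium
f_post_full = bgk_collide(np.array([1.0]), np.array([2.0]), 1.0)   # full relaxation
f_post_half = bgk_collide(np.array([1.0]), np.array([2.0]), 0.5)   # partial relaxation
```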
Protocol for conducting advanced cyclic tests in lithium-ion batteries to estimate capacity fade
(2024)
Using advanced cyclic testing techniques improves accuracy in estimating capacity fade and incorporates real-world scenarios in battery cycle aging assessment. Here, we present a protocol for conducting cyclic tests in lithium-ion batteries to estimate capacity fade. We describe steps for implementing strategies for accounting for variations in rest periods, charge-discharge rates, and temperatures. We also detail procedures for validating tests experimentally within a climate-controlled chamber and for developing an empirical model to estimate capacity fading under various testing objectives. For complete details on the use and execution of this protocol, please refer to Mulpuri et al. [1].
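An empirical capacity-fade model of the kind mentioned above can be sketched as a stress-scaled square-root-of-cycle-count law. This is an illustrative form, not the protocol's fitted model; every coefficient (`k0`, `a`, `b`, `c`) is hypothetical and would be obtained by fitting the cyclic test data.

```python
import math

def capacity_fade_fraction(cycles, c_rate, temp_C, rest_h,
                           k0=8e-4, a=0.4, b=0.05, c=0.02):
    """Fraction of initial capacity lost after `cycles` full cycles.
    Square-root fade in cycle count, scaled by stress factors for
    charge-discharge rate, temperature, and rest-period duration
    (all coefficients are placeholders for fitted values)."""
    stress = (1.0 + a * c_rate) * math.exp(b * (temp_C - 25.0)) * (1.0 + c * rest_h)
    return k0 * stress * math.sqrt(cycles)
```

Such a model lets the testing conditions (rate, temperature, rest) enter the fade estimate explicitly, which is the point of varying them in the protocol.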
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m−2, respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on the GHI forecasting performance for both deterministic and probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces the forecast performance.
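The skill score quoted above can be computed with one common definition: one minus the ratio of the model's error to the reference forecast's error (here, smart persistence). The numbers below are illustrative, chosen only to show how an 18.6 % skill arises.

```python
def skill_score(rmse_model, rmse_reference):
    """Forecast skill relative to a reference forecast.
    1 = perfect, 0 = no better than the reference, negative = worse."""
    return 1.0 - rmse_model / rmse_reference

# e.g. an 18.6 % skill means the model's RMSE is 81.4 % of the persistence RMSE
ss = skill_score(81.4, 100.0)
```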
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, captured in the places of formation, transferred under high pressure in a liquid or supercritical state and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. Meanwhile, the transport equations for different fluids are essentially the same, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flaptraps on the stability of fluid transport simulations is studied. It is shown that modeling of these elements can lead to instabilities, both in stationary and dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality also for dynamic simulations is demonstrated for a number of numerical experiments.
In addition to the long-term goal of mitigating climate change, the current geopolitical upheavals heighten the urgency to transform Europe's energy system. This involves expanding renewable energies while managing intermittent electricity generation. Hydrogen is a promising solution to balance generation and demand, simultaneously decarbonizing complex applications. To model the energy system's transformation, the project TransHyDE-Sys, funded by the German Federal Ministry of Education and Research, takes an integrated approach beyond traditional energy system analysis, incorporating a diverse range of more detailed methods and tools. Herein, TransHyDE-Sys is situated within the recent policy discussion. It addresses the requirements for energy system modeling to gain insights into transforming the European hydrogen and energy infrastructure. It identifies knowledge gaps in the existing literature on hydrogen infrastructure-oriented energy system modeling and presents the research approach of TransHyDE-Sys. TransHyDE-Sys analyzes the development of hydrogen and energy infrastructures from “the system” and “the stakeholder” perspectives. The integrated modeling landscape captures temporal and spatial interactions among hydrogen, electricity, and natural gas infrastructure, providing comprehensive insights for systemic infrastructure planning. This allows a more accurate representation of the energy system's dynamics and aids in decision-making for achieving sustainable and efficient hydrogen network development integration.
This study addresses the common occurrence of cell-to-cell variations arising from manufacturing tolerances and their implications during battery production. The focus is on assessing the impact of these inherent differences in cells and exploring diverse cell and module connection methods on battery pack performance and their subsequent influence on the driving range of electric vehicles (EVs). The analysis spans three battery pack sizes, encompassing various constant discharge rates and nine distinct drive cycles representative of driving behaviours across different regions of India. Two interconnection topologies, categorised as “string” and “cross”, are examined. The findings reveal that cross-connected packs exhibit reduced energy output compared to string-connected configurations, which is reflected in the driving range outcomes observed during drive cycle simulations. Additionally, the study investigates the effects of standard deviation in cell parameters, concluding that an increased standard deviation (SD) leads to decreased energy output from the packs. Notably, string-connected packs demonstrate superior performance in terms of extractable energy under such conditions.
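The effect of cell-to-cell spread on a string-connected pack can be sketched as follows: each series string delivers only its weakest cell's capacity, and the parallel strings' capacities add. This is a simplified capacity-only view (no impedance or drive-cycle modelling as in the study), and the cell values are made up.

```python
def string_pack_capacity(cell_caps, cells_per_string):
    """'String' topology: series strings wired in parallel. A series string
    is limited by its weakest cell; parallel string capacities add."""
    strings = [cell_caps[i:i + cells_per_string]
               for i in range(0, len(cell_caps), cells_per_string)]
    return sum(min(s) for s in strings)

# two strings of four cells with a small manufacturing spread (Ah, illustrative)
cells = [5.00, 4.90, 5.10, 5.05, 4.80, 5.20, 5.00, 4.95]
pack_ah = string_pack_capacity(cells, 4)
```

Even this toy model reproduces the qualitative finding that a larger spread (standard deviation) in cell parameters reduces the extractable energy, since the minimum over a string drops as the spread grows.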
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
Rosenbrock–Wanner methods for systems of stiff ordinary differential equations have been well known since the 1970s. They have been continuously developed and are efficient for index-1 differential-algebraic equations as well. Their disadvantage that the Jacobian matrix has to be updated in every time step becomes more and more obsolete when automatic differentiation is used. Especially the family of Rodas methods has proven to be a standard in the Julia package DifferentialEquations. However, the fifth-order Rodas5 method undergoes order reduction for certain problem classes. Therefore, the goal of this paper is to compute a new set of coefficients for Rodas5 such that this order reduction is mitigated. The procedure is similar to the derivation of the methods Rodas4P and Rodas4P2. In addition, it is possible to provide new dense output formulas for Rodas5 and the new method Rodas5P. Numerical tests show that for higher accuracy requirements Rodas5P always belongs to the best methods within the Rodas family.
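The defining idea of Rosenbrock–Wanner methods is to linearise around the Jacobian so that each step only solves linear systems. A one-stage sketch (the linearly implicit Rosenbrock–Euler scheme, (I − hJ)k = f(y_n), y_{n+1} = y_n + h·k) is shown below for a scalar stiff test problem; the actual Rodas5/Rodas5P methods are multi-stage, higher-order schemes available in Julia's DifferentialEquations package, and the test problem here is illustrative.

```python
import math

def rosenbrock_euler(f, dfdy, y0, t0, t1, n):
    """One-stage linearly implicit (Rosenbrock-Euler) integration of y' = f(t, y).
    Each step solves (1 - h*J) k = f(t_n, y_n), then y += h*k; stable for
    stiff problems where explicit Euler with the same h would blow up."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        k = f(t, y) / (1.0 - h * dfdy(t, y))  # scalar linear solve
        y, t = y + h * k, t + h
    return y

# stiff scalar test problem: y' = -50 (y - cos t), slow solution near cos(t)
lam = 50.0
f    = lambda t, y: -lam * (y - math.cos(t))
dfdy = lambda t, y: -lam
y_end = rosenbrock_euler(f, dfdy, 1.0, 0.0, 2.0, 100)
```

Note that h·|J| = 1 here, well beyond the explicit Euler stability limit for this problem, yet the scheme tracks the slow solution.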
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m−2 (7.35 W m−2) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m−2 under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
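The retrieval with a linear module temperature model can be sketched by inverting a standard temperature-dependent PV power relation, P = P_stc · (G/1000) · (1 + γ(T_mod − 25)) with a Ross-type model T_mod = T_amb + k·G, which yields a quadratic in G. This is a minimal sketch under those assumed model forms; all coefficients (`p_stc`, `gamma`, `k_ross`) are illustrative, not the campaigns' calibration.

```python
def retrieve_ghi(p_meas, t_amb, p_stc=300.0, gamma=-0.004, k_ross=0.025):
    """Invert P = p_stc*(G/1000)*(1 + gamma*(t_amb + k_ross*G - 25))
    for the irradiance G [W/m^2], given measured power and ambient temperature.
    Solves a*G^2 + b*G + c = 0 and returns the physical (smaller positive) root."""
    a = gamma * k_ross
    b = 1.0 + gamma * (t_amb - 25.0)
    c = -1000.0 * p_meas / p_stc
    disc = b * b - 4.0 * a * c
    return (-b + disc ** 0.5) / (2.0 * a)

g = retrieve_ghi(150.0, 25.0)  # half of STC power at 25 degC ambient
```

In practice the retrieved irradiance would then be screened and compared against pyranometer, satellite, and weather-model data as described above.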
Atomic oxygen is a key species in the mesosphere and thermosphere of Venus. It peaks in the transition region between the two dominant atmospheric circulation patterns, the retrograde super-rotating zonal flow below 70 km and the subsolar to antisolar flow above 120 km altitude. However, past and current detection methods are indirect and based on measurements of other molecules in combination with photochemical models. Here, we show direct detection of atomic oxygen on the dayside as well as on the nightside of Venus by measuring its ground-state transition at 4.74 THz (63.2 µm). The atomic oxygen is concentrated at altitudes around 100 km with a maximum column density on the dayside where it is generated by photolysis of carbon dioxide and carbon monoxide. This method enables detailed investigations of the Venusian atmosphere in the region between the two atmospheric circulation patterns in support of future space missions to Venus.
Estimates of global horizontal irradiance (GHI) from reanalysis and satellite-based data are the most important information for the design and monitoring of PV systems in Africa, but their quality is unknown due to the lack of in situ measurements. In this study, we evaluate the performance of hourly GHI from state-of-the-art reanalysis and satellite-based products (ERA5, CAMS, MERRA-2, and SARAH-2) with 37 quality-controlled in situ measurements from novel meteorological networks established in Burkina Faso and Ghana under different weather conditions for the year 2020. The effects of clouds and aerosols are also considered in the analysis by using common performance measures for the main quality attributes and a new overall performance value for the joint assessment. The results show that satellite data perform better than reanalysis data under different atmospheric conditions. Nevertheless, both data sources exhibit significant errors of more than 150 W/m2 in terms of RMSE under cloudy skies compared to clear skies. The new measure of overall performance clearly shows that the hourly GHI derived from CAMS and SARAH-2 could serve as viable alternative data for assessing solar energy in the different climatic zones of West Africa.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choice of the collision model and of the stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
This paper presents a novel approach to address noise, vibration, and harshness (NVH) issues in electrically assisted bicycles (e-bikes) caused by the drive unit. By investigating and optimising the structural dynamics during early product development, NVH can be decisively improved and valuable resources can be saved, emphasising its significance for enhancing riding performance. The paper offers a comprehensive analysis of the e-bike drive unit’s mechanical interactions among relevant components, culminating—to the best of our knowledge—in the development of the first high-fidelity model of an entire e-bike drive unit. The proposed model uses the principles of elastic multi-body dynamics (eMBD) to elucidate the structural dynamics in dynamic-transient calculations. Comparing power spectra between measured and simulated motion variables validates the chosen model assumptions. The measurements of physical samples utilise accelerometers, contactless laser Doppler vibrometry (LDV) and various test arrangements, which are replicated in simulations and provide accessibility to measure vibrations on rotating shafts and stationary structures. In summary, this integrated system-level approach can serve as a viable starting point for comprehending and managing the NVH behaviour of e-bikes.
Trends of environmental awareness, combined with a focus on personal fitness and health, motivate many people to switch from cars and public transport to micromobility solutions, namely bicycles, electric bicycles, cargo bikes, or scooters. To accommodate urban planning for these changes, cities and communities need to know how many micromobility vehicles are on the road. In a previous work, we proposed a concept for a compact, mobile, and energy-efficient system to classify and count micromobility vehicles utilizing uncooled long-wave infrared (LWIR) image sensors and a neuromorphic co-processor. In this work, we elaborate on this concept by focusing on the feature extraction process with the goal to increase the classification accuracy. We demonstrate that even with a reduced feature list compared with our early concept, we manage to increase the detection precision to more than 90%. This is achieved by reducing the images of 160 × 120 pixels to only 12 × 18 pixels and combining them with contour moments to a feature vector of only 247 bytes.
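The feature-reduction step described above can be sketched as follows. The subsampling and raw-moment code below are simplified stand-ins for the actual pipeline (which would use proper image rescaling and contour moments, e.g. OpenCV's `cv2.moments`); the object placement in the toy frame is made up.

```python
import numpy as np

def downscale(img, out_h, out_w):
    """Subsample a frame to out_h x out_w by index striding -- a simple
    stand-in for the resolution reduction described in the text."""
    h, w = img.shape
    rows = (np.arange(out_h) * h) // out_h
    cols = (np.arange(out_w) * w) // out_w
    return img[np.ix_(rows, cols)]

def raw_moments(mask):
    """Raw moments m00, m10, m01 of a binary mask (contour moments in the
    real pipeline carry shape information beyond the tiny image)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.size, xs.sum(), ys.sum()], dtype=np.float64)

frame = np.zeros((120, 160), dtype=np.uint8)   # one 160 x 120 LWIR frame
frame[40:80, 60:110] = 255                     # a warm object (illustrative)
small = downscale(frame, 18, 12)               # reduced to 12 x 18 pixels
feature = np.concatenate([small.ravel().astype(np.float64),
                          raw_moments(frame > 0)])
```

The tiny image plus a handful of moment values is what keeps the feature vector small enough for a neuromorphic co-processor.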
Battery lifespan estimation is essential for effective battery management systems, aiding users and manufacturers in strategic planning. However, accurately estimating battery capacity is complex, owing to diverse capacity fading phenomena tied to factors such as temperature, charge-discharge rate, and rest period duration. In this work, we present an innovative approach that integrates real-world driving behaviors into cyclic testing. Unlike conventional methods that lack rest periods and involve fixed charge-discharge rates, our approach involves 1000 unique test cycles tailored to specific objectives and applications, capturing the nuanced effects of temperature, charge-discharge rate, and rest duration on capacity fading. This yields comprehensive insights into cell-level battery degradation, unveiling growth patterns of the solid electrolyte interface (SEI) layer and lithium plating, influenced by cyclic test parameters. The results yield critical empirical relations for evaluating capacity fading under specific testing conditions.
A Fourier scatterometry setup is evaluated to recover the key parameters of optical phase gratings. Based on these parameters, systematic errors in the printing process of two-photon polymerization (TPP) gray-scale lithography three-dimensional printers can be compensated, namely tilt and curvature deviations. The proposed setup is significantly cheaper than a confocal microscope, which is usually used to determine calibration parameters for compensation of the TPP printing process. The grating parameters recovered this way are compared to those obtained with a confocal microscope. A clear correlation between confocal and scatterometric measurements is first shown for structures containing either tilt or curvature. The correlation is also shown for structures containing a mixture of tilt and curvature errors (squared Pearson coefficient r2 = 0.92). This compensation method is demonstrated on a TPP printer: a diffractive optical element printed with correction parameters obtained from Fourier scatterometry shows a significant reduction in noise as compared to the uncompensated system. This verifies the successful reduction of tilt and curvature errors. Further improvements of the method are proposed, which may enable the measurements to become more precise than confocal measurements in the future, since scatterometry is not affected by the diffraction limit.
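The squared Pearson coefficient used to quantify the scatterometry-confocal correlation can be computed as below; the two toy measurement series are made up and only illustrate the calculation.

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation coefficient between two measurement series."""
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r * r

# toy example: scatterometric vs. confocal parameter estimates (made-up values)
confocal = [0.10, 0.22, 0.29, 0.41, 0.50]
scatter  = [0.12, 0.20, 0.31, 0.40, 0.52]
r2 = r_squared(confocal, scatter)
```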
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
Novel methods for contingency analysis of gas transport networks are presented. They are motivated by the transition of our energy system where hydrogen plays a growing role. The novel methods are based on a specific method for topological reduction and so-called supernodes. Stationary Euler equations with advanced compressor thermodynamics and a gas law allowing for gas compositions with up to 100% hydrogen are used. Several measures and plots support an intuitive comparison and analysis of the results. In particular, it is shown that the newly developed methods can estimate locations and magnitudes of additional capacities (injection, buffering, storage etc.) with a reasonable performance for networks of relevant composition and size.
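The topological reduction idea can be illustrated with a toy graph contraction: a junction connecting exactly two pipes can be eliminated by merging the pipes into one equivalent edge. This sketch uses lumped, additive series resistances, which is a linearised simplification — the paper works with stationary Euler equations and compressor thermodynamics, and its supernode construction is not reproduced here; parallel edges are also out of scope for this sketch.

```python
def reduce_series(edges):
    """edges: dict {(u, v): r} with r an additive series resistance.
    Repeatedly eliminates degree-2 nodes by merging their two incident
    edges, until no degree-2 node remains."""
    edges = dict(edges)
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        node = next((n for n, d in deg.items() if d == 2), None)
        if node is None:
            return edges
        e1, e2 = [e for e in edges if node in e]
        ends = [x for e in (e1, e2) for x in e if x != node]
        edges[tuple(sorted(ends))] = edges.pop(e1) + edges.pop(e2)

# a chain a-b-c-d collapses to a single equivalent edge a-d
reduced = reduce_series({("a", "b"): 1.0, ("b", "c"): 2.0, ("c", "d"): 3.0})
```

Shrinking the network this way before a contingency sweep keeps the per-scenario solves cheap, which is what makes exhaustive contingency analysis tractable.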
The cube in cube approach was used by Paul and Ishai-Cohen to model and derive formulas for filler content dependent Young's moduli of particle filled composites assuming perfect filler matrix adhesion. Their formulas were chosen because of their simplicity, and recalculated using an elementary volume (EV) approach which transforms spherical inclusions to cubic inclusions. The EV approach led to an expression for the composites' moduli that allows introducing an adhesion factor kadh ranging from 0 to 1 to take reduced filler matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable kadh values for PA66, PBT, PP, PE-LD and BR, which are in line with their surface energies. Further analysis showed that stiffening only occurs if kadh exceeds [Formula: see text] and depends on the ratio of matrix modulus and filler modulus. The modified model allows for a quick calculation of any particle filled composite for known matrix modulus EM, filler modulus EF, filler volume content vF and adhesion factor kadh. Thus, finite element analysis (FEA) simulations of any particle filled polymer parts as well as materials selection are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses if one deviates from the cubic arrangement. At high filler contents the assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 weight %). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied.
Furthermore, this indicates that the cube in cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of the EV arrangement or the spatial filler distribution from cubic arrangements, and (ii) increasing interaction between particles with the formation of a particle network within the matrix, violating the EV assumption of homogeneous dispersion.
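For orientation, the perfect-adhesion baseline can be sketched using the commonly cited form of Paul's cube-in-cube estimate. This assumes that form is the one recalled above; the modified expression with the adhesion factor kadh is given in the paper itself and is not reproduced here.

```python
def paul_modulus(E_m, E_f, v_f):
    """Paul's cube-in-cube estimate of a particle composite's Young's modulus,
    assuming perfect filler-matrix adhesion (commonly cited form; m = E_f/E_m):
        E_c = E_m * (1 + (m-1)*v^(2/3)) / (1 + (m-1)*(v^(2/3) - v))."""
    m = E_f / E_m
    v23 = v_f ** (2.0 / 3.0)
    return E_m * (1.0 + (m - 1.0) * v23) / (1.0 + (m - 1.0) * (v23 - v_f))
```

The modified model multiplies the cubic inclusion's edge length by kadh, shrinking the stress transfer area; at kadh = 1 it must recover the perfect-adhesion value sketched here.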
This study investigates the initial stage of the thermo-mechanical crystallization behavior for uni- and biaxially stretched polyethylene. The models are based on a mesoscale molecular dynamics approach. We take constraints that occur in real-life polymer processing into account, especially with respect to the blowing stage of the extrusion blow-molding process. For this purpose, we deform our systems using a wide range of stretching levels before they are quenched. We discuss the effects of the stretching procedures on the micro-mechanical state of the systems, characterized by entanglement behavior and nematic ordering of chain segments. For the cooling stage, we use two different approaches which allow for free or hindered shrinkage, respectively. During cooling, crystallization kinetics are monitored: We precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influence crystallization behavior. Our models reveal that the main stretching direction dominates microscopic states of the different systems. We are able to show that crystallization mainly depends on the (dis-)entanglement behavior. Nematic ordering plays a secondary role.
The utilization of simulation procedures is gaining increasing attention in the product development of extrusion blow molded parts. However, some simulation steps, like the simulation of shrinkage and warpage, are still associated with uncertainties. The reason for this is on the one hand a lack of standardized interfaces for the transfer of simulation data between different simulation tools, and on the other hand the complex time-, temperature- and process-dependent material behavior of the semi-crystalline polymers used. Using a new vendor-neutral interface standard for the data transfer, the shrinkage analysis of a simple blow molded part is investigated and compared to experimental data. A linear viscoelastic material model in combination with an orthotropic process- and temperature-dependent thermal expansion coefficient is used for the shrinkage prediction. A good agreement is observed. Finally, critical parameters in the simulation models that strongly influence the shrinkage analysis are identified by a sensitivity study.
In this paper, a gas-to-power (GtoP) system for power outages is digitally modeled and experimentally developed. The design includes a solid-state hydrogen storage system composed of TiFeMn as a hydride-forming alloy (6.7 kg of alloy in five tanks) and an air-cooled fuel cell (maximum power: 1.6 kW). The hydrogen storage system is charged at room temperature and 40 bar of hydrogen pressure, reaching about 110 g of hydrogen capacity. In an emergency use case of the system, hydrogen is supplied to the fuel cell, and the waste heat coming from the exhaust air of the fuel cell is used for the endothermic dehydrogenation reaction of the metal hydride. This GtoP system demonstrates fast, stable, and reliable responses, delivering between 149 W and 596 W under constant as well as dynamic load conditions. A comprehensive and novel simulation approach based on a network model is also applied. The developed model is validated under static and dynamic power load scenarios, demonstrating excellent agreement with the experimental results.
The design of a fully superconducting wind power generator is influenced by several factors. Among them, a low number of pole pairs is desirable to achieve low AC losses in the superconducting stator winding, which greatly influences the cooling system design and, consequently, the efficiency of the entire wind power plant. However, it has been identified that a low number of pole pairs in a superconducting generator tends to greatly increase its output voltage, which in turn creates challenging conditions for the necessary power electronic converter. This study highlights the interdependencies between the design of a fully superconducting 10 MW wind power generator and the corresponding design of its power electronic converter.
This paper investigates the effect of voltage sensors on the measurement of transient voltages for power semiconductors in a Double Pulse Test (DPT) environment. We adapt previously published models that were developed for current sensors and apply them to voltage sensors to evaluate their suitability for DPT applications. Similarities and differences between transient current and voltage sensors are investigated, and the resulting methodology is applied to commercially available and experimental voltage sensors. Finally, a selection aid for given measurement tasks is derived that focuses on the measurement of fast-switching power semiconductors.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
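The time-step restriction referred to in this abstract can be illustrated with the standard CFL estimate for an explicit compressible solver. The function below is a generic sketch of that estimate under textbook assumptions; it is not taken from the paper, and the example values are hypothetical.

```python
def cfl_time_step(dx: float, u_max: float, c_sound: float, cfl: float = 0.5) -> float:
    """Largest stable explicit time step under the CFL condition:
    the fastest signal (flow speed plus sound speed) must not travel
    more than `cfl` grid cells per time step."""
    return cfl * dx / (u_max + c_sound)

# Example: 1 cm cells, 100 m/s peak flow speed, 340 m/s sound speed
dt = cfl_time_step(dx=0.01, u_max=100.0, c_sound=340.0)
```

A semi-Lagrangian scheme sidesteps this bound because its update follows characteristics instead of propagating information one cell per step, which is what allows the much larger time steps reported above.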
Animal models are often needed in cancer research but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We, therefore, developed a simple eight-step process to fabricate a 3D replica from computer tomography (CT) data using solely open access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and use of open access software allows for a wide range of possible uses.
In this study, we investigate the thermo-mechanical relaxation and crystallization behavior of polyethylene using mesoscale molecular dynamics simulations. Our models specifically mimic constraints that occur in real-life polymer processing: After strong uniaxial stretching of the melt, we quench and release the polymer chains at different loading conditions. These conditions allow for free or hindered shrinkage, respectively. We present the shrinkage and swelling behavior as well as the crystallization kinetics over up to 600 ns simulation time. We are able to precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influences crystallization and relaxation behavior. From our models, we determine the temperature dependent crystallization rate of polyethylene, including crystallization onset temperature.
Multi-epoch searches for relativistic binary pulsars and fast transients in the Galactic Centre
(2021)
Introduction: Chronic pain is a frequent severe disease and often associated with anxiety, depression, insomnia, disability, and reduced quality of life. This maladaptive condition is further characterized by sensory loss, hyperalgesia, and allodynia. Blue light has been hypothesized to modulate sensory neurons and thereby influence nociception.
Objectives: Here, we compared the effects of blue light vs red light and thermal control on pain sensation in a human experimental pain model.
Methods: Pain, hyperalgesia, and allodynia were induced in 30 healthy volunteers through high-density transcutaneous electrical stimulation. Subsequently, blue light, red light, or thermal control treatment was applied in a cross-over design. The nonvisual effects of the respective light treatments were examined using a well-established quantitative sensory testing protocol. Somatosensory parameters as well as pain intensity and quality were scored.
Results: Blue light substantially reduced spontaneous pain as assessed by numeric rating scale pain scoring. Similarly, pain quality was significantly altered as assessed by the German counterpart of the McGill Pain Questionnaire. Furthermore, blue light showed antihyperalgesic, antiallodynic, and antihypesthesic effects in contrast to red light or thermal control treatment.
Conclusion: Blue-light phototherapy ameliorates pain intensity and quality in a human experimental pain model and reveals antihyperalgesic, antiallodynic, and antihypesthesic effects. Therefore, blue-light phototherapy may be a novel approach to treat pain in multiple conditions.
Design of a Medium Voltage Generator with DC-Cascade for High Power Wind Energy Conversion Systems
(2021)
This paper presents a new concept for generating medium voltage (MV) in wind power applications to avoid an additional transformer. To this end, the generator must be redesigned with additional constraints, together with a new topology for the power rectifier system that connects multiple low voltage (LV) power rectifiers in series and parallel to increase the DC output voltage. This combination of parallel and series connection of rectifiers is introduced as the DC-cascade. With the resulting DC-cascade, a medium output voltage is achieved with low voltage rectifiers and without a bulky transformer, reducing the effort required to reach a medium DC voltage with a simple rectifier system. In this context, a suitable DC-cascade control is presented and verified with a laboratory test setup. A gearless synchronous generator, which is highly segmented so that each segment can be connected to its own power rectifier, is investigated. Due to the mixed AC and DC voltage stress imposed by the DC-cascade structure, the design of the generator insulation becomes more demanding, which influences the copper fill factor and the design of the cooling system. A design strategy for the overall generator is carried out considering these new boundary conditions.
Ghana suffers from frequent power outages, which can be compensated by off-grid energy solutions. Photovoltaic-hybrid systems are becoming more and more important for rural electrification due to their potential to offer a clean and cost-effective energy supply. However, uncertainties related to the prediction of electrical loads and solar irradiance result in inefficient system control and can lead to an unstable electricity supply, which is vital for the high reliability required for applications within the health sector. Model predictive control (MPC) algorithms present a viable option to tackle those uncertainties compared to rule-based methods, but strongly rely on the quality of the forecasts. This study tests and evaluates (a) a seasonal autoregressive integrated moving average (SARIMA) algorithm, (b) an incremental linear regression (ILR) algorithm, (c) a long short-term memory (LSTM) model, and (d) a customized statistical approach for electrical load forecasting on real load data of a Ghanaian health facility, considering initially limited knowledge of load and pattern changes through the implementation of incremental learning. The correlation of the electrical load with exogenous variables was determined to map out possible enhancements within the algorithms. Results show that all algorithms achieve high accuracy with a median normalized root mean square error (nRMSE) <0.1 and differing robustness towards load-shifting events, gradients, and noise. While the SARIMA algorithm and the linear regression model show extreme error outliers of nRMSE >1, the LSTM model and the customized statistical approach perform better, with a median nRMSE of 0.061 and a stable error distribution with a maximum nRMSE of <0.255. This study therefore concludes in favor of the LSTM model and the statistical approach for MPC applications within photovoltaic-hybrid system solutions in the Ghanaian health sector.
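The nRMSE metric used in this study to compare the forecasting algorithms can be sketched as follows. Note that the normalization convention is our assumption (here, the range of the observed series); other common variants normalize by the mean or by the maximum, and the paper may use a different one.

```python
import math

def nrmse(actual: list[float], forecast: list[float]) -> float:
    """Root mean square error of a forecast, normalized by the
    range (max - min) of the observed series."""
    n = len(actual)
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    return rmse / (max(actual) - min(actual))

# A perfect forecast yields 0.0; larger values mean worse forecasts
# relative to the spread of the observed load.
error = nrmse([10.0, 14.0, 12.0, 18.0], [11.0, 13.0, 12.0, 17.0])
```

Because the error is scaled by the spread of the data, nRMSE values are comparable across facilities with very different absolute load levels, which is why it is a common choice for benchmarking load-forecasting algorithms.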
Suitability of Current Sensors for the Measurement of Switching Currents in Power Semiconductors
(2021)
This paper investigates the impact of current sensors on the measurement of transient currents in fast-switching power semiconductors in a double pulse test (DPT) environment. We review previous research that assesses the influence of current sensors on a DPT circuit through mathematical modeling. The developed selection aids can be used to identify suitable current sensors for transient current measurements of fast-switching power semiconductors and to estimate the error introduced by their insertion into the DPT circuit. Afterwards, this analysis is extended by including further elements from real DPT applications to increase the consistency of the error estimation with practical situations and setups. Both methods are compared and their individual advantages and drawbacks are discussed. Finally, a recommendation on when to use which method is derived.