Departments, institutes and facilities
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (485)
Document Type
- Conference Object (216)
- Article (181)
- Part of a Book (27)
- Preprint (17)
- Report (11)
- Doctoral Thesis (8)
- Contribution to a Periodical (6)
- Research Data (6)
- Book (monograph, edited volume) (4)
- Part of Periodical (4)
- Lecture (2)
- Other (1)
- Patent (1)
- Working Paper (1)
Keywords
- lignin (7)
- Quality diversity (6)
- West Africa (6)
- advanced applications (5)
- modeling of complex systems (5)
- stem cells (5)
- Hydrogen storage (4)
- Lattice Boltzmann Method (4)
- Lignin (4)
- additive (4)
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases are mostly applied to other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing the data structures with other domains remains a challenge, as current standards such as ISO 10303 STEP ("Standard for the Exchange of Product model data") fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink the integration of simulation results. This research describes a new software architecture and application methodology based on the industrial standard "Virtual Material Modelling in Manufacturing" (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It provides data permeability from the global digital-twin level down to the detailed numerical values of data entries, and even new key indicators, in a three-step approach: it represents a file as an instance in a knowledge graph, queries the file's metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
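The three-step approach can be sketched with an in-memory triple store as a minimal stand-in for a knowledge graph; the `vmap:` predicate names and the file path below are hypothetical, not the actual VMAP ontology:

```python
# Minimal in-memory "knowledge graph": a set of (subject, predicate, object)
# triples. All vmap: names are illustrative placeholders.
triples = set()

# Step 1: represent a simulation file as an instance in the graph
triples.add(("sim/run_001.h5", "rdf:type", "vmap:SimulationFile"))
triples.add(("sim/run_001.h5", "vmap:solver", "FEM"))
triples.add(("sim/run_001.h5", "vmap:maxStress", 412.5))

# Step 2: query the file's metadata (None acts as a wildcard)
def query(s=None, p=None, o=None):
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Step 3: a semantically described process derives a new key indicator
# from the queried metadata and instantiates it back into the graph
for subj, _, stress in query(p="vmap:maxStress"):
    triples.add((subj, "vmap:stressUtilisation", stress / 500.0))
```

In a production setting the triple set would be an RDF store queried via SPARQL; the sketch only illustrates the instance/query/derive cycle.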
Accurate global horizontal irradiance (GHI) forecasting is critical for integrating solar energy into the power grid and operating solar power plants. The Weather Research and Forecasting model with its solar radiation extension (WRF-Solar) has been used to forecast solar irradiance in different regions around the world. However, the application of the WRF-Solar model to the prediction of GHI in West Africa, particularly Ghana, has not yet been investigated. The aim of this study is to evaluate the performance of the WRF-Solar model for predicting GHI in Ghana, focusing on three automatic weather stations (Akwatia, Kumasi and Kologo) for the year 2021. We used two one-way nested domains (D1 = 15 km and D2 = 3 km) to investigate the ability of the fully coupled WRF-Solar model to forecast GHI up to 72 hours ahead under different atmospheric conditions. The initial and lateral boundary conditions were taken from the ECMWF high-resolution operational forecasts. Our findings reveal that the WRF-Solar model performs better under clear skies than cloudy skies. Under clear skies, the 72-hour GHI forecasts for Kologo performed best, with a first-day nRMSE of 9.62 %. However, forecasting GHI under cloudy skies at all three sites had significant uncertainties. Additionally, the WRF-Solar model is able to reproduce the observed GHI diurnal cycle under high aerosol optical depth (AOD) conditions on most of the selected days. This study enhances the understanding of the WRF-Solar model's capabilities and limitations for GHI forecasting in West Africa, particularly in Ghana. The findings provide valuable information for stakeholders involved in solar energy generation and grid integration towards optimized management in the region.
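The reported error metric can be reproduced with a short function. Here nRMSE is normalised by the mean observed irradiance, which is a common convention; the paper's exact normalisation may differ:

```python
import math

def nrmse_percent(forecast, observed):
    """RMSE normalised by the mean of the observations, in percent."""
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(observed))
    return 100.0 * rmse / (sum(observed) / len(observed))
```

Applied to an hourly GHI series, a first-day nRMSE of 9.62 % would mean the forecast's RMSE is under a tenth of the mean observed irradiance.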
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming and local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation-time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, no single scheme is optimally suited for a wide range of different flows. The reason for this lies in the static nature of these methods; they cannot adapt to local or global flow features. Still, adaptive filtering using a shear sensor constitutes an exception to this. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physically informed neural network. In this study, the training data consists of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator demonstrates a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
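At its core the idea is a BGK-style collision step whose relaxation rate varies per cell instead of being a global constant. The sketch below shows only that structural difference (one distribution direction, plain lists); the actual operator acts on bulk-viscosity-related rates predicted by the trained network:

```python
# One relaxation-toward-equilibrium collision step on a list of cells.
# f, f_eq: per-cell distribution and equilibrium values;
# omega: per-cell relaxation rates (in the paper, the net's output;
# in a classic BGK scheme, a single global constant).
def collide(f, f_eq, omega):
    return [fi - w * (fi - fe) for fi, fe, w in zip(f, f_eq, omega)]
```

With `omega` constant this reduces to standard BGK; letting it vary in space and time is what allows the operator to add dissipation only where the flow needs it.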
Protocol for conducting advanced cyclic tests in lithium-ion batteries to estimate capacity fade
(2024)
Using advanced cyclic testing techniques improves accuracy in estimating capacity fade and incorporates real-world scenarios in battery cycle aging assessment. Here, we present a protocol for conducting cyclic tests in lithium-ion batteries to estimate capacity fade. We describe steps for implementing strategies for accounting for variations in rest periods, charge-discharge rates, and temperatures. We also detail procedures for validating tests experimentally within a climate-controlled chamber and for developing an empirical model to estimate capacity fading under various testing objectives. For complete details on the use and execution of this protocol, please refer to Mulpuri et al. [1].
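Empirical capacity-fade models of the kind mentioned above often combine a power law in cycle number with an Arrhenius temperature factor. The sketch below uses that common form with placeholder constants; it is not the protocol's fitted model:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def capacity_fade_pct(cycles, temp_k, a=6.0e3, ea=31500.0, z=0.55):
    """Estimated capacity loss in % after `cycles` full cycles at
    cell temperature `temp_k` (kelvin).

    a (pre-exponential factor), ea (activation energy, J/mol) and
    z (power-law exponent) are hypothetical fit parameters that would
    be identified from the cyclic test data."""
    return a * math.exp(-ea / (R * temp_k)) * cycles ** z
```

Fade predicted by this form grows sub-linearly with cycle count and accelerates with temperature, which is why the protocol varies rest periods, C-rates, and chamber temperature during testing.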
Traditional and newly developed testing methods were used for extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure-sensitive adhesives (PSA). Large amplitude oscillatory shear tests of PSAs were correlated to the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude stretches and yields the PSA's microstructure, first consolidating the network and then, at higher amplitudes, releasing it. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed a clear dependence on resin content and rod geometry and display the PSA's viscoelasticity, resulting in either high tack with long stretched fibrils or non-adhesion with brittle behavior. Moreover, diffusion of water/sweat during the TTS's application might influence its performance. Therefore, a dielectric-analysis-based evaluation method was developed that displays water diffusion into the PSA, from which the diffusion coefficient can be determined, and it showed a clear dependence on material and resin content. All methods allow for advanced product-oriented material testing that can be utilized in further TTS development.
Force field (FF) based molecular modeling is a widely used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of the various properties' errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
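The weighted objective can be sketched as follows; the `log1p` projection and the weight values are illustrative choices standing in for the non-linear projection and weighting factors described above:

```python
import math

def weighted_loss(err_density, err_rce, w_density=0.7, w_rce=0.3):
    """Combine the property-specific errors (bulk-phase density vs.
    relative conformational energies) with weighting factors and a
    non-linear (log1p) projection, so that neither objective's
    magnitude dominates the optimization."""
    return w_density * math.log1p(err_density) + w_rce * math.log1p(err_rce)
```

Sweeping `w_density`/`w_rce` and re-optimizing the force field parameters for each setting is, in essence, the weighting-strategy comparison the study performs.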
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m⁻², respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces forecast performance.
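The skill score quoted above is conventionally defined relative to a reference forecast; a minimal sketch using RMSE with smart persistence as the reference (the paper may use a different error metric in its definition):

```python
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score_pct(model, persistence, obs):
    """Forecast skill in %: positive when the model beats the
    (smart) persistence reference, zero when it matches it."""
    return 100.0 * (1.0 - rmse(model, obs) / rmse(persistence, obs))
```

A skill score of 18.6 % thus means the model's error is roughly 81 % of the smart-persistence error over the evaluation set.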
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size up to 191×191 to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
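Kernels larger than the feature map can stay sub-quadratic because long convolutions can be evaluated via the FFT. A 1-D sketch of that underlying trick (not the HyenaPixel implementation itself):

```python
import numpy as np

def fft_conv(x, k):
    """Convolve signal x with a (possibly longer) kernel k in
    O(n log n) via the FFT, instead of O(n*m) directly; returns
    the first len(x) outputs of the full linear convolution."""
    n = len(x) + len(k) - 1          # zero-pad to avoid circular wrap-around
    y = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)
    return y[: len(x)]
```

In 2-D the same identity applies per axis, which is what makes a 191×191 implicit kernel affordable over a feature map smaller than the kernel itself.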
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, captured at the places of formation, transferred under high pressure in a liquid or supercritical state and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. Meanwhile, the transport equations for different fluids are essentially the same, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flap traps on the stability of fluid transport simulations is studied. It is shown that modeling of these elements can lead to instabilities, both in stationary and dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality, also for dynamic simulations, is demonstrated in a number of numerical experiments.
In addition to the long-term goal of mitigating climate change, the current geopolitical upheavals heighten the urgency to transform Europe's energy system. This involves expanding renewable energies while managing intermittent electricity generation. Hydrogen is a promising solution to balance generation and demand while simultaneously decarbonizing complex applications. To model the energy system's transformation, the project TransHyDE-Sys, funded by the German Federal Ministry of Education and Research, takes an integrated approach beyond traditional energy system analysis, incorporating a diverse range of more detailed methods and tools. Herein, TransHyDE-Sys is situated within the recent policy discussion. It addresses the requirements for energy system modeling to gain insights into transforming the European hydrogen and energy infrastructure. It identifies knowledge gaps in the existing literature on hydrogen infrastructure-oriented energy system modeling and presents the research approach of TransHyDE-Sys. TransHyDE-Sys analyzes the development of hydrogen and energy infrastructures from "the system" and "the stakeholder" perspectives. The integrated modeling landscape captures temporal and spatial interactions among hydrogen, electricity, and natural gas infrastructure, providing comprehensive insights for systemic infrastructure planning. This allows a more accurate representation of the energy system's dynamics and aids decision-making for achieving sustainable and efficient development and integration of hydrogen networks.
This study addresses the common occurrence of cell-to-cell variations arising from manufacturing tolerances and their implications during battery production. The focus is on assessing the impact of these inherent differences in cells and exploring diverse cell and module connection methods on battery pack performance and their subsequent influence on the driving range of electric vehicles (EVs). The analysis spans three battery pack sizes, encompassing various constant discharge rates and nine distinct drive cycles representative of driving behaviours across different regions of India. Two interconnection topologies, categorised as “string” and “cross”, are examined. The findings reveal that cross-connected packs exhibit reduced energy output compared to string-connected configurations, which is reflected in the driving range outcomes observed during drive cycle simulations. Additionally, the study investigates the effects of standard deviation in cell parameters, concluding that an increased standard deviation (SD) leads to decreased energy output from the packs. Notably, string-connected packs demonstrate superior performance in terms of extractable energy under such conditions.
Pollution with anthropogenic waste, particularly persistent plastic, has now reached every remote corner of the world. The French Atlantic coast, given its extensive coastline, is particularly affected. To gain an overview of current plastic pollution, this study examined a stretch of 250 km along the Silver Coast of France. Sampling was conducted at a total of 14 beach sections, each with five sampling sites in a transect. At each collection site, a square of 0.25 m² was marked. The top 5 cm of beach sediment was collected and sieved on-site using an analysis sieve (mesh size 1 mm), resulting in a total of approximately 0.8 m³ of sediment, corresponding to a total weight of 1300 kg of examined beach sediment. A total of 1972 plastic particles were extracted and analysed using infrared spectroscopy, corresponding to 1.5 particles kg⁻¹ of beach sediment. Pellets (885 particles), polyethylene as the polymer type (1349 particles), and particles in the size range of microplastics (943 particles) were most frequently found. The significant pollution by pellets suggests that the spread of plastic waste was not primarily attributable to tourism at the time of sampling (February/March 2023). The substantial accumulation of meso- and macro-waste (863 and 166 particles, respectively) also indicates that research focusing on microplastics should be expanded to include these size categories, as microplastics can develop from them over time.
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that few data sets are available, which leads to limited availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation for this task in Arabic, as well as the first application of multi-task pre-training in this context.
TREE Jahresbericht 2021/2022
(2023)
The TREE institute is pleased to present its annual report for the years 2021 and 2022. Look back with us on two challenging years.
Our new double annual report 2021/2022 contains many interesting contributions from our exciting, interdisciplinary research projects in the areas of energy, modeling and simulation, drone research, materials and processes, and technology communication.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques for the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments, using artificial neural networks and tree-based machine learning methods, showcasing the information that can be derived from the different sensors. We found that residual neural networks (ResNet) perform sufficiently well with weighted precision scores up to 79 % only by using the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances.
We found that this dataset attribute negatively affects the models' performances for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
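The weighted precision reported above averages per-class precision using each class's support (true-label count) as its weight, so abundant species dominate the score while scarce species can still score poorly class-wise. A minimal stdlib sketch of the metric:

```python
from collections import Counter

def weighted_precision(y_true, y_pred):
    """Support-weighted average of per-class precision.
    Classes never predicted contribute precision 0."""
    support = Counter(y_true)
    n = len(y_true)
    total = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        predicted = sum(1 for p in y_pred if p == c)
        precision = tp / predicted if predicted else 0.0
        total += (support[c] / n) * precision
    return total
```

This weighting explains the gap noted above between the high overall scores and the weaker class-wise precision of underrepresented species.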
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. The transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 pp short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
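The ranking step can be sketched as follows, with the regressor's outputs replaced by placeholder scores (function and variable names are hypothetical, not FRED's API):

```python
def rank_candidates(pattern_hits, predicted_f1):
    """pattern_hits: maps each learned SPARQL pattern to the entities
    it retrieves from the knowledge base for a given question.
    predicted_f1: maps each pattern to the regressor's expected F1.
    Entities are ranked by the best-scoring pattern retrieving them,
    so every result stays attributable to a concrete pattern."""
    best = {}
    for pattern, entities in pattern_hits.items():
        for entity in entities:
            best[entity] = max(best.get(entity, 0.0), predicted_f1[pattern])
    return sorted(best, key=best.get, reverse=True)
```

Because each candidate's score traces back to a named pattern, the ranking itself carries the interpretability the paper highlights.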