H-BRS Bibliography
Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE): 476 publications
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size, up to 191×191, to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
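A minimal sketch of the mechanism that makes such oversized kernels affordable: applying a 2D convolution in the Fourier domain costs O(N log N) instead of O(N·K) per pixel, so a kernel larger than the feature map stays sub-quadratic. Shapes and names here are illustrative, not the authors' implementation.

```python
# FFT-based 2D "long convolution": kernel may exceed the feature-map size.
import numpy as np

def fft_conv2d(feature_map: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Linear 2D convolution of an (H, W) map with an arbitrarily large kernel."""
    H, W = feature_map.shape
    kh, kw = kernel.shape
    # Zero-pad both operands to the full linear-convolution size
    # to avoid circular wrap-around artifacts.
    ph, pw = H + kh - 1, W + kw - 1
    F = np.fft.rfft2(feature_map, s=(ph, pw))
    K = np.fft.rfft2(kernel, s=(ph, pw))
    out = np.fft.irfft2(F * K, s=(ph, pw))
    # Crop the "same"-sized central region (odd kernel sizes assumed).
    return out[kh // 2 : kh // 2 + H, kw // 2 : kw // 2 + W]

feature_map = np.random.randn(56, 56)
kernel = np.random.randn(191, 191)   # kernel larger than the feature map
print(fft_conv2d(feature_map, kernel).shape)  # (56, 56)
```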
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, which is captured at its sources, transferred under high pressure in a liquid or supercritical state, and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. Nevertheless, the transport equations for different fluids are essentially the same, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flap traps on the stability of fluid transport simulations is studied. It is shown that the modeling of these elements can lead to instabilities in both stationary and dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality, also for dynamic simulations, is demonstrated in a number of numerical experiments.
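To illustrate the kind of regularization discussed above: a flap trap (non-return valve) has a discontinuous characteristic, since flow is blocked for negative pressure differences, which can destabilize Newton iterations in stationary solves and time stepping. A smooth surrogate restores differentiability. The softplus form and the smoothing width `eps` are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def flaptrap_flow_discontinuous(dp, c=1.0):
    # Ideal non-return valve: flow only for a positive pressure difference.
    return c * np.sqrt(np.maximum(dp, 0.0))

def flaptrap_flow_regularized(dp, c=1.0, eps=1e-3):
    # Softplus-smoothed opening: differentiable everywhere and
    # approaching the ideal characteristic as eps -> 0.
    opening = eps * np.log1p(np.exp(dp / eps))
    return c * np.sqrt(opening)

dp = np.linspace(-0.01, 0.01, 5)
print(flaptrap_flow_discontinuous(dp))
print(flaptrap_flow_regularized(dp))
```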
Integrating physical simulation data into data ecosystems challenges the compatibility and interoperability of data management tools. Semantic web technologies and relational databases mostly target other data types, such as measurement or manufacturing design data. Standardizing simulation data storage and harmonizing the data structures with other domains remains a challenge, as current standards such as ISO 10303 (STEP, the "Standard for the Exchange of Product model data") fail to bridge the gap between design and simulation data. This challenge requires new methods, such as ontologies, to rethink the integration of simulation results. This research describes a new software architecture and application methodology based on the industrial standard "Virtual Material Modelling in Manufacturing" (VMAP). The architecture integrates large quantities of structured simulation data and their analyses into a semantic data structure. It provides data permeability from the global digital twin level down to the detailed numerical values of data entries, and even to new key indicators, in a three-step approach: it represents a file as an instance in a knowledge graph, queries the file's metadata, and finds a semantically represented process that enables new metadata to be created and instantiated.
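A minimal sketch of the three-step approach using rdflib: (1) register a result file as an instance in a knowledge graph, (2) query its metadata via SPARQL, (3) attach a newly derived key indicator as fresh metadata. The namespace and property names are hypothetical placeholders, not the VMAP vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/simdata#")
g = Graph()

# Step 1: represent a result file as an instance in the knowledge graph.
g.add((EX.run42, RDF.type, EX.SimulationResultFile))
g.add((EX.run42, EX.path, Literal("results/run42.vmap")))
g.add((EX.run42, EX.solver, Literal("implicit-FEM")))

# Step 2: query the file's metadata with SPARQL.
q = "SELECT ?p ?o WHERE { <http://example.org/simdata#run42> ?p ?o }"
for p, o in g.query(q):
    print(p, o)

# Step 3: instantiate a newly derived key indicator as additional metadata.
g.add((EX.run42, EX.maxVonMisesStress, Literal(412.7)))
```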
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open-access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance, as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m−2, respectively. When extended for forecasting, the model achieves an overall positive skill score, reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces the forecast performance.
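A sketch of the two-stage design: an ImageNet-pretrained backbone is first trained to regress irradiance from single all-sky images, then reused as a feature extractor for sequence forecasting. The asymmetric output head (a location plus separate lower and upper scale parameters) mirrors the idea of an asymmetrical predictive distribution; all architecture details here are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # stage 1: GHI estimate

class Forecaster(nn.Module):
    """Stage 2: encode an image sequence, predict (location, lower, upper)."""
    def __init__(self, backbone: nn.Module, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.rnn = nn.GRU(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # location + two scale parameters

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1)  # (B*T, 512)
        out, _ = self.rnn(feats.view(B, T, -1))
        loc, lo, hi = self.head(out[:, -1]).unbind(-1)
        return loc, nn.functional.softplus(lo), nn.functional.softplus(hi)

model = Forecaster(backbone)
loc, lo, hi = model(torch.randn(2, 4, 3, 224, 224))
print(loc.shape, lo.shape, hi.shape)  # torch.Size([2]) each
```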
Accurate global horizontal irradiance (GHI) forecasting is critical for integrating solar energy into the power grid and operating solar power plants. The Weather Research and Forecasting model with its solar radiation extension (WRF-Solar) has been used to forecast solar irradiance in different regions around the world. However, the application of the WRF-Solar model to the prediction of GHI in West Africa, particularly Ghana, has not yet been investigated. The aim of this study is to evaluate the performance of the WRF-Solar model for predicting GHI in Ghana, focusing on three automatic weather stations (Akwatia, Kumasi and Kologo) for the year 2021. We used two one-way nested domains (D1 = 15 km and D2 = 3 km) to investigate the ability of the fully coupled WRF-Solar model to forecast GHI up to 72 hours ahead under different atmospheric conditions. The initial and lateral boundary conditions were taken from the ECMWF high-resolution operational forecasts. Our findings reveal that the WRF-Solar model performs better under clear skies than cloudy skies. Under clear skies, the forecast for Kologo performed best over the 72-hour horizon, with a first-day nRMSE of 9.62 %. However, GHI forecasts under cloudy skies at all three sites had significant uncertainties. Additionally, the WRF-Solar model is able to reproduce the observed GHI diurnal cycle under high-AOD conditions on most of the selected days. This study enhances the understanding of the WRF-Solar model's capabilities and limitations for GHI forecasting in West Africa, particularly in Ghana. The findings provide valuable information for stakeholders involved in solar energy generation and grid integration towards optimized management in the region.
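The nRMSE quoted above is typically the RMSE normalised by the mean of the observations; a small helper under that assumption (the study may normalise differently, e.g. by the observed range).

```python
import numpy as np

def nrmse(forecast: np.ndarray, observed: np.ndarray) -> float:
    """Normalised RMSE in percent, normalised by the observation mean."""
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

obs = np.array([420.0, 510.0, 630.0, 580.0])   # W/m^2, synthetic values
fc  = np.array([400.0, 540.0, 600.0, 610.0])
print(f"nRMSE = {nrmse(fc, obs):.2f} %")
```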
In addition to the long-term goal of mitigating climate change, the current geopolitical upheavals heighten the urgency of transforming Europe's energy system. This involves expanding renewable energies while managing intermittent electricity generation. Hydrogen is a promising solution to balance generation and demand while simultaneously decarbonizing complex applications. To model the energy system's transformation, the project TransHyDE-Sys, funded by the German Federal Ministry of Education and Research, takes an integrated approach beyond traditional energy system analysis, incorporating a diverse range of more detailed methods and tools. Herein, TransHyDE-Sys is situated within the recent policy discussion. It addresses the requirements for energy system modeling to gain insights into transforming the European hydrogen and energy infrastructure. It identifies knowledge gaps in the existing literature on hydrogen infrastructure-oriented energy system modeling and presents the research approach of TransHyDE-Sys. TransHyDE-Sys analyzes the development of hydrogen and energy infrastructures from “the system” and “the stakeholder” perspectives. The integrated modeling landscape captures temporal and spatial interactions among hydrogen, electricity, and natural gas infrastructure, providing comprehensive insights for systemic infrastructure planning. This allows a more accurate representation of the energy system's dynamics and aids decision-making for the sustainable and efficient development and integration of hydrogen networks.
Force field (FF) based molecular modeling is a widely used method to investigate the structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and always involves a trade-off between errors in the targeted properties. To better control the balance of the various properties' errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimization loss function is applied to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is the large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and strongly dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be an important means of lowering the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
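A sketch of a weighted multi-objective loss with a non-linear projection of the property-specific error terms, as described above. The log1p projection, the weights, and both error stubs are illustrative assumptions; in practice each error would come from a bulk-phase simulation.

```python
import numpy as np

def density_error(params):
    # Placeholder: in practice, run a bulk-phase simulation and compare the
    # resulting density against the experimental target.
    return abs(params[0] - 0.703)   # hypothetical n-octane target, g/cm^3

def rce_error(params):
    # Placeholder for the relative-conformational-energy deviation.
    return abs(params[1] - 0.95)

def ff_loss(params, w_density=1.0, w_rce=2.0):
    # The non-linear projection (log1p) compresses large errors so that
    # neither objective dominates; the weights shift the trade-off.
    return (w_density * np.log1p(density_error(params) ** 2)
            + w_rce * np.log1p(rce_error(params) ** 2))

print(ff_loss(np.array([0.70, 1.00])))
```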
Protocol for conducting advanced cyclic tests in lithium-ion batteries to estimate capacity fade
(2024)
Using advanced cyclic testing techniques improves accuracy in estimating capacity fade and incorporates real-world scenarios in battery cycle aging assessment. Here, we present a protocol for conducting cyclic tests in lithium-ion batteries to estimate capacity fade. We describe steps for implementing strategies that account for variations in rest periods, charge-discharge rates, and temperatures. We also detail procedures for validating tests experimentally within a climate-controlled chamber and for developing an empirical model to estimate capacity fading under various testing objectives. For complete details on the use and execution of this protocol, please refer to Mulpuri et al. [1].
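A sketch of fitting an empirical capacity-fade model to cycling data: a square-root-of-cycles law with an Arrhenius temperature factor is a common choice, but this functional form and the synthetic data are assumptions, not the protocol's published model.

```python
import numpy as np
from scipy.optimize import curve_fit

def capacity_fade(X, a, Ea):
    """Percent capacity loss after `cycles` cycles at temperature T (kelvin)."""
    cycles, T = X
    R = 8.314                               # gas constant, J/(mol K)
    return a * np.exp(-Ea / (R * T)) * np.sqrt(cycles)

cycles = np.array([50, 100, 200, 400, 400.0])
temps  = np.array([298, 298, 298, 298, 318.0])
fade   = np.array([0.8, 1.2, 1.7, 2.4, 3.9])   # % capacity lost (synthetic)

(a, Ea), _ = curve_fit(capacity_fade, (cycles, temps), fade, p0=(1e3, 2e4))
print(f"pre-factor = {a:.3g}, activation energy = {Ea:.3g} J/mol")
```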
Traditional and newly developed testing methods were used for extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure-sensitive adhesives (PSA). Large-amplitude oscillatory shear tests of PSAs were correlated to the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude results in stretching and yielding of the PSA's microstructure, causing first a consolidation of the network and then its release at higher strain amplitudes. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed a clear dependence on resin content and rod geometry, and displayed the PSA's viscoelasticity, resulting in either high tack with long stretched fibrils or non-adhesion with brittle behavior. Moreover, the diffusion of water/sweat during the TTS's application might influence its performance. Therefore, a dielectric-analysis-based evaluation method was developed that captures water diffusion into the PSA, from which the diffusion coefficient can be determined; it showed a clear dependence on material and resin content. All methods allow for advanced product-oriented material testing that can be utilized in further TTS development.
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming and local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, there is no universal scheme optimally suited for a wide range of different flows. The reason for this lies in the static nature of these methods; they cannot adapt to local or global flow features. Still, adaptive filtering using a shear sensor constitutes an exception to this. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physically informed neural net. In this study, the training data consists of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case, our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator demonstrates a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
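A sketch of the core idea: a collision step in which the relaxation rate is not a global constant but a per-cell, per-time-step field produced by a predictor. The paper varies rates tied to the bulk viscosity inside a more elaborate operator and trains the predictor as a physics-informed neural net; this simplified BGK version with a stub predictor only conveys the space- and time-variant mechanism.

```python
import numpy as np

# Standard D2Q9 lattice weights and velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.sum(u**2, axis=-1, keepdims=True)
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def predict_omega(rho, u):
    # Stub standing in for the trained network: relax faster where the
    # local velocity magnitude (and hence instability risk) is higher.
    speed = np.linalg.norm(u, axis=-1)
    return 1.0 + 0.8 * speed / (speed.max() + 1e-12)   # field in (1.0, 1.8)

def collide(f):
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    omega = predict_omega(rho, u)[..., None]           # space/time variant
    return f - omega * (f - equilibrium(rho, u))

f = equilibrium(np.ones((32, 32)), 0.05 * np.random.randn(32, 32, 2))
f = collide(f)
print(f.shape)  # (32, 32, 9)
```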
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that few data sets are available, which leads to limited availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation for this task in Arabic, as well as the first application of multi-task pre-training in this context.
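A minimal fine-tuning sketch with Hugging Face transformers. The AraBERT checkpoint id is the one commonly published on the Hub but is an assumption here, as are the three labels (entailment / neutral / contradiction); dataset loading and the NER pre-training stage described above are omitted.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "aubmindlab/bert-base-arabertv2"   # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

# Encode a premise/hypothesis pair as one sequence-pair batch.
batch = tokenizer(["الجو مشمس اليوم"], ["الجو ممطر اليوم"],
                  truncation=True, padding=True, return_tensors="pt")
logits = model(**batch).logits            # (1, 3): entail/neutral/contradict
print(logits.shape)
```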
TREE Jahresbericht 2021/2022
(2023)
The TREE Institute is pleased to present its annual report for the years 2021 and 2022. Look back with us on two challenging years.
Our new double annual report 2021/2022 contains many interesting contributions from our exciting, interdisciplinary research projects in the areas of energy, modeling and simulation, drone research, materials and processes, and technology communication.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques for the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information that can be derived from the different sensors, using methods that include artificial neural networks and tree-based machine learning. We found that residual neural networks (ResNet) perform sufficiently well with weighted precision scores of up to 79 % using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances. We found that this dataset attribute negatively affects the models' performance for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
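The weighted precision reported above can be computed with scikit-learn for multi-label predictions; a toy example with three classes, not the TreeSatAI data.

```python
import numpy as np
from sklearn.metrics import precision_score

# Rows are samples, columns are species labels (multi-label indicators).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [1, 0, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [1, 0, 1]])

# "weighted" averages the per-class precision, weighted by class support.
print(precision_score(y_true, y_pred, average="weighted", zero_division=0))
```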
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs), such as profit and loss, and the financial quantities linked to them. The quantity linked to a KPI in a table might not equal the quantity of the similarly described KPI in the text. Auditors spend substantial time manually finding such inconsistencies, a process called consistency checking. In contrast to existing work, this paper attempts to automate this task with the help of transformer-based models. For consistency checking, it is essential that the embeddings of the table's KPIs encode both the semantic knowledge of the KPIs and the structural knowledge of the table. Therefore, this paper proposes a pipeline that uses a tabular model to obtain the embeddings of the table's KPIs. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether these KPIs are identical. The pipeline is evaluated on financial documents in German, and a comparative analysis of the quality of the cell embeddings from three tabular models is also presented. In the evaluation, the experiment that used English-translated text and table KPIs and the Tabbie model to generate the table KPIs' embeddings achieved an accuracy of 72.81 % on the consistency checking task, outperforming the benchmark and the other tabular models.
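A sketch of the final consistency check: once table-KPI and text-KPI embeddings are produced (by a tabular model such as Tabbie and a text encoder), matching reduces to a similarity test followed by a value comparison. The threshold and toy vectors are illustrative assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_consistent(table_kpi_emb, text_kpi_emb,
                  table_value, text_value, sim_threshold=0.85):
    # If the embeddings identify the same KPI, its quantity must match.
    if cosine(table_kpi_emb, text_kpi_emb) < sim_threshold:
        return True                # different KPIs: nothing to compare
    return bool(np.isclose(table_value, text_value))

emb = np.random.rand(64)
print(is_consistent(emb, emb, 1.25e6, 1.30e6))  # same KPI, values differ -> False
```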
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 pp short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
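A sketch of the pattern-based candidate retrieval step: a SPARQL template is instantiated with the question's entity and executed against a public endpoint. The template and its placeholder are illustrative, not a learned FRED pattern, and the regressor that ranks patterns by expected F1 is omitted.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# A hypothetical learned pattern with one entity slot.
PATTERN = ("SELECT ?answer WHERE {{ <{entity}> "
           "<http://dbpedia.org/ontology/capital> ?answer }}")

def candidates(entity_uri: str):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(PATTERN.format(entity=entity_uri))
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["answer"]["value"] for b in results["results"]["bindings"]]

print(candidates("http://dbpedia.org/resource/Germany"))
```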
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
A biodegradable blend of PBAT (poly(butylene adipate-co-terephthalate)) and PLA (poly(lactic acid)) for blown film extrusion was modified with four multi-functional chain extending cross-linkers (CECL). The anisotropic morphology introduced during film blowing affects the degradation processes. Given that two CECL, tris(2,4-di-tert-butylphenyl)phosphite (V1) and 1,3-phenylenebisoxazoline (V2), increased the melt flow rate (MFR) and the other two, aromatic polycarbodiimide (V3) and poly(4,4-dicyclohexylmethanecarbodiimide) (V4), reduced it, their compost (bio-)disintegration behavior was investigated. It was significantly altered with respect to the unmodified reference blend (REF). The disintegration behavior at 30 and 60 °C was investigated by determining changes in mass, Young's moduli, tensile strengths, elongations at break and thermal properties. In order to quantify the disintegration behavior, the hole areas of blown films were evaluated after compost storage at 60 °C to calculate the kinetics of the time-dependent degrees of disintegration. The kinetic model of disintegration provides two parameters, initiation time and disintegration time, which quantify the effects of the CECL on the disintegration behavior of the PBAT/PLA compound. Differential scanning calorimetry (DSC) revealed a pronounced annealing effect during storage in compost at 30 °C, as well as the occurrence of an additional step-like increase in the heat flow at 75 °C after storage at 60 °C. The disintegration consists of processes that affect the amorphous and crystalline phases of PBAT in different ways and cannot be explained by hydrolytic chain degradation alone. Furthermore, gel permeation chromatography (GPC) revealed molecular degradation only at 60 °C for REF and V1 after 7 days of compost storage. The observed losses of mass and cross-sectional area seem to be attributable more to mechanical decay than to molecular degradation for the given compost storage times.
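A sketch of a two-parameter disintegration kinetic of the kind described: no disintegration before an initiation time t0, then a first-order approach to full disintegration with characteristic disintegration time tau. The exponential form and the data are assumptions standing in for the paper's model and measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def degree_of_disintegration(t, t0, tau):
    """0 before the initiation time t0, then 1 - exp(-(t - t0)/tau)."""
    return np.where(t < t0, 0.0, 1.0 - np.exp(-(t - t0) / tau))

t_days = np.array([0, 7, 14, 21, 28, 42.0])
d_obs  = np.array([0.0, 0.05, 0.35, 0.60, 0.75, 0.92])   # synthetic data

(t0, tau), _ = curve_fit(degree_of_disintegration, t_days, d_obs, p0=(5, 15))
print(f"initiation time = {t0:.1f} d, disintegration time = {tau:.1f} d")
```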
Rosenbrock–Wanner methods for systems of stiff ordinary differential equations have been well known since the seventies. They have been continuously developed and are efficient for index-1 differential-algebraic equations as well. Their disadvantage that the Jacobian matrix has to be updated in every time step becomes less significant when automatic differentiation is used. The family of Rodas methods, especially, has proven to be a standard in the Julia package DifferentialEquations.jl. However, the fifth-order Rodas5 method undergoes order reduction for certain problem classes. Therefore, the goal of this paper is to compute a new set of coefficients for Rodas5 such that this order reduction is diminished. The procedure is similar to the derivation of the methods Rodas4P and Rodas4P2. In addition, it is possible to provide new dense output formulas for Rodas5 and the new method, Rodas5P. Numerical tests show that for higher accuracy requirements Rodas5P always belongs to the best methods within the Rodas family.
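Rodas5P ships with DifferentialEquations.jl; from Python it can be reached through the diffeqpy bridge. A sketch on a simple stiff relaxation problem, assuming diffeqpy with a working Julia installation and that the solver is exposed as `de.Rodas5P` in your versions.

```python
import math
from diffeqpy import de

def f(u, p, t):
    # Stiff relaxation toward cos(t): fast transient, slow solution.
    return -50.0 * (u - math.cos(t))

prob = de.ODEProblem(f, 0.0, (0.0, 10.0))
sol = de.solve(prob, de.Rodas5P(), abstol=1e-8, reltol=1e-8)
print(sol.u[-1])
```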
The transport of carbon dioxide through pipelines is an important component of the Carbon dioxide Capture and Storage (CCS) systems currently being developed. If high flow rates are desired, transport in the liquid or supercritical phase is preferred. For technical reasons, the fluid must stay in that phase, without transitioning to the gaseous state. In this paper, a numerical simulation of the stationary process of carbon dioxide transport with impurities and phase transitions is considered. We use the Homogeneous Equilibrium Model (HEM) and the GERG-2008 thermodynamic equation of state to describe the transport parameters. The algorithms used make it possible to solve scenarios of carbon dioxide transport in the liquid or supercritical phase while detecting the approach to the phase transition region. Convergence of the solution algorithms is analyzed in connection with fast and abrupt changes of the equation of state and the enthalpy function in the region of phase transitions.
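A sketch of the phase watchdog behind such a simulation, using CoolProp's reference CO2 equation of state. The paper uses GERG-2008 for CO2 with impurities, which this simple pure-CO2 call does not reproduce; the pipeline conditions are hypothetical.

```python
from CoolProp.CoolProp import PropsSI, PhaseSI

# Conditions along a hypothetical pipeline profile; the last point lies
# close to the saturation line, illustrating phase-boundary detection.
for p_bar, T_C in [(120, 35.0), (90, 25.0), (70, 28.0)]:
    p, T = p_bar * 1e5, T_C + 273.15
    rho = PropsSI("D", "P", p, "T", T, "CO2")       # density, kg/m^3
    phase = PhaseSI("P", p, "T", T, "CO2")          # e.g. 'supercritical'
    print(f"{p_bar:4d} bar, {T_C:4.1f} C: rho = {rho:7.1f} kg/m^3, {phase}")
```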
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m−2 (7.35 W m−2) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m−2 under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
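A sketch of the inversion behind the irradiance retrieval described above: a simple PV performance model with a linear module-temperature model is solved for the irradiance by fixed-point iteration. The coefficients (gamma, k) are typical textbook values, not those calibrated in the campaigns, and the non-linear temperature model of the paper is replaced by the linear variant.

```python
def invert_pv_power(P_ac, P_stc=5000.0, T_amb=20.0,
                    gamma=-0.004, k=0.03, tol=1e-6):
    """Return irradiance G (W/m^2) from AC power.

    Model: P = P_stc * (G/1000) * (1 + gamma*(T_mod - 25)),
           T_mod = T_amb + k*G   (linear module-temperature model)
    """
    G = 1000.0 * P_ac / P_stc            # first guess: no thermal loss
    for _ in range(100):
        T_mod = T_amb + k * G
        G_new = 1000.0 * P_ac / (P_stc * (1 + gamma * (T_mod - 25.0)))
        if abs(G_new - G) < tol:
            break
        G = G_new
    return G

print(f"G = {invert_pv_power(3200.0):.1f} W/m^2")
```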
The cystic fibrosis transmembrane conductance regulator (CFTR) anion channel and the epithelial Na+ channel (ENaC) play essential roles in transepithelial ion and fluid transport in numerous epithelial tissues. Inhibitors of both channels have been important tools for defining their physiological role in vitro. However, two commonly used CFTR inhibitors, CFTRinh-172 and GlyH-101, also inhibit non-CFTR anion channels, indicating they are not CFTR specific. Moreover, the potential off-target effects of these inhibitors on epithelial cation channels have to date not been addressed. Here, we show that both CFTR blockers, at concentrations routinely employed by many researchers, caused a significant inhibition of store-operated calcium entry (SOCE) that was time-dependent, poorly reversible and independent of CFTR. Patch clamp experiments showed that both CFTRinh-172 and GlyH-101 caused a significant block of Orai1-mediated whole cell currents, establishing that they likely reduce SOCE via modulation of this Ca2+ release-activated Ca2+ (CRAC) channel. In addition to off-target effects on calcium channels, both inhibitors significantly reduced human αβγ-ENaC-mediated currents after heterologous expression in Xenopus oocytes, but had differential effects on δβγ-ENaC function. Molecular docking identified two putative binding sites in the extracellular domain of ENaC for both CFTR blockers. Together, our results indicate that caution is needed when using these two CFTR inhibitors to dissect the role of CFTR, and potentially ENaC, in physiological processes.
Accurate forecasting of solar irradiance is crucial for the integration of solar energy into the power grid, power system planning, and the operation of solar power plants. The Weather Research and Forecasting (WRF) model, with its solar radiation (WRF-Solar) extension, has been used to forecast solar irradiance in various regions worldwide. However, the application of the WRF-Solar model for global horizontal irradiance (GHI) forecasting in West Africa, specifically in Ghana, has not been studied. This study aims to evaluate the performance of the WRF-Solar model for GHI forecasting in Ghana, focusing on three health centers (Kologo, Kumasi and Akwatia) for the year 2021. We applied two one-way nested domains (D1 = 15 km and D2 = 3 km) to investigate the ability of the WRF-Solar model to forecast GHI up to 72 hours in advance under different atmospheric conditions. The initial and lateral boundary conditions were taken from the ECMWF operational forecasts. In addition, aerosol optical depth (AOD) data at 550 nm from the Copernicus Atmosphere Monitoring Service (CAMS) were considered. The study uses statistical metrics such as the mean bias error (MBE) and the root mean square error (RMSE) to evaluate the performance of the WRF-Solar model against the observational data obtained from automatic weather stations at the three health centers in Ghana. The results of this study will contribute to the understanding of the capabilities and limitations of the WRF-Solar model for forecasting GHI in West Africa, particularly in Ghana, and provide valuable information for stakeholders involved in solar energy generation and grid integration towards optimized management in the region.
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
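A minimal sketch of the multi-encoding QD idea on a toy problem: several encodings propose solutions, and all compete for the cells of one shared archive keyed by phenotypic features. The toy phenotype (a 1D "voxel" vector), the descriptors, and the fitness are placeholders, not the paper's architecture domain, and mutation of archived elites is omitted for brevity.

```python
import math
import random

def direct_encoding():
    return [random.random() for _ in range(16)]

def parametric_encoding():
    a, ph = random.random(), random.random()
    return [0.5 + 0.5 * a * math.sin(ph + i / 3) for i in range(16)]

ENCODINGS = {"direct": direct_encoding, "parametric": parametric_encoding}

def features(pheno):
    # Phenotypic descriptors: fill ratio and surface roughness, binned 10x10.
    fill = sum(v > 0.5 for v in pheno) / len(pheno)
    rough = sum(abs(a - b) for a, b in zip(pheno, pheno[1:])) / (len(pheno) - 1)
    return round(fill * 9), round(min(rough, 1.0) * 9)

def fitness(pheno):
    return -sum((v - 0.5) ** 2 for v in pheno)   # toy objective

archive = {}   # cell -> (fitness, encoding name, phenotype)
for _ in range(5000):
    name, make = random.choice(list(ENCODINGS.items()))
    pheno = make()
    cell, fit = features(pheno), fitness(pheno)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, name, pheno)       # encodings compete per cell

winners = [enc for _, enc, _ in archive.values()]
print(len(archive), "cells;", {e: winners.count(e) for e in ENCODINGS})
```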
The epithelial sodium channel (ENaC) is a key regulator of sodium homeostasis that contributes to blood pressure control. ENaC open probability is adjusted by extracellular sodium ions, a mechanism referred to as sodium self-inhibition (SSI). With a growing number of identified ENaC gene variants associated with hypertension, there is an increasing demand for medium- to high-throughput assays allowing the detection of alterations in ENaC activity and SSI. We evaluated a commercially available automated two-electrode voltage-clamp (TEVC) system that records transmembrane currents of ENaC-expressing Xenopus oocytes in 96-well microtiter plates. We employed guinea pig, human and Xenopus laevis ENaC orthologs that display specific magnitudes of SSI. While demonstrating some limitations over traditional TEVC systems with customized perfusion chambers, the automated TEVC system was able to detect the established SSI characteristics of the employed ENaC orthologs. We were able to confirm a reduced SSI in a gene variant, leading to C479R substitution in the human α-ENaC subunit that has been reported in Liddle syndrome. In conclusion, automated TEVC in Xenopus oocytes can detect SSI of ENaC orthologs and variants associated with hypertension. For precise mechanistic and kinetic analyses of SSI, optimization for faster solution exchange rates is recommended.
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 11.45 W m−2, averaged over the two campaigns, whereas for the retrieval using coarser 15 minute power data the mean bias error is 16.39 W m−2.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals and data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and are properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
Estimates of global horizontal irradiance (GHI) from reanalysis and satellite-based data are the most important information for the design and monitoring of PV systems in Africa, but their quality is unknown due to the lack of in situ measurements. In this study, we evaluate the performance of hourly GHI from state-of-the-art reanalysis and satellite-based products (ERA5, CAMS, MERRA-2, and SARAH-2) with 37 quality-controlled in situ measurements from novel meteorological networks established in Burkina Faso and Ghana under different weather conditions for the year 2020. The effects of clouds and aerosols are also considered in the analysis by using common performance measures for the main quality attributes and a new overall performance value for the joint assessment. The results show that satellite data performs better than reanalysis data under different atmospheric conditions. Nevertheless, both data sources exhibit significant errors of more than 150 W/m2 in terms of RMSE under cloudy skies compared to clear skies. The new measure of overall performance clearly shows that the hourly GHI derived from CAMS and SARAH-2 could serve as viable alternative data for assessing solar energy in the different climatic zones of West Africa.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choice of the collision model and of the stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.