H-BRS Bibliography
Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE): Preprints
Fatigue strength estimation is a costly manual material characterization process in which state-of-the-art approaches follow a standardized experiment and analysis procedure. In this paper, we examine a modular, machine-learning-based approach for fatigue strength estimation that is likely to reduce the number of experiments and, thus, the overall experimental cost. Despite its high potential, deploying a new approach in a real-life lab requires more than a theoretical definition and simulation. Therefore, we study the robustness of the approach against misspecification of the prior and discretization of the specified loads. We identify its applicability and its advantageous behavior over state-of-the-art methods, potentially reducing the number of costly experiments.
In this contribution, we perform computer simulations to expedite the development of metal hydride-based hydrogen storage systems. These simulations enable an in-depth analysis of the processes within such systems that could not otherwise be achieved, because determining the crucial process properties would require measurement instruments that are currently not available in the setup. Therefore, we investigate the reliability of reaction values determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms needed to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used van't Hoff equation with a mathematical expression of the pressure-composition isotherms (PCI) to calculate the equilibrium pressure.
Finally, the parameters' accuracy is verified in a further simulation of an existing metal hydride system. The simulated results demonstrate high concordance with the experimental data, which supports the use of kinetic reaction properties approximated by a design of experiments for further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
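The van't Hoff relation mentioned above, which the improved model replaces with a pressure-composition-isotherm expression, has the standard form for metal hydrides (the symbols here are the conventional ones, not taken from the paper itself):

```latex
\ln\!\left(\frac{p_\mathrm{eq}}{p_0}\right) = -\frac{\Delta H}{R\,T} + \frac{\Delta S}{R}
```

Here $p_\mathrm{eq}$ is the equilibrium pressure, $p_0$ a reference pressure, $\Delta H$ and $\Delta S$ the reaction enthalpy and entropy, $R$ the gas constant and $T$ the temperature; fitting measured $\ln p_\mathrm{eq}$ against $1/T$ is also the usual route to the entropy and enthalpy values mentioned above.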
Transdermal therapeutic systems (TTS) represent a modern form of medication applied to the human skin, consisting of a drug-containing pressure-sensitive adhesive (PSA) and a flexible backing layer. The development of a reliable TTS requires precise knowledge of the viscoelastic tack behavior of the PSA in terms of adhesion and detachment. A PSA can be tailored by altering the resin content or by modifying the chemical properties of the macromolecules. In this study, three different resin contents of two silicone-based PSAs (a non-amine-compatible one and a less tacky, amine-compatible one) were investigated with the help of the recently developed RheoTack method to characterize the retraction-speed-dependent tack behavior for various geometries of the testing rods. The obtained force-retraction displacement curves clearly depict the effect of the chemical structure as well as of the resin content. Decreasing the resin content shifts the onset of fibril fracture to larger deformation states and significantly enhances the stretchability of the fibrils. To compare the various rod geometries precisely, the force-retraction displacement curves were normalized to account for the effective contact areas. Flat and spherical rods led to completely different failure and tack behaviors. Furthermore, adhesion formation between TTS with flexible backing layers and the rods during the dwell phase proceeds differently than with rigid plates, in particular for flat rods, where the maximum compression stresses occur at the edges rather than uniformly over the cross-section. Thus, the approach of following ASTM D2949 has to be reconsidered for tests of these materials.
Antioxidant activity is an essential feature required for oxygen-sensitive merchandise and goods, such as food and the corresponding packaging, as well as for materials used in cosmetics and biomedicine. For example, vanillin, one of the most prominent antioxidants, is produced from lignin, the second most abundant natural polymer in the world. Antioxidant potential is primarily related to the termination of oxidation propagation reactions through hydrogen transfer. The application of technical lignin as a natural antioxidant has not yet been implemented in the industrial sector, mainly due to the complex heterogeneous structure and polydispersity of lignin. Thus, current research focuses on various isolation and purification strategies to improve the compatibility of lignin material with substrates and to enhance its stabilizing effect.
Renewable resources are gaining increasing interest as a source of environmentally benign biomaterials, such as drug encapsulation/release compounds and scaffolds for tissue engineering in regenerative medicine. As lignin is the second most abundant natural polymer, interest in its valorization for biomedical utilization is rapidly growing. Depending on the resource and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special attention is paid to lignin-derived nanomaterials for drug encapsulation and release, as well as to lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Today, more than 70 million tons of lignin are produced by the pulp and paper industry every year. However, the utilization of lignin as a source for chemical synthesis is still limited due to its complex and heterogeneous structure. The purpose of this study was the selective photodegradation of industrially available kraft lignin in order to obtain suitable fragments and building-block chemicals for further utilization, e.g. polymerization. Thus, kraft lignin obtained from softwood black liquor by acidification was dissolved in sodium hydroxide and irradiated at a wavelength of 254 nm, with and without the presence of titanium dioxide in various concentrations. Analyses of the irradiated products via SEC showed decreasing molar masses and decreasing polydispersity indices over time. At the end of the irradiation period, the lignin was depolymerised into fragments as small as the lignin monomers. TOC analyses showed minimal mineralisation due to the depolymerisation process.
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and the LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and enables the integration of LBM simulations with PyTorch's deep learning and automatic differentiation facilities. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer and then transferred to a different flow, decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
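The stream-and-collide algorithm referred to above can be illustrated with a minimal one-dimensional (D1Q3) BGK sketch in pure Python. This is an illustration of the general algorithm, not Lettuce's actual API; the lattice constants are the standard D1Q3 values.

```python
# Minimal D1Q3 lattice Boltzmann sketch: stream-and-collide with BGK
# collision on a periodic 1D domain. Illustrative only -- not the
# Lettuce API, which wraps these steps in PyTorch tensors.

W = [2/3, 1/6, 1/6]   # lattice weights
C = [0, 1, -1]        # discrete velocities
CS2 = 1/3             # lattice speed of sound squared

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium for one node."""
    return [w * rho * (1 + c*u/CS2 + (c*u)**2/(2*CS2**2) - u*u/(2*CS2))
            for w, c in zip(W, C)]

def step(f, tau):
    """One stream-and-collide step on populations f[i][x] (periodic)."""
    n = len(f[0])
    # streaming: each population arrives at x from x - c_i
    f = [[f[i][(x - C[i]) % n] for x in range(n)] for i in range(3)]
    # collision: BGK relaxation towards the local equilibrium
    for x in range(n):
        rho = sum(f[i][x] for i in range(3))
        u = sum(C[i]*f[i][x] for i in range(3)) / rho
        feq = equilibrium(rho, u)
        for i in range(3):
            f[i][x] += (feq[i] - f[i][x]) / tau
    return f

# initialize a small density bump on a 16-node periodic lattice
n = 16
rho0 = [1.0 + (0.1 if x == n // 2 else 0.0) for x in range(n)]
f = [[W[i] * rho0[x] for x in range(n)] for i in range(3)]

for _ in range(50):
    f = step(f, tau=0.8)

mass = sum(f[i][x] for i in range(3) for x in range(n))
print(round(mass, 6))  # streaming and BGK collision both conserve mass
```

Both steps conserve mass by construction (streaming permutes populations; the equilibrium has the same density as the pre-collision state), which is a useful sanity check for any LBM implementation.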
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution, it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work, several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear-sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany, in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1-minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error, compared to concurrent pyranometer measurements, of 11.45 W m−2, averaged over the two campaigns, whereas for the retrieval using coarser 15-minute power data the mean bias error is 16.39 W m−2.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals as well as data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and are properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
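The lookup-table inversion described above can be sketched as follows. The table values here are hypothetical placeholders rather than output of an actual radiative transfer simulation, and a real retrieval would also depend on solar geometry and surface albedo.

```python
# Sketch of a lookup-table (LUT) retrieval of cloud optical depth (COD)
# from measured irradiance. The LUT entries below are hypothetical
# placeholders -- in practice they come from a 1D radiative transfer
# simulation of transmittance vs. COD for a fixed sun position.

LUT_COD = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
LUT_TRANS = [0.85, 0.72, 0.48, 0.30, 0.17, 0.07]  # monotonically decreasing

def retrieve_cod(measured_irradiance, clear_sky_irradiance):
    """Invert the LUT by linear interpolation of the transmittance."""
    t = measured_irradiance / clear_sky_irradiance
    # clamp to the table's range
    if t >= LUT_TRANS[0]:
        return LUT_COD[0]
    if t <= LUT_TRANS[-1]:
        return LUT_COD[-1]
    # find the bracketing interval (transmittance decreases with COD)
    for k in range(len(LUT_TRANS) - 1):
        if LUT_TRANS[k] >= t >= LUT_TRANS[k + 1]:
            frac = (LUT_TRANS[k] - t) / (LUT_TRANS[k] - LUT_TRANS[k + 1])
            return LUT_COD[k] + frac * (LUT_COD[k + 1] - LUT_COD[k])
    raise ValueError("transmittance outside lookup table")

cod = retrieve_cod(measured_irradiance=180.0, clear_sky_irradiance=600.0)
print(round(cod, 2))
```

Clamping at the table edges mirrors the practical limitation noted in the abstract: outside the simulated range (and under broken clouds, where 3D radiative effects dominate) the 1D lookup table is no longer a valid forward model.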
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that few data sets are available, which limits the availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation of this task in Arabic, as well as the first application of multi-task pre-training in this context.
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size, up to 191×191, to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.