Antioxidant activity is an essential feature of oxygen-sensitive products such as food and its packaging, as well as materials used in cosmetics and biomedicine. For example, vanillin, one of the most prominent antioxidants, is produced from lignin, the second most abundant natural polymer in the world. Antioxidant potential is primarily related to the termination of oxidation propagation reactions through hydrogen transfer. The application of technical lignin as a natural antioxidant has not yet been implemented in the industrial sector, mainly due to the complex heterogeneous structure and polydispersity of lignin. Thus, current research focuses on various isolation and purification strategies to improve the compatibility of lignin material with substrates and to enhance its stabilizing effect.
Antioxidant activity is an essential aspect of oxygen-sensitive merchandise and goods, such as food and corresponding packaging, cosmetics, and biomedicine. Technical lignin has not yet been applied as a natural antioxidant, mainly due to its complex heterogeneous structure and polydispersity. This report presents antioxidant capacity studies using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. The influence of purification on lignin structure and activity was investigated. The purification study showed that double-fold selective extraction is the most efficient procedure (confirmed by ultraviolet-visible (UV/Vis), Fourier transform infrared (FTIR), heteronuclear single quantum coherence (HSQC) and ³¹P nuclear magnetic resonance spectroscopy, size exclusion chromatography, and X-ray diffraction), resulting in fractions of very narrow polydispersity (3.2–1.6) and up to four distinct absorption bands in UV/Vis spectroscopy. According to differential scanning calorimetry measurements, the glass transition temperature increased from 123 to 185 °C for the purest fraction. Antioxidant capacity is discussed with regard to the biomass source, pulping process, and degree of purification. Lignins obtained from industrial black liquor are compared with beech wood samples: the antioxidant activity (DPPH inhibition) of the kraft lignin fractions was 62–68%, whereas beech and spruce/pine-mixed lignin showed values of 42% and 64%, respectively. The total phenol content (TPC) of the isolated kraft lignin fractions varied between 26 and 35%, whereas that of beech and spruce/pine lignin was 33% and 34%, respectively. Storage decreased the TPC values but increased the DPPH inhibition.
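The DPPH figures quoted above follow the standard radical-scavenging formula, inhibition = (A_control − A_sample)/A_control × 100, based on absorbance near 517 nm. A minimal sketch of this calculation (the numeric values are illustrative placeholders, not measurements from the study):

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical scavenging from absorbance readings (~517 nm).

    Standard assay formula: (A_control - A_sample) / A_control * 100.
    """
    return (a_control - a_sample) / a_control * 100.0

# Illustrative absorbance values only, not data from the study:
print(dpph_inhibition(a_control=0.85, a_sample=0.30))  # ~64.7 % inhibition
```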
Renewable resources are gaining increasing interest as a source of environmentally benign biomaterials, such as drug encapsulation/release compounds and scaffolds for tissue engineering in regenerative medicine. Lignin being the second most abundant natural polymer, interest in its valorization for biomedical applications is rapidly growing. Depending on its source and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization, and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants, and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release as well as lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Today, more than 70 million tons of lignin are produced by the pulp and paper industry every year. However, the utilization of lignin as a source for chemical synthesis is still limited due to its complex and heterogeneous structure. The purpose of this study was the selective photodegradation of industrially available kraft lignin in order to obtain suitable fragments and building block chemicals for further utilization, e.g. polymerization. Kraft lignin, obtained from softwood black liquor by acidification, was dissolved in sodium hydroxide and irradiated at a wavelength of 254 nm with and without titanium dioxide in various concentrations. Analyses of the irradiated products via size exclusion chromatography (SEC) showed decreasing molar masses and decreasing polydispersity indices over time. At the end of the irradiation period the lignin was depolymerised into fragments as small as the lignin monomers. Total organic carbon (TOC) analyses showed that the depolymerisation caused only minimal mineralisation.
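The polydispersity index tracked by SEC is the ratio of weight- to number-average molar mass, PDI = Mw/Mn. A minimal sketch of this computation from a discrete molar-mass distribution (the arrays are made-up placeholders, not SEC data from the study):

```python
import numpy as np

def polydispersity(n_i: np.ndarray, m_i: np.ndarray) -> float:
    """PDI = Mw / Mn for species counts n_i with molar masses m_i (g/mol)."""
    mn = np.sum(n_i * m_i) / np.sum(n_i)             # number-average molar mass
    mw = np.sum(n_i * m_i**2) / np.sum(n_i * m_i)    # weight-average molar mass
    return mw / mn

# Placeholder distribution: species counts vs. molar masses
n = np.array([5.0, 20.0, 10.0, 2.0])
m = np.array([200.0, 1000.0, 5000.0, 20000.0])
print(polydispersity(n, m))  # approaches 1 as the distribution narrows
```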
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and the LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and enables integrating LBM simulations with PyTorch's deep learning and automatic differentiation facilities. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer, and then transferred to a different flow, a decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
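The stream-and-collide algorithm on which Lettuce builds fits naturally into tensor frameworks. The following is a generic D2Q9 BGK step written in plain PyTorch for illustration; it is not Lettuce's actual API, whose class names and interfaces should be taken from the project's documentation:

```python
import torch

# D2Q9 lattice: discrete velocities and weights (c_s^2 = 1/3 in lattice units)
E = torch.tensor([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=torch.float32)
W = torch.tensor([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, u):
    """Discrete Maxwellian equilibrium populations f_eq[i, x, y]."""
    eu = torch.einsum("id,dxy->ixy", E, u)       # e_i . u
    uu = (u * u).sum(dim=0)                      # |u|^2
    return W[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * uu)

def stream_and_collide(f, tau):
    """One LBM step on a fully periodic grid: stream, then BGK collision."""
    f = torch.stack([torch.roll(f[i], shifts=(int(E[i, 0]), int(E[i, 1])), dims=(0, 1))
                     for i in range(9)])         # streaming along lattice velocities
    rho = f.sum(dim=0)                           # density (zeroth moment)
    u = torch.einsum("id,ixy->dxy", E, f) / rho  # velocity (first moment)
    return f - (f - equilibrium(rho, u)) / tau   # relax toward equilibrium

# Tiny demo: a uniform flow remains uniform; move tensors to "cuda" for GPU runs
f = equilibrium(torch.ones(32, 32), 0.05 * torch.ones(2, 32, 32))
for _ in range(100):
    f = stream_and_collide(f, tau=0.8)
```

Because every operation is a differentiable tensor op, gradients with respect to initial conditions or collision parameters come for free via autograd, which is the property the abstract exploits.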
In this thesis, novel methodological extensions of the lattice Boltzmann method (LBM) are developed that enable more efficient simulations of incompressible vortical flows. These extensions address two main problems of the standard LBM: its instability in under-resolved turbulent simulations and its restriction to regular computational grids. First, a pseudo-entropic stabilization (PES) is developed, which combines approaches from multiple-relaxation-time (MRT) models and the entropic LBM into an explicit, local, and flexible stabilization operator. This modification of the collision step enables stable and qualitatively correct simulations even on strongly under-resolved grids. To extend the LBM to irregular computational grids, a modern discontinuous Galerkin LBM is first examined and supplemented with more stable time integrators. This study demonstrates the severe weaknesses of existing LBM approaches on irregular grids. Based on these findings, a novel semi-Lagrangian LBM (SLLBM) is formulated, which uniquely enables the use of irregular grids and large time steps as well as a high spatial order of convergence. Example simulations demonstrate why this approach is superior in efficiency and accuracy to other current off-lattice Boltzmann methods (OLBMs). Further novel aspects of this work are the development of a modular off-lattice Boltzmann code and the extension of the LBM by implicit multi-step methods, which increase the temporal order of convergence.
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming and local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation-time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, there is no universal scheme optimally suited to a wide range of different flows. The reason for this lies in the static nature of these methods: they cannot adapt to local or global flow features. Adaptive filtering using a shear sensor constitutes an exception to this. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physically informed neural net. In this study, the training data consist of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case, our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator demonstrates a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
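The idea of learned, space- and time-variant relaxation rates can be sketched as a small network that maps local flow features to a bounded collision rate. The sketch below is an illustrative stand-in under assumed feature choices and bounds, not the authors' architecture:

```python
import torch
import torch.nn as nn

class RelaxationNet(nn.Module):
    """Maps local features (here: density and velocity magnitude) to a
    relaxation rate omega confined to a stable range via a sigmoid.
    Illustrative stand-in, not the paper's physically informed network."""
    def __init__(self, omega_min: float = 0.5, omega_max: float = 1.95):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
        self.omega_min, self.omega_max = omega_min, omega_max

    def forward(self, rho, u):
        feats = torch.stack([rho, (u * u).sum(dim=0).sqrt()], dim=-1)  # per cell
        s = torch.sigmoid(self.net(feats)).squeeze(-1)                 # in (0, 1)
        return self.omega_min + (self.omega_max - self.omega_min) * s
```

In a training loop, the predicted rate would replace a constant relaxation rate in the collision step, and the network weights would be fitted against a trusted reference solution such as the high-order Navier–Stokes data mentioned above.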
Artificial intelligence (AI) has become an integral part of today's society. In sport, too, AI methods have increasingly found their way into practice in recent years. However, whether and to what extent the current potential of AI is actually being exploited has not yet been investigated. The benefit of AI methods in sport is undisputed, but their translation into practice faces serious problems concerning access to resources, the availability of experts, and the handling of methods and data. According to the authors' hypothesis, the comparatively slow adoption of AI methods in elite sport is due to several mismatches between the field of application and the AI methods. These mismatches are methodological, structural, and communicative in nature. This expert report derives proposals that can help resolve these mismatches and at the same time point out new opportunities for transfer and synergy. In addition, three use cases on training control, performance diagnostics, and competition diagnostics were implemented as examples, in the form of corresponding project descriptions. The report shows how problems that still exist today in connecting AI and sport can be resolved as far as possible. An empirical implementation of the training-control use case was carried out in cycling, which is why it is presented in more detail.
Where laboratory experiments are too complex, too expensive, too slow, or too dangerous, or where material properties are not experimentally accessible at all, computer simulations of atoms and molecules can replace or complement them, reducing costs, development time, and material use. The molecular models required for these simulations contain numerous parameters that the simulator must set or select. A suitable parameterization is only possible with adequate knowledge of how the parameters affect the quantities and properties to be computed. One group of standard parameters in molecular simulations are the partial charges of the individual atoms within a molecule: the spatial charge distribution within the molecule is approximated by point charges at the atomic centers. Various approaches to this approximation exist for different classes of molecules and applications. In this subproject of the doctoral project, the influence of the choice of partial charge set on potential energies and selected macroscopic properties from molecular dynamics simulations was systematically evaluated. It was shown that, especially for strongly polar molecules, the selection of a suitable partial charge set has a decisive influence on the simulation results and therefore must not be made naively, but only in a well-targeted manner.
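The sensitivity described above can be illustrated with the point-charge Coulomb term that partial charges contribute to a force field's potential energy. A minimal sketch, with the constant in GROMACS-style units (charges and coordinates are arbitrary examples, not a validated parameter set):

```python
import numpy as np

COULOMB_CONST = 138.935458  # kJ mol^-1 nm e^-2 (ke^2/(4*pi*eps0) in these units)

def coulomb_energy(q: np.ndarray, r: np.ndarray) -> float:
    """Pairwise Coulomb energy of point charges q (in e) at positions r (in nm)."""
    energy = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            energy += COULOMB_CONST * q[i] * q[j] / np.linalg.norm(r[i] - r[j])
    return energy

# Two hypothetical partial-charge sets for the same made-up three-site geometry:
r = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
for charges in ([-0.8, 0.4, 0.4], [-0.6, 0.3, 0.3]):
    print(coulomb_energy(np.array(charges), r))  # energies differ markedly
```

Even this toy example shows how strongly the electrostatic contribution, and hence derived macroscopic properties, depends on the chosen charge set.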
In this work, novel ionic agarose derivatives were first synthesized and then comprehensively characterized. Anionic agarose sulfates with regioselective derivatization at position G6 were obtained by homogeneous reaction in an ionic liquid. Cationic agarose carbamates with an adjustable degree of functionalization were accessible via a two-step synthesis: agarose phenyl carbonates were first prepared homogeneously and then converted by aminolysis into the desired functional agarose derivatives. The ionic agarose derivatives were fully water-soluble even at low degrees of functionalization. This made it possible to coat alginate microcapsules polyelectrolytically and to use them as carriers for controlled drug release. Composite gels of agarose, hydroxyapatite, and agarose derivatives were also prepared and characterized. In the second part, both the composite carrier materials and the alginate microcapsules were loaded with four model drugs (ATP, suramin, methylene blue, and A740003), and drug release was studied over a period of two weeks. For the ionic model drugs, composite carriers containing an ionic agarose derivative, the coated microcapsules, and the combination of composite and capsules proved effective in slowing the release to as little as 40%. For the poorly water-soluble substance A740003, a receptor ligand for the osteogenic differentiation of stem cells, strongly delayed release from polyelectrolyte microcapsules was observed. Using fitting models known from the literature as well as newly developed ones, diffusion was identified as the main release mechanism, the release curves were described mathematically with high accuracy, and conclusions were drawn about the individual phases of release.
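Release curves of this kind are commonly analyzed with the Korsmeyer–Peppas power law, M_t/M_∞ = k·t^n, where an exponent n near 0.5 indicates Fickian diffusion. The thesis's own fitting models are not reproduced here; the sketch below fits the standard power law to made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Fractional release M_t / M_inf = k * t**n (valid for roughly the first 60 %)."""
    return k * t ** n

# Made-up release data: time in hours vs. cumulative fractional release
t = np.array([1.0, 2.0, 4.0, 8.0, 24.0, 48.0, 96.0])
m = np.array([0.05, 0.08, 0.11, 0.15, 0.26, 0.35, 0.48])

(k, n), _ = curve_fit(korsmeyer_peppas, t, m, p0=(0.05, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")  # n close to 0.5 suggests diffusion control
```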
Question answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 percentage points short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
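A learned SPARQL pattern of the kind FRED evolves can be pictured as a graph template with a slot for the question entity. The query below is a hypothetical illustration against the public DBpedia endpoint using the SPARQLWrapper package; it is not a pattern actually learned by FRED:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical pattern: candidate answers are entities located in the question
# entity. FRED evolves such templates from data; this one is only illustrative.
PATTERN = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT DISTINCT ?answer WHERE {
    ?answer dbo:location <%(entity)s> .
} LIMIT 10
"""

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(PATTERN % {"entity": "http://dbpedia.org/resource/Bonn"})
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["answer"]["value"])
```

In FRED, a transformer-based regressor scores many such instantiated patterns by their expected F1, and the candidate entities they return are ranked accordingly.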
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m⁻² (7.35 W m⁻²) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m⁻² under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 11.45 W m⁻², averaged over the two campaigns, whereas for the retrieval using coarser 15 minute power data the mean bias error is 16.39 W m⁻².
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals and data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
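The inversion from PV power to irradiance can be illustrated with the linear module-temperature model mentioned above. In the sketch below all coefficients are typical textbook values (Ross-type temperature model, standard power temperature coefficient), not the calibrated parameters from the campaigns:

```python
import numpy as np

# Typical textbook coefficients; illustrative, not the calibrated campaign values:
P_STC = 5000.0   # array power at standard test conditions [W]
GAMMA = -0.004   # power temperature coefficient [1/K]
K_ROSS = 0.03    # Ross model: T_mod = T_amb + K_ROSS * G  [K m^2 / W]

def irradiance_from_power(p_meas: float, t_amb: float) -> float:
    """Invert P = P_STC * (G/1000) * (1 + GAMMA * (T_mod - 25)) for G, using the
    linear module-temperature model, which makes P quadratic in G:
    a*G^2 + b*G - p_meas = 0."""
    a = (P_STC / 1000.0) * GAMMA * K_ROSS
    b = (P_STC / 1000.0) * (1.0 + GAMMA * (t_amb - 25.0))
    return (-b + np.sqrt(b ** 2 + 4.0 * a * p_meas)) / (2.0 * a)  # physical root

print(irradiance_from_power(p_meas=3200.0, t_amb=20.0))  # ~683 W/m^2 in-plane
```

A full retrieval would additionally transpose the in-plane irradiance to the horizontal and screen the power data for shading, soiling, and inverter clipping.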
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
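The lookup-table retrieval of cloud optical depth amounts to inverting a precomputed, monotone irradiance-versus-COD curve from a 1D radiative transfer model. In the sketch below the table values are made-up placeholders standing in for actual radiative transfer output:

```python
import numpy as np

# Placeholder table: overcast GHI [W/m^2] simulated for a grid of cloud optical
# depths, as would be precomputed with a 1D radiative transfer model.
COD_GRID = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
GHI_GRID = np.array([650.0, 520.0, 330.0, 200.0, 110.0, 45.0, 22.0])

def cloud_optical_depth(ghi_measured: float) -> float:
    """Invert the monotone GHI(COD) table, interpolating in log-COD space."""
    # np.interp needs increasing sample points, so reverse the decreasing table
    log_cod = np.interp(ghi_measured, GHI_GRID[::-1], np.log(COD_GRID)[::-1])
    return float(np.exp(log_cod))

print(cloud_optical_depth(150.0))  # COD consistent with the measured overcast GHI
```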
The mechanical properties of plastic components, especially those made of semi-crystalline polymers, are considerably influenced by the process conditions. The degree of crystallization influences thermal and mechanical properties. Even more important is the orientation of molecules due to stretching of the polymer melt, which results in anisotropic material properties. To date, none of these effects are considered in the simulation models of blow molded parts.