H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (975)
- Fachbereich Angewandte Naturwissenschaften (627)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (385)
- Fachbereich Ingenieurwissenschaften und Kommunikation (382)
- Fachbereich Wirtschaftswissenschaften (315)
- Institute of Visual Computing (IVC) (286)
- Institut für funktionale Gen-Analytik (IFGA) (231)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (126)
- Institut für Cyber Security & Privacy (ICSP) (105)
- Institut für Verbraucherinformatik (IVI) (103)
Document Type
- Article (1012)
- Conference Object (918)
- Part of a Book (187)
- Preprint (86)
- Doctoral Thesis (52)
- Report (51)
- Book (monograph, edited volume) (43)
- Master's Thesis (29)
- Working Paper (28)
- Research Data (22)
Language
- English (2484)
Keywords
- Virtual Reality (15)
- FPGA (14)
- Machine Learning (14)
- GC/MS (13)
- Robotics (13)
- Sustainability (11)
- virtual reality (11)
- Augmented Reality (9)
- Lignin (9)
- Social Protection (9)
In order to help journalists investigate large audiovisual archives, such as those maintained by news broadcast agencies, the multimedia data must be indexed by text-based search engines. By automatically creating a transcript through automatic speech recognition (ASR), the spoken word becomes accessible to text search, and queries for keywords are made possible. Still, important contextual information such as the identity of the speaker is not captured. Especially when gathering original footage in the political domain, the identity of the speaker can be the most important query constraint, even though that name may not be prominent in the words spoken. It is thus desirable to provide this information explicitly to the search engine. To provide this information, the archive must be analyzed by automatic speaker identification (SID). While this research topic has seen substantial gains in accuracy and robustness over the last years, it has not yet established itself as a helpful, large-scale tool outside the research community. This thesis sets out to establish a workflow for automatic speaker identification. Its application is to help journalists search speeches given in the German parliament (Bundestag). This is a contribution to the News-Stream 3.0 project, a BMBF-funded research project that addresses the accessibility of various data sources for journalists.
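The matching step at the core of such a SID workflow can be sketched as follows. This is a toy illustration, not the News-Stream 3.0 implementation: the speaker names and three-dimensional embeddings are made up, and in practice the vectors would come from a trained speaker-embedding model evaluated on the audio.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(utterance_emb, enrolled, threshold=0.7):
    """Score an utterance embedding against all enrolled speakers and
    return (best_name, best_score), or (None, best_score) if the best
    match falls below the acceptance threshold (an open-set decision)."""
    best_name, best_score = None, -1.0
    for name, emb in enrolled.items():
        score = cosine_similarity(utterance_emb, emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score

# Hypothetical enrolled speakers with made-up 3-d embeddings; real
# embeddings are high-dimensional outputs of a speaker model.
enrolled = {
    "speaker_1": np.array([1.0, 0.0, 0.0]),
    "speaker_2": np.array([0.0, 1.0, 0.0]),
}
```

The identified name can then be stored alongside the ASR transcript as an explicit field in the search engine, so queries can be constrained by speaker even when the name never occurs in the spoken words.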
DNA Sequencing
(2011)
Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
Transient up-regulation of P2 receptors influence differentiation of human mesenchymal stem cells
(2012)
Exposure to microgravity conditions causes cardiovascular deconditioning in astronauts during spaceflight. Until now, no specific drugs are available as countermeasures, since the underlying mechanism is largely unknown. Endothelial cells (ECs) and smooth muscle cells (SMCs) play key roles in various vascular functions, many of which are regulated by purinergic 2 (P2) receptors. However, their function in ECs and SMCs under microgravity conditions is still unclear. In this study, primary ECs and SMCs were isolated from bovine aorta and verified with specific markers. We show for the first time that the P2 receptor expression pattern is altered in ECs and SMCs after 24 h exposure to simulated microgravity using a clinostat. However, conditioned medium compensates for this change in specific P2 receptors, for example, P2X7. Notably, P2 receptors such as P2X7 might be important players during the paracrine interaction. Additionally, ECs and SMCs secreted different cytokines under simulated microgravity, leading to pathogenic proliferation and migration. In conclusion, our data indicate that P2 receptors might be important players responding to gravity changes in ECs and SMCs. Since some artificial P2 receptor ligands are applied as drugs, it is reasonable to assume that they might be promising candidates against cardiovascular deconditioning in the future.
Human mesenchymal stem cells (hMSCs) are considered a promising cell source for regenerative medicine, because they have the potential to differentiate into a variety of lineages, among which the mesoderm-derived lineages such as adipo- or osteogenesis are investigated best. Human MSCs can be harvested in reasonable to large amounts from several parts of the patient's body, and due to this possible autologous origin, allorecognition can be avoided. In addition, even hMSCs from donor cells of allogeneic origin generate a local immunosuppressive microenvironment, causing only a weak immune reaction. There is an increasing need for bone replacement in patients of all ages, due to a variety of reasons such as new recreational behavior in young adults or age-related diseases. Adipogenic differentiation is another interesting lineage, because fat tissue is considered to be a major factor triggering atherosclerosis, which ultimately leads to cardiovascular diseases, the main cause of death in industrialized countries. However, understanding the differentiation process in detail is obligatory to achieve tight control of the process in future clinical applications and to avoid undesired side effects. In this review, the current findings on adipo- and osteo-differentiation are summarized together with a brief statement on first clinical trials.
Background: Human mesenchymal stem cells (hMSCs) have shown their multipotential including differentiating towards endothelial and smooth muscle cell lineages, which triggers a new interest for using hMSCs as a putative source for cardiovascular regenerative medicine. Our recent publication has shown for the first time that purinergic 2 receptors are key players during hMSC differentiation towards adipocytes and osteoblasts. Purinergic 2 receptors play an important role in cardiovascular function when they bind to extracellular nucleotides. In this study, the possible functional role of purinergic 2 receptors during MSC endothelial and smooth muscle differentiation was investigated. Methods and Results: Human MSCs were isolated from liposuction materials. Then, endothelial and smooth muscle-like cells were differentiated and characterized by specific markers via Reverse Transcriptase-PCR (RT-PCR), Western blot and immunochemical stainings. Interestingly, some purinergic 2 receptor subtypes were found to be differently regulated during these specific lineage commitments: P2Y4 and P2Y14 were involved in the early stage commitment while P2Y1 was the key player in controlling MSC differentiation towards either endothelial or smooth muscle cells. The administration of natural and artificial purinergic 2 receptor agonists and antagonists had a direct influence on these differentiations. Moreover, a feedback loop via exogenous extracellular nucleotides on these particular differentiations was shown by apyrase digest. Conclusions: Purinergic 2 receptors play a crucial role during the differentiation towards endothelial and smooth muscle cell lineages. Some highly selective and potent artificial purinergic 2 ligands can control hMSC differentiation, which might improve the use of adult stem cells in cardiovascular tissue engineering in the future.
During space missions astronauts suffer from cardiovascular deconditioning when they are exposed to microgravity conditions. Until now, no specific drugs are available for effective countermeasures, since the underlying mechanism is not completely understood. Endothelial cells (ECs) and smooth muscle cells (SMCs) play crucial roles in a variety of cardiovascular functions, many of which are regulated via P2 receptors. However, their function in ECs and SMCs under microgravity conditions is still unknown. In this study, ECs and SMCs were isolated from bovine aorta and differentiated from human mesenchymal stem cells (hMSCs), respectively. Subsequently, the cells were verified based on specific markers. An altered P2 receptor expression pattern was detected during the commitment of hMSCs towards ECs and SMCs. The administration of natural and artificial P2 receptor agonists and antagonists directly affected the differentiation process. By using EC growth medium as conditioned medium, a vessel cell model was created to culture SMCs, and vice versa. Within this study, we were able to show for the first time that the expression of some P2 receptors was altered in ECs and SMCs grown for 24 h under simulated microgravity conditions. On the other hand, for some P2 receptors, such as P2X7, conditioned medium compensated for this change.
In conclusion, our data show that P2 receptors play an important functional role in hMSC differentiation towards ECs and SMCs. Since some artificial P2 receptor ligands are already used as drugs for patients with cardiovascular diseases, it is reasonable to assume that in the future they might be promising candidates for treating cardiovascular deconditioning.
Cytokine-induced killer (CIK) cells in combination with dendritic cells (DCs) have shown favorable outcomes in renal cell carcinoma (RCC), yet some patients exhibit recurrence or no response to this therapy. In a broader perspective, enhancing the antitumor response of DC-CIK cells may help to address this issue. Considering this, we investigated herein the effect of anti-CD40 and anti-CTLA-4 antibodies on the antitumor response of DC-CIK cells against RCC cell lines. Our analysis showed that a) the anti-CD40 antibody (G28.5) increased the CD3+CD56+ effector cells of CIK cells by promoting the maturation and activation of DCs; b) G28.5 also increased CTLA-4 expression in CIK cells via DCs, but the increase could be hindered by the CTLA-4 inhibitor ipilimumab; c) adding ipilimumab was also able to significantly increase the proportion of CD3+CD56+ cells in DC-CIK cells; d) anti-CD40 antibodies predominated over anti-CTLA-4 antibodies for the cytotoxicity, apoptotic effect, and IFN-γ secretion of DC-CIK cells against RCC cells; and e) after ipilimumab treatment, the population of Tregs in CIK cells remained unaffected, but ipilimumab combined with G28.5 significantly reduced the expression of CD28 in CIK cells. Taken together, we suggest that the agonistic anti-CD40 antibody, rather than the CTLA-4 inhibitor, may improve the antitumor response of DC-CIK cells, particularly in RCC. In addition, we point towards the as yet unknown contribution of CD28 to the crosstalk between anti-CTLA-4 and CIK cells.
Cancer is a complex disease where resistance to therapies and relapses often pose a serious clinical challenge. The scenario is even more complicated when the cancer type itself is heterogeneous in nature, e.g., lymphoma, a cancer of the lymphocytes which constitutes more than 70 different subtypes. Indeed, the treatment options continue to expand in lymphomas. Herein, we provide insights into lymphoma-specific clinical trials based on cytokine-induced killer (CIK) cell therapy and other pre-clinical lymphoma models where CIK cells have been used along with other synergetic tumor-targeting immune modules to improve their therapeutic potential. From a broader perspective, we will highlight that CIK cell therapy has potential, and in this rapidly evolving landscape of cancer therapies its optimization (as a personalized therapeutic approach) will be beneficial in lymphomas.
Graph drawing with spring embedders employs a V × V computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices within a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4× to 13× over a CUDA software implementation.
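The cutoff radius is what makes a search structure pay off: instead of all vertex pairs, each vertex only needs its neighbors within the radius. The sketch below shows radius-limited repulsion with a plain hash grid in Python; this is a stand-in for the acceleration structure to illustrate the idea, not the paper's RT-core/BVH implementation.

```python
import math
from collections import defaultdict

def repulsive_forces(positions, radius, strength=1.0):
    """Radius-limited repulsion for a 2D spring-embedder step.
    A uniform hash grid with cell size == radius ensures that all
    interacting pairs lie in a 3x3 cell neighborhood, avoiding the
    all-pairs V x V loop."""
    cell = radius
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    forces = [[0.0, 0.0] for _ in positions]
    for i, (x, y) in enumerate(positions):
        cx, cy = int(x // cell), int(y // cell)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid[(cx + dx, cy + dy)]:
                    if j == i:
                        continue
                    rx, ry = x - positions[j][0], y - positions[j][1]
                    d = math.hypot(rx, ry)
                    if 0.0 < d < radius:  # outside the radius: no effect
                        f = strength / (d * d)
                        forces[i][0] += f * rx / d
                        forces[i][1] += f * ry / d
    return forces
```

Two nearby vertices push each other apart, while a distant vertex receives no force at all, which is exactly the property that lets a spatial search structure (or hardware tree traversal) skip most of the pairwise work.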
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
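The point containment query at the heart of cell location in particle tracing can be illustrated without any GPU hardware: for an unstructured mesh of tetrahedra, a particle's position is tested against a candidate cell via barycentric coordinates. The sketch below is a plain CPU reference for that primitive; the article's contribution is reformulating many such queries as ray/BVH traversal so the dedicated hardware can answer them.

```python
import numpy as np

def point_in_tet(p, a, b, c, d, eps=1e-12):
    """Return True iff point p lies inside tetrahedron (a, b, c, d).
    Solves p = a + T @ lam with T = [b-a | c-a | d-a]; p is inside
    iff all four barycentric coordinates
    (1 - sum(lam), lam_0, lam_1, lam_2) are non-negative."""
    T = np.column_stack((b - a, c - a, d - a))
    try:
        lam = np.linalg.solve(T, p - a)
    except np.linalg.LinAlgError:
        return False  # degenerate (flat) tetrahedron
    bary = np.concatenate(([1.0 - lam.sum()], lam))
    return bool(np.all(bary >= -eps))

# Unit tetrahedron used as a demo cell.
a, b, c, d = np.zeros(3), np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]
```

A particle tracer evaluates this test for the cell reported by the spatial search; the barycentric coordinates additionally serve as interpolation weights for the vector field inside the cell.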
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate between gravity and radiation exposure on single-gene deletion/overexpressing strains' ability to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. 
The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
Error analysis in a high accuracy sampled-data velocity stabilising system using Volterra series
(2015)
Extremophiles are optimal models for experimentally addressing questions about the effects of cosmic radiation on biological systems. The resistance to high charge energy (HZE) particles, helium (He) ions and iron (Fe) ions (LET of 2.2 and 200 keV/µm, respectively, up to 1000 Gy), of spores from two thermophiles, Bacillus horneckiae SBP3 and Bacillus licheniformis T14, and two psychrotolerants, Bacillus sp. A34 and A43, was investigated. Spores survived He irradiation better, whereas they were more sensitive to Fe irradiation (up to 500 Gy), with spores from thermophiles being more resistant to irradiation than those from psychrotolerants. The surviving spores showed different germination kinetics, depending on the type/dose of irradiation and the germinant used. After exposure to 1000 Gy of He, D-glucose increased the lag time of thermophilic spores and induced germination of psychrotolerants, whereas L-alanine and L-valine increased the germination efficiency, except alanine for A43. FTIR spectra showed important modifications to the structural components of spores after Fe irradiation at 250 Gy, which could explain the block in spore germination, whereas minor changes were observed after He irradiation that could be related to the increased permeability of the inner membranes and alterations of receptor complex structures. Our results give new insights into the HZE resistance of extremophiles that are useful in different contexts, including astrobiology.
We present GEM-NI -- a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, which both allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback on the system features, showing that GEM-NI supports creative design work well.
We present a new interface for interactive comparisons of more than two alternative documents in the context of a generative design system that uses generative data-flow networks defined via directed acyclic graphs. To better show differences between such networks, we emphasize added, deleted, (un)changed nodes and edges. We emphasize differences in the output as well as parameters using highlighting and enable post-hoc merging of the state of a parameter across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability and keep visual clutter low. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that the number of similarities outweighs the differences, which motivates use of subtractive encoding. We ran a user study to evaluate the two main proposed difference visualization encodings and found that they are equally effective.
The increasing complexity of tasks that robots are required to execute demands higher reliability of robotic platforms. For this, it is crucial for robot developers to consider fault diagnosis. In this study, a general non-intrusive fault diagnosis system for robotic platforms is proposed. A mini-PC is non-intrusively attached to a robot and used to detect and diagnose faults. The health data and diagnoses produced by the mini-PC are then standardized and transmitted to a remote PC. A storage device is also attached to the mini-PC for logging of health data in case of loss of communication with the remote PC. A hybrid fault diagnosis method is compared to consistency-based diagnosis (CBD), and CBD is selected to be deployed on the system. The proposed system is modular and can be deployed on different robotic platforms with minimal setup.
This work presents the preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems, using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. Even more so, when considering distributed robotic systems, the task of fault detection and diagnosis becomes exponentially difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed, and with distributed robotic systems generating large amounts of data, machine learning becomes an attractive method of modelling, especially because of its high performance. However, with current-day methods such as artificial neural networks (ANNs), the issue of explainability arises, where learnt models lack the ability to give explainable reasons behind their decisions.
This paper presents current trends in methods for data collection from distributed systems; inductive logic programming (ILP), an explainable machine learning method; and fault detection and diagnosis.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected, for which an overview of availability is to be presented on a poster.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018 according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
Orešković and Porsdam Mann draw a distinction between ‘fast’ and ‘slow’ science. Whereas the latter involves rigorous and laborious adherence to the scientific method, the former represents the reality that much scientific work faces time pressures which at times force shortcuts. The distinction can be seen to operate in contemporary research into the coronavirus pandemic: whereas the development of vaccines and treatments usually requires years of meticulous laboratory work and several more years of clinical testing, the many millions suffering from the disease need a treatment now. However, by taking too many safeguards off the treatment discovery and testing pipelines, or by refusing to act in accordance with scientific advice, governments risk sacrificing the public’s trust not only in the government’s scientific bona fides but in the scientific process itself. This is a heavy price to pay, argue Orešković and Porsdam Mann, and point to evidence indicating that the success of Germany and Japan in combating COVID-19 can be traced to public trust in science and government, as well as scientifically-informed and respectful national leadership.
Microwave Kinetic Inductance Detectors have great potential for large, very sensitive detector arrays for use in, for example, ground- and space-based sub-mm imaging. Being intrinsically read out in the frequency domain, they are particularly suited for frequency domain multiplexing, allowing thousands of devices to be read out with one pair of coaxial cables. However, this moves the complexity of the detector from the cryogenics to the warm electronics. We present the use of a readout based on a Fast Fourier Transform Spectrometer, showing no deterioration of the noise performance compared to low-noise analog mixing while allowing high multiplexing ratios (>100). We present the use of this technique to multiplex 44 MKIDs; this and similar setups are now regularly being used in our array development. This development will help the realization of large cameras, particularly in the short term for ground-based astronomy.
Cytokine-induced killer (CIK) cells are an ex vivo expanded heterogeneous cell population with an enriched NK-T phenotype (CD3+CD56+). Due to the convenient and relatively inexpensive expansion capability, together with low incidence of graft versus host disease (GVHD) in allogeneic cancer patients, CIK cells are a promising candidate for immunotherapy. It is well known that natural killer group 2D (NKG2D) plays an important role in CIK cell-mediated antitumor activity; however, it remains unclear whether its engagement alone is sufficient or if it requires additional co-stimulatory signals to activate the CIK cells. Likewise, the role of 2B4 has not yet been identified in CIK cells. Herein, we investigated the individual and cumulative contribution of NKG2D and 2B4 in the activation of CIK cells. Our analysis suggests that (a) NKG2D (not 2B4) is implicated in CIK cell (especially CD3+CD56+ subset)-mediated cytotoxicity, IFN-γ secretion, E/T conjugate formation, and degranulation; (b) NKG2D alone is adequate enough to induce degranulation, IFN-γ secretion, and LFA-1 activation in CIK cells, while 2B4 only provides limited synergy with NKG2D (e.g., in LFA-1 activation); and (c) NKG2D was unable to costimulate CD3. Collectively, we conclude that NKG2D engagement alone suffices to activate CIK cells, thereby strengthening the idea that targeting the NKG2D axis is a promising approach to improve CIK cell therapy for cancer patients. Furthermore, CIK cells exhibit similarities to classical invariant natural killer (iNKT) cells with deficiencies in 2B4 stimulation and in the costimulation of CD3 with NKG2D. In addition, based on the current data, the divergence in receptor function between CIK cells and NK (or T) cells can be assumed, pointing to the possibility that molecular modifications (e.g., using chimeric antigen receptor technology) on CIK cells may need to be customized and optimized to maximize their functional potential.
Structure-activity relationships of thiostrepton derivatives: implications for rational drug design
(2014)
Healing of large bone defects requires implants or scaffolds that provide structural guidance for cell growth, differentiation, and vascularization. In the present work, an agarose-hydroxyapatite composite scaffold was developed that acts not only as a 3D matrix, but also as a release system. Hydroxyapatite (HA) was incorporated into the agarose gels in situ in various ratios by a simple procedure consisting of precipitation, cooling, washing, and drying. The resulting gels were characterized regarding composition, porosity, mechanical properties, and biocompatibility. A pure phase of carbonated HA was identified in the scaffolds, which had pore sizes of up to several hundred micrometers. Mechanical testing revealed elastic moduli of up to 2.8 MPa for lyophilized composites. MTT testing on Lw35 human mesenchymal stem cells (hMSCs) and osteosarcoma MG-63 cells proved the biocompatibility of the scaffolds. Furthermore, scaffolds were loaded with model drug compounds for guided hMSC differentiation. Different release kinetic models were evaluated for adenosine 5′-triphosphate (ATP) and suramin, and data showed a sustained release behavior over four days.
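The comparison of release kinetic models behind such a study can be sketched with one-parameter least-squares fits. The data below are synthetic (a made-up Higuchi-like profile standing in for the measured ATP/suramin curves), so only the fitting procedure, not the numbers, reflects the study.

```python
import numpy as np

def fit_release(t, q, basis):
    """One-parameter least-squares fit q ~ k * basis(t).
    basis(t) = t gives zero-order release; basis(t) = sqrt(t) gives
    the Higuchi (diffusion-controlled) model. Returns the rate
    constant k and the residual sum of squares."""
    x = basis(t)
    k = float(np.sum(x * q) / np.sum(x * x))
    rss = float(np.sum((q - k * x) ** 2))
    return k, rss

# Synthetic cumulative fraction released over four days (hypothetical,
# Higuchi-shaped); real curves would come from the release experiments.
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])  # days
q = 0.45 * np.sqrt(t)

k0, rss0 = fit_release(t, q, lambda t: t)   # zero-order model
kh, rssh = fit_release(t, q, np.sqrt)       # Higuchi model
```

Comparing the residuals of the candidate models is the usual way to decide which kinetic law best describes a sustained-release profile; here the Higuchi fit wins by construction of the synthetic data.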
Recent approaches in scaffold engineering for bone defects feature hybrid hydrogels made of a polymeric network (retains water and provides light and porous structures) and inorganic ceramics (add mechanical strength and improve cell-adhesion). Innovative scaffold materials should also induce bone tissue formation and incorporation of stem cells (osteogenic differentiation) and/or growth factors (inducing/supporting differentiation). Recently, purinergic P2X and P2Y receptors have been found to significantly influence the osteogenic differentiation process of human mesenchymal stem cells (hMSC). (1) Aim of this work is to develop polysaccharide (PS) composites to be used as scaffolds containing complementary receptor ligands to enable guided stem cell differentiation towards bone formation.
Bone tissue engineering is an ever-changing, rapidly evolving, and highly interdisciplinary field of study, where scientists try to mimic natural bone structure as closely as possible in order to facilitate bone healing. New insights from cell biology, specifically from mesenchymal stem cell differentiation and signaling, lead to new approaches in bone regeneration. Novel scaffold and drug release materials based on polysaccharides gain increasing attention due to their wide availability and good biocompatibility to be used as hydrogels and/or hybrid components for drug release and tissue engineering. This article reviews the current state of the art, recent developments, and future perspectives in polysaccharide-based systems used for bone regeneration.
Renewable resources are gaining increasing interest as a source for environmentally benign biomaterials, such as drug encapsulation/release compounds, and scaffolds for tissue engineering in regenerative medicine. Since lignin is the second most abundant natural polymer, interest in its valorization for biomedical utilization is rapidly growing. Depending on its resource and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release as well as lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Polyether- and polyether/ester-based TPUs (thermoplastic polyurethanes) were investigated with wide-angle XRD (X-ray diffraction) and SAXS (small-angle X-ray scattering). Furthermore, SAXS measurements were performed in the temperature range of 30 °C to 130 °C. Polyether-based polymers exhibit only one broad diffraction signal in the 2θ region of 15° to 25°. In the case of polyurethanes with ether/ester modification, the broad diffraction signal is accompanied by small sharp diffraction signals. SAXS measurements of the polymers reveal the size and shape of their crystalline zones. Between 30 °C and 130 °C the size of the crystalline zones changes significantly; it decreases in most of the investigated TPUs. In the case of Desmopan 9365D, an increase of the particle size was observed.
Temperature Dependency of Morphological Structure of Thermoplastic Polyurethane using WAXS and SAXS
(2016)
Polyurethanes achieved an exceptional position among the most important organic polymers due to their highly specific technological application areas. Polyurethanes represent a polyaddition product of isocyanate and diols. In terms of their enormous industrial importance, the chemistry of isocyanates has been extensively studied.
Approximately 45% of global greenhouse gas emissions are caused by the construction and use of buildings. Thermal insulation of buildings is a well-known strategy to improve their energy efficiency in the current context of climate change. The development of renewable insulation materials can overcome the drawbacks of widely used insulation systems based on polystyrene or mineral wool. This study analyzes the sustainability and thermal conductivity of new insulation materials made of Miscanthus x giganteus fibers, foaming agents, and an alkali-activated fly ash binder. Life cycle assessments (LCA) are necessary to benchmark the environmental impacts of new formulations of geopolymer-based insulation materials. The global warming potential (GWP) of the product is primarily determined by the main binder component, sodium silicate, whose CO2 emissions depend on local production, transportation, and energy consumption. Results published in recent years vary widely, from 0.3 kg to 3.3 kg CO2-eq. per kg. The overall GWP of the insulation system based on Miscanthus fibers, with properties meeting current thermal insulation regulations, achieves up to 95% savings in CO2 emissions compared to conventional systems. Carbon neutrality can be achieved through formulations that balance raw materials with carbon dioxide emissions against renewable materials with negative GWP.
The clear-sky radiative effect of aerosol-radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. Considered are measurements of the short-wave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are the Modified MAC model (MMAC), the Meteorological Radiation Model (MRM) v6.1, the Meteorological–Statistical solar radiation model (METSTAT), the European Solar Radiation Atlas (ESRA), Heliosat-1, the Center for Environment and Man solar radiation model (CEM), and the simplified Solis model. The definition of aerosol and atmospheric characteristics of the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated at −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the REari (radiative effect of aerosol–radiation interactions) at the surface for the 25 DWD stations is −12.8 ± 5 W m−2 as the average over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterization, the annual cycle of the aerosol radiation effect cannot be reproduced. Out of this set of clear-sky models, the largest level of agreement is shown by the ESRA and MRM v6.1 models.
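The agreement metrics quoted above (RMSE and the Pearson correlation between clear-sky-model retrievals and radiative transfer simulations) can be sketched as follows; the two arrays are hypothetical placeholder values, not data from the study.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two series."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def pearson_r(a, b):
    """Pearson correlation coefficient between two series."""
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical daily-mean surface REari values (W m^-2) from two methods
model = np.array([-10.0, -12.5, -14.0, -9.5, -13.0])
simulation = np.array([-9.0, -11.0, -15.0, -10.5, -12.0])

print(round(rmse(model, simulation), 2))       # spread of the disagreement
print(round(pearson_r(model, simulation), 2))  # strength of co-variation
```

A low RMSE together with a high correlation, as reported above, indicates that the clear-sky retrievals track both the magnitude and the temporal variation of the simulated radiative effect.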
The Project SupraMetall: Towards Commercial Fabrication of High-Temperature Superconducting Tapes
(2014)
The development of mobile robotic systems is a demanding task in terms of complexity, required resources, and the skills needed in multiple fields such as software development, artificial intelligence, mechanical design, electrical engineering, signal processing, sensor technology and control theory. This holds true particularly for soccer-playing robots, where additional aspects like high dynamics, cooperation and high physical stress have to be dealt with. In robot competitions such as RoboCup, additional skills in the domains of team, project and knowledge management are also important.
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (an FPGA) synchronizes and combines several graphics inputs. The application area is computer graphics, especially the rendering of large 3D models, which is a compute-intensive task. Therefore, complex scenes are generated on parallel systems and merged to give the requested output image. So far, the transport of intermediate results has often been done over a local area network. However, as this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals within an FPGA.
Improving the entry phase of studies supports students at a decisive stage of their university education. Implementing improvements is a change process and can only be successful if the relevant stakeholders are addressed and convinced. In the Teaching Quality Pact project described here, evaluation data is used as a means to discuss the situation of the study programs within the university. As these discussions were based on empirical data rather than on opinion, it was possible to achieve an open discussion about the measures to be implemented. This open discussion is maintained throughout the project as the results of the measures taken are analyzed.
Low power dissipation is a current topic in digital design, and therefore, it should be covered in a state-of-the-art electrical engineering curriculum. This paper describes how low-power design can be addressed within a digital design course. Doing so would be beneficial for both topics because low-power design is not detached from the systems perspective, and the digital design course would be enriched by references to current challenges and applications. Thus, the presented course should serve as an example of how a course can be developed to also teach students about sustainable engineering.
(1) Background: the potency of drugs that interfere with glucose metabolism, i.e., inhibitors of glucose transporters (GLUT) and of nicotinamide phosphoribosyltransferase (NAMPT), was analyzed in neuroendocrine tumor (NET; BON-1 and QPG-1 cells) and small cell lung cancer (SCLC; GLC-2 and GLC-36 cells) cell lines. (2) Methods: the proliferation and survival rate of tumor cells was significantly affected by the GLUT inhibitors fasentin and WZB1127, as well as by the NAMPT inhibitors GMX1778 and STF-31. (3) Results: none of the NET cell lines treated with NAMPT inhibitors could be rescued with nicotinic acid (usage of the Preiss–Handler salvage pathway), although NAPRT expression could be detected in two NET cell lines. We finally analyzed the specificity of GMX1778 and STF-31 in NET cells in glucose uptake experiments. As previously shown for STF-31 in a panel of NET-excluding tumor cell lines, both drugs specifically inhibited glucose uptake at higher (50 μM), but not at lower (5 μM) concentrations. (4) Conclusions: our data suggest that GLUT and especially NAMPT inhibitors are potential candidates for the treatment of NET tumors.
In thyroid carcinoma cells, the soluble β-galactoside-specific lectin galectin-3 is extra- and intracellularly expressed and plays a significant role in thyroid cancer diagnosis. The functional relevance of this molecule, particularly in its extracellular environment, however, warrants further elucidation. To gain insight into this topic, the present study characterized principal functional properties of galectin-3 in three commonly used thyroid carcinoma cell lines (BCPAP, Cal62 and FTC133) that express the molecule intra- and extracellularly. Cell-intrinsic galectin-3 harbors a functional carbohydrate recognition domain, as determined by affinity purification. Moreover, cell surface-expressed galectin-3 can be partially removed by treatment with lactose or asialofetuin, but not with sucrose. Thyroid carcinoma cells adhere to substrate-bound galectin-3 in a β-galactoside-specific manner, whereby only cell adhesion, but not cell migration, is promoted. Thus, thyroid tumor cells harbor functionally active galectin-3 that, inter alia, specifically interacts with cell surface-expressed molecular ligands in a β-galactoside-dependent manner, whereby the molecule can at least interfere with cell adhesion. The modulation of the galectin-3 expression level or its ligands in such tumor cells could be of therapeutic interest and needs further experimental clarification.
After replanting apple (Malus domestica Borkh.) on the same site, severe growth suppression and a decline in yield and fruit quality are observed in all apple-producing areas worldwide. The causes of this complex phenomenon, called apple replant disease (ARD), are still poorly understood, in part due to inconsistencies in terms and methodologies. We therefore suggest the following definition: ARD describes a harmfully disturbed physiological and morphological reaction of apple plants to soils that faced alterations in their (micro-)biome due to previous apple cultures. The underlying interactions likely have multiple causes that extend beyond common analytical tools in microbial ecology. They are influenced by soil properties, faunal vectors, and trophic cascades, with genotype-specific effects on plant secondary metabolism, particularly phytoalexin biosynthesis. Yet, emerging tools make it possible to unravel the soil and rhizosphere (micro-)biome, to characterize alterations of habitat quality, and to decipher the plant reactions, thereby providing deep insights into the reactions taking place at the root–rhizosphere interface. Counteractions are suggested, taking into account that culture management should emphasize improving soil microbial and faunal diversity as well as habitat quality rather than focusing on soil disinfection.
Over the past two decades social protection has gained importance at the international level and at the national level of many low- and middle-income countries. Although reforms in this sector are a global phenomenon, they differ from country to country. Traditional efforts to explain these differences focus on domestic factors, yet it remains unclear how international influences and interdependencies contribute to policy change. The study 'International Policy Learning and Policy Change' aims to provide an answer to this question by focusing on 'soft governance' via horizontal processes, i.e. processes between equal actors. The study was carried out in two parts. While Part I assessed the current state of the art in the relevant research fields, Part II used the findings from Part I to conduct a survey that analyses the role of policy networks.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choices of collision model and stencil are crucial for adequate solutions in under-resolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
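The CFL-free time stepping claimed above comes from the semi-Lagrangian viewpoint: each grid point pulls its new value from an upstream departure point by interpolation instead of integrating fluxes in time. A minimal 1D advection sketch (linear interpolation standing in for the SLLBM's higher-order polynomials; all names and values are illustrative) shows a stable step at a CFL number of 8:

```python
import numpy as np

def semi_lagrangian_step(f, c, dt, dx):
    """Advect f at speed c by interpolating at departure points x - c*dt.

    Periodic domain; linear interpolation stands in for the
    higher-order polynomials used in the SLLBM.
    """
    n = len(f)
    x = np.arange(n) * dx
    x_dep = (x - c * dt) % (n * dx)          # upstream departure points
    return np.interp(x_dep, x, f, period=n * dx)

n, dx = 64, 1.0
x = np.arange(n) * dx
f0 = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)  # Gaussian pulse

# One step with CFL number c*dt/dx = 8 -- far above the explicit limit of ~1
f1 = semi_lagrangian_step(f0, c=1.0, dt=8.0, dx=dx)
```

Because the shift c·dt/dx is an integer here, the result coincides exactly with the initial pulse moved by eight cells; for non-integer shifts the interpolation order controls the numerical dissipation, which is why higher-order polynomials matter in the SLLBM.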
This work thoroughly investigates a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows. In contrast to other LBMs for compressible flows, the vertices are organized in cells, and interpolation polynomials up to fourth order are used to attain the off-vertex distribution function values. Differing from the recently introduced Particles on Demand (PoD) method, the present method operates in a static, non-moving reference frame. Nevertheless, the SLLBM in the present formulation supports supersonic flows and exhibits a high degree of Galilean invariance. The SLLBM solver allows for an independent time step size due to the integration along characteristics and for the use of unusual velocity sets, like the D2Q25, which is constructed from the roots of the fifth-order Hermite polynomial. The properties of the present model are shown in diverse example simulations of a two-dimensional Taylor-Green vortex, a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
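As stated above, the D2Q25 abscissae are the roots of the fifth-order Hermite polynomial. A short sketch (assuming the probabilists' convention He5, as implemented in NumPy's hermite_e module) checks the property the velocity set inherits: the resulting five-point rule reproduces the moments of the Gaussian weight exactly up to degree 2·5 − 1 = 9.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Five nodes/weights from the roots of He_5 (probabilists' Hermite)
nodes, weights = hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize against exp(-x^2/2)

def gauss_moment(n):
    """Moments of the standard Gaussian: 0 for odd n, (n-1)!! for even n."""
    return 0.0 if n % 2 else float(np.prod(np.arange(n - 1, 0, -2), dtype=float))

# Quadrature is exact for polynomials up to degree 2*5 - 1 = 9
for n in range(10):
    approx = float(np.sum(weights * nodes ** n))
    assert abs(approx - gauss_moment(n)) < 1e-9
```

Exact reproduction of low-order Gaussian moments is what guarantees that the discrete-velocity equilibrium recovers the correct hydrodynamic moments; a 2D tensor product of these five nodes yields the 25 discrete velocities of the D2Q25 set.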
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations is increased by velocity sets derived from cubature rules, i.e. multivariate quadratures that have not been produced by the Gauss product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the D2Q25 derived from the Gauss product rule. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma = {0.5; 1.0; 1.5; 2.0}, on-lattice simulations with the velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.