H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Angewandte Naturwissenschaften (493)
- Fachbereich Informatik (221)
- Institut für funktionale Gen-Analytik (IFGA) (179)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (159)
- Fachbereich Ingenieurwissenschaften und Kommunikation (140)
- Fachbereich Wirtschaftswissenschaften (136)
- Institute of Visual Computing (IVC) (57)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (52)
- Institut für Sicherheitsforschung (ISF) (39)
- Institut für Verbraucherinformatik (IVI) (27)
Document Type
- Article (1021)
Year of publication
Language
- English (1021)
Keywords
- cytokine-induced killer cells (9)
- GC/MS (8)
- immunotherapy (8)
- ISM: molecules (6)
- virtual reality (6)
- Africa (5)
- Gene expression (5)
- Lignin (5)
- Performance (5)
- Virtual reality (5)
In this paper, the performance of Frequency Modulated Chaotic On-Off Keying (FM-COOK) in AWGN, Rayleigh and Rician fading channels is evaluated. The simulation results show that incorporating FM modulation with COOK improves the BER for SNR values below 10 dB in the AWGN case and below 6 dB in the Rayleigh and Rician fading channels.
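The BER-versus-SNR comparison described above can be illustrated with a minimal Monte Carlo sketch for plain On-Off Keying over AWGN. This is an assumption-laden simplification: the FM modulation and chaotic carrier of FM-COOK, and the Rayleigh/Rician fading cases, are not modelled here.

```python
import random
import math

def ook_ber(snr_db, n_bits=20000, seed=1):
    """Monte Carlo BER estimate for plain On-Off Keying over AWGN.

    Illustrative only: FM-COOK adds FM modulation and a chaotic
    carrier, which are not modelled in this sketch.
    """
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)          # linear SNR
    sigma = math.sqrt(1 / (2 * snr))   # noise std for unit signal energy
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        rx = bit + rng.gauss(0, sigma)  # '1' sends a pulse, '0' sends nothing
        errors += (rx > 0.5) != bit     # mid-point threshold detector
    return errors / n_bits
```

Sweeping `snr_db` over a range reproduces the qualitative behavior the abstract reports: the estimated BER drops sharply as SNR grows.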
For many different applications, current information about the bandwidth-related metrics of the utilized connection is very useful, as these metrics directly impact the performance of throughput-sensitive applications such as streaming servers, IPTV and VoIP applications. In the literature, several tools have been proposed to estimate major bandwidth-related metrics such as capacity, available bandwidth and achievable throughput. The vast majority of these tools fall into one of the Packet Pair (PP), Variable Packet Size (VPS), Self-Loading of Periodic Streams (SLoPS) or Throughput approaches. In this study, seven popular bandwidth estimation tools belonging to these four well-known estimation techniques, namely nettimer, pathrate, pathchar, pchar, clink, pathload and iperf, are presented and experimentally evaluated in a controlled testbed environment. Unlike other studies in the literature, all tools have been uniformly classified and evaluated according to an objective and sophisticated classification and evaluation scheme. The performance comparison of the tools incorporates not only the estimation accuracy but also the probing time and the overhead incurred.
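The Packet Pair principle mentioned above can be sketched in a few lines: two back-to-back packets leave the bottleneck link spaced by the time that link needs to serialize one packet, so the capacity follows from the measured gap. The single-sample function below is a simplification; real PP tools such as pathrate filter many samples to suppress cross-traffic noise.

```python
def packet_pair_capacity(packet_size_bytes, dispersion_s):
    """Estimate bottleneck capacity (bit/s) from packet-pair dispersion.

    Packet Pair idea: C = L / delta, where L is the packet size in bits
    and delta the inter-arrival gap of two back-to-back probe packets.
    Assumes a single clean measurement without cross traffic.
    """
    return packet_size_bytes * 8 / dispersion_s

# Example: 1500-byte packets arriving 120 microseconds apart
# suggest a ~100 Mbit/s bottleneck link.
capacity = packet_pair_capacity(1500, 120e-6)
```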
YAWL (Yet Another Workflow Language) is an open source Business Process Management System, first released in 2003. YAWL grew out of a university research environment to become a unique system that has been deployed worldwide as a laboratory environment for research in Business Process Management and as a productive system in other scientific domains.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants rated the HeadJoystick as potentially more physically fatiguing after long use. Overall, participants felt more engaged with HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
As competition for tourists becomes more global, understanding and accommodating the needs of international tourists, with their different cultural backgrounds, has become increasingly important. This study highlights the variations in tourist industry service failures, particularly as they relate to different cultures. Specifically, service failures experienced by Japanese and German tourists in the U.S. were categorized using the Critical Incident Technique (CIT). The results were compared with earlier studies of service failures experienced by American consumers in the tourist industry. The sample consists of 128 Japanese and 94 "Germanic" (German, Austrian, Swiss-German) respondents. Both the Japanese and the German respondents rated "inappropriate employee behavior" as the most significant category of service failure. More than half of these respondents said that, because of the failure, they would avoid the offending U.S. business, a much stronger response than an American sample had reported in an earlier study. The implications for managers and researchers are discussed.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques to the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information that can be derived from the different sensors, using both artificial neural networks and tree-based machine learning methods. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores of up to 79 % using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved only marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances.
We found that this dataset attribute negatively affects the models' performances for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
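The weighted precision metric reported above can be made explicit with a small sketch. For brevity this assumes the single-label case (the benchmark itself is multi-label); it mirrors scikit-learn's `precision_score(average='weighted')`, where each class's precision is weighted by its support, so imbalanced classes such as scarce tree species contribute proportionally less.

```python
from collections import Counter

def weighted_precision(y_true, y_pred):
    """Support-weighted average of per-class precision.

    Each class contributes its precision weighted by how often it
    occurs in the reference labels (its support).
    """
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n_cls in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t == cls)
        predicted = sum(1 for p in y_pred if p == cls)
        precision = tp / predicted if predicted else 0.0
        score += (n_cls / total) * precision
    return score
```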
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs) (such as profit and loss) and the financial quantities linked to them. The quantity linked to a KPI in a table might not match the quantity of the similarly described KPI in the text. Auditors spend substantial time manually auditing these financial mistakes, a process called consistency checking. In contrast to existing work, this paper attempts to automate this task with the help of transformer-based models. For consistency checking, it is essential that the table KPIs' embeddings encode both the semantic knowledge of the KPIs and the structural knowledge of the table. Therefore, this paper proposes a pipeline that uses a tabular model to obtain the table KPIs' embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether these KPIs are identical. The pipeline is evaluated on financial documents in the German language, and a comparative analysis of the quality of the cell embeddings from the three tabular models is also presented. In the evaluation, the experiment that used English-translated text and table KPIs and the Tabbie model to generate the table KPIs' embeddings achieved an accuracy of 72.81% on the consistency checking task, outperforming the benchmark and the other tabular models.
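The final matching step of such a pipeline can be sketched as comparing the two KPI embeddings. Both the cosine-similarity measure and the threshold below are assumptions for illustration; the paper's pipeline uses trained tabular models to produce the embeddings and a learned decision rather than a fixed cutoff.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def kpis_consistent(table_kpi_emb, text_kpi_emb, threshold=0.9):
    """Flag a table/text KPI pair as identical when their embeddings agree.

    `threshold` is a hypothetical cutoff chosen for this sketch.
    """
    return cosine_similarity(table_kpi_emb, text_kpi_emb) >= threshold
```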
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?
Software testing in a web services environment faces different challenges compared with testing in traditional software environments. Regression testing activities are triggered by software changes or evolutions. In web services, evolution is not a choice for service clients: they always have to use the current, updated version of the software. In addition, test execution or invocation is expensive in web services, and hence providing algorithms to optimize test case generation and execution is vital. For this environment, we proposed several approaches for test case selection in web service regression testing. Testing in this new environment should evolve to become part of the service contract. Service providers should provide data or usage sessions that can help service clients reduce testing expenses by optimizing the selected and executed test cases.
The epithelial sodium channel (ENaC) is a heterotrimeric ion channel that plays a key role in sodium and water homeostasis in tetrapod vertebrates. In the aldosterone-sensitive distal nephron, hormonally controlled ENaC expression matches dietary sodium intake to its excretion. Furthermore, ENaC mediates sodium absorption across the epithelia of the colon, sweat ducts, reproductive tract, and lung. ENaC is a constitutively active ion channel and its expression, membrane abundance, and open probability (PO) are controlled by multiple intracellular and extracellular mediators and mechanisms [9]. Aberrant ENaC regulation is associated with severe human diseases, including hypertension, cystic fibrosis, pulmonary edema, pseudohypoaldosteronism type 1, and nephrotic syndrome [9].
Lignocellulose feedstock (LCF) provides a sustainable source of components to produce bioenergy, biofuel, and novel biomaterials. Besides hard and soft wood, so-called low-input plants such as Miscanthus are interesting crops to be investigated as potential feedstock for the second-generation biorefinery. The status quo regarding the availability and composition of different plants, including grasses and fast-growing trees (i.e., Miscanthus, Paulownia), is reviewed here. The second focus of this review is the potential of multivariate data processing to be used for biomass analysis and quality control. Experimental data obtained by spectroscopic methods, such as nuclear magnetic resonance (NMR) and Fourier-transform infrared spectroscopy (FTIR), can be processed using computational techniques to characterize the 3D structure and energetic properties of the feedstock building blocks, including complex linkages. Here, we provide a brief summary of recently reported experimental data for structural analysis of LCF biomasses, and give our perspectives on the role of chemometrics in understanding and elucidating LCF composition and lignin 3D structure.
Antioxidant activity is an essential aspect of oxygen-sensitive merchandise and goods, such as food and corresponding packaging, cosmetics, and biomedicine. Technical lignin has not yet been applied as a natural antioxidant, mainly due to its complex heterogeneous structure and polydispersity. This report presents antioxidant capacity studies completed using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. The influence of purification on lignin structure and activity was investigated. The purification procedure showed that double-fold selective extraction is the most efficient (confirmed by ultraviolet-visible (UV/Vis), Fourier transform infrared (FTIR), heteronuclear single quantum coherence (HSQC) and 31P nuclear magnetic resonance spectroscopy, size exclusion chromatography, and X-ray diffraction), resulting in fractions of very narrow polydispersity (3.2–1.6) with up to four distinct absorption bands in UV/Vis spectroscopy. According to differential scanning calorimetry measurements, the glass transition temperature increased from 123 to 185 °C for the purest fraction. Antioxidant capacity is discussed with regard to the biomass source, pulping process, and degree of purification. Lignins obtained from industrial black liquor are compared with beech wood samples: the antioxidant activity (DPPH inhibition) of the kraft lignin fractions was 62–68%, whereas beech and spruce/pine-mixed lignin showed values of 42% and 64%, respectively. The total phenol content (TPC) of the isolated kraft lignin fractions varied between 26 and 35%, whereas that of beech and spruce/pine lignin was 33% and 34%, respectively. Storage decreased the TPC values but increased the DPPH inhibition.
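The DPPH inhibition percentages quoted above follow the standard assay formula, which can be written out directly. The absorbance values in the example are hypothetical; the report does not list its raw readings.

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging (inhibition) from absorbance.

    Standard DPPH-assay relation:
        inhibition % = (A_control - A_sample) / A_control * 100,
    with absorbance typically read at 517 nm.
    """
    return (a_control - a_sample) / a_control * 100

# Hypothetical example: a sample absorbance of 0.32 against a control
# of 1.00 corresponds to 68 % inhibition, the upper end reported for
# the kraft lignin fractions.
inhibition = dpph_inhibition(1.00, 0.32)
```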
The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The polyphenolic structure of lignin, in addition to the presence of O-containing functional groups, is potentially responsible for these activities. This study used DPPH assays to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. The scavenging activity (SA) of both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems was affected by the percentage of added lignin: the 5% addition showed the highest activity and the 30% addition the lowest. Both the scavenging and antimicrobial activities depend on the biomass source, showing the following trend: organosolv of softwood > kraft of softwood > organosolv of grass. Testing the antimicrobial activities of lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C as well as at low temperatures (0-7 °C). Purification of kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. The lignin release in the produced films affected the activity positively, and the chitosan addition enhanced the activity even more for both Gram-positive and Gram-negative bacteria. Testing the films against spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both B. thermosphacta and P. fluorescens.
Once aberrantly activated, the Wnt/β-catenin pathway may result in uncontrolled proliferation and eventually cancer. Efforts to counter and inhibit this pathway are mainly directed against β-catenin, as it serves a role in the cytoplasm and the nucleus. In addition, specially generated lymphocytes are recruited for the purpose of treating liver cancer. Peripheral blood mononuclear lymphocytes are expanded by the timely addition of interferon-γ, interleukin (IL)-1β, IL-2 and anti-cluster of differentiation 3 antibody. The resulting cells are called cytokine-induced killer (CIK) cells. The present study utilised these cells and combined them with drugs inhibiting the Wnt pathway in order to examine whether this resulted in an improvement in the killing ability of CIK cells against liver cancer cells. Ethacrynic acid (EA) and ciclopirox olamine (CPX) were determined to be suitable candidate drugs, as established by previous studies. The drugs were administered on their own and combined with CIK cells, and a cell viability assay was then performed. The results suggest that EA-treated cells underwent apoptosis and were significantly affected compared with untreated cells. Unlike EA, CPX killed normal and cancerous cells even at low concentrations. When EA was combined with CIK cells, the potency of killing was increased and a greater number of cells died, demonstrating a synergistic action. In summary, EA may be used as an anti-hepatocellular carcinoma drug, while CPX possesses a high toxicity to cancerous as well as to normal cells. It is proposed that EA be integrated into present therapeutic methods for cancer.
Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment
(2015)
This paper presents the preliminary results of the Socialist Republic of Vietnam country case study conducted as part of the research project Sustainable Labour Migration implemented by the Bonn-Rhein-Sieg University of Applied Sciences. The project focuses on stakeholder perspectives on benefits for countries of origin and the sustainability of different transnational skill partnership schemes. Existing and ongoing small-scale initiatives indicate that opportunities exist for all three types of labour mobility pathways, from recruiting youth for apprenticeships and subsequent skilled work, to recruitment and recognition of skilled professionals' certificates for direct work contracts, to initial vocational education and training programs in a dual-track approach. While the latter has the highest potential to be more beneficial than other approaches, pursuing and supporting the scaling up of all three pathways in parallel will have additional, mutually reinforcing and supporting effects. The potential for benefits over and above those already realised by existing skill partnerships appears high, especially considering the favourable framework conditions specific to the long-standing German-Vietnamese relationship. If the potential of well-managed skill partnerships were realised, such sustainable models of skilled labour migration could serve as a unique selling point in the international competition for skilled labour.
SLC6A14 (ATB0,+) is unique among SLC proteins in its ability to transport 18 of the 20 proteinogenic (dipolar and cationic) amino acids and naturally occurring and synthetic analogues (including anti-viral prodrugs and nitric oxide synthase (NOS) inhibitors). SLC6A14 mediates amino acid uptake in multiple cell types where increased expression is associated with pathophysiological conditions including some cancers. Here, we investigated how a key position within the core LeuT-fold structure of SLC6A14 influences substrate specificity. Homology modelling and sequence analysis identified the transmembrane domain 3 residue V128 as equivalent to a position known to influence substrate specificity in distantly related SLC36 and SLC38 amino acid transporters. SLC6A14, with and without V128 mutations, was heterologously expressed and function determined by radiotracer solute uptake and electrophysiological measurement of transporter-associated current. Substituting the amino acid residue occupying the SLC6A14 128 position modified the binding pocket environment and selectively disrupted transport of cationic (but not dipolar) amino acids and related NOS inhibitors. By understanding the molecular basis of amino acid transporter substrate specificity we can improve knowledge of how this multi-functional transporter can be targeted and how the LeuT-fold facilitates such diversity in function among the SLC6 family and other SLC amino acid transporters.
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, captured at the places of formation, transferred under high pressure in a liquid or supercritical state and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. Nonetheless, the transport equations for different fluids are essentially the same, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flap traps on the stability of fluid transport simulations is studied. It is shown that modeling these elements can lead to instabilities in both stationary and dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality, also for dynamic simulations, is demonstrated in a number of numerical experiments.
Farming communities confronted with climate change adopt formal and informal adaptation strategies to mitigate the effects of climate change. While the environmental and social effects of climate change are well documented, there is still a dearth of literature on girl-child marriage (formal marriage or informal union between a child under the age of 18 and an adult or another child) as a response to the effects of climate change. In this research, we ask if girl-child marriage is promoted as a social protection mechanism first, rather than as simply a response to climate-induced poverty. We use qualitative semi-structured interviews and focus group discussions to explore this question in a rural farming community in Northern Ghana. Our findings reveal that climate change shocks result in poverty and compel farmers to marry off their young daughters. The unmarried girl-child is perceived as an ‘extra mouth to feed’, a liability whose marriage becomes a strategy for protecting the family, the family’s reputation, and the girl child. The emphasis in girl-child marriage is not on the girl-child as an individual but on the family as a group. Hence, what is good for the family is assumed to be in the best interest of the girl-child. We place our analysis at the intersection of climate change, social protection, and the incidence of girl-child marriages. We argue that understanding this link is crucial and can contribute significantly to our knowledge of girl-child marriage as well as our ability to address this in Sub-Saharan Africa.
Among the celestial bodies in the Solar System, Mars currently represents the main target for the search for life beyond Earth. However, its surface is constantly exposed to high doses of cosmic rays (CRs) that may pose a threat to any biological system. For this reason, investigations into the limits of the resistance of life to space-relevant radiation are fundamental to speculation on the chance of finding extraterrestrial organisms on Mars. In the present work, as part of the STARLIFE project, the responses of dried colonies of the black fungus Cryomyces antarcticus Culture Collection of Fungi from Extreme Environments (CCFEE) 515 to exposure to accelerated iron ions (LET: 200 keV/μm), which mimic part of the CR spectrum, were investigated. Samples were exposed to the iron ions at doses of up to 1000 Gy in the presence of Martian regolith analogues. Our results showed an extraordinary resistance of the fungus in terms of survival, recovery of metabolic activity and DNA integrity. These experiments give new insights into the survival probability of possible terrestrial-like life forms on the present or past Martian surface and in shallow subsurface environments.
A biodegradable blend of PBAT (poly(butylene adipate-co-terephthalate)) and PLA (poly(lactic acid)) for blown film extrusion was modified with four multi-functional chain extending cross-linkers (CECL). The anisotropic morphology introduced during film blowing affects the degradation processes. Since two CECL, tris(2,4-di-tert-butylphenyl)phosphite (V1) and 1,3-phenylenebisoxazoline (V2), increased the melt flow rate (MFR), while the other two, aromatic polycarbodiimide (V3) and poly(4,4-dicyclohexylmethanecarbodiimide) (V4), reduced it, their compost (bio-)disintegration behavior was investigated. It was significantly altered with respect to the unmodified reference blend (REF). The disintegration behavior at 30 and 60 °C was investigated by determining changes in mass, Young's moduli, tensile strengths, elongations at break and thermal properties. In order to quantify the disintegration behavior, the hole areas of blown films were evaluated after compost storage at 60 °C to calculate the kinetics of the time-dependent degrees of disintegration. The kinetic model of disintegration provides two parameters, initiation time and disintegration time, which quantify the effects of the CECL on the disintegration behavior of the PBAT/PLA compound. Differential scanning calorimetry (DSC) revealed a pronounced annealing effect during storage in compost at 30 °C, as well as an additional step-like increase in the heat flow at 75 °C after storage at 60 °C. The disintegration thus consists of processes which affect the amorphous and crystalline phases of PBAT in different ways and cannot be explained by hydrolytic chain degradation alone. Furthermore, gel permeation chromatography (GPC) revealed molecular degradation only at 60 °C for REF and V1 after 7 days of compost storage. The observed losses of mass and cross-sectional area therefore seem to be attributable more to mechanical decay than to molecular degradation for the given compost storage times.
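The two-parameter kinetic model mentioned above (initiation time, disintegration time) can be sketched as a delayed-onset curve. The linear rise after onset is an assumption made here for illustration; the paper fits its own kinetic model to the measured hole areas of the blown films.

```python
def degree_of_disintegration(t, t_init, t_dis):
    """Two-parameter disintegration model (illustrative sketch).

    Nothing happens before the initiation time t_init; afterwards the
    degree of disintegration rises over the disintegration time t_dis
    until the film is fully disintegrated (degree = 1.0).
    """
    if t <= t_init:
        return 0.0
    return min((t - t_init) / t_dis, 1.0)
```

With hypothetical parameters, e.g. `t_init = 5` days and `t_dis = 10` days, the curve stays at zero for the first five days, reaches 50 % after ten days, and saturates at full disintegration from day fifteen onward.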
Process-induced changes in the morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends modified with various multifunctional chain-extending cross-linkers (CECLs) are presented. The morphology of unmodified and modified films produced with blown film extrusion is examined in the extrusion direction (ED) and the transverse direction (TD). While FTIR analysis showed only small peak shifts, indicating that the CECLs modify the molecular weight of the PBAT/PLA blend, SEM investigations of the fracture surfaces of blown extrusion films revealed their significant effect on the morphology formed during processing. Due to the combined shear and elongation deformation during blown film extrusion, rather spherical PLA islands were partly transformed into long fibrils, which tended to decay to chains of elliptical islands if cooled slowly. The CECL introduction into the blend changed the thickness of the PLA fibrils, modified the interface adhesion, and altered the deformation behavior of the PBAT matrix from brittle to ductile. The results proved that CECLs react selectively with PBAT, PLA, and their interface. Furthermore, the reactions of CECLs with PBAT/PLA induced by the processing depended on the deformation directions (ED and TD), thus resulting in further non-uniformities of blown extrusion films.
This study investigates the effects of four multifunctional chain-extending cross-linkers (CECL) on the processability, mechanical performance, and structure of polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends produced using film blowing technology. The newly developed reference compound (M·VERA® B5029) and the CECL-modified blends are characterized with respect to the initial properties and the corresponding properties after aging at 50 °C for 1 and 2 months. The tensile strength, seal strength, and melt volume rate (MVR) are markedly changed after thermal aging, whereas the storage modulus, elongation at break, and tear resistance remain constant. The degradation of the polymer chains (increased MVR) and cross-linking (decreased MVR) are examined thoroughly with differential scanning calorimetry (DSC), with the results indicating that the CECL-modified blends do not generally endure thermo-oxidation over time. Further, DSC measurements of 25 µm and 100 µm films reveal that film blowing pronouncedly changes the structures of the compounds. These findings are also confirmed by dynamic mechanical analysis, with the conclusion that tris(2,4-di-tert-butylphenyl)phosphite barely affects the glass transition temperature, while changes are seen with the other CECLs. Cross-linking is found for the aromatic polycarbodiimide and poly(4,4-dicyclohexylmethanecarbodiimide) CECLs after melting of granules and films, although overall the most synergetic effect is shown by 1,3-phenylenebisoxazoline.
This review is divided into two interconnected parts, namely a biological and a chemical one. The focus of the first part is on the biological background for constructing tissue-engineered vascular grafts to promote vascular healing. Various cell types, such as embryonic, mesenchymal and induced pluripotent stem cells, progenitor cells and endothelial- and smooth muscle cells will be discussed with respect to their specific markers. The in vitro and in vivo models and their potential to treat vascular diseases are also introduced. The chemical part focuses on strategies using either artificial or natural polymers for scaffold fabrication, including decellularized cardiovascular tissue. An overview will be given on scaffold fabrication including conventional methods and nanotechnologies. Special attention is given to 3D network formation via different chemical and physical cross-linking methods. In particular, electron beam treatment is introduced as a method to combine 3D network formation and surface modification. The review includes recently published scientific data and patents which have been registered within the last decade.
(1) Background: Autologous bone is supposed to contain vital cells that might improve the osseointegration of dental implants. The aim of this study was to investigate particulate and filtered bone chips collected during oral surgery intervention with respect to their osteogenic potential and the extent of microbial contamination, to evaluate their usefulness for jawbone reconstruction prior to implant placement. (2) Methods: Cortical and cortical-cancellous bone chip samples of 84 patients were collected. The stem cell character of outgrowing cells was characterized by expression of CD73, CD90 and CD105, followed by osteogenic differentiation. The degree of bacterial contamination was determined by Gram staining, catalase and oxidase tests, and tests to evaluate the genera of the bacteria found. (3) Results: Pre-surgical antibiotic treatment of the patients significantly increased the viability of the collected bone chip cells. No significant difference in plasticity was observed between cells isolated from the cortical and cortical-cancellous bone chip samples. Thus, both types of bone tissue can be used for jawbone reconstruction. The osteogenic differentiation was independent of the quantity and quality of the detected microorganisms, which comprise the most common bacteria in the oral cavity. (4) Discussion: This study shows that the quality of bone chip-derived stem cells is independent of the donor site and the extent of present common microorganisms, highlighting autologous bone tissue, obtainable without additional surgical intervention for the patient, as a useful material for dental implantology.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
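The abstract above describes a Transformer that jointly encodes unstructured text and KG triples. The following is a hypothetical sketch of what such a combined input sequence could look like; the token names, layout, and helper function are illustrative assumptions, not the actual STonKGs implementation.

```python
# Hypothetical sketch of a multimodal input for a STonKGs-style model:
# the evidence sentence and the (head, relation, tail) triple it supports
# are concatenated into a single token sequence, separated by special
# tokens, so one Transformer can attend across both modalities.
# Token conventions are illustrative only.

def build_multimodal_sequence(text_tokens, triple):
    """Combine text tokens and a (head, relation, tail) KG triple."""
    head, relation, tail = triple
    return (["[CLS]"] + text_tokens + ["[SEP]"]
            + [head, relation, tail] + ["[SEP]"])

seq = build_multimodal_sequence(
    ["TP53", "inhibits", "MDM2", "expression"],
    ("TP53", "inhibits", "MDM2"),
)
print(seq)
```

A downstream classification head would then operate on the joint embedding of such a sequence, which is the general idea behind text-plus-KG pre-training.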
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
The general method of topological reduction for network problems is presented using gas transport networks as an example. The method is based on the contraction of series, parallel and tree-like subgraphs for element equations with quadratic, power-law and general monotone dependencies. It allows a significant reduction of the complexity of the graph and accelerates the solution procedure for stationary network problems. The method has been tested on a large set of realistic network scenarios. Possible extensions of the method are described, including triangulated element equations, continuation of the equations at infinity to provide uniqueness of the solution, and the choice of a Newtonian stabilizer for nearly degenerate systems. The method is applicable to various sectors of the energy field, including gas networks, water networks and electric networks, as well as to the coupling of different sectors.
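The contraction of series and parallel subgraphs can be sketched for the simplest case mentioned above, a quadratic element law dp = r·q·|q| (typical for pipe friction). This is a minimal illustration of the reduction idea, not the paper's implementation: in series the same flow passes through both elements, so resistances add; in parallel the pressure drop is shared, giving 1/√r_eq = 1/√r₁ + 1/√r₂.

```python
import math

# Contract two pipe elements with quadratic pressure-drop laws
# dp = r * q * |q| into a single equivalent element.

def series_quadratic(r1, r2):
    """Series: identical flow, pressure drops add, so resistances add."""
    return r1 + r2

def parallel_quadratic(r1, r2):
    """Parallel: identical dp, flows add: q = sqrt(dp/r1) + sqrt(dp/r2)."""
    return 1.0 / (1.0 / math.sqrt(r1) + 1.0 / math.sqrt(r2)) ** 2

# Two identical parallel pipes carry twice the flow, so the equivalent
# resistance is a quarter of the single-pipe value:
print(parallel_quadratic(4.0, 4.0))  # 1.0
```

Repeating such contractions over series chains, parallel bundles and dead-end trees shrinks the graph before the remaining nonlinear system is solved.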
With increasing life expectancy, demands for dental tissue and whole-tooth regeneration are becoming more significant. Despite great progress in medicine, including regenerative therapies, the complex structure of dental tissues introduces several challenges to the field of regenerative dentistry. Interdisciplinary efforts from cellular biologists, material scientists, and clinical odontologists are being made to establish strategies and find solutions for dental tissue and/or whole-tooth regeneration. In recent years, many significant discoveries were made regarding signaling pathways and factors shaping calcified tissue genesis, including that of teeth. Novel biocompatible scaffolds and polymer-based drug release systems are under development and may soon result in clinically applicable biomaterials with the potential to modulate signaling cascades involved in dental tissue genesis and regeneration. Approaches to whole-tooth regeneration utilizing adult stem cells, induced pluripotent stem cells, or tooth germ cell transplantation are emerging as promising alternatives to overcome existing in vitro tissue generation hurdles. In this interdisciplinary review, the most recent advances in cellular signaling guiding dental tissue genesis, novel functionalized scaffolds and drug release materials, various odontogenic cell sources, and methods for tooth regeneration are discussed, providing a multi-faceted, up-to-date, and illustrative overview of tooth regeneration, alongside hints at future directions in the challenging field of regenerative dentistry.
The temperature of photovoltaic modules is modelled as a dynamic function of ambient temperature, shortwave and longwave irradiance and wind speed, in order to allow for a more accurate characterisation of their efficiency. A simple dynamic thermal model is developed by extending an existing parametric steady-state model using an exponential smoothing kernel to include the effect of the heat capacity of the system. The four parameters of the model are fitted to measured data from three photovoltaic systems in the Allgäu region in Germany using non-linear optimisation. The dynamic model reduces the root-mean-square error between measured and modelled module temperature to 1.58 K on average, compared to 3.03 K for the steady-state model, whereas the maximum instantaneous error is reduced from 20.02 to 6.58 K.
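The dynamic extension described above can be illustrated as follows. This is a sketch, not the fitted model from the paper: a Faiman-type steady-state expression is assumed for the module temperature, and its output is passed through a first-order exponential smoothing filter as a proxy for the system's heat capacity. The parameter values (u0, u1, tau) are illustrative placeholders, not the fitted values.

```python
import math

# Sketch: extend a parametric steady-state module-temperature model with
# an exponential smoothing kernel to account for thermal inertia.
# Parameter values are illustrative, not the paper's fitted parameters.

def steady_state_temp(t_amb, irradiance, wind, u0=26.9, u1=6.2):
    """Faiman-type steady-state model: T = T_amb + G / (u0 + u1 * v)."""
    return t_amb + irradiance / (u0 + u1 * wind)

def dynamic_temp(t_amb, irradiance, wind, dt=60.0, tau=600.0):
    """Exponentially smooth the steady-state temperature (time constant tau)."""
    alpha = 1.0 - math.exp(-dt / tau)
    t_dyn = []
    t_prev = steady_state_temp(t_amb[0], irradiance[0], wind[0])
    for ta, g, v in zip(t_amb, irradiance, wind):
        t_ss = steady_state_temp(ta, g, v)
        t_prev = t_prev + alpha * (t_ss - t_prev)  # first-order lag
        t_dyn.append(t_prev)
    return t_dyn

t = dynamic_temp([20.0] * 50, [800.0] * 50, [1.0] * 50)
```

Under constant forcing the dynamic model converges to the steady-state value; the benefit appears under rapidly changing irradiance, where the lag suppresses the instantaneous errors of the steady-state model.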
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m−2 (7.35 W m−2) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m−2 under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
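The lookup table method for cloud optical depth can be sketched as a simple monotone inversion. The table values below are invented placeholders standing in for output of a 1D radiative transfer simulation; only the inversion-by-interpolation logic is illustrated.

```python
# Sketch of a lookup-table retrieval of cloud optical depth (COD):
# a radiative transfer model tabulates relative transmittance as a
# function of COD offline; a measured transmittance is then inverted by
# linear interpolation. The LUT values here are illustrative only.

LUT = [(1, 0.85), (5, 0.55), (10, 0.35), (20, 0.20), (50, 0.08)]

def retrieve_cod(transmittance):
    """Interpolate COD from measured relative transmittance (monotone LUT)."""
    cods = [c for c, _ in LUT]
    trans = [t for _, t in LUT]  # strictly decreasing with COD
    if transmittance >= trans[0]:   # clamp below/above the table range
        return cods[0]
    if transmittance <= trans[-1]:
        return cods[-1]
    for i in range(len(trans) - 1):
        if trans[i] >= transmittance >= trans[i + 1]:
            frac = (trans[i] - transmittance) / (trans[i] - trans[i + 1])
            return cods[i] + frac * (cods[i + 1] - cods[i])

print(retrieve_cod(0.45))  # 7.5, midway between the COD 5 and COD 10 entries
```

In practice the table would be parameterized by solar zenith angle and surface albedo as well, and the clamping at the table edges is where retrieval uncertainty is largest.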
Thermo-chemical conversion of cucumber peel waste for biobased energy and chemical production
(2022)
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) that allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation, and it is implemented in the form of a trace database. It is an important contribution to the exploitation of learning traces in a CEHL because it provides a general formalism for modeling the traces and allows the calculation of several indicators at the same time. Moreover, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in a single trace management system. At the social level, it can enable better sharing of trace databases between the various research institutions in the field of CEHL.
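The key modeling idea, that a calculated indicator is itself a trace and can therefore feed further indicators, can be sketched as a small class hierarchy. This is an illustrative reading of the abstract, not the paper's actual UML model; all class and attribute names are assumptions.

```python
# Illustrative sketch: raw traces and calculated indicators share a common
# base class, so an indicator, once computed, is stored like any other
# trace and can be reused as input to higher-level indicators.

class Trace:
    def __init__(self, learner_id, timestamp, value):
        self.learner_id = learner_id
        self.timestamp = timestamp
        self.value = value

class Indicator(Trace):
    """A calculated indicator is itself a trace (inheritance enables reuse)."""
    def __init__(self, learner_id, timestamp, value, sources):
        super().__init__(learner_id, timestamp, value)
        self.sources = sources  # the traces this indicator was computed from

def activity_count(traces, learner_id):
    """A simple indicator: number of trace entries for one learner."""
    hits = [t for t in traces if t.learner_id == learner_id]
    return Indicator(learner_id, max(t.timestamp for t in hits), len(hits), hits)

log = [Trace("stu1", 1, "login"), Trace("stu1", 2, "quiz"), Trace("stu2", 1, "login")]
ind = activity_count(log, "stu1")
```

Because `Indicator` subclasses `Trace`, `ind` can be appended back into the trace database and consumed by other indicator calculations without special casing.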
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
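One plausible form of the automatic speed control studied above is a proximity-based scaling rule. The sketch below is not the study's actual controller; the linear scaling law and all thresholds are illustrative assumptions.

```python
# Hedged sketch of an automatic speed control rule for a telepresence
# robot: the operator's commanded speed is scaled down linearly as the
# nearest obstacle gets closer, down to a minimum creep speed.
# All distances and speeds are illustrative placeholders (meters, m/s).

def limit_speed(v_cmd, nearest_obstacle_m,
                slow_radius_m=1.5, stop_radius_m=0.3, v_min=0.05):
    """Scale the commanded speed by proximity to the nearest obstacle."""
    if nearest_obstacle_m >= slow_radius_m:
        return v_cmd                      # far away: no intervention
    if nearest_obstacle_m <= stop_radius_m:
        return min(v_cmd, v_min)          # very close: creep speed only
    frac = (nearest_obstacle_m - stop_radius_m) / (slow_radius_m - stop_radius_m)
    return max(v_min, v_cmd * frac)       # linear ramp in between
```

A rule of this shape keeps the interface simple for the operator, which matches the paper's finding, while the study's second result suggests it should remain optional in socially dense settings.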
Salts and proteins comprise two of the basic molecular components of biological materials. Kosmotropic/chaotropic co-solvation and matching ion water affinities explain basic ionic effects on protein aggregation observed in simple solutions. However, it is unclear how these theories apply to proteins in complex biological environments and what the underlying ionic binding patterns are. Using the positive ion Ca2+ and the negatively charged membrane protein SNAP25, we studied ion effects on protein oligomerization in solution, in native membranes and in molecular dynamics (MD) simulations. We find that concentration-dependent ion-induced protein oligomerization is a fundamental chemico-physical principle applying not only to soluble but also to membrane-anchored proteins in their native environment. Oligomerization is driven by the interaction of Ca2+ ions with the carboxylate groups of aspartate and glutamate. From low up to middle concentrations, salt bridges between Ca2+ ions and two or more protein residues lead to increasingly larger oligomers, while at high concentrations oligomers disperse due to overcharging effects. The insights provide a conceptual framework at the interface of physics, chemistry and biology to explain binding of ions to charged protein surfaces on an atomistic scale, as occurring during protein solubilisation, aggregation and oligomerization both in simple solutions and membrane systems.
Defect evolution in thermal barrier coating systems under multi-axial thermomechanical loading
(2005)
Advanced thermal gradient mechanical fatigue testing of CMSX-4 with an oxidation protection coating
(2008)
The cooperation between researchers and practitioners during the different stages of the research process is promoted because it can benefit both society and research, supporting processes of 'transformation'. While acknowledging the important potential of research–practice collaborations (RPCs), this paper reflects on RPCs from a political-economic perspective in order to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and even help produce or exacerbate existing inequalities. The potential merits and limitations of RPCs therefore need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, there are attempts to contribute to societal change, with ethical concerns of equity at the heart of international development research; on the other hand, there is a heightened risk of encountering such asymmetries.
Purpose – To describe the development of a novel polyether(meth)acrylate-based resin material class for stereolithography with alterable material characteristics.
Design/methodology/approach – A complete overview of the composition parameters, the optimization, and the bandwidth of mechanical and processing parameters is given. Initial biological characterization experiments and future application fields are depicted. Process parameters are studied in a commercial 3D Systems Viper stereolithography system, and a new method to determine these parameters is described herein.
Findings – Initial biological characterizations show the non-toxic behavior in a biological environment, owed mainly to the (meth)acrylate-based core components. These photolithographic resins combine an adjustable low Young's modulus with the advantages of a non-toxic (meth)acrylate-based process material. In contrast to the mostly rigid process materials used today in the rapid prototyping industry, these polymeric formulations are able to fulfill the extended need for a soft engineering material. A short overview of sample applications is given.
Practical implications – These polymeric formulations are able to meet the growing demand for a resin class for rapid manufacturing that covers a bandwidth from softer to stiffer materials.
Originality/value – This paper gives an overview of the newly developed material class for stereolithography and should therefore be of high interest to anyone interested in novel rapid manufacturing materials and technology.
Cathepsin K (CatK) is a target for the treatment of osteoporosis, arthritis, and bone metastasis. Peptidomimetics with a cyanohydrazide warhead represent a new class of highly potent CatK inhibitors; however, their binding mechanism is unknown. We investigated two model cyanohydrazide inhibitors with differently positioned warheads: an azadipeptide nitrile Gü1303 and a 3-cyano-3-aza-β-amino acid Gü2602. Crystal structures of their covalent complexes were determined with mature CatK as well as a zymogen-like activation intermediate of CatK. Binding mode analysis, together with quantum chemical calculations, revealed that the extraordinary picomolar potency of Gü2602 is entropically favoured by its conformational flexibility at the nonprimed-primed subsites boundary. Furthermore, we demonstrated by live cell imaging that cyanohydrazides effectively target mature CatK in osteosarcoma cells. Cyanohydrazides also suppressed the maturation of CatK by inhibiting the autoactivation of the CatK zymogen. Our results provide structural insights for the rational design of cyanohydrazide inhibitors of CatK as potential drugs.
When optimizing the process parameters of the acidic ethanolic organosolv process, the aim is usually to maximize the delignification and/or lignin purity. However, process parameters such as temperature, time, ethanol and catalyst concentration, respectively, can also be used to vary the structural properties of the obtained organosolv lignin, including the molecular weight and the ratio of aliphatic versus phenolic hydroxyl groups, among others. This review particularly focuses on these influencing factors and establishes a trend analysis between the variation of the process parameters and the effect on lignin structure. Especially when larger data sets are available, as for process temperature and time, correlations between the distribution of depolymerization and condensation reactions are found, which allow direct conclusions on the proportion of lignin's structural features, independent of the diversity of the biomass used. The newfound insights gained from this review can be used to tailor organosolv lignins isolated for a specific application.
Miscanthus crops possess very attractive properties such as a high photosynthesis yield and carbon fixation rate. Because of these properties, they are currently considered for use in second-generation biorefineries. Here we analyze the differences in chemical composition between M. x giganteus, a commonly studied Miscanthus genotype, and M. nagara, which is relatively understudied but has useful properties such as increased frost resistance and higher stem stability. Samples of M. x giganteus (Gig35) and M. nagara (NagG10) were separated by plant portion (leaves and stems) in order to isolate the corresponding lignins. The organosolv process was used for biomass pulping (80% ethanol solution, 170 °C, 15 bar). Biomass composition and lignin structure were analyzed using compositional analysis, Fourier-transform infrared (FTIR), ultraviolet-visible (UV-Vis) and nuclear magnetic resonance (NMR) spectroscopy, thermogravimetric analysis (TGA), size exclusion chromatography (SEC) and pyrolysis gas chromatography/mass spectrometry (Py-GC/MS) to determine the 3D structure of the isolated lignins, the monolignol ratio and the most abundant linkages depending on genotype and harvesting season. SEC data showed significant differences in the molecular weights and polydispersity indices of stem- versus leaf-derived lignins. Py-GC/MS and hetero-nuclear single quantum correlation (HSQC) NMR revealed different monolignol compositions for the two genotypes (Gig35, NagG10). The monolignol ratio is slightly influenced by the time of harvest: stem-derived lignins of M. nagara showed increasing H and decreasing G unit content over the studied harvesting period (December–April).
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time as well as plant component (leaf versus stem), the correlations between structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented between lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared regarding their similarities and differences in monolignol ratio and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum is M. sinensis Sin2 with over 30%); in the leaf-derived lignins, the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops have been shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
Miscanthus x giganteus Stem Versus Leaf-Derived Lignins Differing in Monolignol Ratio and Linkage
(2019)
As a renewable resource, Miscanthus offers numerous advantages such as high photosynthesis activity (as a C4 plant) and an exceptional CO2 fixation rate. These properties make Miscanthus very attractive for industrial exploitation, such as lignin generation. In this paper, we present a systematic study analyzing the correlation of the lignin structure with the Miscanthus genotype and plant portion (stem versus leaf). Specifically, the ratio of the three monolignols and corresponding building blocks as well as the linkages formed between the units have been studied. The lignin amount has been determined for M. x giganteus (Gig17, Gig34, Gig35), M. nagara (NagG10), M. sinensis (Sin2), and M. robustus (Rob4) harvested at different time points (September, December, and April). The influence of the Miscanthus genotype and plant component (leaf vs. stem) has been studied to develop corresponding structure-property relationships (i.e., correlations in molecular weight, polydispersity, and decomposition temperature). Lignin isolation was performed using non-catalyzed organosolv pulping, and the structure analysis includes compositional analysis, Fourier transform infrared (FTIR), ultraviolet/visible (UV-Vis), hetero-nuclear single quantum correlation nuclear magnetic resonance (HSQC-NMR), thermogravimetric analysis (TGA), and pyrolysis gas chromatography/mass spectrometry (GC/MS). Structural differences were found between stem- and leaf-derived lignins. Compared to beech wood lignins, Miscanthus lignins possess lower molecular weights and narrow polydispersities (<1.5 for Miscanthus vs. >2.5 for beech), corresponding to improved homogeneity. In addition to conventional univariate analysis of FTIR spectra, multivariate chemometrics revealed distinct differences in the aromatic in-plane deformations of stem- versus leaf-derived lignins. These results emphasize the potential of Miscanthus as a low-input resource and of Miscanthus-derived lignin as a promising agricultural feedstock.
There has been a growing interest in taste research in the HCI and CSCW communities. However, the focus is more on stimulating the senses, while the socio-cultural aspects have received less attention. However, individual taste perception is mediated through social interaction and collective negotiation and is not only dependent on physical stimulation. Therefore, we study the digital mediation of taste by drawing on ethnographic research of four online wine tastings and one self-organized event. Hence, we investigated the materials, associated meanings, competences, procedures, and engagements that shaped the performative character of tasting practices. We illustrate how the tastings are built around the taste-making process and how online contexts differ in providing a more diverse and distributed environment. We then explore the implications of our findings for the further mediation of taste as a social and democratized phenomenon through online interaction.
Herein we report an update to ACPYPE, a Python3 tool that now properly converts AMBER to GROMACS topologies for force fields that utilize nondefault and nonuniform 1–4 electrostatic and nonbonded scaling factors or negative dihedral force constants. Prior to this work, ACPYPE only converted AMBER topologies that used uniform, default 1–4 scaling factors and positive dihedral force constants. We demonstrate that the updated ACPYPE accurately transfers the GLYCAM06 force field from AMBER to GROMACS topology files, which employs non-uniform 1–4 scaling factors as well as negative dihedral force constants. Validation was performed using β-d-GlcNAc through gas-phase analysis of dihedral energy curves and probability density functions. The updated ACPYPE retains all of its original functionality, but now allows the simulation of complex glycomolecular systems in GROMACS using AMBER-originated force fields. ACPYPE is available for download at https://github.com/alanwilter/acpype.
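The scaling-factor conversion that this ACPYPE update handles can be illustrated numerically. AMBER stores 1-4 interactions as divisors (SCEE for electrostatics, SCNB for Lennard-Jones), while GROMACS uses multiplicative fudge factors. The function below is an illustrative sketch of that relationship, not ACPYPE's API.

```python
# Sketch of the 1-4 scaling conversion underlying an AMBER-to-GROMACS
# topology transfer: AMBER divisors (SCEE, SCNB) map to GROMACS
# multipliers (fudgeQQ, fudgeLJ). Function name is illustrative only.

def amber_to_gromacs_14_scaling(scee, scnb):
    """Convert AMBER 1-4 divisors to GROMACS fudgeQQ / fudgeLJ multipliers."""
    fudge_qq = 1.0 / scee
    fudge_lj = 1.0 / scnb
    return fudge_qq, fudge_lj

# AMBER defaults SCEE=1.2, SCNB=2.0 give the familiar GROMACS values
# fudgeQQ = 0.8333..., fudgeLJ = 0.5:
print(amber_to_gromacs_14_scaling(1.2, 2.0))
# GLYCAM06 uses SCEE = SCNB = 1.0, i.e. unscaled 1-4 interactions:
print(amber_to_gromacs_14_scaling(1.0, 1.0))
```

The difficulty the update addresses is that these factors need not be uniform across the topology, so a single global fudge pair in the GROMACS `[defaults]` section is not sufficient for force fields like GLYCAM06.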
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
Updating a shared data structure in a parallel program is usually done with some sort of high-level synchronization operation to ensure correctness and consistency. Such high-level synchronization operations are realized with the appropriate low-level atomic synchronization instructions that the target processor architecture provides. These instructions are costly and often limited in their scalability on larger multi-core / multi-processor systems. In this paper, a technique is discussed that replaces atomic updates of a shared data structure with ordinary and cheaper read/write operations. The necessary conditions are specified that must be fulfilled to ensure the overall correctness of the program despite the missing synchronization. The advantage of this technique is the reduction of access costs as well as better scalability due to elided atomic operations. On the other hand, the missing synchronization may cause additional work to be done, so additional work is traded against costly atomic operations. A practical application is shown with level-synchronous parallel Breadth-First Search on an undirected graph, where two vertex frontiers are accessed in parallel. This application scenario is also used for an evaluation of the technique. Tests were done on four different large parallel systems with up to 64-way parallelism. It is shown that, for the graph application examined, the amount of additional work caused by the missing synchronization is negligible and the performance is almost always better than the approach with atomic operations.
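The BFS application illustrates why eliding atomics can be safe: if two workers both observe a vertex as unvisited and both add it to the next frontier, the only consequence is a duplicate entry (extra work), never a wrong result. The sequential sketch below marks where the atomic test-and-set would normally sit; the parallelization comments describe the intended concurrent behavior, not this single-threaded code.

```python
# Level-synchronous BFS with two frontiers. The comments mark the two
# operations that, in a parallel version, can use plain reads/writes
# instead of an atomic compare-and-swap: a lost update merely produces a
# duplicate entry in the next frontier, which is benign.

def bfs_levels(adj, source):
    visited = [False] * len(adj)
    visited[source] = True
    frontier = [source]
    levels = {source: 0}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for v in frontier:           # in parallel: partitioned over workers
            for w in adj[v]:
                if not visited[w]:   # plain read; may race benignly
                    visited[w] = True          # plain write, no atomic CAS
                    next_frontier.append(w)
                    levels[w] = depth
        frontier = next_frontier
    return levels

adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The necessary condition, as the abstract states in general form, is that redundant execution of the guarded update must be idempotent: re-setting `visited[w]` and re-recording the same level are harmless.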
SpMV Runtime Improvements with Program Optimization Techniques on Different Abstraction Levels
(2016)
The multiplication of a sparse matrix with a dense vector is a performance-critical computational kernel in many applications, especially in the natural and engineering sciences. To speed up this operation, many optimization techniques have been developed in the past, mainly focusing on the data layout for the sparse matrix. Strongly related to the data layout is the program code for the multiplication. But even for a fixed data layout with an accommodated kernel, there are several alternatives for program optimization. This paper discusses a spectrum of program optimization techniques on different abstraction layers for six different sparse matrix data formats and kernels. At one end of the spectrum, compiler options can be used that hide from the programmer all optimizations done internally by the compiler. At the other end of the spectrum, a multiplication kernel can be programmed using highly sophisticated assembler-level intrinsics, which demands a programmer with a deep understanding of processor architectures. These special instructions can be used to efficiently exploit hardware features such as the vector units of modern processors, which have the potential to speed up sparse matrix computations. The paper compares the programming effort and required knowledge level of certain program optimizations in relation to the gained runtime improvements.
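For reference, the baseline kernel that such optimizations start from is the plain SpMV loop nest over a compressed sparse row (CSR) matrix, sketched here in Python; the equivalent C loop is what compiler flags, manual unrolling, or vector intrinsics would target.

```python
# Plain CSR (compressed sparse row) sparse matrix-vector multiplication:
# values holds the nonzeros row by row, col_idx their column indices,
# and row_ptr[i]:row_ptr[i+1] delimits row i's nonzeros.

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 2x2 example: [[2, 0], [1, 3]] @ [1, 1] = [2, 4]
print(spmv_csr([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))
```

The inner loop's indirect access `x[col_idx[k]]` is the main obstacle to vectorization, which is why gather instructions and format-specific kernels matter for SpMV performance.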
Well-prepared dried serum spots can be attractive alternatives to frozen serum samples for shelving specimens in a medical or research center's biobank and for mailing freshly prepared serum to specialized laboratories. During the pre-analytical phase, complications can arise that are often challenging to identify or are entirely overlooked. These complications can lead to reproducibility issues, which can be avoided in serum protein analysis by implementing optimized storage and transfer procedures. A method that ensures accurate loading of filter paper discs with donor or patient serum fills a gap in dried serum spot preparation and subsequent serum analysis. Pre-punched filter paper discs with a 3 mm diameter are loaded within seconds in a highly reproducible fashion (approximately 10% standard deviation) when fully submerged in 10 μl of serum, a procedure named the "Submerge and Dry" protocol. Dried serum spots prepared this way can store several hundred micrograms of proteins and other serum components. Serum-borne antigens and antibodies are reproducibly released in 20 μl elution buffer in high yields (approximately 90%). Antigens stored in and eluted from dried serum spots kept their epitopes, and antibodies their antigen binding abilities, as assessed by SDS-PAGE, 2D gel electrophoresis-based proteomics, and Western blot analysis, suggesting pre-punched filter paper discs as a handy solution for serological tests.
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Choosing the best view from multi-camera streams poses the problem of which objective metrics should be considered. Existing works on view selection lack consensus on this question: the literature describes diverse possible metrics, and strategies that are purely information-theoretic, instructional-design-based, or aesthetics-motivated each fail to incorporate the other approaches. In this work, we postulate a strategy incorporating information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for example in 3D rendering. We adapted a similar measure, known as viewpoint entropy, to real-world 2D images. Additionally, we incorporated a similarity penalization to obtain a more accurate measure of the entropy of a view, which is one of the metrics for best view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of the chosen scenarios is that they do not include collaborative training and solely feature a single trainer. To incorporate instructional design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are combined into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate these metrics.
Furthermore, the responses from the online study were used to optimize the view recommendation, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with user choices.
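The two quantitative ingredients of this abstract, an entropy over relative region areas and the NDCG used for validation, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the segmentation that yields the region areas and the relevance scores are assumed inputs.

```python
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy of the relative areas of visible regions in a frame.

    A 2D adaptation in the spirit of viewpoint entropy; `region_areas`
    (pixel areas of detected regions) are assumed to come from some
    segmentation step not shown here.
    """
    total = sum(region_areas)
    h = 0.0
    for area in region_areas:
        if area > 0:
            p = area / total
            h -= p * math.log2(p)
    return h

def ndcg(relevances):
    """Normalized discounted cumulative gain of one ranked list of views."""
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))
    ideal = sum(rel / math.log2(rank + 2)
                for rank, rel in enumerate(sorted(relevances, reverse=True)))
    return dcg / ideal if ideal > 0 else 0.0
```

A frame in which all regions occupy equal area maximizes the entropy, and a ranking that matches the ideal order yields an NDCG of 1.0.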
Although most individuals who gamble do so without any adverse consequences, some develop a recurrent, maladaptive pattern of gambling behaviour, often called pathological gambling or gambling disorder, that is associated with financial losses, disruption of family and interpersonal relationships, and co-occurring psychiatric disorders. Identifying whether different gambling modalities vary in their ability to lead to maladaptive patterns of gambling behaviour is essential for developing public policies that seek to balance access to gambling opportunities with minimizing the risk of the potential adverse consequences of gambling. Until recently, assessing the risk potential of different types of gambling products was nearly impossible. ASTERIG, initially developed in Germany in 2006-2010, is an assessment tool to measure and evaluate the risk potential of any gambling product based on scores on ten dimensions. In doing so, it also allows a comparison to be drawn between the addictive potential of different gambling products, and it highlights where the specific risk potential of each product lies. This makes it a valuable tool at the legislative, case-law, and administrative levels, as it allows the risk potential of individual gambling products to be identified and compared globally and across the ten dimensions of risk potential. We note that specific gambling products should always be evaluated rather than product groups (lotteries, slot machines) or providers, as there may be variations within those product groups that affect their risk potential. For example, slot machines may vary in the size of their jackpot, which may influence their risk potential.
Several species of (poly)saccharides and organic acids are often found simultaneously in various biological matrices, e.g., fruits, plant materials, and biological fluids, and the analysis of such matrices can be a challenging task. Using Aloe vera (A. vera) plant material as an example, the performance of several spectroscopic methods (80 MHz benchtop NMR, NIR, ATR-FTIR and UV-Vis) for the simultaneous analysis of quality parameters of this plant material was compared. The determined parameters include (poly)saccharides such as aloverose, fructose and glucose as well as organic acids (malic, lactic, citric, isocitric, acetic, fumaric, benzoic and sorbic acids). 500 MHz NMR and high-performance liquid chromatography (HPLC) were used as the reference methods.
UV-Vis data can be used only for identifying added preservatives (benzoic and sorbic acids) and the drying agent (maltodextrin) and for the semiquantitative analysis of malic acid. NIR and MIR spectroscopies combined with multivariate regression deliver a more informative overview of A. vera extracts, additionally quantifying glucose, aloverose, fructose, and citric, isocitric, malic and lactic acids. Low-field NMR measurements can be used for the quantification of aloverose, glucose, and malic, lactic, acetic and benzoic acids. The benchtop NMR method was successfully validated in terms of robustness, stability, precision, reproducibility, and limits of detection (LOD) and quantification (LOQ).
All spectroscopic techniques are useful for the screening of (poly)saccharides and organic acids in plant extracts and should be applied according to their availability as well as the information and confidence required for the specific analytical goal. Benchtop NMR spectroscopy seems to be the most feasible solution for the quality control of A. vera products.
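As a toy illustration of the calibration idea, regressing reference concentrations (e.g. from HPLC) on spectral intensities, the sketch below fits a two-wavelength linear model by solving the normal equations. It is a pure-Python stand-in for the multivariate regression (e.g. PLS on full NIR/MIR spectra) used in such studies; the data are invented.

```python
def fit_two_wavelength(absorb, conc):
    """Least-squares calibration c ~ b1*A1 + b2*A2 from two wavelengths.

    A minimal stand-in for multivariate calibration of spectra against
    reference concentrations; solves the 2x2 normal equations directly.
    """
    s11 = sum(a[0] * a[0] for a in absorb)
    s12 = sum(a[0] * a[1] for a in absorb)
    s22 = sum(a[1] * a[1] for a in absorb)
    r1 = sum(a[0] * c for a, c in zip(absorb, conc))
    r2 = sum(a[1] * c for a, c in zip(absorb, conc))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * r1 - s12 * r2) / det
    b2 = (s11 * r2 - s12 * r1) / det
    return b1, b2

# Invented, noise-free toy data generated from c = 2*A1 + 0.5*A2.
spectra = [(0.1, 0.4), (0.3, 0.2), (0.5, 0.6), (0.2, 0.1)]
conc = [2 * a1 + 0.5 * a2 for a1, a2 in spectra]
b1, b2 = fit_two_wavelength(spectra, conc)
```

With noise-free data the fitted coefficients recover the generating model; real spectra would require many wavelengths and a method such as PLS to handle collinearity.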
Pollution with anthropogenic waste, particularly persistent plastic, has now reached every remote corner of the world. The French Atlantic coast, with its extensive coastline, is particularly affected. To gain an overview of current plastic pollution, this study examined a 250 km stretch along the Silver Coast of France. Sampling was conducted at a total of 14 beach sections, each with five sampling sites in a transect. At each collection site, a square of 0.25 m² was marked. The top 5 cm of beach sediment was collected and sieved on-site using an analysis sieve (mesh size 1 mm), resulting in a total of approximately 0.8 m³ of sediment, corresponding to a total weight of 1300 kg of examined beach sediment. A total of 1972 plastic particles were extracted and analysed using infrared spectroscopy, corresponding to 1.5 particles kg⁻¹ of beach sediment. Pellets (885 particles), polyethylene as the polymer type (1349 particles), and particles in the size range of microplastics (943 particles) were found most frequently. The significant pollution by pellets suggests that the spread of plastic waste is not primarily attributable to tourism (sampling took place in February/March 2023). The substantial accumulation of meso- and macro-waste (863 and 166 particles, respectively) also indicates that research focusing on microplastics should be expanded to include these size categories, as microplastics can develop from them over time.
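The reported particle density follows directly from the totals given above:

```python
particles = 1972     # plastic particles extracted in total
sediment_kg = 1300   # mass of sieved beach sediment in kg
per_kg = particles / sediment_kg  # about 1.5 particles per kg, as reported
```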
Bond graph modelling was devised by Professor Paynter at the Massachusetts Institute of Technology in 1959 and subsequently developed into a methodology for modelling multidisciplinary systems at a time when nobody was speaking of object-oriented modelling. On the other hand, so-called object-oriented modelling has become increasingly popular during the last few years. By relating the characteristics of both approaches, it is shown that bond graph modelling, although much older, may be viewed as a special form of object-oriented modelling. For that purpose the new object-oriented modelling language Modelica, which aims at supporting multiple formalisms, is used as a working language. Although it turns out that bond graph models can be described rather easily, it is obvious that Modelica started from generalized networks and was not designed to support bond graphs. The description of bond graph models in Modelica is illustrated by means of a hydraulic drive. Since VHDL-AMS, an important language standardized and supported by the IEEE, has been extended to support the modelling of non-electrical systems as well, it is briefly investigated whether it can be used for the description of bond graphs. It turns out that, at present, it does not seem suitable.
Multidisciplinary systems are described most suitably by bond graphs. In order to determine unnormalized frequency-domain sensitivities in symbolic form, this paper proposes to construct, in a systematic manner, a bond graph from another bond graph, called here the associated incremental bond graph. Contrary to other approaches reported in the literature, the variables at the bonds of the incremental bond graph are not sensitivities but variations (incremental changes) in the power variables from their nominal values due to parameter changes. Thus their product is power. For linear elements, the corresponding model in the incremental bond graph also has a linear characteristic. By deriving the system equations in symbolic state-space form from the incremental bond graph in the same way as they are derived from the initial bond graph, the sensitivity matrix of the system can be set up in symbolic form. Its entries are transfer functions depending on the nominal parameter values and on the nominal states and inputs of the original model. The sensitivities can be determined automatically by the bond graph preprocessor CAMP-G and the widely used program MATLAB together with the Symbolic Toolbox for symbolic mathematical calculation. No particular program is needed for the proposed approach. The initial bond graph model may be non-linear and may contain controlled sources and multiport elements. In that case the sensitivity model is linear time-variant and must be solved in the time domain. The rationale and the generality of the proposed approach are presented. For illustration purposes a mechatronic example system, a load positioned by a constant-excitation d.c. motor, is presented and sensitivities are determined in symbolic form by means of CAMP-G/MATLAB.
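To make the notion of an unnormalized frequency-domain sensitivity concrete, the sketch below checks a closed-form sensitivity of a simple transfer function against a finite difference. The mass-spring-damper example system and all values are hypothetical; the paper derives such sensitivities systematically from the incremental bond graph with CAMP-G/MATLAB, which is not reproduced here.

```python
def G(w, m, d, k):
    """Frequency response 1/(m*s^2 + d*s + k) at s = j*w for a
    hypothetical mass-spring-damper stand-in system."""
    s = 1j * w
    return 1.0 / (m * s**2 + d * s + k)

def dG_dd(w, m, d, k):
    """Unnormalized sensitivity dG/dd = -s / (m*s^2 + d*s + k)^2,
    a transfer function in the nominal parameters, as in the paper."""
    s = 1j * w
    return -s / (m * s**2 + d * s + k) ** 2

# Finite-difference check at one frequency and one nominal parameter set.
w, m, d, k, h = 2.0, 1.0, 0.5, 4.0, 1e-6
fd = (G(w, m, d + h, k) - G(w, m, d, k)) / h
```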
BGML - a novel XML format for the exchange and the reuse of bond graph models of engineering systems
(2006)
In this paper, residual sinks are used in bond graph model-based quantitative fault detection for the coupling of a model of a faultless process engineering system to a bond graph model of the faulty system. In this way, integral causality can be used as the preferred computational causality in both models. There is no need for numerical differentiation. Furthermore, unknown variables do not need to be eliminated from power continuity equations in order to obtain analytical redundancy relations (ARRs) in symbolic form. Residuals indicating faults are computed numerically as components of a descriptor vector of a differential algebraic equation system derived from the coupled bond graphs. The presented bond graph approach especially aims at models with non-linearities that make it cumbersome or even impossible to derive ARRs from model equations by elimination of unknown variables. For illustration, the approach is applied to a non-controlled as well as to a controlled hydraulic two-tank system. Finally, it is shown that not only the numerical computation of residuals but also the simultaneous numerical computation of their sensitivities with respect to a parameter can be supported by bond graph modelling.
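The residual idea itself is easy to state for one tank: the mass balance A·dh/dt = q_in − q_out holds for the healthy system, so its numerically evaluated imbalance acts as a fault indicator. A minimal sketch with invented values (the paper's residual-sink coupling and DAE solution are not reproduced here):

```python
def arr_residual(h, h_prev, dt, q_in, q_out, area):
    """Numerical residual of a single-tank mass-balance ARR:
    r = A*dh/dt - (q_in - q_out); near zero when the system is healthy."""
    return area * (h - h_prev) / dt - (q_in - q_out)

# Healthy sample: with q_in=2, q_out=1, A=1 the level rises by
# dt*(q_in - q_out)/A, so the residual is (numerically) zero.
r = arr_residual(h=1.01, h_prev=1.0, dt=0.01, q_in=2.0, q_out=1.0, area=1.0)
```

A leak or a clogged pipe would violate the balance and push the residual beyond its fault threshold.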
For the case when the abstraction of instantaneous state transitions is adopted, this paper proposes to start fault detection and isolation in an engineering system from a single time-invariant causality bond graph representation of a hybrid model. To that end, the paper picks up on a long-known proposal to model switching devices by a transformer modulated by a Boolean variable and a resistor in fixed conductance causality accounting for its ON resistance. Bond graph representations of hybrid system models developed in this way have so far been used mainly for simulation. The paper shows that they can equally well serve as a basis for bond-graph-based quantitative fault detection and isolation in hybrid models. Advantages are that the standard sequential causality assignment procedure can be used without modification and that a single set of analytical redundancy relations valid for all physically feasible system modes can be derived (automatically) from the bond graph. Stiff model equations due to small values of the ON resistance in the switch model may be avoided by symbolic reformulation of the equations and letting the ON resistance of some switches tend to zero, turning them into ideal switches.
First, for two examples considered in the literature, it is shown that the approach proposed in this paper can produce the same analytical redundancy relations as were obtained from a hybrid bond graph with controlled junctions and the use of a sequential causality assignment procedure developed especially for fault detection and isolation purposes. Moreover, the usefulness of the proposed approach is illustrated in two case studies by its application to standard switching circuits extensively used in power electronic systems and by simulation of some fault scenarios. The approach, however, is not confined to the fault detection and isolation of such systems. Analytically validated simulation results obtained by means of the program Scilab give confidence in the approach.
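The core of the switch model is one expression valid in every mode. With a transformer modulated by b ∈ {0,1} and an ON resistance at its secondary, the switch current is b²·Δv/R_on = b·Δv/R_on, so a single relation covers both ON and OFF. A minimal sketch with invented values:

```python
def switch_current(v_in, v_out, b, r_on):
    """Current through a non-ideal switch modelled as a Boolean-modulated
    transformer with ON resistance r_on: b*(v_in - v_out)/r_on.
    b = 1 (ON) conducts; b = 0 (OFF) forces zero current."""
    return b * (v_in - v_out) / r_on

# ON: current flows through r_on; OFF: the same expression yields zero.
i_on = switch_current(5.0, 1.0, 1, 0.1)
i_off = switch_current(5.0, 1.0, 0, 0.1)
```

Because b is Boolean, the same expression appears unchanged in analytical redundancy relations for all system modes.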
A bond graph representation of switching devices known for a long time has been a modulated transformer with a modulus b(t) ∈ {0,1} ∀ t ≥ 0 in conjunction with a resistor R:Ron accounting for the ON-resistance of a switch considered non-ideal. Besides other representations, this simple model has been used in bond graphs for simulation of the dynamic behaviour of hybrid systems. A previous article of the author has proposed to use the transformer–resistor pair in bond graphs for fault diagnosis in hybrid systems. Advantages are a unique bond graph for all system modes, the application of the unmodified standard Sequential Causality Assignment Procedure, fixed computational causalities and the derivation of analytical redundancy relations incorporating 'Boolean' transformer moduli so that they hold for all system modes. Switches temporarily connect and disconnect model parts. As a result, some independent storage elements may temporarily become dependent, so that the number of state variables is not time-invariant. This article addresses this problem in the context of modelling and simulation of fault scenarios in hybrid systems. In order to keep time-invariant preferred integral causality at storage ports, residual sinks previously introduced by the author are used. When two storage elements become dependent at a switching time instance ts, a residual sink is activated. It enforces that the outputs of two dependent storage elements become immediately equal by imposing the conjugate power variable of appropriate value on their inputs. The approach is illustrated by the bond graph modelling and simulation of some fault scenarios in a standard three-phase switched power inverter supplying power into an RL-load in a delta configuration. A well-developed approach to model-based fault detection and isolation is to evaluate the residual of analytical redundancy relations. 
In this article, analytical redundancy relation residuals have been computed numerically by coupling a bond graph of the faulty system to that of the non-faulty system by means of residual sinks. The presented approach is not confined to power electronic systems but can be used for hybrid systems in other domains as well. In further work, the RL-load may be replaced by a bond graph model of an alternating-current motor in order to study the effect of switch failures in the power inverter on the dynamic behaviour of the motor.
Hybrid system models exploit the modelling abstraction that fast state transitions take place instantaneously, so that they encompass both discrete events and the continuous-time behaviour for the duration of a system mode. If a system is in a certain mode, e.g. two rigid bodies stick together, then residuals of analytical redundancy relations (ARRs) within certain small bounds indicate that the system is healthy. An unobserved mode change, however, invalidates the current model of the dynamic behaviour. As a result, ARR residuals may exceed current thresholds, indicating faults in system components that have not happened. The paper shows that ARR residuals derived from a bond graph can not only serve as fault indicators but may also be used for bond graph model-based system mode identification. ARR residuals are numerically computed in an off-line simulation by coupling a bond graph of the faulty system to a non-faulty system bond graph through residual sinks. In real-time simulation, the faulty system model is to be replaced by measurements from the real system. As parameter values are uncertain, it is important to determine adaptive ARR thresholds that make it possible to decide whether the dynamic behaviour in a current system mode is that of the healthy system, so that false alarms and the overlooking of true faults can be avoided. The paper shows how incremental bond graphs can be used to determine adaptive mode-dependent ARR thresholds for switched linear time-invariant systems with uncertain parameters in order to support robust fault detection. Bond graph-based hybrid system mode identification as well as the determination of adaptive fault thresholds is illustrated by application to a power electronic system that is easy to survey. Some simulation results have been analytically validated.
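A first-order version of such an adaptive threshold bounds the residual by the sensitivities to the uncertain parameters times their uncertainty intervals. In the paper the sensitivities come from the incremental bond graph; in this sketch they are assumed numbers:

```python
def adaptive_threshold(sensitivities, uncertainties):
    """First-order residual bound under parameter uncertainty:
    threshold = sum_i |dr/dtheta_i| * delta_theta_i."""
    return sum(abs(s) * du for s, du in zip(sensitivities, uncertainties))

# Two uncertain parameters with (assumed) residual sensitivities 2.0 and
# -1.5 and uncertainty intervals 0.1 and 0.2 give a threshold of 0.5.
thr = adaptive_threshold([2.0, -1.5], [0.1, 0.2])
```

A residual staying below this bound is consistent with the healthy system despite parameter uncertainty; exceeding it signals a fault or an unobserved mode change.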
Failure prognosis builds on continuous data acquisition and processing as well as fault diagnosis and is an essential part of the predictive maintenance of smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by evaluating equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently with the continuous monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeatedly projecting the parameter degradation into the future, based on the values in a sliding time window. Existing software can be used to determine the best-fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognosis themselves.
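The data-driven projection step can be sketched as a sliding-window fit whose extrapolation is intersected with a failure threshold. A linear degradation model stands in here for the best-fitting function chosen by the software mentioned above; all values are invented:

```python
def estimate_rul(times, values, fail_level):
    """Least-squares line through degradation samples in a window,
    projected forward to the failure threshold; returns the remaining
    useful life measured from the last sample."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = v_mean - slope * t_mean
    t_fail = (fail_level - intercept) / slope
    return t_fail - times[-1]

# Hypothetical drift of a degrading parameter: 1.0 + 0.01*t, failure at 1.5.
rul = estimate_rul([0, 1, 2, 3], [1.00, 1.01, 1.02, 1.03], 1.5)
```

Repeating this fit as the window slides yields the sequence of remaining useful life estimates described in the abstract.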
Analytical redundancy relations are fundamental in model-based fault detection and isolation. Their numerical evaluation yields a residual that may serve as a fault indicator. Considering switching linear time-invariant system models that use ideal switches, it is shown that analytical redundancy relations can be systematically deduced from a diagnostic bond graph with fixed causalities that hold for all modes of operation. Moreover, for a faultless system, the presented bond graph-based approach makes it possible to deduce a unique implicit state equation with coefficients that are functions of the discrete switch states. Devices or phenomena with fast state transitions, for example, electronic diodes and transistors, clutches, or hard mechanical stops, are often represented by ideal switches, which give rise to variable causalities. In the presented approach, however, fixed causalities are assigned only once to a diagnostic bond graph. That is, causal strokes at switch ports in the diagnostic bond graph reflect only the switch-state configuration in a specific system mode. The actual discrete switch states are implicitly taken into account by the discrete values of the switch moduli. The presented approach starts from a diagnostic bond graph with fixed causalities and from a partitioning of the bond graph junction structure and systematically deduces a set of equations that determines the wanted residuals. Elimination steps result in analytical redundancy relations in which the states of the storage elements and the outputs of the ideal switches are unknowns. For the latter two unknowns, the approach produces an implicit differential-algebraic equation system. For illustration of the general matrix-based approach, an electromechanical system and two small electronic circuits are considered. 
Their equations are directly derived from a diagnostic bond graph by following causal paths and are reformulated so that they conform with the matrix equations obtained by the formal approach based on a partitioning of the bond graph junction structure. For one of the three mode-switching examples, a fault scenario has been simulated.