H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Wirtschaftswissenschaften (89)
- Fachbereich Informatik (65)
- Fachbereich Angewandte Naturwissenschaften (59)
- Fachbereich Ingenieurwissenschaften und Kommunikation (58)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (50)
- Fachbereich Sozialpolitik und Soziale Sicherung (46)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (23)
- Institut für Medienentwicklung und -analyse (IMEA) (16)
- Institut für Verbraucherinformatik (IVI) (16)
- Institut für funktionale Gen-Analytik (IFGA) (15)
Document Type
- Article (138)
- Part of a Book (68)
- Conference Object (66)
- Book (monograph, edited volume) (19)
- Preprint (12)
- Contribution to a Periodical (8)
- Report (8)
- Research Data (6)
- Doctoral Thesis (6)
- Master's Thesis (6)
Year of publication
- 2022 (359)
Keywords
- Machine Learning (5)
- Lehrbuch (4)
- Medienästhetik (4)
- virtual reality (4)
- Cathepsin K (3)
- GDPR (3)
- Knowledge Graphs (3)
- Lignin (3)
- Medien (3)
- Medienwissenschaft (3)
Auswirkungen einer anhaltenden, inflationären Geldpolitik in der Eurozone auf den privaten Sparer
(2022)
This bachelor's thesis critically examines the effects of a sustained inflationary monetary policy in the eurozone on private savers. The thesis shows how the strong expansion of the money supply influences savers' options and decisions, and to what extent such a monetary policy is compatible with savers' interests.
Self-supervised learning has proved to be a powerful approach for learning image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms that improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create large public labeled sonar datasets for training supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
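The RotNet pretext task mentioned above can be sketched in a few lines: each training image is rotated by 0, 90, 180 and 270 degrees, and the index of the applied rotation serves as the pseudo-label, so no human annotation is needed. A minimal NumPy sketch (the 4x4 array standing in for a sonar patch is purely illustrative):

```python
import numpy as np

def make_rotation_batch(image):
    """RotNet-style pretext data: rotate the image by 0/90/180/270 degrees
    and label each copy with the index of the rotation applied to it.
    The labels come from the transform itself, not from human annotation."""
    samples = [np.rot90(image, k) for k in range(4)]
    labels = list(range(4))
    return samples, labels

sonar_patch = np.arange(16.0).reshape(4, 4)  # stand-in for a sonar image patch
x, y = make_rotation_batch(sonar_patch)
print(len(x), y)  # 4 [0, 1, 2, 3]
```

A classifier pre-trained to predict these pseudo-labels can then be fine-tuned on the few labeled sonar images available, as in the few-shot transfer setup described above.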
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model-compression by truncating the parameter sums.
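The canonical re-parametrization described above can be illustrated with a small NumPy sketch: a convolution weight tensor is stored as factor vectors and reconstructed as a scaled sum of outer products. The rank, dimensions, and single scale per rank-1 term below are illustrative simplifications, not the paper's exact parametrization:

```python
import numpy as np
from functools import reduce

def cp_weight(factors, scales, global_scale=1.0):
    """Reconstruct a weight tensor from its canonical (CP) form: a scaled
    sum of rank-1 terms, each an outer product of one vector per mode,
    plus a global scaling parameter as in weight normalization."""
    terms = [s * reduce(np.multiply.outer, vecs)
             for s, vecs in zip(scales, factors)]
    return global_scale * np.sum(terms, axis=0)

# Rank-2 form of a kernel with 8 output channels, 4 input channels, 3x3 window.
rng = np.random.default_rng(0)
factors = [tuple(rng.normal(size=d) for d in (8, 4, 3, 3)) for _ in range(2)]
W = cp_weight(factors, scales=np.ones(2))
print(W.shape)  # (8, 4, 3, 3)
```

Truncating the sum to fewer rank-1 terms after training corresponds to the model-compression step mentioned in the abstract.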
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Estimating and understanding the vehicle's surroundings precisely is the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system comprises subsystems such as localization, detection and avoidance of static and dynamic obstacles, and mapping. To perceive the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, and LiDARs. These systems use deep learning techniques that transform the huge amount of sensor data into semantic information on which object detection and localization tasks are performed. For numerous driving tasks, accurate results require the location and depth information of a particular object. 3D object detection methods, by utilizing additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of objects. Based on recent research, 3D object detection frameworks that perform object detection and localization on LiDAR data and sensor fusion techniques show significant performance improvements. In this work, we present a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement obtained by sensor fusion techniques, along with a discussion of various state-of-the-art methods in both cases, experimental analysis, and future research directions.
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. They are pricey, yet homeowners use them to get local air data without evaluating the causal effect. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller processor. The 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storing air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The device allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
Jahresbericht 2021
(2022)
Breaking new ground and setting new trends in research, teaching and transfer - this is what the Hochschule Bonn-Rhein-Sieg (H-BRS) managed to do last year despite the Corona pandemic. Talents, ideas and cooperations have come to fruition in various ways, always in close exchange between applied science, society and business. "expand" is therefore the motto of the annual report of the H-BRS for the year 2021, which has now been published.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
For research in audiovisual interview archives, it is often of interest not only what is said but also how. Sentiment analysis and emotion recognition can help capture and categorize these different facets and make them searchable. In particular for oral history archives, such indexing technologies can be of great interest, as they can help understand the role of emotions in historical remembering. However, humans often perceive sentiments and emotions ambiguously and subjectively. Moreover, oral history interviews have multi-layered levels of complex, sometimes contradictory, sometimes very subtle facets of emotions. Therefore, the question arises of what chance machines and humans have of capturing these facets and assigning them to predefined categories. This paper investigates the ambiguity in human perception of emotions and sentiment in German oral history interviews and its impact on machine learning systems. Our experiments reveal substantial differences in human perception for different emotions. Furthermore, we report on ongoing machine learning experiments with different modalities. We show that human perceptual ambiguity and other challenges, such as class imbalance and lack of training data, currently limit the opportunities of these technologies for oral history archives. Nonetheless, our work uncovers promising observations and possibilities for further research.
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image as well as the corresponding detections in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Due to the COVID-19 pandemic, health education programs and workplace health promotion (WHP) could only be offered under difficult conditions, if at all. In Germany for example, mandatory lockdowns, working from home, and physical distancing have led to a sharp decline in expenditure on prevention and health promotion from 2019 to 2020. At the same time, the pandemic has negatively affected many people’s mental health. Therefore, our goal was to examine audiovisual stimulation as a possible measure in the context of WHP, because its usage is contact-free, time flexible, and offers, additionally, voice-guided health education programs. In an online survey following a cross-sectional single case study design with 393 study participants, we examined the associations between audiovisual stimulation and mental health, work engagement, and burnout. Using multiple regression analyses, we could identify positive associations between audiovisual stimulation and mental health, burnout, and work engagement. However, longitudinal data are needed to further investigate causal mechanisms between mental health and the use of audiovisual stimulation. Nevertheless, especially with regard to the pandemic, audiovisual stimulation may represent a promising measure for improving mental health at the workplace.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and carries a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. To investigate further, a graph convolutional network (GCN) is used in this work in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Considering the impracticality of such approaches in real-world scenarios, where complete point clouds are scarcely available, this work proposes approaches to deal with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed based on point completion network (PCN) which takes advantage of a graph convolution based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within a margin of 5% to the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
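The mIoU scores quoted above follow the standard mean intersection-over-union computation over part classes; a minimal sketch (how classes absent from both prediction and ground truth are handled varies between benchmarks):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over part classes, the metric behind
    the mIoU figures reported on ShapeNetPart."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both point sets
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2, 2])    # per-point predicted part labels
target = np.array([0, 0, 1, 2, 2, 2])  # per-point ground-truth labels
print(round(mean_iou(pred, target, 3), 3))  # 0.722
```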
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, and thus most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage that fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated based on characteristic code fragments from clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We are able to identify and report six malicious packages that have consequently been removed from npm. Therefore, our approach can support detection by reducing manual labor and hence may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
Hydrogen is a versatile energy carrier. When produced with renewable energy by water splitting, it is a carbon-neutral alternative to fossil fuels. The industrialization process of this technology is currently dominated by electrolyzers powered by solar or wind energy. For small-scale applications, however, more integrated device designs for water splitting using solar energy might optimize hydrogen production due to lower balance-of-system costs and smarter thermal management. Such devices offer the opportunity to thermally couple the solar cell and the electrochemical compartment. In this way, heat losses in the absorber can be turned into an efficiency boost for the device by simultaneously enhancing the catalytic performance of the water splitting reactions, cooling the absorber, and decreasing the ohmic losses.[1,2] However, integrated devices (sometimes also referred to as "artificial leaves") currently suffer from a lower technology readiness level (TRL) than the completely decoupled approach.
Integrated solar water splitting devices that produce hydrogen without the use of power inverters operate outdoors and are hence exposed to varying weather conditions. As a result, they might sometimes work at non-optimal operation points below or above the maximum power point of the photovoltaic component, which would directly translate into efficiency losses. Up until now, however, no common parameter describing and quantifying this and other real-life operating related losses (e.g. spectral mismatch) exists in the community. Therefore, the annual-hydrogen-yield-climatic-response (AHYCR) ratio is introduced as a figure of merit to evaluate the outdoor performance of integrated solar water splitting devices. This value is defined as the ratio between the real annual hydrogen yield and the theoretical yield assuming the solar-to-hydrogen device efficiency at standard conditions. This parameter is derived for an exemplary system based on state-of-the-art AlGaAs//Si dual-junction solar cells and an anion exchange membrane electrolyzer using hourly resolved climate data from a location in southern California and from reanalysis data of Antarctica. This work will help to evaluate, compare and optimize the climatic response of solar water splitting devices in different climate zones.
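The AHYCR ratio defined above reduces to a short computation once the theoretical yield is fixed: the measured annual hydrogen yield divided by the yield the device would produce if it always ran at its standard-condition solar-to-hydrogen (STH) efficiency. The parameter names and the conversion via hydrogen's lower heating value (about 33.3 kWh per kg) are this sketch's own assumptions:

```python
def ahycr(real_yield_kg, annual_irradiation_kwh_m2, area_m2,
          sth_efficiency_stc, kwh_per_kg_h2=33.3):
    """Annual-hydrogen-yield-climatic-response ratio: measured annual
    hydrogen yield over the theoretical yield at standard-condition
    solar-to-hydrogen efficiency. kwh_per_kg_h2 is hydrogen's lower
    heating value, used to convert solar energy to a hydrogen mass."""
    theoretical_kg = (annual_irradiation_kwh_m2 * area_m2
                      * sth_efficiency_stc / kwh_per_kg_h2)
    return real_yield_kg / theoretical_kg

# 10 m^2 device, 15% STH at standard conditions, 2000 kWh/m^2 annual irradiation:
print(round(ahycr(50.0, 2000.0, 10.0, 0.15), 3))  # 0.555
```

A ratio below 1 quantifies real-life operating losses such as working off the photovoltaic maximum power point or spectral mismatch.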
Research-Practice-Collaborations Addressing One Health and Urban Transformation. A Case Study
(2022)
One Health is an integrative approach at the interface of humans, animals and the environment, which can be implemented as a Research-Practice-Collaboration (RPC) given its interdisciplinarity and intersectoral focus on the co-production of knowledge. To exemplify this, the present commentary presents the Forschungskolleg "One Health and Urban Transformation" funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia in Germany. After analysis, the factors identified for a better implementation of RPC for One Health were those that allowed for constant communication and the reduction of power asymmetries between practitioners and academics in the co-production of knowledge. In this light, the training of a new generation of scientists at the boundaries of different disciplines, with mediation skills between academia and practice, is an important contribution with great implications for societal change that can aid the further development of RPC.
Designing training in nine process steps! Digitization of the master's module "Integrierte Managementsysteme" in the degree program "Material Science and Sustainability Methods" in the Fachbereich Naturwissenschaften at Hochschule Bonn-Rhein-Sieg. Using the example of a course taught in person for years, with lectures and seminar-style exercises, it is shown how designing and delivering teaching that conveys exam-relevant competencies can also succeed online. The appropriate setting of the teaching and learning process, observing quality criteria and recommendations for action, is relevant for any kind of training in universities, public authorities, companies, and other organizations.
Research data management (RDM) plays an increasingly large role in the everyday research of universities of applied sciences (HAW) and confronts many researchers with hitherto unfamiliar requirements. The goal is FAIR and sustainable data management in line with the DFG code "Leitlinien zur Sicherung guter wissenschaftlicher Praxis" (Guidelines for Safeguarding Good Research Practice) – for oneself, for one's own research group, and for the research community. This event, held on the occasion of the Research Data Day in NRW on 15 November 2022, introduced the essential principles of research data management along the research data life cycle and presented practical examples and useful links.
50 Jahre: Von der FH zur HAW
(2022)
Silicon carbide and graphene possess extraordinary chemical and physical properties. Here, these different systems are linked and the changes in structural and dynamic properties are investigated. For the simulations, a classical molecular dynamics (MD) approach was used. In this approach, a graphene layer (N = 240 atoms) was grafted at different distances on top of a 6H-SiC structure (N = 2400 atoms) and onto a 3C-SiC structure (N = 1728 atoms). The distances between the graphene layer and the 6H-SiC are 1.0, 1.3 and 1.5 Å, and the distances between the graphene layer and the 3C-SiC are 2.0, 2.3, and 2.5 Å. Each system was equilibrated at room temperature until no further relaxation was observed. The 6H-SiC structure in combination with graphene proves to be more stable than the combination with 3C-SiC, as can be seen in the determined energies. Pair distribution functions were influenced slightly by the graphene layer due to steric and energetic changes, as is clear from the small shifts of the C-C distances. Interactions as well as bonds between graphene and SiC mean that small shoulders of the high-frequency SiC peaks are visible in the spectra, while the high-frequency peaks of graphene are completely absent.
Graph databases employ graph structures such as nodes, attributes and edges to model and store relationships among data. To access this data, graph query languages (GQLs) such as Cypher are typically used, which can be difficult for end-users to master. In the context of relational databases, sequence-to-SQL models, which translate natural language questions into SQL queries, have been proposed. While these Neural Machine Translation (NMT) models increase the accessibility of relational databases, NMT models for graph databases are not yet available, mainly due to the lack of suitable parallel training data. In this short paper we sketch an architecture which enables the generation of synthetic training data for the graph query language Cypher.
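The core of such an architecture can be sketched as a template-based generator that instantiates natural-language/Cypher pairs from a schema. The schema fragment (Person, Movie, ACTED_IN) and the two templates below are illustrative assumptions, not the paper's actual grammar:

```python
# Minimal sketch of synthetic NL/Cypher pair generation via templates.
# The (Person)-[:ACTED_IN]->(Movie) schema and templates are hypothetical.
TEMPLATES = [
    ("Which movies did {name} act in?",
     "MATCH (p:Person {{name: '{name}'}})-[:ACTED_IN]->(m:Movie) RETURN m.title"),
    ("Who acted in {title}?",
     "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {{title: '{title}'}}) RETURN p.name"),
]

def generate_pairs(names, titles):
    """Instantiate every template with every entity to get parallel training data."""
    pairs = [(q.format(name=n), c.format(name=n))
             for q, c in TEMPLATES[:1] for n in names]
    pairs += [(q.format(title=t), c.format(title=t))
              for q, c in TEMPLATES[1:] for t in titles]
    return pairs

pairs = generate_pairs(["Alice", "Bob"], ["Heat"])
```

Scaling the entity lists and template set yields arbitrarily large parallel corpora on which an NMT model can then be trained.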
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
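At the level of token sequences, the combined text–triple input described under RESULTS can be pictured roughly as follows; the special-token layout is an illustrative assumption, not the exact STonKGs encoding:

```python
def build_input(text_tokens, triple, cls="[CLS]", sep="[SEP]"):
    """Concatenate text tokens and a KG triple into one multimodal sequence.
    Token names and layout are illustrative, not the exact STonKGs format."""
    subj, rel, obj = triple
    return [cls] + list(text_tokens) + [sep] + [subj, rel, obj] + [sep]

seq = build_input(["aspirin", "inhibits", "prostaglandin", "synthesis"],
                  ("Aspirin", "inhibits", "PTGS1"))
```

The Transformer then learns joint representations over both segments in one shared embedding space.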
Current research in augmented, virtual, and mixed reality (XR) reveals a lack of tool support for designing and, in particular, prototyping XR applications. While recent tools research is often motivated by studying the requirements of non-technical designers and end-user developers, the perspective of industry practitioners is less well understood. In an interview study with 17 practitioners from different industry sectors working on professional XR projects, we establish the design practices in industry, from early project stages to the final product. To better understand XR design challenges, we characterize the different methods and tools used for prototyping and describe the role and use of key prototypes in the different projects. We extract common elements of XR prototyping, elaborating on the tools and materials used for prototyping and establishing different views on the notion of fidelity. Finally, we highlight key issues for future XR tools research.
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
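A point containment query of this kind reduces to the crossing-number test: cast one ray from the query point and count ray/triangle intersections, with an odd count meaning "inside". A minimal CPU sketch follows (the GPU offloads exactly these intersection tests to the ray-tracing hardware; the tetrahedral mesh and ray direction are illustrative):

```python
import numpy as np

def ray_hits_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection test."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirn, e2)
    det = e1.dot(p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = dirn.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv > eps    # hit only if in front of the origin

def point_inside(point, triangles, dirn=np.array([1.0, 0.37, 0.21])):
    """Point-in-mesh query: count ray crossings; an odd count means inside."""
    hits = sum(ray_hits_triangle(point, dirn, *tri) for tri in triangles)
    return hits % 2 == 1

# Closed tetrahedron as a toy stand-in for a simulation mesh cell.
a, b, c, d = (np.array(v, dtype=float) for v in
              [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
faces = [(a, b, c), (a, b, d), (a, c, d), (b, c, d)]
```

For particle tracing over large simulation data, this test locates the cell containing each particle; RT cores accelerate the ray/triangle tests and the BVH traversal that prunes them.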
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. The methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
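For reference, the volume rendering these methods rely on is the standard quadrature of NeRF-style radiance fields (not spelled out in the abstract): the color along a camera ray $\mathbf{r}$ with samples $i = 1,\dots,N$, densities $\sigma_i$, colors $\mathbf{c}_i$ and sample spacings $\delta_i$ is

```latex
C(\mathbf{r}) \approx \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big),
```

so every pixel requires on the order of $N$ MLP queries, which is what makes naive NeRF rendering non-interactive.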
This project focuses on object detection in dense volume data. Dense volume data comes in several forms, namely Computed Tomography (CT), Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) scans; this work focuses on CT scans. CT scans are not limited to the medical domain: they are also used in industry, for example in airport baggage screening and on assembly lines, where object detection systems should be able to detect objects quickly. One way to address the computational complexity and make object detection systems fast is to use low-resolution images. Low-resolution CT scanning is fast, so the entire process of scanning and detection can be accelerated. In the medical domain, too, the radiation dose can be reduced by shortening the patient's exposure time, which low-resolution CT scans allow. Hence it is essential to find out which object detection model offers the best accuracy and speed on low-resolution CT scans. However, existing approaches do not report how models perform when the resolution of CT scans is varied. In this project, the goal is therefore to analyze the impact of varying CT scan resolution on both the speed and the accuracy of the model. Three object detection models, namely RetinaNet, YOLOv3 and YOLOv5, were trained at various resolutions. Among the three models, YOLOv5 achieved the best mAP and F1 score across multiple resolutions on the DeepLesion dataset, while RetinaNet had the lowest inference time. From the experiments, it can be asserted that sacrificing mean average precision (mAP) to improve inference time by reducing resolution is feasible.
In (dynamic) adaptive mesh refinement (AMR), an input mesh is refined or coarsened according to the needs of the numerical application. This refinement happens with no respect to the originally meshed domain and is therefore limited to the geometrical accuracy of the original input mesh. We previously presented a novel approach that equips this input mesh with additional geometry information, allowing refinement and high-order cells based on the geometry of the original domain, and showed a limited implementation of this algorithm. Now we evaluate this prototype with a numerical application and demonstrate its influence on the accuracy of certain numerical results. To be as practical as possible, we implement the ability to import meshes generated by Gmsh and equip them with the needed geometry information. Furthermore, we improve the mapping algorithm, which maps the geometry information of the boundary of a cell into the cell's volume. With these preliminary steps done, we use our new approach in a simulation of the advection of a concentration along the boundary of a sphere shell and past the boundary of a rotating cylinder. We evaluate the accuracy of our approach in comparison to conventional cell refinement to answer our research question: how does the performance and accuracy of the hexahedral curved domain AMR algorithm compare to linear AMR when solving the advection equation with the linear finite volume method? We show the influence of curved AMR on our simulation results and find that it can even outperform far finer linear meshes in terms of accuracy. We also find that the current implementation of this approach is too slow for practical usage. We can therefore demonstrate the benefits of curved AMR in certain geometry-related application scenarios and show possible improvements to make it more feasible and practical in the future.
In the field of autonomous robotics, sensors have played a major role in defining the scope of the technology and, to a great extent, its limitations as well. This cycle of constant updates and technological advancement has given birth to industries that were once inconceivable. Industries like autonomous driving, which has a serious impact on the safety and security of people, also have equally far-reaching implications for the dynamics and economics of the market. With sensors like LiDAR and RADAR delivering 3D measurements as point clouds, there is a need to process the raw measurements directly, and many research groups are working on this. Sizable research effort has gone into solving the task of object detection on 2D images. In this thesis we aim to develop a LiDAR-based 3D object detection scheme. We combine the ideas of PointPillars and feature pyramid networks (FPNs) from 2D vision to propose Pillar-FPN. The proposed method directly takes 3D point clouds as input and outputs a 3D bounding box. Our pipeline consists of multiple variations of the proposed Pillar-FPN at the feature fusion level, which are described in the results section. We trained our model on the KITTI training dataset and evaluated it on the KITTI validation dataset.
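The pillar representation that Pillar-FPN inherits from PointPillars can be sketched as a simple binning of the point cloud into vertical columns on a 2D ground-plane grid; the ranges, pillar size and per-pillar point cap below are illustrative values, not KITTI's actual configuration:

```python
import numpy as np

def pillarize(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
              pillar=0.5, max_pts=32):
    """Scatter (N, 3) LiDAR points into vertical pillars on a 2D grid.
    Ranges, pillar size and per-pillar cap are illustrative assumptions."""
    nx = int((x_range[1] - x_range[0]) / pillar)
    ny = int((y_range[1] - y_range[0]) / pillar)
    ix = np.floor((points[:, 0] - x_range[0]) / pillar).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / pillar).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    pillars = {}
    for p, i, j in zip(points[keep], ix[keep], iy[keep]):
        pillars.setdefault((int(i), int(j)), []).append(p)
    # cap each pillar's point count, as the pillar encoder expects fixed-size input
    return {k: np.stack(v[:max_pts]) for k, v in pillars.items()}

pillars = pillarize(np.array([[1.0, 0.0, 0.0],
                              [1.1, 0.1, 0.5],
                              [100.0, 0.0, 0.0]]))
```

Each pillar's points are then encoded into a feature vector, producing a pseudo-image over the grid on which 2D detection heads such as an FPN can operate.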
Modeling of Creep Behavior of Particulate Composites with Focus on Interfacial Adhesion Effect
(2022)
Evaluation of creep compliance of particulate composites using empirical models always yields parameters that depend on initial stress and material composition. Efforts to connect model parameters with physical properties have not yet been successful. Further, during creep, delamination between matrix and filler may occur depending on time and initial stress, reducing interface adhesion and load transfer to the filler particles. In this paper, the creep compliance curves of glass-bead-reinforced poly(butylene terephthalate) composites were fitted with the Burgers and Findley models, providing different sets of time-dependent model parameters for each initial stress. Although the Findley model performs well in primary creep while the Burgers model is more suitable when secondary creep comes into play, both models allow only a qualitative prediction of creep behavior, because the interface adhesion and its time dependency is an implicit, hidden parameter. As Young's modulus is a parameter of these models (and of the majority of other creep models), it was introduced as a filler-content-dependent parameter with the help of the cube-in-cube elementary volume approach of Paul. The analysis led to a time-dependent creep compliance that depends only on the time-dependent creep of the matrix and the normalized particle distance (or the filler volume content), and it allowed accounting for the adhesion effect. Comparison with the experimental data confirmed that the elementary-volume-based creep compliance function can be used to predict the realistic creep behavior of particulate composites.
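For reference, the standard forms of the two empirical models fitted here (not restated in the abstract) are the Burgers four-parameter creep compliance and the Findley power law:

```latex
J(t) = \frac{1}{E_M} + \frac{t}{\eta_M}
     + \frac{1}{E_K}\left(1 - e^{-E_K t/\eta_K}\right),
\qquad
\varepsilon(t) = \varepsilon_0 + A\,t^{n},
```

where $E_M$, $\eta_M$ (Maxwell element) and $E_K$, $\eta_K$ (Kelvin–Voigt element) are the spring and dashpot parameters of the Burgers model, and $\varepsilon_0$, $A$, $n$ are the Findley fitting constants. The unbounded $t/\eta_M$ term is what lets the Burgers model capture secondary creep, which the bounded power-law onset of the Findley model describes less well.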
Entspannung im Arbeitsalltag – Einsatz von Mentalsystemen für die betriebliche Gesundheitsförderung
(2022)
This collection of formulas presents and explains the formulas of financial mathematics within their financial-economic context, as they are fundamentally necessary in business studies and in business practice. Understanding the formulas and applying them in practice is supported by useful aids and comprehensible examples, so that the context of the financial formulas is presented clearly and intelligibly. This collection is an indispensable tool for students of business and economics, and also a useful reference work for decision-makers in business, politics and teaching. (publisher's description)
Controlling
(2022)
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention, “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
This edited volume on “Recent Advances in Renewable Energy” presents a selection of refereed papers presented at the 1st International Conference on Electrical Systems and Automation. The book provides rigorous discussions, the state of the art, and recent developments in the field of renewable energy sources supported by examples and case studies, making it an educational tool for relevant undergraduate and graduate courses. The book will be a valuable reference for beginners, researchers, and professionals interested in renewable energy.
This book, which is the second of two volumes on "Control of Electrical and Electronic Systems", presents a compilation of selected contributions to the 1st International Conference on Electrical Systems & Automation. The book provides rigorous discussions, the state of the art, and recent developments in the modelling, simulation and control of power electronics, industrial systems, and embedded systems. The book will be a valuable reference for beginners, researchers, and professionals interested in control of electrical and electronic systems.
Telogen single hairs are a common type of trace evidence at crime scenes. At present they are mostly excluded from STR typing because, owing to low DNA amounts and strong DNA degradation, their STR profiles are in many cases incomplete and difficult to interpret. In the present work, a systematic approach was applied to reveal correlations between DNA quantity and DNA degradation on the one hand and the success of STR typing on the other, and on that basis to predict the typing success of DNA from hairs.
To this end, a human-specific (RiboD) and a canine-specific (RiboDog) qPCR-based assay were developed for measuring DNA quantity and assessing DNA integrity by means of a degradation value (D-value). Because the primers used target ubiquitously occurring ribosomal DNA sequences, the underlying principle can be transferred quickly and inexpensively to other species. The function of the assays was confirmed with serially degraded DNA, and the human assay was validated against the commercial Quantifiler™ Trio DNA Quantification Kit. Finally, the assays were applied to DNA from telogen and catagen single hairs of humans and dogs to examine the relationship of DNA quantity and DNA integrity to the completeness of STR alleles (allele recovery) in DNA profiles obtained with capillary electrophoresis (CE)-based STR kits. For human single hairs, allele recovery depended on both DNA quantity and DNA integrity. For single dog hairs, in contrast, DNA degradation was consistently lower and allele recovery depended solely on the amount of DNA extracted.
To further improve STR analysis of degraded human DNA samples, a novel NGS-based assay (maSTR, mini-amplicon STR) was established that amplifies in parallel the 16 forensic STR loci of the European Standard Set plus amelogenin as very short amplicons (76-296 bp). With intact DNA, the maSTR assay generated reproducible, complete profiles without allelic drop-ins at input amounts around 200 pg. At lower DNA amounts, occasional allelic drop-ins occurred, while complete profiles were still obtained from as little as 43 pg of DNA.
The combined strategy of RiboD measurement of DNA quantity and integrity and the resulting STR typing success of the maSTR assay was validated on degraded DNA. The strategy was then applied to DNA from telogen and catagen single hairs and compared with the results of the CE-based PowerPlex® ESX 17 kit, which analyzes the same STR marker set. The typing success of both STR assays depended on the optimal amount of template DNA as well as on DNA integrity. With the maSTR assay, complete profiles were demonstrated with approximately 50 pg of input DNA for slightly degraded DNA from single hairs, and with approximately 500 pg of strongly degraded DNA. Owing to the low DNA amounts of telogen single hairs, the reproducibility of the maSTR results fluctuated, but it was consistently superior to the PowerPlex® ESX 17 kit in terms of allele recovery.
A comparison with two CE-based STR kits that are complementary with respect to amplicon length distribution (PowerPlex® ESX 17 and ESI 17 Fast), as well as with a commercial NGS kit (ForenSeq™ DNA Signature Prep), showed that the decisive factor for typing degraded DNA is not the NGS technology itself but the shortness of the amplicons. In all comparisons with the commercial kits, however, the maSTR assay showed a higher number of allelic drop-ins, which occurred the more frequently the lower the DNA amount used and the more strongly it was degraded.
Because profiles with allelic drop-ins correspond to mixed profiles, the STR profiles generated with the maSTR assay were examined using methods for the interpretation of mixed traces. In composite interpretation, all alleles occurring across replicates are counted; in consensus interpretation, only the reproducible alleles. It turned out that composite interpretation is best suited for profiles with few allelic drop-ins (PowerPlex® ESX 17-generated profiles), whereas consensus interpretation is best for profiles containing many allelic drop-ins (maSTR-generated profiles).
Finally, the GenoProof Mixture 3 software was used to examine to what extent semi-continuous and fully continuous probabilistic methods are suitable for the biostatistical evaluation of DNA profiles from single hairs. Owing to the high number of allelic drop-ins, the maSTR assay proved only slightly superior to the CE-based methods, and only for DNA that is present in sufficient quantity and only mildly degraded; in that range, assigning the hair profile to the reference profile also succeeds with CE-based methods.
From all results, a recommendation was derived for handling DNA from shed single hairs, based on the degree of DNA degradation in combination with the DNA quantity. The present work thus lays a foundation for making shed single hairs usable in routine forensic casework, and potentially for applying the approach to other trace types with low amounts of degraded DNA. This could increase the usability of such trace types for forensic criminalistics, particularly where the standard CE-based methods fail. In the longer term, NGS technology, owing to the high multiplexing capacity of uniform short markers, is generally superior to CE-based technology for typing degraded DNA.
Social protection has been increasingly recognized by experts from different fields as a key instrument for social, economic, political, and environmental development. It is also known for tackling multiple goals related to the reduction of risk, poverty and inequality at once. Yet, its instruments are often seen in isolation, programmes are still managed in silos and the systemic aspect is often overlooked. Engaging in critical discussions about the systemic aspect of social protection and outlining what it really takes to pursue a systemic approach has motivated the two editors, Prof. Dr. Esther Schüring from H-BRS and Dr. Markus Loewe from the German Institute of Development and Sustainability (IDOS) to launch the very first Handbook on Social Protection Systems in late 2021.
The human enzymes GLYAT (glycine N-acyltransferase), GLYATL1 (glutamine N-phenylacetyltransferase) and GLYATL2 (glycine N-acyltransferase-like protein 2) are not only important in the detoxification of xenobiotics via the human liver, but are also involved in the elimination of acyl residues that accumulate in the form of their coenzyme A (CoA) esters in some rare inborn errors of metabolism. This concerns, for example, disorders in the degradation of branched-chain amino acids, such as isovaleric acidemia or propionic acidemia. In addition, they also assist in the elimination of ammonium, which is produced during the transamination of amino acids and accumulates in urea cycle defects. Sequence variants of the enzymes were also investigated, which may provide evidence of impaired enzyme activities, from which therapy adjustments can potentially be derived. A modified Escherichia coli strain was chosen for the overexpression and partial biochemical characterization of the enzymes, as it may allow soluble expression and proper folding. Since post-translational protein modifications are very limited in bacteria, we also attempted to overexpress the enzymes in HEK293 cells (of human origin). In addition to characterization via immunoblots and activity assays, the intracellular localization of the enzymes was determined using GFP coupling and confocal laser scanning microscopy in transfected HEK293 cells. The GLYATL2 enzyme may have tasks beyond detoxification and metabolic defects; the preliminary molecular biology work was performed as part of this project, while the enzyme activity determinations were outsourced to a co-supervised bachelor thesis. The enzyme activity determinations with purified recombinant human enzyme from Escherichia coli showed a threefold higher activity of the GLYAT sequence variant p.(Asn156Ser), which should be considered the probably authentic wild type of the enzyme.
In addition, the GLYAT variant p.(Gln61Leu), which is very common in South Africa, was shown to have reduced activity; this could be of particular importance in the treatment of isovaleric acidemia, which is also common there. Intracellularly, GLYAT and GLYATL1 were localized to mitochondria. As the analyses have shown, sequence variations of GLYAT and GLYATL1 influence their enzyme activity. In the case of reduced GLYAT activity, patients could increasingly be treated with L-carnitine in the sense of an individualized therapy, since the conjugation of the toxic isovaleryl-CoA with glycine is restricted by the GLYAT sequence variation. Activity-reducing variants identified in this project are of particular interest, as they may influence the treatment of certain metabolic defects.
While the recent discussion on Art. 25 GDPR often considers the approach of data protection by design as an innovative idea, the notion of making data protection law more effective through requiring the data controller to implement the legal norms into the processing design is almost as old as the data protection debate. However, there is another, more recent shift in establishing the data protection by design approach through law, which is not yet understood to its fullest extent in the debate. Art. 25 GDPR requires the controller to not only implement the legal norms into the processing design but to do so in an effective manner. By explicitly declaring the effectiveness of the protection measures to be the legally required result, the legislator inevitably raises the question of which methods can be used to test and assure such efficacy. In our opinion, extending the legal compatibility assessment to the real effects of the required measures opens this approach to interdisciplinary methodologies. In this paper, we first summarise the current state of research on the methodology established in Art. 25 sect. 1 GDPR, and pinpoint some of the challenges of incorporating interdisciplinary research methodologies. On this premise, we present an empirical research methodology and first findings which offer one approach to answering the question on how to specify processing purposes effectively. Lastly, we discuss the implications of these findings for the legal interpretation of Art. 25 GDPR and related provisions, especially with respect to a more effective implementation of transparency and consent, and provide an outlook on possible next research steps.
SLC6A14 (ATB0,+) is unique among SLC proteins in its ability to transport 18 of the 20 proteinogenic (dipolar and cationic) amino acids and naturally occurring and synthetic analogues (including anti-viral prodrugs and nitric oxide synthase (NOS) inhibitors). SLC6A14 mediates amino acid uptake in multiple cell types where increased expression is associated with pathophysiological conditions including some cancers. Here, we investigated how a key position within the core LeuT-fold structure of SLC6A14 influences substrate specificity. Homology modelling and sequence analysis identified the transmembrane domain 3 residue V128 as equivalent to a position known to influence substrate specificity in distantly related SLC36 and SLC38 amino acid transporters. SLC6A14, with and without V128 mutations, was heterologously expressed and function determined by radiotracer solute uptake and electrophysiological measurement of transporter-associated current. Substituting the amino acid residue occupying the SLC6A14 128 position modified the binding pocket environment and selectively disrupted transport of cationic (but not dipolar) amino acids and related NOS inhibitors. By understanding the molecular basis of amino acid transporter substrate specificity we can improve knowledge of how this multi-functional transporter can be targeted and how the LeuT-fold facilitates such diversity in function among the SLC6 family and other SLC amino acid transporters.
While many proteins are known clients of heat shock protein 90 (Hsp90), it is unclear whether the transcription factor thyroid hormone receptor beta (TRβ) interacts with Hsp90 to control hormonal perception and signaling. Higher Hsp90 expression in mouse fibroblasts was elicited by the addition of triiodothyronine (T3). T3 bound to Hsp90 and enhanced adenosine triphosphate (ATP) binding of Hsp90 due to a specific binding site for T3, as identified by molecular docking experiments. The binding of TRβ to Hsp90 was prevented by T3 or by the thyroid mimetic sobetirome. Purified recombinant TRβ trapped Hsp90 from cell lysate or purified Hsp90 in pull-down experiments. The affinity of Hsp90 for TRβ was 124 nM. Furthermore, T3 induced the release of bound TRβ from Hsp90, which was shown by a streptavidin-conjugated quantum dot (SAv-QD) masking assay. The data indicate that the T3 interaction with TRβ and Hsp90 may be an amplifier of the cellular stress response by blocking Hsp90 activity.
Due to expected positive impacts on business, the application of artificial intelligence has increased widely. The decision-making procedures of these models are often complex and not easily understandable to the company's stakeholders, i.e. the people who have to follow up on recommendations or try to understand automated decisions of a system. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining the models to AI experts with the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to the stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products while, in the meantime, customers can cancel their orders if this takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Our research therefore investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model's priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effective backorder prediction in terms of standard evaluation metrics, outperforming known related works with an AUC of 0.9489.
Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
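The Shapley additive explanations mentioned in the abstract can be illustrated with a tiny, self-contained sketch. The `risk` scoring function, its features, and all weights below are hypothetical stand-ins, not the paper's CNN model; absent features are replaced by baseline (mean) values, a common convention.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one prediction.

    Features absent from a coalition are replaced by their
    baseline (e.g. training-set mean) values.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight of a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear "backorder risk" score over three features:
# current stock, lead time, forecast demand.
def risk(f):
    stock, lead_time, demand = f
    return -0.5 * stock + 0.3 * lead_time + 0.4 * demand

phi = shapley_values(risk, x=[2.0, 10.0, 8.0], baseline=[5.0, 4.0, 6.0])
```

For a linear model the attributions reduce to weight times deviation from baseline, and they sum to the difference between the explained prediction and the baseline prediction (the additivity property that makes such explanations attractive for stakeholders).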
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. First, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of Machine Perception, Machine Learning and Modeling, Planning and Optimization, as well as Interaction and Intervention, that hold potential for improving training and competition. Second, we survey the present status of AI use in elite sports. To this end, in addition to another literature review, we interviewed leading sports scientists who are closely connected to the main national service institutes for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing. However, projects in the field of modeling and planning have become increasingly popular in recent years. Based on these two perspectives, we extract deficits, issues, and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by practitioners, and explainability of AI results.
Technical progress in the collection, storage, and processing of data makes it necessary to raise new questions about socially acceptable data markets. There is both a trend toward simplified data sharing and a demand for better protection of informational self-determination. The idea of data trustees sits within this field of tension. The aim of this article is to show that different forms of data trusteeship should be distinguished in order to do justice to the complexity of the topic. In particular, alongside multilateral trusteeship, with the trustee as a neutral instance, there is also a need for unilateral trusteeship, in which the trustee acts as an advocate of consumer interests. From this perspective, the model of data trusteeship is systematically developed as a proxy interpretation of the interests of individual and collective identities.
The body of prior research on digital entrepreneurship education (EE) is immense, which makes it difficult to gain an overview. Bibliometric visualization, mapping, and clustering can help to structure such a sprawling research literature. Hence, the goal of this mapping study is to identify clusters of EE research and derive a taxonomic structure that can serve as a basis for future work. The analyzed data, drawn from Google Scholar with the Publish or Perish tool, comprise 1000 documents published between 2007 and 2022. This taxonomy should, on the one hand, strengthen the ties within digital entrepreneurship education research; on the other, it should connect international research communities to foster interdisciplinary digital entrepreneurship education and its influence on a global basis. The work deepens students' understanding of current digital entrepreneurship education research by classifying the most influential relationships among its contributions and contributors. The bibliographic analysis covers the citation network, the authors' research areas, and the papers' content on the topic. These three perspectives are integrated into a bibliographic model of authors, paper titles, keywords, and abstracts: Harzing's Publish or Perish is used to extract the data from Google Scholar, and VOSViewer is used to visualize network maps of co-authorship and term co-occurrence, providing an intuitive and accessible picture of 'digital entrepreneurial intention' research for university students.
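At its core, the term co-occurrence mapping described in the abstract counts how often two keywords are assigned to the same document; VOSViewer then lays these link weights out as a network. A minimal counting sketch (the keyword lists below are made up, not the actual Google Scholar data):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often two keywords are assigned to the same paper.

    Pairs are stored in sorted order so (a, b) and (b, a) collapse
    into one undirected link, as in a co-occurrence network.
    """
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword assignments for three papers.
papers = [
    ["digital entrepreneurship", "education", "bibliometrics"],
    ["education", "bibliometrics"],
    ["digital entrepreneurship", "education"],
]
links = cooccurrence(papers)
```

The resulting link weights are exactly what a co-occurrence visualization tool consumes as edge strengths.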
Microarray-based experiments revealed that the thyroid hormone triiodothyronine (T3) enhanced the binding of Cy5-labeled ATP to heat shock protein 90 (Hsp90). By molecular docking experiments with T3 on Hsp90, we identified a T3 binding site (TBS) near the ATP binding site on Hsp90. A synthetic peptide with the sequence HHHHHHRIKEIVKKHSQFIGYPITLFVEKE, derived from the TBS on Hsp90, showed binding of T3 in MST experiments with an EC50 of 50 μM. The binding motif can influence the activity of Hsp90 by hindering ATP accessibility or the release of ADP.
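As a rough illustration, an EC50 such as the one reported here can be read through a simple one-site Hill model of fractional binding. The model choice and the Hill coefficient of 1 are assumptions for the sketch; only the 50 μM value comes from the abstract.

```python
def hill(conc_uM, ec50_uM=50.0, n=1.0):
    """Fraction of peptide bound at a given T3 concentration,
    assuming a simple one-site Hill model (EC50 = 50 uM from MST)."""
    return conc_uM ** n / (ec50_uM ** n + conc_uM ** n)
```

By construction, the response is 0.5 at the EC50 and approaches saturation at high ligand concentrations, which is the defining property fitted in MST dose-response analysis.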
Cathepsin K (CatK) is a target for the treatment of osteoporosis, arthritis, and bone metastasis. Peptidomimetics with a cyanohydrazide warhead represent a new class of highly potent CatK inhibitors; however, their binding mechanism is unknown. We investigated two model cyanohydrazide inhibitors with differently positioned warheads: an azadipeptide nitrile Gü1303 and a 3-cyano-3-aza-β-amino acid Gü2602. Crystal structures of their covalent complexes were determined with mature CatK as well as a zymogen-like activation intermediate of CatK. Binding mode analysis, together with quantum chemical calculations, revealed that the extraordinary picomolar potency of Gü2602 is entropically favoured by its conformational flexibility at the nonprimed-primed subsites boundary. Furthermore, we demonstrated by live cell imaging that cyanohydrazides effectively target mature CatK in osteosarcoma cells. Cyanohydrazides also suppressed the maturation of CatK by inhibiting the autoactivation of the CatK zymogen. Our results provide structural insights for the rational design of cyanohydrazide inhibitors of CatK as potential drugs.
The epithelial sodium channel (ENaC) is a heterotrimeric ion channel that plays a key role in sodium and water homeostasis in tetrapod vertebrates. In the aldosterone-sensitive distal nephron, hormonally controlled ENaC expression matches dietary sodium intake to its excretion. Furthermore, ENaC mediates sodium absorption across the epithelia of the colon, sweat ducts, reproductive tract, and lung. ENaC is a constitutively active ion channel and its expression, membrane abundance, and open probability (PO) are controlled by multiple intracellular and extracellular mediators and mechanisms [9]. Aberrant ENaC regulation is associated with severe human diseases, including hypertension, cystic fibrosis, pulmonary edema, pseudohypoaldosteronism type 1, and nephrotic syndrome [9].
The implementation of the Sustainable Development Goals (SDGs) and the conservation and protection of nature are among the greatest challenges facing urban regions. So far there are few approaches that link the SDGs to natural diversity and related ecosystem services at the local level and track them with a view to advancing sustainable development there. We want to close this gap by developing a set of indicators that capture ecosystem services in the sense of the SDGs and that are based on data freely available throughout Germany and Europe. Based on 10 SDGs and 35 SDG indicators, we develop an ecosystem-service- and biodiversity-related indicator set for the evaluation of sustainable development in urban areas. We further show that it is possible to close many of the data gaps between SDGs and locally collected data mentioned in the literature and to translate the universal SDGs to the local level. As an example, we develop this set of indicators for the Bonn/Rhein-Sieg metropolitan area in North Rhine-Westphalia, Germany, which comprises both rural and densely populated settlements. This set of indicators can also help improve communication and plan sustainable development by increasing transparency in local sustainability, implementing a visible sustainability monitoring system, and strengthening the collaboration between local stakeholders.
Education for Sustainable Development (ESD, SDG 4) and human well-being (SDG 3) are among the central subjects of the Sustainable Development Goals (SDGs). In this article, based on the Questionnaire for Eudaimonic Well-Being (QEWB), we investigate to what extent (a) there is a connection between eudaimonic well-being (EWB) and practical commitment to the SDGs and (b) there is a deficit in EWB among young people in general. We also want to use the article to draw attention to the need for further research on the links between human well-being and commitment to sustainable development. A total of 114 students between the ages of 18 and 34, who either are engaged in (extra)curricular activities of sustainable development (28 students) or are not (86 students), completed the QEWB. The students were interviewed twice: once regarding their current and once regarding their aspired EWB. Our results show that students who are actively engaged in activities for sustainable development report a higher EWB than non-active students. Furthermore, we show that students generally report deficits in EWB and wish for an improvement in their well-being. This especially applies to aspects of EWB related to self-discovery and the sense of meaning in life. Our study suggests that a practice-oriented ESD in particular can have a positive effect on the quality of life of young students and can support them in working on deficits in EWB.
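A common way to quantify the kind of group difference reported here (engaged vs. non-engaged students) is an effect size such as Cohen's d. The sketch below uses hypothetical questionnaire scores, not the study's data, and Cohen's d is an illustrative choice rather than the statistic the abstract names.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d effect size with a pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled = (((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical well-being scores for engaged vs. non-engaged students.
d = cohens_d([4, 5, 4, 5], [3, 3, 4, 2])
```

A positive d indicates that the first group scores higher on average, expressed in units of pooled standard deviation.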
Mobiles Laser-Schneidsystem zur Unterstützung der USBV-Entschärfung und Beweissicherung (mobiLaS)
(2022)
Regions and their innovation ecosystems have increasingly become of interest to CSCW research as the context in which work, research, and design take place. Our study adds to this growing discourse by providing preliminary data and reflections from an ongoing attempt to intervene in and support a regional innovation ecosystem. We report on the benefits and shortcomings of a practice-oriented approach in such regional projects and highlight the importance of relations and the notion of spillover. Lastly, we discuss methodological and pragmatic hurdles that CSCW research needs to overcome in order to support regional innovation ecosystems successfully.
P30 - Das Elektrospinnen von halbleitenden Zinndioxidfasern für die Detektion von Wasserstoff
(2022)
The aim of this work is the development of thin ceramic fibers as a semiconducting sensor material for the detection of hydrogen, ideally at room temperature. The electrical conductivity of semiconducting metal oxides changes when oxidizing and reducing gases act on the surface of the metal oxide. This effect can be used to measure gas concentrations. The reaction of tin(IV) oxide with hydrogen is based on the reduction of the tin(IV) oxide to tin, whereby the electrons of the tin(IV) oxide remain in the metallic tin and, in their unbound state, contribute to an increase in conductivity. Hydrogen can react both with the oxygen atoms of the oxide and with oxygen atoms adsorbed on the oxide surface [6]. Since these reactions take place at the surface of the oxide, sensors with a large surface area should exhibit a higher sensitivity than metal-oxide bulk materials [3]. Using fibers instead of thin or thick films therefore leads to a better sensitivity toward gases.
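The conductivity mechanism described above is commonly summarized as a sensor response ratio: for an n-type oxide such as SnO2, a reducing gas like H2 frees electrons, resistance drops, and the response exceeds 1. This metric definition is a widespread convention in gas sensing, not a formula taken from this abstract.

```python
def sensor_response(r_air_ohm, r_gas_ohm):
    """Response of an n-type metal-oxide sensor to a reducing gas.

    H2 reduces SnO2 and frees electrons, so the resistance under gas
    exposure drops below the resistance in air, giving S > 1.
    """
    return r_air_ohm / r_gas_ohm
```

A larger ratio means a stronger response, which is why high-surface-area fibers are expected to outperform bulk material.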
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two Co-Design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalizing features leads to a reflection on the individual taste experience. The self-reflection enabled by our design helps consumers develop their taste competencies and thus strengthens their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a sensory phenomenon.
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable for their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite to adjusting the indoor temperature for providing a comfortable environment. In this research, we focus on identifying relevant features for predicting PTC preferences. We propose a machine learning-based predictive framework by employing supervised feature selection techniques. We apply two feature selection techniques to select the optimal sets of features to improve the thermal preference prediction performance. The experimental results on a public PTC dataset demonstrated the efficiency of the feature selection techniques that we have applied. In turn, our PTC prediction framework with feature selection techniques achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
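A simple filter-style variant of such supervised feature selection can be sketched as follows. Correlation ranking is an illustrative stand-in, since the abstract does not name the two concrete techniques; the feature rows and labels below are made up.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equally long numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def select_top_k(rows, labels, k):
    """Filter-style feature selection: rank features by the absolute
    correlation with the thermal-preference label, keep the k strongest."""
    n_feat = len(rows[0])
    scores = [abs(pearson([r[i] for r in rows], labels)) for i in range(n_feat)]
    return sorted(range(n_feat), key=lambda i: scores[i], reverse=True)[:k]

# Hypothetical sensor features (e.g. skin temperature, humidity, noise)
# and binary comfort-preference labels.
rows = [[0.1, 0.5, 1.0], [0.2, 0.4, 0.0], [0.9, 0.6, 1.0], [0.8, 0.5, 0.0]]
labels = [0, 0, 1, 1]
top = select_top_k(rows, labels, 2)
```

The selected feature indices would then be used to train the downstream preference classifier, which is the pipeline structure the abstract describes.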
Process-induced changes in the morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends modified with various multifunctional chain-extending cross-linkers (CECLs) are presented. The morphology of unmodified and modified films produced by blown film extrusion is examined in the extrusion direction (ED) and the transverse direction (TD). While FTIR analysis showed only small peak shifts, indicating that the CECLs modify the molecular weight of the PBAT/PLA blend, SEM investigations of the fracture surfaces of blown extrusion films revealed their significant effect on the morphology formed during processing. Due to the combined shear and elongation deformation during blown film extrusion, rather spherical PLA islands were partly transformed into long fibrils, which tended to decay into chains of elliptical islands if cooled slowly. The introduction of CECLs into the blend changed the thickness of the PLA fibrils, modified the interface adhesion, and altered the deformation behavior of the PBAT matrix from brittle to ductile. The results proved that CECLs react selectively with PBAT, PLA, and their interface. Furthermore, the reactions of CECLs with PBAT/PLA induced by the processing depended on the deformation directions (ED and TD), thus resulting in further non-uniformities of blown extrusion films.
Since Socrates, the question "What constitutes a happy life?" has been the starting point for the development of a variety of theories of well-being. The core of this essay is a discussion of the extent to which the concept of empirical life satisfaction and the correlates obtained through it contribute to answering this question, and whether these answers can ground a theory of well-being that links philosophical theory with empirical results.
At the center of this essay is a discussion of the most important theories of well-being, their qualities, commonalities, and differences. One focus is the theory of subjective life satisfaction. I discuss the strengths and weaknesses of the concept and give an overview of the most important results of empirical life-satisfaction research.
In conclusion, I argue that the results of empirical research can serve as the basis of a subjective-objective theory of well-being. High-quality interpersonal relationships, a healthy lifestyle, a balanced work-life relationship, commitment to others, and the pursuit of life goals and personal interests form the foundation of a theory of well-being grounded in empirical life-satisfaction research.
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
Professor Dr. Dietmar Fink, holder of the Chair of Management Consulting at Hochschule Bonn-Rhein-Sieg and Managing Director of the Wissenschaftliche Gesellschaft für Management und Beratung (WGMB) in Bonn, on the added value of consulting rankings and the purpose of consulting projects at insurers
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas where one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often associated with perception mechanisms tied to our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to the cost and effort to operate them, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. Current results favor the feedback combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision.
Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
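One conceivable mapping from object distance to a continuous vibrotactor proximity cue might look like the sketch below. The linear ramp and the 1.2 m peripersonal boundary are illustrative assumptions, not values taken from the study.

```python
def proximity_cue(distance_m, peripersonal_m=1.2):
    """Map object distance to a vibrotactor amplitude in [0, 1].

    Hypothetical mapping: no cue outside the peripersonal boundary,
    linearly stronger vibration as the object approaches the body.
    """
    if distance_m >= peripersonal_m:
        return 0.0
    return 1.0 - max(distance_m, 0.0) / peripersonal_m
```

A discrete bass-shaker pulse for the actual collision event, scaled by impact force, would complement such a continuous proximity ramp, matching the cue combination the study's results favor.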
In the field of automatic music generation, one of the greatest challenges is the consistent generation of pieces continuously perceived positively by the majority of the audience, since there is no objective method to determine the quality of a musical composition. However, composing principles, which have been refined for millennia, have shaped the core characteristics of today's music. A hybrid music generation system, mlmusic, that incorporates various static, music-theory-based methods as well as data-driven subsystems is implemented to automatically generate pieces considered acceptable by the average listener. Initially, a MIDI dataset consisting of over 100 hand-picked pieces of various styles and complexities is analysed using basic music theory principles, and the abstracted information is fed into explicitly constrained LSTM networks. For chord progressions, each individual network is trained on a specific sequence length, while phrases are created by consecutively predicting each note's offset, pitch, and duration. Using these outputs as a composition's foundation, additional musical elements, along with constrained recurrent rhythmic and tonal patterns, are generated statically. Although no survey regarding the pieces' reception could be carried out, the successful generation of numerous compositions of varying complexities suggests that the integration of these fundamentally distinct approaches might lead to success in other branches as well.
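At the core of the data-driven subsystems described above sits the LSTM update. A minimal scalar LSTM cell step is sketched below as a self-contained illustration; the weight layout and 1-dimensional state are purely didactic and do not reflect the actual networks trained on the MIDI dataset.

```python
from math import exp, tanh

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def lstm_step(x, h, c, W):
    """One step of a 1-dimensional LSTM cell (scalar input and state).

    W maps each gate name to (input weight, recurrent weight, bias):
    input (i), forget (f), output (o) gates and candidate state (g).
    """
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])
    g = tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])
    c_new = f * c + i * g      # forget old state, admit new candidate
    h_new = o * tanh(c_new)    # gated output becomes the next hidden state
    return h_new, c_new
```

In a system like the one described, such cells would be unrolled over note sequences, predicting each note's offset, pitch, and duration in turn; the "explicit constraints" from music theory would then prune or re-weight the raw network outputs.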