H-BRS Bibliography
Jahresbericht 2020
(2021)
It is well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, they cannot always generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four domain adaptation techniques for image classification tasks: Deep CORAL, Deep Domain Confusion, CDAN and CDAN+E. These techniques are unsupervised, given that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository for this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
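Of the four evaluated techniques, Deep CORAL is the simplest to summarise: it penalises the distance between the covariance matrices of source and target features. A minimal numpy sketch of that alignment term (an illustration under toy data, not the repository's implementation) could look like:

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between the feature covariances of a
    source and a target batch (the alignment term used by Deep CORAL).

    source, target: (n_samples, n_features) arrays of deep features.
    Illustrative numpy sketch, not the authors' PyTorch code.
    """
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)   # source feature covariance
    ct = np.cov(target, rowvar=False)   # target feature covariance
    return np.sum((cs - ct) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
# similar distributions -> small loss; rescaled target -> larger loss
tgt_same = rng.normal(size=(64, 8))
tgt_scaled = 3.0 * rng.normal(size=(64, 8))
assert coral_loss(src, tgt_same) < coral_loss(src, tgt_scaled)
```

In training, this term would be added to the ordinary classification loss so that the network learns features whose second-order statistics match across domains.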
As a metric with long-term effects, customer loyalty is a desirable measure of success for many companies. Using structural equation modeling, the relationships and effects of perceived customer centricity, brand trust (cognitive and affective) and price perception on customer loyalty (repurchase intention and willingness to recommend) were examined for physical high-involvement products. (Publisher's description)
Ice accumulation on the blades of wind turbines can cause anomalous rotations or no rotation at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection of multivariate time series. Our approach consists of two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We have evaluated our approach on a custom wind turbine time series dataset: for the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multiple-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such a problem. The code to reproduce this work can be found here: https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
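The second stage of the approach, clustering learned latent representations into normal and abnormal groups, can be sketched with a minimal two-cluster k-means in numpy. The VRAE encoder that would produce the latent vectors is assumed and replaced here by toy Gaussian data:

```python
import numpy as np

def two_means(z, iters=20):
    """Minimal 2-means clustering of latent vectors z (n, d), a stand-in
    for the unsupervised clustering step; centers are initialised at the
    per-dimension min/max corners of the data for determinism."""
    centers = np.stack([z.min(axis=0), z.max(axis=0)]).astype(float)
    for _ in range(iters):
        # assign each point to its nearest center, then recompute means
        labels = np.argmin(((z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = z[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.3, size=(40, 4))     # toy "no ice" latents
abnormal = rng.normal(3.0, 0.3, size=(40, 4))   # toy "ice" latents
labels = two_means(np.vstack([normal, abnormal]))
# each true group should land in a single cluster
assert len(set(labels[:40].tolist())) == 1 and len(set(labels[40:].tolist())) == 1
```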
TREE Jahresbericht 2019/2020
(2021)
In both its breadth and its depth, this annual report is meant to showcase the strengths of our joint efforts in the research field of sustainable technologies: interdisciplinary, research-intensive, supportive of early-career researchers and engaged with society.
Over the past year, the pandemic was a challenge for the TREE institute as well. How its members dealt with the switch to mostly online communication, and how university life changed as a result, is documented in the report under "See you online". The change in the institute's board of directors is also a topic of this year's report. Under the main headings "Wissenschaftstransfer" (science transfer), "TREE und Wirtschaft" (TREE and industry) and "Transfer Öffentlichkeit" (transfer to the public), you can read about the most important events for the institute in 2019 and 2020.
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and produce pre-trained neural networks trained on this dataset, intended to fill the gap of missing pre-trained models for sonar images. We train ResNet-20, MobileNet, DenseNet-121, SqueezeNet, MiniXception, and an autoencoder on the Marine Debris Turntable dataset over several input image sizes, from 32 x 32 to 96 x 96. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and on another dataset captured using a Gemini 720i sonar. Our results show that on both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset validates that the features transfer to other kinds of sonar sensors. We expect the community to benefit from the public release of our pre-trained models and the turntable dataset.
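A common way to exploit such pre-trained models for low-shot classification is to freeze the network, extract embeddings, and fit a very simple classifier on the few labelled samples. As a hedged illustration (a nearest-centroid classifier on toy embeddings, not necessarily the paper's exact protocol):

```python
import numpy as np

def nearest_centroid(train_feats, train_labels, test_feats):
    """Low-shot classification on top of frozen pre-trained features:
    each class is represented by the mean of its few training embeddings,
    and each test embedding gets the label of the nearest class mean.
    The sonar CNN that would produce the embeddings is assumed, not shown."""
    classes = np.unique(train_labels)
    centroids = np.stack([train_feats[train_labels == c].mean(axis=0)
                          for c in classes])
    dists = ((test_feats[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[np.argmin(dists, axis=1)]

# toy embeddings: 10 samples per class, mimicking the 10-30-shot setting
rng = np.random.default_rng(0)
tr = np.vstack([rng.normal(0, 0.5, (10, 16)), rng.normal(4, 0.5, (10, 16))])
tr_y = np.array([0] * 10 + [1] * 10)
te = np.vstack([rng.normal(0, 0.5, (5, 16)), rng.normal(4, 0.5, (5, 16))])
pred = nearest_centroid(tr, tr_y, te)
assert list(pred) == [0] * 5 + [1] * 5
```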
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) which allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on a trace-based system that encompasses all the logic of trace collection and indicator calculation, and is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modeling traces that allows several indicators to be calculated at the same time. Also, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators that have already been calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can enable better sharing of trace databases between the various research institutions in the field of CEHL.
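The core idea, indicators that are themselves stored as traces and can therefore feed further indicators, can be sketched as a small class model. All class and attribute names below are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Trace:
    """A raw learning trace: who produced it and a numeric value."""
    learner: str
    value: float

@dataclass
class Indicator(Trace):
    """An indicator *is a* trace (inheritance), so computed indicators
    can be stored back into the trace database and reused."""
    sources: List[Trace] = field(default_factory=list)

def activity_indicator(learner, traces):
    """Compute a simple indicator (sum of a learner's trace values) and
    store it back as a trace, making it reusable by further indicators."""
    relevant = [t for t in traces if t.learner == learner]
    ind = Indicator(learner=learner,
                    value=sum(t.value for t in relevant),
                    sources=relevant)
    traces.append(ind)
    return ind

db = [Trace("alice", 2.0), Trace("alice", 3.0), Trace("bob", 1.0)]
ind = activity_indicator("alice", db)
assert ind.value == 5.0 and isinstance(ind, Trace) and ind in db
```

The inheritance relationship is what lets one "trace management system" hold both raw traces and derived indicators uniformly.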
The universal basic income grant (UBIG): A comparative review of the characteristics and impact
(2021)
In recent years, public debates, pilot projects and academic research have internationally boosted the prominence of the universal basic income grant (UBIG) as a policy option. Despite this prominence, the arguments and evidence of the UBIG discussion have not been systematically put forward and discussed in light of the different UBIG conceptual understandings and applications. This paper adds value to the debate by systematically presenting the social, economic and political arguments in support of and against a UBIG. It furthermore discusses the UBIG dimensions/characteristics and variations, and also poses the question of whether all the UBIG experiments can really be classified as a UBIG. Antagonists of a UBIG often raise concerns about the negative effects of the lack of conditions and targeting in a UBIG. Since evidence on the impact of a UBIG is limited, this paper turns to the evidence base on unconditional and conditional cash transfers. The results show that it is the cash transfer, rather than the conditionality and targeting, that produces positive outcomes in areas of personal wellbeing.
Nudging is a method of positively influencing the behaviour of those around us. This instrument can be used to strengthen the safety and health behaviour of employees. However, despite intensive research, it has so far seen little application in the workplace. The research question of this thesis is therefore: "How can companies use nudging as a preventive measure during the corona pandemic?". By transferring nudging in the world of work to the current challenges of the corona pandemic, this thesis makes an important contribution to the development of new preventive measures in companies. The thesis finds that nudges should be developed within a company in a proactive and participatory process that involves the employees. Such a process is used to analyse the reasons for possible employee misbehaviour. Nudging techniques should then be applied that address exactly these points, namely people's misbehaviour. The participatory nudging process fosters employee acceptance of any measures and engages the employees' reflective decision-making system. In view of the corona pandemic, the mechanism of "norms" should be relied on in the workplace to promote safety behaviour. When employees work from home, technical nudges, such as automated registration for health management measures, are suitable because of the distance to the employees; here the "defaults" mechanism applies. This bachelor's thesis was written as a theoretical work based on a literature review.
This study investigated the application potential of the black soldier fly larva, Hermetia illucens (Stratiomyidae: Diptera, L. 1758), for wastewater treatment, and its potential to remove chemical oxygen demand (COD), ammonia and phosphorus from liquid manure residue and from municipal wastewater containing 1% solids content. Black soldier fly larvae were found to reduce the concentration of COD but, unfortunately, to increase the concentrations of ammonia and phosphorus. Feeding on the organic waste of liquid manure residue, the larvae increased their weight by 365% in a solution with 12% solids content and by 595% in a solution with 6% solids content. The study also showed that the larvae can survive in a solution of 1% solids content and can reduce COD by up to 86.4% for liquid manure residue and 46.9% for municipal wastewater after 24 hours. Overall, ammonia increased by 43.9% for liquid manure residue and 98.6% for municipal wastewater. Total phosphorus increased by 11.0% and 88.6% for liquid manure residue and municipal wastewater, respectively, over the 8-day study. Transparent environments tended to reduce the COD content more than dark environments, both for liquid manure residue (55.8% and 65.4%) and municipal wastewater (71.5% and 66.4%).
The idea of a basic income grant (BIG) is not new, and there are ongoing debates internationally as well as nationally, in low- and middle-income countries just as in high-income countries, about a BIG as a social protection policy option. The challenge is that there are different conceptualisations, which conflate and muddle the understanding. In the context of social assistance provision, a universal basic income grant (UBIG) is often compared and contrasted with targeted cash transfers (CTs). This case study systematically presents the arguments for targeted CTs and UBIGs. The value of the case study is that it systematically brings together these arguments, highlighting the variations in UBIG applications, including the evidence and actual impact of UBIG experiments. The structure of the case study is as follows: Section 2 contrasts and compares the arguments for targeted CTs and a UBIG. Section 3 discusses UBIG experiments and presents the evidence on the application of the UBIG idea, and Section 4 concludes.
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process for generating synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated for the problem of handle grasping, such that we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can contribute towards improving the execution success of a robot.
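The diagnosis-then-correction loop described above can be illustrated with a toy stand-in: a learned precondition model reduced to per-parameter predicates, a diagnosis step that names the violating action parameters, and a correction step that resamples only those parameters to synthesise a likely-successful experience. All parameter names, bounds and samplers below are invented for illustration:

```python
import random

def diagnose(params, precondition):
    """Return the names of action parameters that violate the (learned)
    precondition model, here a dict of per-parameter predicates."""
    return [name for name, ok in precondition.items() if not ok(params[name])]

def correct_experience(params, precondition, sampler, tries=1000, seed=0):
    """Resample only the violating parameters until the precondition
    holds, yielding a synthetic experience more likely to succeed."""
    rng = random.Random(seed)
    fixed = dict(params)
    for _ in range(tries):
        for name in diagnose(fixed, precondition):
            fixed[name] = sampler[name](rng)
        if not diagnose(fixed, precondition):
            return fixed
    raise RuntimeError("no satisfying parameters found")

# hypothetical grasp parameters: approach height and gripper opening
precondition = {"height": lambda h: 0.02 <= h <= 0.10,
                "opening": lambda w: w >= 0.05}
sampler = {"height": lambda r: r.uniform(0.0, 0.2),
           "opening": lambda r: r.uniform(0.0, 0.1)}
failed = {"height": 0.3, "opening": 0.01}
assert diagnose(failed, precondition) == ["height", "opening"]
fixed = correct_experience(failed, precondition, sampler)
assert diagnose(fixed, precondition) == []
```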
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
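The combined input sequences mentioned above can be pictured as text tokens and a KG triple joined by separator tokens. The token layout below is illustrative only, not the actual STonKGs vocabulary or preprocessing:

```python
def multimodal_input(text_tokens, triple, max_text=8):
    """Sketch of a combined text/KG input sequence: truncated text tokens
    followed by the triple's subject, relation and object, delimited by
    separator tokens. Token names are hypothetical, not STonKGs's."""
    return (["[CLS]"] + list(text_tokens)[:max_text]
            + ["[SEP]"] + list(triple) + ["[SEP]"])

seq = multimodal_input(["IL6", "activates", "STAT3", "signaling"],
                       ("IL6", "activation", "STAT3"))
assert seq[0] == "[CLS]" and seq.count("[SEP]") == 2
```

A Transformer pre-trained on millions of such text-triple pairs can then attend jointly across both modalities.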
A qualitative study of Machine Learning practices and engineering challenges in Earth Observation
(2021)
Machine Learning (ML) is ubiquitously on the advance. Like many domains, Earth Observation (EO) increasingly relies on ML applications, where ML methods are applied to process vast amounts of heterogeneous and continuous data streams to answer socially and environmentally relevant questions. However, developing such ML-based EO systems remains challenging: development processes and the employed workflows are often barely structured and poorly reported. The application of ML methods and techniques is considered opaque, and this lack of transparency is contradictory to the responsible development of ML-based EO applications. To improve this situation, a better understanding of the current practices and engineering-related challenges in developing ML-based EO applications is required. In this paper, we report observations from an exploratory study in which five experts shared their view on ML engineering in semi-structured interviews. We analysed these interviews with coding techniques as often applied in the domain of empirical software engineering. The interviews provide informative insights into the practical development of ML applications and reveal several engineering challenges. In addition, interviewees participated in a novel workflow sketching task, which provided a tangible reflection of implicit processes. Overall, the results confirm a gap between theoretical conceptions and real practices in ML development, even though the sketched workflows remained abstract and textbook-like. The results pave the way for a large-scale investigation of requirements for ML engineering in EO.
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows ontological knowledge to be used as prior information that is then refined by the robot's own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – and show that the robot can deduce cases in which an existing policy generalises to other objects and cases in which additional execution knowledge has to be acquired.
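The transfer rule, reuse the closest known execution model along the ontology unless there is evidence against it, can be sketched as follows. The tiny ontology, model names and "unsuitable" evidence set are all invented for illustration:

```python
# Toy class hierarchy: child class -> parent class (None = root).
PARENT = {"mug": "container", "bowl": "container", "container": "object",
          "book": "object", "object": None}

def ancestors(cls):
    """The class itself followed by its ancestors up to the root."""
    out = []
    while cls is not None:
        out.append(cls)
        cls = PARENT[cls]
    return out

def model_for(target_cls, known_models, unsuitable=()):
    """Return the closest known execution model along the ontology path,
    skipping any class for which there is negative evidence."""
    for cls in ancestors(target_cls):
        if cls in known_models and cls not in unsuitable:
            return known_models[cls]
    return None

models = {"container": "grasp-rim-model"}          # hypothetical model name
assert model_for("mug", models) == "grasp-rim-model"
assert model_for("mug", models, unsuitable={"container"}) is None
```

Ontological priors are thus used only as a default; the robot's own failure experiences (the `unsuitable` set here) override them.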
At least since receipts became mandatory in Germany (the "Belegausgabepflicht"), the digital receipt has been on everyone's lips. Besides reducing the use of environmentally harmful thermal paper, this technology also creates new interfaces between customers and retailers, which can be used for further digitalisation and an improved customer experience.
Against this background, this white paper examines the perspectives of the various stakeholders, possible architectures, and potential value-added services for improving the customer experience as well as for optimising retail processes.
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
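The decoding constraint can be pictured as masking the decoder's choices to the tokens the parser accepts at each step. In the sketch below, a hand-written next-token table over a toy query language stands in for the LR parser, and a deliberately ungrammatical "model" shows that the constraint keeps the output well-formed:

```python
# Toy grammar: which tokens may follow each token (stand-in for an LR
# parser's set of shiftable symbols; the query language is invented).
GRAMMAR = {
    "<s>": ["SELECT"],
    "SELECT": ["city", "state"],
    "city": ["FROM", "</s>"],
    "state": ["FROM", "</s>"],
    "FROM": ["geo"],
    "geo": ["</s>"],
}

def constrained_greedy_decode(score, max_len=10):
    """Greedy decoding where score(prefix, token) is the model's score,
    but only grammar-permitted tokens can ever be emitted."""
    out = ["<s>"]
    while out[-1] != "</s>" and len(out) < max_len:
        allowed = GRAMMAR[out[-1]]          # syntactically valid choices
        out.append(max(allowed, key=lambda t: score(out, t)))
    return out[1:-1]

# a fake "model" that loves the token FROM everywhere; the grammar mask
# still forces a well-formed query
score = lambda prefix, tok: 1.0 if tok == "FROM" else 0.0
assert constrained_greedy_decode(score) == ["SELECT", "city", "FROM", "geo"]
```

A real SEQ2SEQ integration would apply the same mask to the decoder's softmax logits at each step, which is why the approach adds little overhead.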
Simultaneous determination of selected catechins and pyrogallol in deer intoxications by HPLC-MS/MS
(2021)
Aim: To understand how the transcription factors Pdr1 and Pdr3, which belong to the pleiotropic drug resistance system, are activated and regulated after chemical toxins are introduced to the cell in the model organism Saccharomyces cerevisiae.
Methods: A series of molecular methods was applied to different strains of S. cerevisiae over-expressing the proteins of interest as a eukaryotic cell model. The chemical stress introduced to the cell was represented by menadione. Results were obtained by protein detection and analysis. Additionally, the regulation of the DNA binding of the transcriptional activators after stimulation was quantified by chromatin immunoprecipitation, employing epitope-tagged factors and real-time qPCR.
Results: Our results indicated higher expression levels of the Pdr1 transcription factor compared to its homologue Pdr3 after treatment with menadione. The yeast cell defence system was tested against various organic solvents to exclude the possibility that their presence affected the results. Pdr1 is most abundant 30 minutes after the beginning of the treatment; by 240 minutes after the treatment, the function of the transcription factor has faded. Pdr1 binding to the PDR5 and SNQ2 promoters, both of which are activated by Pdr1, appears to peak around the same time, more precisely 40 minutes after the start of the treatment.
Conclusion: A tendency of Pdr1 to decline after its activation by menadione was detected. One possibility is that Pdr1, after recognizing the xenobiotic menadione, is removed by a degradation mechanism. Given that Pdr1 directly binds the xenobiotic molecule, its destruction might help the cells to remove toxic levels of menadione. It is possible that overexpressing only the part of Pdr1 that recognizes menadione was sufficient to detoxify the cells and hence produce tolerance towards menadione.
Our study shows ZP2 to be a new biomarker for diagnosis, best used in combination with other low-abundance genes in colon cancer. Furthermore, ZP2 promotes cell proliferation via the ERK1/2-cyclin D1 signaling pathway. We demonstrate that ZP2 mRNA is expressed at low abundance with high specificity in subsets of cancer cell lines representing different cancer subtypes, and also in a significant proportion of primary colon cancers. The potential benefit of ZP2 as a biomarker is discussed. In the second part of our study, the function of ZP2 in carcinogenesis was analyzed. Since ZP2 shows an enhanced transcript level in colon cancer cells, siRNA experiments were performed to verify the potential role of ZP2 in cell proliferation. Based on these data, ZP2 might serve as a new target molecule for cancer diagnosis and treatment in respective cancer types such as colon cancer.
In thyroid carcinoma cells, the soluble β-galactoside-specific lectin galectin-3 is expressed both extra- and intracellularly and plays a significant role in thyroid cancer diagnosis. The functional relevance of this molecule, particularly in its extracellular environment, however, warrants further elucidation. To gain insight into this topic, the present study characterized principal functional properties of galectin-3 in three commonly used thyroid carcinoma cell lines (BCPAP, Cal62 and FTC133) that express the molecule intra- and extracellularly. Cell-intrinsic galectin-3 harbors a functional carbohydrate recognition domain, as determined by affinity purification. Moreover, cell-surface-expressed galectin-3 can be partially removed by treatment with lactose or asialofetuin, but not with sucrose. Thyroid carcinoma cells adhere to substrate-bound galectin-3 in a β-galactoside-specific manner, whereby only cell adhesion, but not cell migration, is promoted. Thus, thyroid tumor cells harbor functionally active galectin-3 that, inter alia, specifically interacts with cell-surface-expressed molecular ligands in a β-galactoside-dependent manner, whereby the molecule can at least interfere with cell adhesion. The modulation of galectin-3 expression levels or its ligands in such tumor cells could be of therapeutic interest and needs further experimental clarification.
Background: Staurosporine-dependent single and collective cell migration patterns of the breast carcinoma cells MDA-MB-231, MCF-7, and SK-BR-3 were analysed to characterise the presence of drug-dependent migration-promoting and migration-inhibiting yin-yang effects. Methods: Migration patterns of various breast cancer cells after staurosporine treatment were investigated using Western blot, cell toxicity assays, single and collective cell migration assays, and video time-lapse. Statistical analyses were performed with Kruskal–Wallis and Fligner–Killeen tests. Results: Application of staurosporine induced the migration of single MCF-7 cells but inhibited collective cell migration. With the exception of low-density SK-BR-3 cells, staurosporine induced the generation of immobile flattened giant cells. Video time-lapse analysis revealed that within the borderline of cell collectives, staurosporine reduced the velocity of individual MDA-MB-231 and SK-BR-3 cells, but not of MCF-7 cells. In individual MCF-7 cells, it was mainly the directionality of migration that became disturbed, which led to an increased migration rate parallel to the borderline and thereby to an inhibition of the migration of the cell collective as a whole. Moreover, the application of staurosporine led to a transient activation of ERK1/2 in all cell lines. Conclusion: Depending on the context (single versus collective cells), a drug may induce opposite effects in the same cell line.
In recent years, the basic income grant (BIG) discourse has gained attention worldwide as a potential policy option in social protection, as testified by recent public debates, ongoing pilot projects, campaigning efforts, policy measures during Covid-19 and the surge in academic research. A BIG refers to regular cash transfers paid to all members of society irrespective of their socio-economic status, their capacity or willingness to participate in the labour market, or having to meet pre-determined conditions (Offe 2008; Van Parijs 1995, 2003; Wright 2004, 2006). Despite the recent hype around the BIG, Iran is the only country worldwide with a scaled-up BIG (Tabatabai 2011, 2012). Other programmes have never gone beyond the pilot stage. This raises the question of why this is the case.
The third Higher Education Development Plan (Hochschulentwicklungsplan, HEP 3) of the Hochschule Bonn-Rhein-Sieg describes the strategic priorities of the university's development for the years 2021 to 2025. It is guided in particular by the themes of sustainability and social responsibility, digitalisation, internationalisation and diversity.
Do you work in quality management and have been given the task of investigating a problem systematically and solving it methodically? Do you have too many tasks and not know how to prioritise them? Are your resources too limited to process all complaints at the same time? Or do you not know how to improve a particular process, within its limits, in a goal-oriented way?
Policy analysis is the cornerstone of evidence-based policy making. It identifies the problems, informs programme design, supports the monitoring of policy implementation and is needed to evaluate programme impacts (Scott 2005). Rigorous and credible policy evidence is necessary to ensure the transparency and accountability of policy decisions, to secure political and public support and, hence, the allocation of financial resources. Sound policy analysis helps design effective and efficient programmes, thereby maximizing programme impact.
The future of work
(2021)
Driven by the exponential increase in the computational power of machines, data digitalization and scientific advances in robotics and automation, the current wave of technological change is seemingly unprecedented in speed and scale. It transforms manufacturing and businesses, making them more flexible, decentralized and efficient (Lasi et al. 2014). Even though technological change is nothing new, some argue that it is different this time. The new technologies not only have the potential to substitute labor (Nomaler and Verspagen 2018); they also change the way people work. The trend towards new forms of employment is no longer a marginal phenomenon.
Applications of underwater robots are on the rise. Most of them depend on sonar for underwater vision, but the lack of strong perception capabilities limits them in this task. An important issue in sonar perception is matching image patches, which can enable other techniques like localization, change detection, and mapping. There is a rich literature for this problem in color images, but it is lacking for acoustic images due to the physics that produce them. In this paper we improve on our previous results for this problem (Valdenegro-Toro et al., 2017): instead of modeling features manually, a Convolutional Neural Network (CNN) learns a similarity function and predicts whether two input sonar images are similar or not. With the objective of further improving on the sonar image matching problem, three state-of-the-art CNN architectures are evaluated on the Marine Debris dataset, namely DenseNet and VGG, with a siamese or two-channel architecture, and contrastive loss. To ensure a fair evaluation of each network, thorough hyper-parameter optimization is performed. We find that the best performing models are the DenseNet two-channel network with 0.955 AUC, VGG-Siamese with contrastive loss at 0.949 AUC, and DenseNet-Siamese with 0.921 AUC. By ensembling the top-performing DenseNet two-channel and DenseNet-Siamese models, the overall highest prediction accuracy obtained is 0.978 AUC, a large improvement over the 0.91 AUC in the state of the art.
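The contrastive loss used for the siamese variants pulls embeddings of matching patches together and pushes non-matching ones at least a margin apart. A numpy sketch of the standard formulation (an illustration on toy embeddings, not the paper's training code):

```python
import numpy as np

def contrastive_loss(a, b, same, margin=1.0):
    """Contrastive loss over embedding pairs: similar pairs (same=1) are
    penalised by their squared distance, dissimilar pairs (same=0) by how
    far they fall inside the margin. a, b: (n, d); same: (n,) in {0, 1}."""
    d = np.linalg.norm(a - b, axis=1)
    pos = same * d ** 2
    neg = (1 - same) * np.maximum(margin - d, 0.0) ** 2
    return 0.5 * np.mean(pos + neg)

a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [2.0, 0.0]])
same = np.array([1, 0])   # first pair similar, second dissimilar
# similar-and-close plus dissimilar-and-far both cost nothing
assert contrastive_loss(a, b, same) == 0.0
assert contrastive_loss(a, b, 1 - same) > 0.0   # flipped labels -> loss
```

A two-channel network, by contrast, stacks both patches into one input and predicts similarity directly, so it needs no such pairwise embedding loss.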
The dataset contains the following data from successful and failed executions of a Toyota HSR robot placing a book on a shelf:
- RGB images from the robot's head camera
- Depth images from the robot's head camera
- Rendered images of the robot's 3D model from the point of view of the robot's head camera
- Force-torque readings from a wrist-mounted force-torque sensor
- Joint efforts, velocities and positions
- Extrinsic and intrinsic camera calibration parameters
- Frame-level anomaly annotations
The anomalies that occur during execution include:
- the manipulated book falling down
- books on the shelf being disturbed significantly
- camera occlusions
- the robot being disturbed by an external collision
The dataset is split into train, validation and test sets with the following numbers of trials:
- Train: 48 successful trials
- Validation: 6 successful trials
- Test: 60 anomalous trials and 7 successful trials
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot’s correct behavior when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim towards evaluating the correctness of robot behaviors in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
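The spirit of property-based testing here, randomly generating scenarios and checking an invariant of the action's outcome, can be sketched in plain Python. The "simulation" below is a trivial placeholder for Gazebo, and the property (objects placed within the table bounds end up on the table) is invented for illustration:

```python
import random

def place_object(obj_pos, table_bounds):
    """Placeholder 'place' action result: the object is dropped at the
    commanded (x, y) and ends up on the table only if inside its bounds."""
    (xmin, xmax), (ymin, ymax) = table_bounds
    x, y = obj_pos
    on_table = xmin <= x <= xmax and ymin <= y <= ymax
    return {"pos": obj_pos, "on_table": on_table}

def check_place_property(trials=200, seed=0):
    """Property-based check: for randomly generated placement poses inside
    the table bounds, the object must end up on the table. Returns the
    failing scenarios, so a failure also localises the counterexample."""
    rng = random.Random(seed)
    bounds = ((0.0, 1.0), (0.0, 0.6))   # hypothetical table extent in metres
    failures = []
    for _ in range(trials):
        pose = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.6))
        result = place_object(pose, bounds)
        if not result["on_table"]:
            failures.append(pose)
    return failures

assert check_place_property() == []
```

In the actual framework the generated scenarios drive a Gazebo simulation and the properties come from an action ontology rather than being hand-coded.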
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
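The anomaly-scoring idea, comparing observed motion against the motion predicted for nominal execution, can be sketched in a few lines. This is a simplified stand-in (mean endpoint error between flow fields), not the probabilistic U-Net pipeline of the paper; the function names and the ego-flow subtraction are illustrative assumptions.

```python
import numpy as np

def anomaly_score(observed_flow, predicted_flow, ego_flow=None):
    """Per-frame anomaly score: mean endpoint error between the observed
    optical flow and the flow predicted for nominal execution.
    If given, ego_flow (camera/body motion derived from the robot's
    kinematics and 3D model) is subtracted from both fields first."""
    obs = np.asarray(observed_flow, dtype=float)
    pred = np.asarray(predicted_flow, dtype=float)
    if ego_flow is not None:
        ego = np.asarray(ego_flow, dtype=float)
        obs, pred = obs - ego, pred - ego
    # endpoint error per pixel: Euclidean norm over the (u, v) channels
    epe = np.linalg.norm(obs - pred, axis=-1)
    return float(epe.mean())
```

During nominal execution the score stays near zero; anomalies such as a falling book produce flow the predictor did not anticipate and drive the score up.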
Short summary
Accompanying dataset for our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021.
Contents
The dataset includes a single zip archive, containing data from the experiment described in the paper (conducted with a Toyota HSR). The zip archive contains three subdirectories:
handle_grasping_failure_database: A dump of a MongoDB database containing data from the handle grasping experiment, including ground-truth grasping failure annotations
pre_arm_motion_images: Images collected from the robot's hand camera before moving the robot's hand towards the handle
pregrasp_images: Images collected from the robot's hand camera just before closing the gripper for grasping
The image names include the time stamp at which the images were taken; this allows matching each image with the execution data in the database.
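Matching images to database records by timestamp can be sketched as follows. The exact file-naming scheme is not specified here, so the regular expression and the record layout below are assumptions for illustration only.

```python
import re

def parse_image_timestamp(filename):
    """Extract a Unix timestamp embedded in an image file name.
    Assumes the name contains the timestamp as its first number,
    with an optional fractional part."""
    match = re.search(r'(\d+(?:\.\d+)?)', filename)
    if match is None:
        raise ValueError(f"no timestamp found in {filename!r}")
    return float(match.group(1))

def closest_record(image_ts, records, key='timestamp'):
    """Match an image to the execution-data record whose timestamp
    is nearest to the image timestamp."""
    return min(records, key=lambda r: abs(r[key] - image_ts))
```
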
Database usage
After unzipping the archive, the database can be restored with the command
mongorestore handle_grasping_failure_database
This will create a MongoDB database with the name drawer_handle_grasping_failures.
Code for processing the data and failure analysis can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
Contents
There are two zip archives included (grasping.zip and throwing.zip), corresponding to two experiments (grasping objects and throwing them in a drawer), both performed with a Toyota HSR. Each archive contains two directories - learning and generalisation - with object-specific learning and generalisation data. For each object, we provide a dump of a MongoDB database, which contains data sufficient for learning the models used in our experiments.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [data_directory_name]
This will create a MongoDB database with the name of the directory. Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
The solvent exchange as one of the most important steps during the manufacturing process of organic aerogels was investigated. This step is crucial as a preparatory step for the supercritical drying, since the pore solvent must be soluble in supercritical carbon dioxide to enable solvent extraction. The development and subsequent optimization of a suitable system with a peristaltic pump for automatic solvent exchange proved to be a suitable approach. In addition, the influence of zeolites on the acceleration of the process was found to be beneficial. To investigate the process, the water content in acetone was determined at different times using Karl Fischer titration. The shrinkage, densities, as well as the surface areas of the aerogels were analyzed. Based on these, the influence of various process parameters on the final structure of the obtained aerogels was investigated and evaluated. Modeling of diffusion in porous materials completes this study.
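The diffusion modeling mentioned above can be approximated, for illustration, by the leading term of the Fickian series solution for desorption from a plane gel slab. The constants below (diffusivity D, half-thickness L) are placeholders, not values fitted in the study.

```python
import math

def water_content(t, c0=1.0, D=1e-9, L=5e-3):
    """Leading-term approximation of the average residual solvent
    fraction in a slab of half-thickness L undergoing Fickian
    out-diffusion: c(t)/c0 ~ (8/pi^2) * exp(-pi^2 * D * t / (4 * L^2)).
    D and L are illustrative values only."""
    tau = 4.0 * L ** 2 / (math.pi ** 2 * D)
    return c0 * (8.0 / math.pi ** 2) * math.exp(-t / tau)
```

Fitting such a curve to the Karl Fischer titration time series would yield an effective diffusivity and thus an estimate of the exchange time for a given gel thickness.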
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. We analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and documentation from six industry practitioners in a literature review. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.
Augmented/Virtual Reality (AR/VR) is still a fragmented space to design for due to the rapidly evolving hardware, the interdisciplinarity of teams, and a lack of standards and best practices. We interviewed 26 professional AR/VR designers and developers to shed light on their tasks, approaches, tools, and challenges. Based on their work and the artifacts they generated, we found that AR/VR application creators fulfill four roles: concept developers, interaction designers, content authors, and technical developers. One person often incorporates multiple roles and faces a variety of challenges during the design process from the initial contextual analysis to the deployment. From analysis of their tool sets, methods, and artifacts, we describe critical key challenges. Finally, we discuss the importance of prototyping for the communication in AR/VR development teams and highlight design implications for future tools to create a more usable AR/VR tool chain.
In tree-based adaptive mesh refinement (AMR) we store refinement trees in the cells of an unstructured coarse mesh. This lets us combine the speed and simpler management of structured refinement trees with the more flexible mesh generation of the unstructured coarse mesh. But this creates a conflict between performance and geometrical accuracy. If we favor speed, we reduce the number of cells in our coarse mesh and hence reduce the accuracy of our geometrical representation. If we want more accurate results, we generate a finer coarse mesh and lose performance by managing more cells in the unstructured coarse mesh. To mitigate this conflict we present the prototype of a geometry description which we implement in an already existing library. With this description we build geometry-adapted hexahedral refinement trees, which also support high-order curved boundary cells. We also present examples of how to use this description. Moreover, we benchmark the speedup of this new algorithm against coarse meshes with different geometrical errors.
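The refinement-tree idea can be sketched with a minimal data structure: each coarse-mesh element holds a structured tree that subdivides only where a criterion (e.g. proximity to a curved boundary) asks for resolution. This is an illustrative sketch, not the API of the library mentioned above.

```python
class TreeCell:
    """One cell of a structured refinement tree stored inside an
    unstructured coarse-mesh element (illustrative only)."""
    def __init__(self, level=0):
        self.level = level
        self.children = []

    def refine(self, criterion, max_level):
        """Recursively subdivide while the criterion asks for more
        resolution, e.g. near a curved geometry boundary."""
        if self.level < max_level and criterion(self):
            # hexahedral refinement: each cell splits into 8 children
            self.children = [TreeCell(self.level + 1) for _ in range(8)]
            for child in self.children:
                child.refine(criterion, max_level)

    def leaf_count(self):
        """Number of leaf cells, i.e. the cells actually used for computation."""
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)
```

With a geometry-aware criterion, only trees in boundary elements refine deeply, which is how a coarse unstructured mesh can still represent the geometry accurately.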
The German national policy and research strategy on the bioeconomy envisages a transformation of the economy in which the use of fossil resources is increasingly replaced by renewable raw materials. The use of bio-based plastics is to be promoted in this context. Initial analyses of media coverage of bioplastics in a pilot study showed that the basic idea of biodegradable plastics meets with broad approval in public discourse. Beyond the socio-political level of discourse, however, a media-driven discussion is developing about considerable problems with these materials in waste management. There is now a risk that this attitude, spread by the mass media, will rub off on public opinion. A lack of public acceptance could jeopardise the success of innovative bioplastic products.
In this textbook, the authors introduce the methods of traditional project management in line with the German Association for Project Management (GPM/IPMA) and the Project Management Institute (PMI). The typical characteristics and steps of a project, important roles, and the helpful division into phases are presented, and readers learn to draw up a realistic project plan and steer the project towards its goals. Many practical examples and embedded video sequences illustrate what has been learned. Work templates as well as review questions and model solutions for each chapter facilitate use in teaching and self-study. (Publisher's description)
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which have not been produced by the Gauss-product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma={0.5;1.0;1.5;2.0}, on-lattice simulations with velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy provided by literature.
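For context, the Gauss-product construction that cubature rules replace can be shown for the classic D2Q9 set: tensorizing the three-point 1D Gauss-Hermite rule yields the familiar nine velocities and weights (4/9, 1/9, 1/36). Cubature-derived sets such as D2Q19 avoid this tensor construction and reach the same algebraic degree with fewer points. This sketch only illustrates the product rule, not the cubature derivation of the paper.

```python
import itertools

# Three-point 1D Gauss-Hermite rule (weight exp(-x^2/2), unit variance):
nodes_1d = [-3 ** 0.5, 0.0, 3 ** 0.5]
weights_1d = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def gauss_product_velocity_set():
    """Build the D2Q9 velocity set as the Gauss product rule of two
    1D rules; every pair of 1D nodes becomes a 2D velocity whose
    weight is the product of the 1D weights."""
    velocities, weights = [], []
    for (i, ci), (j, cj) in itertools.product(enumerate(nodes_1d), repeat=2):
        velocities.append((ci, cj))
        weights.append(weights_1d[i] * weights_1d[j])
    return velocities, weights
```

The resulting weights sum to one and reproduce the second moment exactly, which is the property any replacement cubature rule must also satisfy.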
This thesis explores novel haptic user interfaces for touchscreens, virtual and remote environments (VE and RE). All feedback modalities have been designed to study performance and perception while focusing on integrating an additional sensory channel - the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen. It was also shown that perceptual aspects in virtual environments could be improved by haptic feedback. Motivated by previous findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied. These methods are used to create and evaluate the interfaces; to this end, seven experiments were performed. All five prototypes use a unique feedback approach. While three haptic user interfaces designed for touchscreen interaction address the fingers, two interfaces developed for VE and RE target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface. A psychophysical study has been conducted to highlight the potential of the interface. The back of a smartphone is used for haptic feedback in the third prototype. In addition to a psychophysical study, it is found that touch accuracy could be increased. Interfaces presented in the second application area also highlight the versatility of haptic feedback. The sides of the feet are stimulated in the first prototype. They are used to provide proximity information of remote environments sensed by a telepresence robot. In a study, it was found that spatial awareness could be increased. Finally, the soles of the feet are stimulated.
A designed foot platform that provides several feedback modalities shows that self-motion perception can be increased.
Mebendazole Mediates Proteasomal Degradation of GLI Transcription Factors in Acute Myeloid Leukemia
(2021)
The prognosis of elderly AML patients is still poor due to chemotherapy resistance. The Hedgehog (HH) pathway is important for leukemic transformation because of aberrant activation of GLI transcription factors. Mebendazole (MBZ) is a well-tolerated anthelmintic that exhibits strong antitumor effects. Herein, we show that MBZ induced strong, dose-dependent anti-leukemic effects on AML cells, including the sensitization of AML cells to chemotherapy with cytarabine. MBZ strongly reduced intracellular protein levels of GLI1/GLI2 transcription factors. Consequently, MBZ reduced the GLI promoter activity as observed in luciferase-based reporter assays in AML cell lines. Further analysis revealed that MBZ mediates its anti-leukemic effects by promoting the proteasomal degradation of GLI transcription factors via inhibition of HSP70/90 chaperone activity. Extensive molecular dynamics simulations were performed on the MBZ-HSP90 complex, showing a stable binding interaction at the ATP binding site. Importantly, two patients with refractory AML were treated with MBZ in an off-label setting, and MBZ effectively reduced the GLI signaling activity in a modified plasma inhibitory assay, resulting in a decrease in peripheral blood blast counts in one patient. Our data prove that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.
In this contribution, we perform computer simulations to expedite the development of hydrogen storages based on metal hydrides. These simulations enable an in-depth analysis of the processes within the systems which could not otherwise be achieved, because the determination of crucial process properties requires measurement instruments in the setup which are currently not available. Therefore, we investigate the reliability of reaction values that are determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample consisting of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used Van’t-Hoff equation by a mathematical expression of the pressure-composition-isotherms (PCI) to calculate the equilibrium pressure.
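The Van't Hoff relation mentioned above links the equilibrium (plateau) pressure to the reaction enthalpy and entropy; the PCI-based replacement adds composition dependence on top of this. A minimal sketch of the classical relation, with illustrative desorption parameters for a generic AB-type hydride rather than the FeTiMn values identified in the paper:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def vant_hoff_pressure(T, dH=28_000.0, dS=107.0, p0=1.0e5):
    """Equilibrium pressure from the Van't Hoff relation
    ln(p_eq / p0) = -dH/(R*T) + dS/R,
    with dH the desorption enthalpy (J/mol) and dS the entropy
    change (J/(mol*K)). Parameter values are illustrative only."""
    return p0 * math.exp(-dH / (R * T) + dS / R)
```

Because the relation yields a single plateau pressure per temperature, it cannot capture plateau slope or hysteresis, which is the motivation for replacing it with a fitted PCI expression.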
Finally, the parameters’ accuracy is checked in a further simulation with an existing metal hydride system. The simulated results demonstrate high concordance with experimental data, which advocates the use of kinetic reaction properties approximated by a design of experiments for further design studies. Furthermore, we are able to determine process parameters such as entropy and enthalpy.
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
The Covid-19 pandemic has challenged educators across the world to move their teaching and mentoring from in-person to remote. During nonpandemic semesters at their institutes (e.g. universities), educators can directly provide students the software environment needed to support their learning - either in specialized computer laboratories (e.g. computational chemistry labs) or shared computer spaces. These labs are often supported by staff that maintains the operating systems (OS) and software. But how does one provide a specialized software environment for remote teaching? One solution is to provide students a customized operating system (e.g., Linux) that includes open-source software for supporting your teaching goals. However, such a solution should not require students to install the OS alongside their existing one (i.e. dual/multi-booting) or be used as a complete replacement. Such approaches are risky because of a) the students' possible lack of software expertise, b) the possible disruption of an existing software workflow that is needed in other classes or by other family members, and c) the importance of maintaining a working computer when isolated (e.g. societal restrictions). To illustrate possible solutions, we discuss our approach that used a customized Linux OS and a Docker container in a course that teaches computational chemistry and Python3.
The identification of energetic materials inside containments is an important challenge for analytical methods in the field of safety and security. Opening a package without knowledge of its contents and the resulting hazards involves considerable risks and should be avoided whenever possible. Preferable methods therefore work non-destructively with minimal interaction and are capable of identifying target substances in a containment quickly and reliably. Most spectroscopic methods reach their limits if the target substance is shielded by a covering material. To solve this problem, a combined laser drilling method with subsequent identification of the target substance by Raman spectroscopic measurements through microscopic bore holes in the covering material is presented. A pulsed laser beam is used both for the drilling process and as an excitation source for Raman measurements in the same optical setup. Results show the ability of this new method to obtain high-quality spectra even when performed through microscopically small bore channels. With suitably chosen laser parameters, the method can even be performed on highly sensitive explosives like triacetone triperoxide (TATP). A further advantageous effect is an observed reduction in unwanted fluorescence signal in the spectral data, resulting from the confocal-like measurement setup with the bore hole acting as an aperture.
This research project deals with the filtering of social media by content moderators. Content moderators are people who, under poor working conditions and high psychological strain, filter platforms such as Facebook for criminal content every day. While this deletion regime removes violent content, it also censors educational or artistic content.
Focus group discussions were conducted to investigate how group composition and the presentation of positive and negative information influence participants' individual and one-dimensional perception of social media, content moderators, and deletion policies.
The results show that the presentation of information had no influence on the participants' multidimensional perception and that they held nonconforming opinions regardless of group composition. Despite their broadened knowledge and the alternative solutions they developed, most participants did not express an intention to change their usage behaviour in the future. Contrary to the assumption that most participants have a one-dimensional perception of social media, the results showed that many participants held a similarly positive and critical attitude towards the platforms. Furthermore, it becomes clear that there is a strong need for research on the long-term consequences of working as a content moderator and on the effects of censorship and filtering on social media users.
Steuerlehre für Dummies
(2021)
BWL für Dummies
(2021)
This established formulary contains and explains the statistical formulas that are fundamentally necessary in economics and in business practice. Useful aids and comprehensible examples support the understanding of the formulas and their practical application, so that the context of business-statistics formulas is explained clearly and in generally understandable terms. This formulary is an indispensable tool for students of economics and business, but also a useful reference work for decision-makers in business, politics, and teaching. The contents have been revised and expanded for the 4th edition. (Publisher's description)
This textbook contains and explains essential mathematical formulas within an economic context. A broad range of aids and supportive examples will help readers to understand the formulas and their practical applications. This mathematical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice.
Here we provide the electrophysiology data for the manuscript "Two functional epithelial sodium channel isoforms are present in rodents despite pronounced evolutionary pseudogenization and exon fusion", published in Molecular Biology and Evolution (2021): msab271 (doi: 10.1093/molbev/msab271). Data are reported as current values in Excel format, sorted according to the appearance in Figures and supplemented by explanatory text on the procedures/data presentation.
The clear-sky radiative effect of aerosol-radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. Short-wave broadband irradiance measurements in the absence of clouds serve as the basis, and a clear-sky detection algorithm is used to identify cloud-free observations. Measurements of the shortwave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD) are considered. The clear-sky models used are MMAC, MRMv6.1, METSTAT, ESRA, Heliosat-1, CEM and the simplified Solis model. The aerosol and atmospheric characteristics assumed by the models are examined in detail for their suitability for this approach.
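The clear-sky fitting technique can be sketched with a drastically simplified broadband model: attenuate the extraterrestrial irradiance along the air-mass path and search for the aerosol optical depth that best reproduces a measured value. The model below is a generic Beer-Lambert stand-in, not any of the named clear-sky models, and all constants are illustrative.

```python
import math

def clear_sky_dni(aod, sza_deg, i0=1361.0, tau_rayleigh=0.1):
    """Very simplified broadband clear-sky direct normal irradiance:
    Beer-Lambert attenuation by Rayleigh scattering plus aerosol
    extinction along a plane-parallel air-mass path."""
    m = 1.0 / math.cos(math.radians(sza_deg))  # relative air mass
    return i0 * math.exp(-(tau_rayleigh + aod) * m)

def fit_aod(measured_dni, sza_deg, grid_step=1e-4):
    """Retrieve the aerosol optical depth by minimising the
    model-vs-measurement mismatch over a grid of candidate values."""
    candidates = [i * grid_step for i in range(int(2.0 / grid_step))]
    return min(candidates, key=lambda a: abs(clear_sky_dni(a, sza_deg) - measured_dni))
```

In practice each named model carries its own aerosol parameterisation, which is exactly why the study examines their assumed aerosol characteristics before applying this kind of fit.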
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from CAMS reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated to −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the radiative effect of aerosol-radiation interactions (REari) at the surface for the 25 DWD stations is −12.8 ± 5 W m−2 averaged over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterisation, the annual cycle of the aerosol radiative effect cannot be reproduced. Of this set of clear-sky models, the largest level of agreement is shown by the ESRA and MRMv6.1 models.
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
Universities, Entrepreneurship and Enterprise Development in Africa – Conference Proceedings 2020
(2021)
These proceedings are the outcome of the 8th annual joint conference on "Universities Entrepreneurship and Enterprise Development in Africa" between the University of Cape Coast, Ghana and Hochschule Bonn-Rhein-Sieg University of Applied Sciences, Germany, held on 19-20 February 2020 on Campus Sankt Augustin, Hochschule Bonn-Rhein-Sieg University of Applied Sciences.
Science Track FrOSCon 2018
(2021)
The genetic basis of brain tumor development is poorly understood. Here, leukocyte DNA of 21 patients from 15 families with ≥ 2 glioma cases each was analyzed by whole-genome or targeted sequencing. As a result, we identified two families with rare germline variants, p.(A592T) or p.(A817V), in the E-cadherin gene CDH1 that co-segregate with the tumor phenotype, consisting primarily of oligodendrogliomas, WHO grade II/III, IDH-mutant, 1p/19q-codeleted (ODs). Rare CDH1 variants, previously shown to predispose to gastric and breast cancer, were significantly overrepresented in these glioma families (13.3%) versus controls (1.7%). In 68 individuals from 28 gastric cancer families with pathogenic CDH1 germline variants, brain tumors, including a pituitary adenoma, were observed in three cases (4.4%), a significantly higher prevalence than in the general population (0.2%). Furthermore, rare CDH1 variants were identified in tumor DNA of 6/99 (6%) ODs. CDH1 expression was detected in undifferentiated and differentiating oligodendroglial cells isolated from rat brain. Functional studies using CRISPR/Cas9-mediated knock-in or stably transfected cell models demonstrated that the identified CDH1 germline variants affect cell membrane expression, cell migration and aggregation. E-cadherin ectodomain containing variant p.(A592T) had an increased intramolecular flexibility in a molecular dynamics simulation model. E-cadherin harboring intracellular variant p.(A817V) showed reduced β-catenin binding resulting in increased cytosolic and nuclear β-catenin levels reverted by treatment with the MAPK interacting serine/threonine kinase 1 inhibitor CGP 57380. Our data provide evidence for a role of deactivating CDH1 variants in the risk and tumorigenesis of neuroepithelial and epithelial brain tumors, particularly ODs, possibly via WNT/β-catenin signaling.
Fabry disease (FD) is an X‐linked lysosomal storage disorder. Deficiency of the lysosomal enzyme alpha‐galactosidase (GLA) leads to accumulation of potentially toxic globotriaosylceramide (Gb3) on a multisystem level. Cardiac and cerebrovascular abnormalities as well as progressive renal failure are severe, life‐threatening long‐term complications. The complete pathophysiology of chronic kidney disease (CKD) in FD and the role of tubular involvement for its progression are unclear.
We established human renal tubular epithelial cell lines from the urine of male FD patients and male controls. The renal tubular system is rich in mitochondria and involved in transport processes at high energy costs. Our studies revealed fragmented mitochondria with disrupted cristae structure in FD patient cells. Oxidative stress levels were elevated and oxidative phosphorylation was up‐regulated in FD pointing at enhanced energetic needs. Mitochondrial homeostasis and energy metabolism revealed major changes as evidenced by differences in mitochondrial number, energy production and fuel consumption. The changes were accompanied by activation of the autophagy machinery in FD. Sirtuin1, an important sensor of (renal) metabolic stress and modifier of different defense pathways, was highly expressed in FD.
Our data show that lysosomal FD impairs mitochondrial function and results in severe disturbance of mitochondrial energy metabolism in renal cells. This insight on a tissue‐specific level points to new therapeutic targets which might enhance treatment efficacy.
While stationary self-checkout is widely deployed and well understood, previous research has barely examined newer generations of smartphone-based Scan&Go. From a design perspective in particular, we know little about the factors contributing to the adoption of Scan&Go solutions and how design enables consumers to take full advantage of this development rather than being burdened with complex and unenjoyable systems. To understand the influencing factors and the design from a consumer perspective, we conducted a mixed-methods study in which we triangulated data from an online survey with 103 participants and a qualitative study with 20 participants. Based on the results, our study presents a refined and nuanced understanding of the technology- and infrastructure-related factors that influence adoption. Moreover, we present several implications for designing and implementing Scan&Go in retail environments.
Atomic oxygen in the mesosphere and lower thermosphere measured by terahertz heterodyne spectroscopy
(2021)
Atomic oxygen is a main component of the mesosphere and lower thermosphere (MLT). The photochemistry and the energy balance of the MLT are governed by atomic oxygen, and it is also a tracer for dynamical motions in the MLT. However, atomic oxygen is difficult to measure with remote sensing techniques. Concentrations can be inferred indirectly from the oxygen airglow or from observations of OH, which is involved in photochemical processes related to atomic oxygen. Such measurements have been performed with several satellite instruments such as SCIAMACHY, SABER, WINDII and OSIRIS. However, these methods are indirect and rely on photochemical models and assumptions such as quenching rates, radiative lifetimes, and reaction coefficients. The results are not always in agreement, particularly when obtained with different instruments.
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
(2021)
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is only useful if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions – grasping handles and pulling an object on a table – and the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.
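The continuous part of such a representation, a Gaussian process mapping action parameters to expected execution success, can be sketched with plain NumPy. This is a minimal stand-in under assumed kernel hyperparameters, not the model learned in the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
    """Squared-exponential kernel over action parameter vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

class SuccessModel:
    """GP regression over (action parameters -> observed success),
    a minimal stand-in for the continuous model described above."""
    def __init__(self, X, y, noise=1e-4):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        K = rbf_kernel(self.X, self.X) + noise * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, self.y)

    def expected_success(self, x):
        """Posterior mean of the success score for parameters x; can be
        used to rank candidate parameterisations before execution."""
        k = rbf_kernel(np.atleast_2d(np.asarray(x, dtype=float)), self.X)
        return float(k @ self.alpha)
```

The relational model would sit on top of this, gating which parameterisations are even admissible before the GP scores them.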
3-Hydroxyisobutyrate Dehydrogenase (HIBADH) deficiency - a novel disorder of valine metabolism
(2021)
3-Hydroxyisobutyric acid (3HiB) is an intermediate in the degradation of the branched-chain amino acid valine. Disorders of valine degradation can lead to 3HiB accumulation and its excretion in the urine. This article describes the first two patients with a new metabolic disorder, 3-hydroxyisobutyrate dehydrogenase (HIBADH) deficiency, its phenotype, and its treatment with a low-valine diet. The detected mutation in the HIBADH gene leads to nonsense-mediated mRNA decay of the mutant allele and to a complete loss of function of the enzyme. Under strict adherence to a low-valine diet, a rapid decrease of 3HiB excretion in the urine was observed. Due to limited patient numbers and intrafamilial differences in phenotype, with one affected and one unaffected individual, the clinical phenotype of HIBADH deficiency needs further evaluation.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
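The comparison above hinges on how output diversity is quantified. As a minimal, hypothetical sketch (the metric and the artifact encodings below are illustrative, not the paper's), mean pairwise distance between artifact encodings is one such diversity measure:

```python
import numpy as np
from itertools import combinations

def mean_pairwise_distance(points):
    # Simple diversity proxy: mean Euclidean distance over all pairs.
    pts = np.asarray(points, dtype=float)
    return float(np.mean([np.linalg.norm(a - b)
                          for a, b in combinations(pts, 2)]))

# Hypothetical artifact sets, each encoded as vectors in some search space.
spread = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
clustered = [[0.5, 0.5], [0.55, 0.5], [0.5, 0.55], [0.55, 0.55]]

# The set covering more of the encoding space scores higher.
diversity_spread = mean_pairwise_distance(spread)
diversity_clustered = mean_pairwise_distance(clustered)
```

Under such a measure, a search that only reaches a small region of its encoding space (as the abstract reports for the learned latent space) yields a lower diversity score than one exploring an explicit parametric encoding.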
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While a lot of current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year's ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
On Thursday, 23 September 2021, the first Verbraucherforum für Verbraucherinformatik (Consumer Forum for Consumer Informatics) took place at Hochschule Bonn-Rhein-Sieg. During the one-day online event, more than 30 participants discussed topics and ideas around consumer data protection, with contributions from computer science, the consumer and social sciences, and the regulatory perspective. This article presents the background of the event, reports on the contents of the talks, and outlines points of departure for the further establishment of consumer informatics as a field. The event was organised by the Institut für Verbraucherinformatik at H-BRS in cooperation with the Chair of IT Security at the University of Siegen and the Kompetenzzentrum Verbraucherforschung NRW of the Verbraucherzentrale NRW e. V., with funding from the Federal Ministry of Justice and Consumer Protection.