H-BRS Bibliography: publications from 2021 in English (242 entries)
It is well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, they do not always generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four domain adaptation techniques for image classification tasks: DeepCORAL, DeepDomainConfusion, CDAN and CDAN+E. These techniques are unsupervised, given that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository for this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
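As a concrete illustration of one of the evaluated techniques, the CORAL alignment objective can be sketched in a few lines: it penalizes the distance between the second-order statistics of source and target feature batches. The sketch below is ours, not code from the report; the function name and the 1/(4d^2) scaling follow the common Deep CORAL formulation.

```python
import numpy as np

def coral_loss(source, target):
    # Squared Frobenius distance between the feature covariance
    # matrices of source and target batches, scaled by 1 / (4 d^2).
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return float(np.sum((cs - ct) ** 2) / (4 * d ** 2))

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
identical = coral_loss(src, src)       # identical batches: loss is 0
shifted = coral_loss(src, 3.0 * src)   # rescaled target: loss grows
```

In Deep CORAL this term is added to the classification loss so the network learns features whose statistics match across domains.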
Ice accumulation on the blades of wind turbines can cause them to rotate anomalously or not at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection on multivariate time series. Our approach focuses on two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We have evaluated our approach on a custom wind turbine time series dataset. For the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multi-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such problems. The code to reproduce this work can be found here: https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
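The second stage, unsupervised clustering of the learned representations, can be illustrated with a minimal 2-means clustering of latent vectors. The latent vectors below are synthetic stand-ins, not VRAE outputs:

```python
import numpy as np

def two_means(latents, n_iter=50, seed=0):
    # Minimal 2-means (Lloyd's algorithm) over latent vectors; the two
    # cluster ids can then be mapped to 'normal' and 'abnormal'.
    rng = np.random.default_rng(seed)
    centers = latents[rng.choice(len(latents), 2, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(latents[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = latents[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.3, size=(40, 2))    # stand-in latent codes
abnormal = rng.normal(3.0, 0.3, size=(40, 2))
labels = two_means(np.vstack([normal, abnormal]))
# Well-separated latents end up in two pure clusters.
```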
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and produce pre-trained neural networks trained on this dataset, meant to fill the gap of missing pre-trained models for sonar images. We train ResNet-20, MobileNets, DenseNet121, SqueezeNet, MiniXception, and an Autoencoder on the Marine Debris Turntable dataset over several input image sizes, from 32 x 32 to 96 x 96. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and on another dataset captured using a Gemini 720i sonar. Our results show that in both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset validates that the features transfer to other kinds of sonar sensors. We expect that the community will benefit from the public release of our pre-trained models and the turntable dataset.
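Low-shot classification on top of frozen pre-trained features can be sketched with a nearest-centroid classifier. The feature vectors below are synthetic stand-ins for encoder outputs, not features from our released models:

```python
import numpy as np

def nearest_centroid(train_feats, train_labels, test_feats):
    # Each class is represented by its mean feature vector; queries are
    # assigned to the nearest class centroid.
    classes = sorted(set(train_labels))
    labels = np.array(train_labels)
    centroids = np.stack([train_feats[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_feats[:, None] - centroids[None], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Stand-ins for frozen encoder features: 10 support samples per class.
feats = {c: rng.normal(2.0 * c, 0.2, size=(10, 16)) for c in (0, 1, 2)}
X = np.vstack([feats[c] for c in (0, 1, 2)])
y = [c for c in (0, 1, 2) for _ in range(10)]
preds = nearest_centroid(X, y, X)
```

With well-separated features, even 10 samples per class suffice, which is the regime the low-shot evaluation targets.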
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) which allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modeling the traces and allows the calculation of several indicators at the same time. Also, with the inclusion of calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can allow a better sharing of trace databases between the various research institutions in the field of CEHL.
The universal basic income grant (UBIG): A comparative review of the characteristics and impact
(2021)
In recent years, public debates, pilot projects and academic research have internationally boosted the prominence of the universal basic income grant (UBIG) as a policy option. Despite this prominence, the arguments and evidence of the UBIG discussion have not been systematically put forward and discussed in light of the different UBIG conceptual understandings and applications. This paper adds value to the debate by systematically presenting the social, economic and political arguments in support of and against a UBIG. It furthermore discusses the UBIG dimensions/characteristics and variations, and also poses the question of whether all the UBIG experiments can really be classified as a UBIG. Antagonists of a UBIG often raise concerns about the negative effect of the lack of conditions and targeting in a UBIG. Since evidence on the impact of a UBIG is limited, this paper turns to the evidence base on unconditional and conditional cash transfers. The results show that it is the cash transfer rather than the conditionality and targeting that produces positive outcomes in areas of personal wellbeing.
This study investigated the application potential of the Black Soldier Fly larva, Hermetia illucens, Stratiomyidae: Diptera (L. 1758), for wastewater treatment, and its potential to remove chemical oxygen demand (COD), ammonia, and phosphorus from liquid manure residue and municipal wastewater containing 1% solids content. Black Soldier Fly larvae were found to reduce the concentration of COD but, unfortunately, to increase the concentrations of ammonia and phosphorus. Feeding on the organic waste of liquid manure residue, Black Soldier Fly larvae increased their weight by 365% in a solution with 12% solids content and by 595% in a solution with 6% solids content. The study also showed that Black Soldier Fly larvae are able to survive in a solution of 1% solids content and can reduce COD by up to 86.4% for liquid manure residue and 46.9% for municipal wastewater after 24 hours. Generally, ammonia increased by 43.9% for liquid manure residue and 98.6% for municipal wastewater. Total phosphorus showed an increase of 11.0% and 88.6% for liquid manure residue and municipal wastewater, respectively, over the 8-day study. Transparent environments tended to reduce the COD content more than dark environments, both for the liquid manure residue (55.8% and 65.4%) and municipal wastewater (71.5% and 66.4%).
The idea of a basic income grant (BIG) is not new and there are ongoing debates internationally as well as nationally in low- and middle-income countries just like in high-income countries of a BIG as a social protection policy option. The challenge is that there are different conceptualisations, which conflates and muddles the understanding. In the context of social assistance provision, a universal basic income grant (UBIG) is often compared and contrasted against targeted cash transfers (CTs). This case study systematically presents the arguments for targeted CTs and UBIGs. The value of the case study is that it systematically brings together these arguments, highlighting the variations in UBIG applications, including the evidence and actual impact of UBIG experiments. The structure of the case study is as follows: Section 2 simultaneously contrasts and compares the arguments for targeted CTs and UBIG. Section 3 discusses UBIG experiments, as well as presenting the evidence on the application of the UBIG idea, and Section 4 concludes.
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process for generating synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated for the problem of handle grasping, where we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can contribute towards improving the execution success of a robot.
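The parameter search behind the diagnosis idea can be illustrated schematically: perturb each action parameter and record which perturbations make the learned precondition hold again. The precondition, parameter names, and step sizes below are hypothetical, not the learned models from the paper:

```python
def diagnose(params, precondition, deltas):
    # Perturb each action parameter in both directions; perturbations
    # that make the learned precondition hold again point to the
    # parameters likely responsible for the failure.
    suspects = []
    for name, delta in deltas.items():
        for sign in (1, -1):
            repaired = dict(params, **{name: params[name] + sign * delta})
            if precondition(repaired):
                suspects.append((name, sign * delta))
    return suspects

# Hypothetical precondition for handle grasping: the gripper height
# must be within 2 cm of the handle height (0.75 m).
precondition = lambda p: abs(p["grasp_height"] - 0.75) < 0.02
failed_params = {"grasp_height": 0.80, "approach_angle": 0.1}
suspects = diagnose(failed_params, precondition,
                    {"grasp_height": 0.05, "approach_angle": 0.05})
```

The suspects found this way indicate in which direction the failed experience should be corrected before it is reused for learning.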
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
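The combined input of text tokens and triple elements can be sketched as follows; the special tokens, segment layout, and identifiers below are illustrative and not the actual STonKGs input format:

```python
def build_multimodal_input(text_tokens, triple, max_text_len=8):
    # Combined sequence: [CLS] text ... [SEP] head relation tail [SEP];
    # segment ids mark which positions belong to text (0) or KG (1).
    text = text_tokens[:max_text_len]
    seq = ["[CLS]"] + text + ["[SEP]"] + list(triple) + ["[SEP]"]
    segments = [0] * (len(text) + 2) + [1] * (len(triple) + 1)
    return seq, segments

seq, seg = build_multimodal_input(
    ["aspirin", "inhibits", "cox-2", "activity"],
    ("CHEBI:15365", "inhibits", "HGNC:9605"))
```

Joint encoding of such paired sequences is what allows the Transformer to attend across the text and KG modalities at once.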
A qualitative study of Machine Learning practices and engineering challenges in Earth Observation
(2021)
Machine Learning (ML) is ubiquitously on the advance. Like many domains, Earth Observation (EO) also increasingly relies on ML applications, where ML methods are applied to process vast amounts of heterogeneous and continuous data streams to answer socially and environmentally relevant questions. However, developing such ML-based EO systems remains challenging: development processes and employed workflows are often barely structured and poorly reported. The application of ML methods and techniques is considered opaque, and this lack of transparency is contradictory to the responsible development of ML-based EO applications. To improve this situation, a better understanding of the current practices and engineering-related challenges in developing ML-based EO applications is required. In this paper, we report observations from an exploratory study in which five experts shared their views on ML engineering in semi-structured interviews. We analysed these interviews with coding techniques, as often applied in the domain of empirical software engineering. The interviews provide informative insights into the practical development of ML applications and reveal several engineering challenges. In addition, interviewees participated in a novel workflow sketching task, which provided a tangible reflection of implicit processes. Overall, the results confirm a gap between theoretical conceptions and real practices in ML development, even though workflows were sketched abstractly as textbook-like. The results pave the way for a large-scale investigation of requirements for ML engineering in EO.
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects and in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows using ontological knowledge as prior information that is then refined by the robot’s own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – such that we show that the robot can deduce cases in which an existing policy can generalise to other objects and when additional execution knowledge has to be acquired.
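The ontology-guided transfer can be sketched as a walk up the class hierarchy: reuse the closest ancestor's execution model, and fall back to new learning if no ancestor has one. The class hierarchy and model names below are invented for illustration:

```python
ONTOLOGY = {"cup": "container", "bowl": "container",
            "container": "object", "book": "object"}  # child -> parent

def model_for(obj_class, learned_models):
    # Walk up the class hierarchy; reuse the closest ancestor's
    # execution model if one exists, otherwise signal that the
    # robot needs to acquire new execution knowledge.
    cls = obj_class
    while cls is not None:
        if cls in learned_models:
            return learned_models[cls]
        cls = ONTOLOGY.get(cls)
    return None

models = {"container": "grasping-model-A"}
transferred = model_for("cup", models)   # generalises via 'container'
unknown = model_for("book", models)      # no ancestor model: learn anew
```

In the full strategy, the transferred model is additionally checked against evidence of unsuitability before it is trusted, and is then refined by the robot's own experiences.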
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
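The decoding constraint can be illustrated with a toy grammar: at every step, the decoder may only choose among tokens the parser allows after the current prefix. For brevity, this sketch enumerates a tiny finite language instead of consulting an LR action table; it is an illustration, not our implementation:

```python
# Toy query language: S -> "SELECT" FIELD "FROM" TABLE
GRAMMAR = [["SELECT", f, "FROM", t]
           for f in ("name", "id") for t in ("users", "orders")]

def valid_next(prefix):
    # Tokens the grammar allows after the prefix; a real system would
    # read this from the LR parser's action table.
    return {s[len(prefix)] for s in GRAMMAR
            if s[:len(prefix)] == prefix and len(s) > len(prefix)}

def constrained_decode(score):
    # Greedy decoding restricted to syntactically valid tokens.
    out = []
    while valid_next(out):
        out.append(max(valid_next(out), key=score))
    return out

# Even a model with wild token preferences can only emit valid queries.
decoded = constrained_decode(lambda t: 1.0 if t == "FROM" else 0.1 * len(t))
```

Masking invalid tokens in this way leaves the neural architecture untouched, which is why the approach integrates seamlessly with existing SEQ2SEQ frameworks.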
Simultaneous determination of selected catechins and pyrogallol in deer intoxications by HPLC-MS/MS
(2021)
Aim: To understand how the transcription factors Pdr1 and Pdr3, belonging to the pleiotropic drug resistance system, are activated and regulated after introducing chemical toxins to the cell in the model organism Saccharomyces cerevisiae.
Methods: A series of molecular methods was applied using different strains of S. cerevisiae over-expressing the proteins of interest as a eukaryotic cell model. The chemical stress introduced to the cell is represented by menadione. Results were obtained by performing protein detection and analysis. Additionally, the regulation of the DNA binding of the transcriptional activators after stimulation is quantified using chromatin immunoprecipitation, employing epitope-tagged factors and real-time qPCR.
Results: Our results indicated higher expression levels of the Pdr1 transcription factor compared to its homologue Pdr3 after treatment with menadione. The yeast-cell defence system was tested against various organic solvents to exclude the possibility that their presence affects the results. The results indicate that Pdr1 is most abundant 30 minutes after the beginning of the treatment; by 240 minutes after the treatment, the function of the transcription factor has faded. It appears that Pdr1 binding to the PDR5 and SNQ2 promoters, which are both activated by Pdr1, peaks around the same time, or more precisely 40 minutes after the start of the treatment.
Conclusion: A tendency of Pdr1 reduction after its activation by menadione was detected. One possibility is that Pdr1, after recognizing the xenobiotic menadione, is removed by a degradation mechanism. Given that Pdr1 directly binds the xenobiotic molecule, its destruction might help the cells to remove toxic levels of menadione. It is possible that overexpressing only the part of Pdr1 which recognizes menadione was sufficient to detoxify and hence produce a tolerance towards menadione.
Our study shows ZP2 to be a new biomarker for diagnosis, best used in combination with other low-abundance genes in colon cancer. Furthermore, ZP2 promotes cell proliferation via the ERK1/2-cyclin D1 signaling pathway. We demonstrate that ZP2 mRNA is expressed at low abundance with high specificity in subsets of cancer cell lines representing different cancer subtypes, and also in a significant proportion of primary colon cancers. The potential benefit of ZP2 as a biomarker is discussed. In the second part of our study, the function of ZP2 in carcinogenesis was analyzed. Since ZP2 shows an enhanced transcript level in colon cancer cells, siRNA experiments were performed to verify the potential role of ZP2 in cell proliferation. Based on these data, ZP2 might serve as a new target molecule for cancer diagnosis and treatment in respective cancer types such as colon cancer.
In thyroid carcinoma cells, the soluble β-galactoside-specific lectin galectin-3 is extra- and intracellularly expressed and plays a significant role in thyroid cancer diagnosis. The functional relevance of this molecule, particularly in its extracellular environment, however, warrants further elucidation. To gain insight into this topic, the present study characterized principal functional properties of galectin-3 in three commonly used thyroid carcinoma cell lines (BCPAP, Cal62 and FTC133) that express the molecule intra- and extracellularly. Cell-intrinsic galectin-3 harbors a functional carbohydrate recognition domain as determined by affinity purification. Moreover, cell-surface-expressed galectin-3 can be partially removed by treatment with lactose or asialofetuin, but not with sucrose. Thyroid carcinoma cells adhere to substrate-bound galectin-3 in a β-galactoside-specific manner, whereby only cell adhesion, but not cell migration, is promoted. Thus, thyroid tumor cells harbor functionally active galectin-3 that, inter alia, specifically interacts with cell-surface-expressed molecular ligands in a β-galactoside-dependent manner, whereby the molecule can at least interfere with cell adhesion. The modulation of the galectin-3 expression level or its ligands in such tumor cells could be of therapeutic interest and needs further experimental clarification.
Background: Staurosporine-dependent single and collective cell migration patterns of the breast carcinoma cells MDA-MB-231, MCF-7, and SK-BR-3 were analysed to characterise the presence of drug-dependent migration-promoting and -inhibiting yin-yang effects. Methods: Migration patterns of various breast cancer cells after staurosporine treatment were investigated using Western blot, cell toxicity assays, single and collective cell migration assays, and video time-lapse. Statistical analyses were performed with Kruskal-Wallis and Fligner-Killeen tests. Results: Application of staurosporine induced the migration of single MCF-7 cells but inhibited collective cell migration. With the exception of low-density SK-BR-3 cells, staurosporine induced the generation of immobile flattened giant cells. Video time-lapse analysis revealed that, within the borderline of cell collectives, staurosporine reduced the velocity of individual MDA-MB-231 and SK-BR-3 cells, but not of MCF-7 cells. In individual MCF-7 cells, mainly the directionality of migration became disturbed, which led to an increased migration rate parallel to the borderline, and thereby to an inhibition of the migration of the cell collective as a whole. Moreover, the application of staurosporine led to a transient activation of ERK1/2 in all cell lines. Conclusion: Depending on the context (single versus collective cells), a drug may induce opposite effects in the same cell line.
In recent years, the basic income grant (BIG) discourse has gained attention worldwide as a potential policy option in social protection, as testified by recent public debates, ongoing pilot projects, campaigning efforts, policy measures during Covid-19 and the surge in academic research. A BIG refers to regular cash transfers paid to all members of society irrespective of their socio-economic status, their capacity or willingness to participate in the labour market, or having to meet pre-determined conditions (Offe 2008; Van Parijs 1995, 2003; Wright 2004, 2006). Despite the recent hype around the BIG, Iran is the only country worldwide with a scaled-up BIG (Tabatabai 2011, 2012). Other programmes have never gone beyond pilot programmes. This raises the question of why this is the case.
Policy analysis is the cornerstone of evidence-based policy making. It identifies the problems, informs programme design, supports the monitoring of policy implementation and is needed to evaluate programme impacts (Scott 2005). Rigorous and credible policy evidence is necessary to ensure the transparency and accountability of policy decisions, to secure political and public support and, hence, the allocation of financial resources. Sound policy analysis helps design effective and efficient programmes, thereby maximizing programme impact.
The future of work
(2021)
Driven by the exponential increase in the computational power of machines, data digitalization and scientific advancement in robotics and automation, the current wave of technological change is seemingly unprecedented in speed and scale. It transforms manufacturing and businesses making them more flexible, decentralized and efficient (Lasi et al. 2014). Even though technological change is nothing new, some argue that it is different this time. The new technologies have not only the potential to substitute labor (Nomaler and Verspagen 2018), they also change the way people work. The trend towards new forms of employment is no longer a marginal phenomenon.
Applications of underwater robots are on the rise. Most of them depend on sonar for underwater vision, but the lack of strong perception capabilities limits them in this task. An important issue in sonar perception is matching image patches, which can enable other techniques like localization, change detection, and mapping. There is a rich literature for this problem in color images, but it is lacking for acoustic images, due to the physics that produce these images. In this paper we improve on our previous results for this problem (Valdenegro-Toro et al., 2017): instead of modeling features manually, a Convolutional Neural Network (CNN) learns a similarity function and predicts if two input sonar images are similar or not. With the objective of further improving sonar image matching, state-of-the-art CNN architectures are evaluated on the Marine Debris dataset, namely DenseNet and VGG, with a siamese or two-channel architecture, and contrastive loss. To ensure a fair evaluation of each network, thorough hyper-parameter optimization is executed. We find that the best performing models are the DenseNet two-channel network with 0.955 AUC, VGG-Siamese with contrastive loss at 0.949 AUC, and DenseNet-Siamese with 0.921 AUC. By ensembling the top performing DenseNet two-channel and DenseNet-Siamese models, the overall highest prediction accuracy obtained is 0.978 AUC, showing a large improvement over the 0.91 AUC in the state of the art.
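The ensembling step can be illustrated by averaging the match scores of two models and computing the AUC via the rank-sum identity. The scores below are invented to show how models that err on different pairs complement each other; they are not results from the paper:

```python
def auc(scores, labels):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                    # 1 = matching patch pair
two_channel = [0.9, 0.7, 0.6, 0.8, 0.2, 0.1]   # errs on one negative pair
siamese     = [0.9, 0.7, 0.6, 0.2, 0.8, 0.1]   # errs on a different pair
ensemble = [(a + b) / 2 for a, b in zip(two_channel, siamese)]
```

Because the two models misrank different pairs, averaging their scores cancels the individual errors, which is the intuition behind the AUC gain reported above.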
The dataset contains the following data from successful and failed executions of the Toyota HSR robot placing a book on a shelf.
RGB images from the robot's head camera
Depth images from the robot's head camera
Rendered images of the robot's 3D model from the point of view of the robot's head camera
Force-torque readings from a wrist-mounted force-torque sensor
Joint efforts, velocities and positions
Extrinsic and intrinsic camera calibration parameters
Frame-level anomaly annotations
The anomalies that occur during execution include:
the manipulated book falling down
books on the shelf being disturbed significantly
camera occlusions
robot being disturbed by an external collision
The dataset is split into a train, validation and test set with the following number of trials:
Train: 48 successful trials
Validation: 6 successful trials
Test: 60 anomalous trials and 7 successful trials
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot’s correct behavior when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim towards evaluating the correctness of robot behaviors in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
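The core loop of property-based testing in simulation can be sketched as: generate randomised scenarios, execute the action, and check an ontology-derived property on the outcome. The toy "simulator" and property below are purely illustrative stand-ins, not our Gazebo setup:

```python
import random

def place_in_sim(x, y):
    # Stand-in for executing a 'place' action in simulation: anything
    # released outside the 1m x 1m table falls to the floor (z = 0).
    on_table = 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    return {"x": x, "y": y, "z": 0.72 if on_table else 0.0}

def object_on_table(pose):
    # Property from the action ontology: the object rests at table height.
    return pose["z"] > 0.7

random.seed(3)
failures = []
for _ in range(200):                       # randomised test scenarios
    x, y = random.uniform(-0.5, 1.5), random.uniform(-0.5, 1.5)
    if not object_on_table(place_in_sim(x, y)):
        failures.append((x, y))
# 'failures' localises where the property is violated (off-table placements).
```

Collecting the failing scenarios, rather than a single pass/fail verdict, is what provides the deeper insights into the type and location of failures for each property.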
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
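The anomaly score itself can be sketched as the mean endpoint error between predicted and observed optical flow fields. The flow fields below are synthetic stand-ins for the U-Net prediction and the observed flow:

```python
import numpy as np

def anomaly_score(predicted_flow, observed_flow):
    # Mean endpoint error between predicted and observed flow (H x W x 2).
    err = np.linalg.norm(predicted_flow - observed_flow, axis=-1)
    return float(err.mean())

predicted = np.zeros((4, 4, 2))      # nominal execution: no motion expected
nominal = np.zeros((4, 4, 2))
anomalous = np.zeros((4, 4, 2))
anomalous[1:3, 1:3] = [2.0, 0.0]     # unexpected motion, e.g. a falling book
low = anomaly_score(predicted, nominal)
high = anomaly_score(predicted, anomalous)
```

Thresholding such a score over time yields the frame-level anomaly decisions evaluated with the ROC and precision-recall curves above.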
Short summary
Accompanying dataset for our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021.
Contents
The dataset includes a single zip archive, containing data from the experiment described in the paper (conducted with a Toyota HSR). The zip archive contains three subdirectories:
handle_grasping_failure_database: A dump of a MongoDB database containing data from the handle grasping experiment, including ground-truth grasping failure annotations
pre_arm_motion_images: Images collected from the robot's hand camera before moving the robot's hand towards the handle
pregrasp_images: Images collected from the robot's hand camera just before closing the gripper for grasping
The image names include the time stamp at which the images were taken; this allows matching each image with the execution data in the database.
Database usage
After unzipping the archive, the database can be restored with the command
mongorestore handle_grasping_failure_database
This will create a MongoDB database with the name drawer_handle_grasping_failures.
Code for processing the data and failure analysis can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
Contents
There are two zip archives included (grasping.zip and throwing.zip), corresponding to two experiments (grasping objects and throwing them in a drawer), both performed with a Toyota HSR. Each archive contains two directories - learning and generalisation - with object-specific learning and generalisation data. For each object, we provide a dump of a MongoDB database, which contains data sufficient for learning the models used in our experiments.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [data_directory_name]
This will create a MongoDB database with the name of the directory. Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
The solvent exchange, one of the most important steps in the manufacturing process of organic aerogels, was investigated. This step is crucial as preparation for supercritical drying, since the pore solvent must be soluble in supercritical carbon dioxide to enable solvent extraction. The development and subsequent optimization of a system with a peristaltic pump for automatic solvent exchange proved to be a suitable approach. In addition, the influence of zeolites on the acceleration of the process was found to be beneficial. To investigate the process, the water content in acetone was determined at different times using Karl Fischer titration. The shrinkage, densities, and surface areas of the aerogels were analyzed. Based on these, the influence of various process parameters on the final structure of the obtained aerogels was investigated and evaluated. Modeling of diffusion in porous materials completes this study.
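The diffusion modeling mentioned above can be illustrated with an explicit finite-difference step for Fick's second law in one dimension; the diffusion coefficient, grid, and initial profile below are placeholder values for illustration, not fitted parameters from this study:

```python
def diffuse_1d(c, D, dx, dt, steps):
    # Explicit finite-difference scheme for Fick's second law,
    # dc/dt = D * d2c/dx2, with no-flux boundaries.
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    for _ in range(steps):
        c = ([c[0] + r * (c[1] - c[0])]
             + [c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
                for i in range(1, len(c) - 1)]
             + [c[-1] + r * (c[-2] - c[-1])])
    return c

profile = [1.0] * 5 + [0.0] * 5   # water-rich gel next to fresh acetone
out = diffuse_1d(profile, D=1e-9, dx=1e-3, dt=100.0, steps=500)
```

The no-flux boundaries conserve total mass, so the profile only redistributes; in porous materials the free diffusion coefficient would additionally be corrected by porosity and tortuosity.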
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. In a literature review, we analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and the documentation of six industry practitioners. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.
Augmented/Virtual Reality (AR/VR) is still a fragmented space to design for, due to rapidly evolving hardware, the interdisciplinarity of teams, and a lack of standards and best practices. We interviewed 26 professional AR/VR designers and developers to shed light on their tasks, approaches, tools, and challenges. Based on their work and the artifacts they generated, we found that AR/VR application creators fulfill four roles: concept developers, interaction designers, content authors, and technical developers. One person often takes on multiple roles and faces a variety of challenges during the design process, from the initial contextual analysis to deployment. From an analysis of their tool sets, methods, and artifacts, we describe critical key challenges. Finally, we discuss the importance of prototyping for communication in AR/VR development teams and highlight design implications for future tools to create a more usable AR/VR tool chain.
In tree-based adaptive mesh refinement (AMR), refinement trees are stored in the cells of an unstructured coarse mesh. This combines the speed and simpler management of structured refinement trees with the more flexible mesh generation of the unstructured coarse mesh, but it also creates a conflict between performance and geometrical accuracy. If we favor speed, we reduce the number of cells in the coarse mesh and hence the accuracy of the geometrical representation. If we want more accurate results, we generate a finer coarse mesh and lose performance by managing more cells in the unstructured coarse mesh. To mitigate this conflict, we present the prototype of a geometry description, implemented in an existing AMR library. With this description we build geometry-adapted hexahedral refinement trees, which also support high-order curved boundary cells, and we present examples of how to use the description. Moreover, we benchmark the speedup of the new algorithm against coarse meshes with different geometrical errors.
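To make the idea of geometry-adapted refinement trees concrete, here is a schematic 2D sketch, not the library's actual API: a single coarse cell owns a quadtree whose leaves are refined wherever they straddle a hypothetical curved boundary (a circular arc), so resolution concentrates at the geometry without refining the whole coarse mesh.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    x: float            # lower-left corner
    y: float
    size: float         # edge length
    level: int
    children: list = field(default_factory=list)

    def refine(self):
        h = self.size / 2.0
        self.children = [QuadNode(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

def crosses_circle(node, cx, cy, r):
    # the cell straddles the circle if its corner distances bracket the radius
    d = [((node.x + i * node.size - cx) ** 2 +
          (node.y + j * node.size - cy) ** 2) ** 0.5
         for i in (0, 1) for j in (0, 1)]
    return min(d) <= r <= max(d)

def adapt(root, cx, cy, r, max_level):
    """Refine every leaf that intersects the curved boundary, up to max_level."""
    stack, leaves = [root], []
    while stack:
        n = stack.pop()
        if n.level < max_level and crosses_circle(n, cx, cy, r):
            n.refine()
            stack.extend(n.children)
        else:
            leaves.append(n)
    return leaves

# one coarse cell on [0,1]^2, refined toward a circular arc of radius 0.7
leaves = adapt(QuadNode(0.0, 0.0, 1.0, 0), cx=0.0, cy=0.0, r=0.7, max_level=4)
```

The resulting leaves partition the coarse cell exactly, with the finest level appearing only along the boundary curve, which is the trade-off the geometry description aims to exploit.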
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which have not been produced by the Gauss-product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested on 3D Taylor-Green vortex flows, challenging and surpassing the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma={0.5;1.0;1.5;2.0}, on-lattice simulations with the velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
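To illustrate the Gauss-product construction that the cubature-derived sets avoid, the sketch below builds the standard D2Q9 velocity set as the tensor product of the 3-point Gauss-Hermite rule (nodes 0 and ±√3 with weights 2/3, 1/6, 1/6 for a unit Gaussian weight). This is a textbook reconstruction, not code from the paper; cubature rules achieve comparable moment exactness with fewer, non-product-structured velocities.

```python
import numpy as np

# 3-point Gauss-Hermite rule for the standard Gaussian weight:
# exact for all 1D polynomials up to degree 5
nodes = np.array([0.0, np.sqrt(3.0), -np.sqrt(3.0)])
weights = np.array([2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0])

# tensor (Gauss-product) construction: 3 x 3 = 9 discrete velocities
velocities = np.array([(cx, cy) for cx in nodes for cy in nodes])
w = np.array([wx * wy for wx in weights for wy in weights])

def moment(p, q):
    """Discrete moment sum_i w_i * cx_i^p * cy_i^q of the velocity set."""
    return np.sum(w * velocities[:, 0] ** p * velocities[:, 1] ** q)
```

The product rule reproduces the Gaussian moments exactly up to total degree 5 (e.g. the second moment equals 1 and the fourth equals 3), which is the abscissa/weight structure underlying the on-lattice D2Q9 stencil.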
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VEs and REs). All feedback modalities were designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen, and that perceptual aspects in virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied to create and evaluate the interfaces; to this end, seven experiments were performed. Each of the five prototypes uses a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VEs and REs target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The second interface combines elastic materials with a touchscreen; a psychophysical study highlights its potential. The third prototype uses the back of a smartphone for haptic feedback; besides a psychophysical study, it was found that touch accuracy could be increased. The interfaces in the second application area likewise highlight the versatility of haptic feedback. The first prototype stimulates the sides of the feet to convey proximity information from remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a custom foot platform providing several feedback modalities shows that self-motion perception can be increased.
Mebendazole Mediates Proteasomal Degradation of GLI Transcription Factors in Acute Myeloid Leukemia
(2021)
The prognosis of elderly AML patients remains poor due to chemotherapy resistance. The Hedgehog (HH) pathway is important for leukemic transformation because of the aberrant activation of GLI transcription factors. Mebendazole (MBZ) is a well-tolerated anthelmintic that exhibits strong antitumor effects. Herein, we show that MBZ induced strong, dose-dependent anti-leukemic effects on AML cells, including the sensitization of AML cells to chemotherapy with cytarabine. MBZ strongly reduced intracellular protein levels of the GLI1/GLI2 transcription factors and, consequently, GLI promoter activity, as observed in luciferase-based reporter assays in AML cell lines. Further analysis revealed that MBZ mediates its anti-leukemic effects by promoting the proteasomal degradation of GLI transcription factors via inhibition of HSP70/90 chaperone activity. Extensive molecular dynamics simulations of the MBZ-HSP90 complex showed a stable binding interaction at the ATP binding site. Importantly, two patients with refractory AML were treated with MBZ in an off-label setting; MBZ effectively reduced GLI signaling activity in a modified plasma inhibitory assay, resulting in a decrease in peripheral blood blast counts in one patient. Our data show that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.