Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
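The skin/artifact discrimination principle behind such NIR-based PAD sensors can be illustrated with a toy two-wavelength classifier. The wavelength bands, remission values, and ratio thresholds below are illustrative assumptions for the sketch, not the device's actual calibration or signal processing chain.

```python
# Toy sketch of two-wavelength NIR skin detection (illustrative values only).
# Human skin shows a characteristic remission ratio between two NIR bands;
# many artifact materials (photos, silicone, latex) fall outside that band.

def remission_ratio(r_wl1: float, r_wl2: float) -> float:
    """Ratio of remission intensities measured at two NIR wavelengths."""
    return r_wl1 / r_wl2

def is_skin(r_wl1: float, r_wl2: float,
            lo: float = 1.2, hi: float = 2.0) -> bool:
    """Classify as skin if the ratio lies in an (assumed) skin-typical band."""
    return lo <= remission_ratio(r_wl1, r_wl2) <= hi

# Hypothetical sensor readings: a skin-like ratio vs. a flat photo artifact.
print(is_skin(0.60, 0.40))  # ratio 1.5, inside the assumed skin band
print(is_skin(0.50, 0.48))  # ratio ~1.04, outside it
```

A real device would fuse such spectral features with the distance measurement mentioned above before deciding; the single-ratio threshold here only conveys the core idea.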
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
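The reward functions compared above trade off a user's engagement against activity performance. A minimal sketch of one such blended reward, with an assumed linear weighting (the weight and the [0, 1] scaling are illustrative, not the paper's exact formulation), is:

```python
# Hypothetical reward blending user engagement and activity performance,
# one possible instantiation of the reward functions compared in the text.

def reward(engagement: float, performance: float, w: float = 0.5) -> float:
    """Weighted blend; engagement and performance are assumed in [0, 1].
    w = 1 rewards engagement only, w = 0 rewards performance only."""
    assert 0.0 <= w <= 1.0
    return w * engagement + (1.0 - w) * performance

# A highly engaged user doing poorly in the game, under an even blend:
print(round(reward(0.8, 0.4, w=0.5), 6))  # 0.6
```

Because the robot's policy maximizes expected reward, shifting w changes which actions the learned behaviour model prefers — consistent with the observation above that the reward function strongly affects the chosen actions.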
The European General Data Protection Regulation requires the implementation of Technical and Organizational Measures (TOMs) to reduce the risk of illegitimate processing of personal data. For these measures to be effective, they must be applied correctly by employees who process personal data under the authority of their organization. However, even data processing employees often have limited knowledge of data protection policies and regulations, which increases the likelihood of misconduct and privacy breaches. To lower the likelihood of unintentional privacy breaches, TOMs must be developed with employees’ needs, capabilities, and usability requirements in mind. Privacy patterns have proven effective for this purpose: they reduce implementation costs and help organizations and IT engineers with the implementation. In this chapter, we introduce the privacy pattern Data Cart, which specifically helps to develop TOMs for data processing employees. Based on a user-centered design approach with employees from two public organizations in Germany, we present a concept that illustrates how Privacy by Design can be effectively implemented. Organizations, IT engineers, and researchers will gain insight into how to improve the usability of privacy-compliant tools for managing personal data.
Users should always play a central role in the development of (software) solutions. The human-centered design (HCD) process in the ISO 9241-210 standard proposes a procedure for systematically involving users. However, due to its abstraction level, the HCD process provides little guidance for how it should be implemented in practice. In this chapter, we propose three concrete practical methods that enable the reader to develop usable security and privacy (USP) solutions using the HCD process. This chapter equips the reader with the procedural knowledge and recommendations to: (1) derive mental models with regard to security and privacy, (2) analyze USP needs and privacy-related requirements, and (3) collect user characteristics on privacy and structure them by user group profiles and into privacy personas. Together, these approaches help to design measures for a user-friendly implementation of security and privacy measures based on a firm understanding of the key stakeholders.
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When lowering the precision of the vestibular cue by, for example, lying down or adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance, as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 W m⁻² and 29.33 W m⁻², respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on the GHI forecasting performance for both deterministic and probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces the forecast performance.
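The skill score against a smart persistence reference is commonly defined as S = 1 − RMSE_forecast / RMSE_reference, so a positive score means the model beats the reference. A self-contained sketch with invented GHI values (not the paper's data set):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(forecast, reference, obs):
    """Forecast skill relative to a reference (e.g. smart persistence):
    S = 1 - RMSE_forecast / RMSE_reference; positive means improvement."""
    return 1.0 - rmse(forecast, obs) / rmse(reference, obs)

# Hypothetical GHI values in W/m^2, purely for illustration.
obs      = [500.0, 520.0, 480.0, 510.0]
forecast = [505.0, 515.0, 470.0, 505.0]
persist  = [490.0, 500.0, 500.0, 530.0]
print(f"skill = {skill_score(forecast, persist, obs):.3f}")
```

With this convention, the 18.6 % figure quoted above corresponds to S = 0.186 over the evaluation set.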
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach to grasp vertical objects, placement of objects by considering the empty space on a workstation, and the process of porting our code to ROS2.
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
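Event emulation from conventional frames, as used in the experiments above, can be sketched as thresholding of log-intensity changes between successive frames. The frame values, threshold, and the simple per-frame emission rule below are illustrative assumptions; real emulators (and the DAVIS346 itself) additionally model per-pixel timing and noise.

```python
import numpy as np

def emulate_events(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.15):
    """Emulate event-camera output from two successive grayscale frames:
    emit +1/-1 events where the log-brightness change exceeds a contrast
    threshold. A simplified sketch of frame-based event emulation."""
    eps = 1e-6  # avoid log(0) on dark pixels
    diff = np.log(curr + eps) - np.log(prev + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # ON events (brightness increase)
    events[diff < -threshold] = -1  # OFF events (brightness decrease)
    return events

# Hypothetical 4x4 frames: one pixel brightens, one darkens, the rest static.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[0, 0] = 0.8   # should yield an ON event
curr[3, 3] = 0.3   # should yield an OFF event
ev = emulate_events(prev, curr)
print(int(ev[0, 0]), int(ev[3, 3]), int(np.count_nonzero(ev)))
```

Because only changed pixels produce events, the number of events — and hence the SNN's processing load — scales with the magnitude of perceived motion, matching the correlation reported in the results.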
The non-filarial and non-communicable disease podoconiosis affects around 4 million people and is characterized by severe leg lymphedema accompanied by painful intermittent acute inflammatory episodes, called acute dermatolymphangioadenitis (ADLA) attacks. Risk factors have been associated with the disease, but the mechanisms of pathophysiology remain uncertain. Lymphedema can lead to skin lesions, which can serve as entry points for bacteria that may cause ADLA attacks leading to progression of the lymphedema. However, the microbiome of the skin of affected legs from podoconiosis individuals remains unclear. Thus, we analysed the skin microbiome of podoconiosis legs using next-generation sequencing. We revealed a positive correlation between increasing lymphedema severity and non-commensal anaerobic bacteria, especially Anaerococcus provencensis, as well as a negative correlation with the presence of Corynebacterium, a constituent of normal skin flora. Disease symptoms were generally linked to higher microbial diversity and richness, which deviated from the normal composition of the skin. These findings show an association of distinct bacterial taxa with lymphedema stages, highlighting the important role of bacteria in the pathogenesis of podoconiosis, and might enable the selection of better treatment regimens to manage ADLA attacks and disease progression.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge: it is unclear which objective metrics should be considered, and existing works on view selection lack consensus. The literature describes diverse possible metrics, and strategies motivated by information theory, instructional design, or aesthetics each fail to incorporate all approaches. In this work, we postulate a strategy incorporating information-theoretic and instructional design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, for instance in 3D rendering. We adapted a similar measure, known as viewpoint entropy, for real-world 2D images. Additionally, we incorporated similarity penalization to obtain a more accurate measure of the entropy of a view, which is one of the metrics for best view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of the chosen scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional design considerations, we included the visibility of the trainer’s body pose, face, face while instructing, and hands as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are combined to produce a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate these metrics.
Furthermore, the responses from the online study were used to optimize the view recommendation, which achieved a normalized discounted cumulative gain (NDCG) of 0.912, indicating good agreement with user choices.
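The viewpoint entropy adapted above for 2D images can be sketched as Shannon entropy over the relative screen areas of visible regions: a view dominated by one region scores low, a view spreading screen space evenly over many regions scores high. The region areas below are invented, and the segmentation step is assumed given; the paper's similarity penalization is omitted from this sketch.

```python
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy over the relative areas of visible regions in a view.
    Higher entropy = screen space spread more evenly over many regions."""
    total = sum(region_areas)
    probs = [a / total for a in region_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical views: one dominated by a single region, one balanced.
dominated = [900, 50, 50]    # pixel counts per region
balanced  = [340, 330, 330]
print(viewpoint_entropy(dominated) < viewpoint_entropy(balanced))  # True
```

Ranking candidate camera views by this score (combined with the visibility metrics above) is what makes the recommendation approach parameterizable.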
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only four percentage points short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
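Mean average precision (MAP), the evaluation measure reported for FRED, averages the precision obtained at each rank where a correct answer entity appears. A minimal sketch (the entity IDs and gold answers below are hypothetical):

```python
def average_precision(ranked, relevant):
    """Precision averaged at each rank where a relevant entity appears."""
    hits, precisions = 0, []
    for i, entity in enumerate(ranked, start=1):
        if entity in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_candidates, gold_answer_set) pairs."""
    return sum(average_precision(r, g) for r, g in queries) / len(queries)

# Two hypothetical questions with ranked candidate entities:
queries = [
    (["Q42", "Q1", "Q7"], {"Q42"}),  # correct at rank 1 -> AP 1.0
    (["Q5", "Q9", "Q3"], {"Q9"}),    # correct at rank 2 -> AP 0.5
]
print(mean_average_precision(queries))  # 0.75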
Microbiome analyses are essential for understanding microorganism composition and diversity, but interpretation is often challenging due to biological and technical variables. DNA extraction is a critical step that can significantly bias results, particularly in samples containing a high abundance of challenging-to-lyse microorganisms. Taking into consideration the distinctive microenvironments observed in different bodily locations, our study sought to assess the extent of bias introduced by suboptimal bead-beating during DNA extraction across diverse clinical sample types. The question was whether complex targeted extraction methods are always necessary for reliable taxonomic abundance estimation through amplicon sequencing, or whether simpler alternatives are effective for some sample types. Hence, for four different clinical sample types (stool, cervical swab, skin swab, and hospital surface swab samples), we compared the results achieved with the targeted manual extraction protocols routinely used in our research lab for each sample type against automated protocols not specifically designed for that purpose. Unsurprisingly, we found that for the stool samples, manual extraction protocols with vigorous bead-beating were necessary in order to avoid erroneous taxa proportions on all investigated taxonomic levels and, in particular, false under- or overrepresentation of important genera such as Blautia, Faecalibacterium, and Parabacteroides. Interestingly, however, the skin and cervical swab samples yielded similar results with all tested protocols. Our results suggest that the practical level of automation largely depends on the expected microenvironment, with skin and cervical swabs being much easier to process than stool samples. Prudent consideration is necessary when extending the conclusions of this study to applications beyond rough estimations of taxonomic abundance.
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
In the project EILD.nrw, Open Educational Resources (OER) have been developed for teaching databases. Lecturers can use the tools and courses in a variety of learning scenarios. Students of computer science and application subjects can learn the complete life cycle of databases. For this purpose, quizzes, interactive tools, instructional videos, and courses for learning management systems are developed and published under a Creative Commons license. We give an overview of the developed OERs according to subject, description, teaching form, and format. Subsequently, we describe how licensing, sustainability, accessibility, contextualization, content description, and technical adaptability are implemented. Finally, feedback from students in ongoing classes is evaluated.
Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example
(2023)
Online services have difficulty replacing passwords with more secure user authentication mechanisms, such as two-factor authentication (2FA). This is partly because users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing.
Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper contributes to the hypothesis that the low adoption of RBA in practice may be due to the complexity of implementing it. We provide an RBA implementation for the open source cloud management software OpenStack, which is the first fully functional open source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
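The core idea of likelihood-ratio risk scoring, which underlies the Freeman et al. algorithm referenced above, can be sketched as follows. This is a heavily simplified illustration (the feature probabilities are invented, and the full algorithm additionally includes smoothing and explicit attacker models):

```python
def risk_score(features, global_prob, user_prob):
    """Likelihood-ratio risk score: product over login features of
    p(feature value globally) / p(feature value for this user).
    High scores -> unusual for this user -> request re-authentication."""
    score = 1.0
    for name, value in features.items():
        p_global = global_prob[name].get(value, 1e-6)  # attacker proxy
        p_user = user_prob[name].get(value, 1e-6)      # user's login history
        score *= p_global / p_user
    return score

# Illustrative probability tables for one user:
global_prob = {"country": {"DE": 0.3, "US": 0.4}, "browser": {"Firefox": 0.2}}
user_prob   = {"country": {"DE": 0.9},            "browser": {"Firefox": 0.8}}

login = {"country": "DE", "browser": "Firefox"}
print(risk_score(login, global_prob, user_prob) < 1.0)  # typical login -> low risk
```

A login from an unseen country would drive the user-side probability toward the floor value and push the score far above 1, triggering re-authentication.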
PURPOSE
Cervical cancer (CC) is caused by a persistent high-risk human papillomavirus (hrHPV) infection. The cervico-vaginal microbiome may influence the development of (pre)cancer lesions. Aim of the study was (i) to evaluate the new CC screening program in Germany for the detection of high-grade CC precursor lesions, and (ii) to elucidate the role of the cervico-vaginal microbiome and its potential impact on cervical dysplasia.
METHODS
The microbiome of 310 patients referred to colposcopy was determined by amplicon sequencing and correlated with clinicopathological parameters.
RESULTS
Most patients were referred for colposcopy due to a positive hrHPV result in two consecutive years combined with a normal PAP smear. In 2.1% of these cases, a CIN III lesion was detected. There was a significant positive association between the PAP stage and Lactobacillus vaginalis colonization and between the severity of CC precursor lesions and Ureaplasma parvum.
CONCLUSION
In our cohort, the new cervical cancer screening program resulted in a low rate of additionally detected CIN III. It is questionable whether these cases were merely identified earlier through the additional HPV testing, before the appearance of cytological abnormalities, or whether the new screening program will truly increase the detection rate of CIN III in the long run. Colonization with U. parvum was associated with histological dysplastic lesions. Whether targeted therapy of this pathogen or optimization of the microbiome prevents dysplasia remains speculative.
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, so most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage that fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated from characteristic code fragments of the clusters and subsequently used to scan the whole npm registry for unreported malicious packages. We were able to identify and report six malicious packages, which have consequently been removed from npm. Our approach can therefore support detection by reducing manual labor and may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
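The clustering step can be illustrated with a simplified stand-in: grouping packages whose extracted code fragments are highly similar, here via Jaccard similarity and connected components rather than the Markov Clustering (MCL) that ACME actually uses. All package names and fragment hashes below are hypothetical:

```python
def jaccard(a, b):
    """Similarity of two sets of AST/code fragment hashes."""
    return len(a & b) / len(a | b)

def cluster(packages, threshold=0.5):
    """Union packages whose fragment sets exceed the similarity threshold."""
    names = list(packages)
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if jaccard(packages[a], packages[b]) >= threshold:
                parent[find(b)] = find(a)
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical fragment hashes extracted from package ASTs:
packages = {
    "pkg-a": {"h1", "h2", "h3"},
    "pkg-b": {"h1", "h2", "h4"},  # shares code fragments with pkg-a
    "pkg-c": {"h9"},
}
print(cluster(packages))  # [['pkg-a', 'pkg-b'], ['pkg-c']]
```

Fragments common to a resulting cluster would then serve as candidate signatures for scanning the registry.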
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques to the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information that can be derived from the different sensors, including artificial neural networks and tree-based machine learning methods. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores of up to 79 % using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved only marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances.
We found that this dataset attribute negatively affects the models' performances for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
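Weighted precision, the score reported throughout, averages per-class precision weighted by each class's support, which is why it is an appropriate summary under the high class imbalance noted above. A minimal sketch (the labels below are hypothetical):

```python
from collections import Counter

def weighted_precision(y_true, y_pred):
    """Per-class precision, weighted by each class's share of true labels."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, count in support.items():
        # True labels of all samples that were predicted as this class:
        predicted = [t for t, p in zip(y_true, y_pred) if p == cls]
        precision = (sum(1 for t in predicted if t == cls) / len(predicted)
                     if predicted else 0.0)
        score += (count / total) * precision
    return score

# Hypothetical tree-species predictions:
y_true = ["Pinus", "Pinus", "Acer", "Acer", "Acer"]
y_pred = ["Pinus", "Acer",  "Acer", "Acer", "Pinus"]
print(round(weighted_precision(y_true, y_pred), 2))  # 0.6
```

Because the weighting follows the true-label distribution, frequent genera such as Pinus dominate the aggregate score while scarce species contribute little, matching the per-class spread (54 % to 88 %) reported above.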
Forensic DNA profiles are established by multiplex PCR amplification of a set of highly variable short tandem repeat (STR) loci followed by capillary electrophoresis (CE) as a means to assign alleles to PCR products of differential length. Recently, CE analysis of STR amplicons has been supplemented by high-throughput next generation sequencing (NGS) techniques that are able to detect isoalleles bearing sequence polymorphisms and allow for an improved analysis of degraded DNA. Several such assays have been commercialised and validated for forensic applications. However, these systems are cost-effective only when applied to high numbers of samples. We report here an alternative, cost-efficient, shallow-sequence-output NGS assay, termed the maSTR assay, that, in conjunction with a dedicated bioinformatics pipeline called SNiPSTR, can be implemented with standard NGS instrumentation. In a back-to-back comparison with a CE-based, commercial forensic STR kit, we find that for samples with low DNA content, with mixed DNA from different individuals, or containing PCR inhibitors, the maSTR assay performs equally well, and with degraded DNA is superior to CE-based analysis. Thus, the maSTR assay is a simple, robust and cost-efficient NGS-based STR typing method applicable for human identification in forensic and biomedical contexts.
Interest in virtual reality (VR) for higher-education teaching is currently growing, driven by the possibility of representing logistically difficult tasks as well as by positive results from effectiveness studies. At the same time, there is a lack of studies that compare immersive VR environments, non-immersive desktop environments, and conventional learning materials, and that evaluate teaching-methodology aspects. This paper therefore addresses the design and implementation of a learning environment for higher education that can be used both with a head-mounted display (HMD) and on a desktop, as well as its evaluation using an experimental group design. The learning environment was built on a purpose-developed software platform, and its effectiveness was evaluated and compared using two experimental groups (VR vs. desktop environment) and a control group. A pilot study showed positive qualitative and quantitative assessments of the learning environment's usability in both experimental groups. Furthermore, positive effects on the cognitive and affective impact of the learning environment emerged in comparison to conventional learning materials. However, hardly any cognitive or affective differences appeared between VR and desktop use. The analysis of log data, though, points to differences in learning and exploration behaviour.
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues on legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs), such as profit and loss, and the financial quantities linked to them. The quantity linked to a KPI in a table might not equal the quantity of a similarly described KPI in the text. Auditors spend substantial time manually finding such inconsistencies; this process is called consistency checking. In contrast to existing work, this paper attempts to automate this task with transformer-based models. For consistency checking, it is essential that the table KPIs' embeddings encode both the semantic knowledge of the KPIs and the structural knowledge of the table. This paper therefore proposes a pipeline that uses a tabular model to obtain the table KPIs' embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether these KPIs are identical. The pipeline is evaluated on financial documents in German, and a comparative analysis of the quality of cell embeddings from three tabular models is also presented. In the evaluation, the experiment that used English-translated text and table KPIs and the Tabbie model to generate table KPI embeddings achieved an accuracy of 72.81% on the consistency-checking task, outperforming the benchmark and the other tabular models.
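The final matching step of such a pipeline can be sketched as comparing a table-KPI embedding with a text-KPI embedding, for example via cosine similarity against a threshold. The vectors and threshold below are illustrative only; the paper's pipeline uses learned tabular-model embeddings:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

def kpis_consistent(table_kpi_vec, text_kpi_vec, threshold=0.9):
    """Treat the two KPI mentions as identical if embeddings nearly align."""
    return cosine_similarity(table_kpi_vec, text_kpi_vec) >= threshold

# Illustrative embeddings for the same KPI in the table vs. the running text:
print(kpis_consistent([0.9, 0.1, 0.4], [0.88, 0.12, 0.41]))  # True
```

Once the mentions are matched, the linked quantities can be compared directly to flag an inconsistency for the auditor.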
Indoor spaces exhibit microbial compositions that are distinctly dissimilar from one another and from outdoor spaces. Unique in this regard, and a topic that has only recently come into focus, is the microbiome of hospitals. While the benefits of knowing exactly which microorganisms propagate how and where in hospitals are undoubtedly beneficial for preventing hospital-acquired infections, there are, to date, no standardized procedures on how to best study the hospital microbiome. Our study aimed to investigate the microbiome of hospital sanitary facilities, outlining the extent to which hospital microbiome analyses differ according to sample-preparation protocol. For this purpose, fifty samples were collected from two separate hospitals—from three wards and one hospital laboratory—using two different storage media from which DNA was extracted using two different extraction kits and sequenced with two different primer pairs (V1–V2 and V3–V4). There were no observable differences between the sample-preservation media, small differences in detected taxa between the DNA extraction kits (mainly concerning Propionibacteriaceae), and large differences in detected taxa between the two primer pairs V1–V2 and V3–V4. This analysis also showed that microbial occurrences and compositions can vary greatly from toilets to sinks to showers and across wards and hospitals. In surgical wards, patient toilets appeared to be characterized by lower species richness and diversity than staff toilets. Which sampling sites are the best for which assessments should be analyzed in more depth. The fact that the sample processing methods we investigated (apart from the choice of primers) seem to have changed the results only slightly suggests that comparing hospital microbiome studies is a realistic option. 
The observed differences in species richness and diversity between patient and staff toilets should be further investigated, as these, if confirmed, could be a result of excreted antimicrobials.
Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns, properties that preclude their deployment as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies to then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: The former ones might profit from the broad range of machine learning topics covered and discussions on limitations of recent methods. The latter ones might gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and strategies on how to help to advance existing approaches accordingly.
In the field of automatic music generation, one of the greatest challenges is the consistent generation of pieces that are continuously perceived positively by the majority of the audience, since there is no objective method to determine the quality of a musical composition. However, composing principles, which have been refined for millennia, have shaped the core characteristics of today's music. A hybrid music generation system, mlmusic, which incorporates various static, music-theory-based methods as well as data-driven subsystems, is implemented to automatically generate pieces considered acceptable by the average listener. Initially, a MIDI dataset consisting of over 100 hand-picked pieces of various styles and complexities is analysed using basic music theory principles, and the abstracted information is fed into explicitly constrained LSTM networks. For chord progressions, each individual network is specifically trained on a given sequence length, while phrases are created by consecutively predicting each note's offset, pitch, and duration. Using these outputs as a composition's foundation, additional musical elements, along with constrained recurrent rhythmic and tonal patterns, are statically generated. Although no survey regarding the pieces' reception could be carried out, the successful generation of numerous compositions of varying complexities suggests that the integration of these fundamentally distinctive approaches might also prove successful in other branches.
Robust Identification and Segmentation of the Outer Skin Layers in Volumetric Fingerprint Data
(2022)
Despite the long history of fingerprint biometrics and its use to authenticate individuals, there are still some unsolved challenges with fingerprint acquisition and presentation attack detection (PAD). Currently available commercial fingerprint capture devices struggle with non-ideal skin conditions, including soft skin in infants. They are also susceptible to presentation attacks, which limits their applicability in unsupervised scenarios such as border control. Optical coherence tomography (OCT) could be a promising solution to these problems. In this work, we propose a digital signal processing chain for segmenting two complementary fingerprints from the same OCT fingertip scan: One fingerprint is captured as usual from the epidermis (“outer fingerprint”), whereas the other is taken from inside the skin, at the junction between the epidermis and the underlying dermis (“inner fingerprint”). The resulting 3D fingerprints are then converted to a conventional 2D grayscale representation from which minutiae points can be extracted using existing methods. Our approach is device-independent and has been proven to work with two different time domain OCT scanners. Using efficient GPGPU computing, it took less than a second to process an entire gigabyte of OCT data. To validate the results, we captured OCT fingerprints of 130 individual fingers and compared them with conventional 2D fingerprints of the same fingers. We found that both the outer and inner OCT fingerprints were backward compatible with conventional 2D fingerprints, with the inner fingerprint generally being less damaged and, therefore, more reliable.
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often associated with perception mechanisms related to our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to their cost and operating effort, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot-sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that in particular vibration-based stimuli could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision.
Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
The following work presents algorithms for semi-automatic validation, feature extraction and ranking of time series measurements acquired from MOX gas sensors. Semi-automatic measurement validation is accomplished by extending established curve similarity algorithms with a slope-based signature calculation. Furthermore, a feature-based ranking metric is introduced. It allows for individual prioritization of each feature and can be used to find the best performing sensors regarding multiple research questions. Finally, the functionality of the algorithms, as well as the developed software suite, are demonstrated with an exemplary scenario, illustrating how to find the most power-efficient MOX gas sensor in a data set collected during an extensive screening consisting of 16,320 measurements, all taken with different sensors at various temperatures and analytes.
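The slope-based signature extension described above can be sketched as mapping a time series to the signs of its successive differences, which makes measurement curves comparable independent of absolute sensor response. All values below are hypothetical:

```python
def slope_signature(series):
    """Encode each step of a time series as rising (+1), falling (-1), or flat (0)."""
    def sign(d):
        return (d > 0) - (d < 0)
    return [sign(b - a) for a, b in zip(series, series[1:])]

def signature_match(series_a, series_b):
    """Fraction of steps where two measurements share the same slope sign."""
    sa, sb = slope_signature(series_a), slope_signature(series_b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

# Two hypothetical MOX sensor responses to the same analyte:
ref = [1.0, 1.4, 1.9, 1.7, 1.2]
new = [0.8, 1.1, 1.6, 1.5, 1.0]
print(signature_match(ref, new))  # 1.0 -> curve shapes agree
```

A low match fraction would flag a measurement for manual review, which is what makes the validation semi-automatic.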
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model, as they are by nature vague, depend on the user's background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive, and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user's intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user's preferences, discovering the preference hypervolume within large search spaces.
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. Firstly, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of machine perception, machine learning and modeling, planning and optimization, as well as interaction and intervention, holding potential for improving training and competition. Secondly, we assess the present status of AI use in elite sports. To that end, in addition to another literature review, we interviewed leading sports scientists who are closely connected to the main national service institutes for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing. However, projects in the field of modeling and planning have become increasingly popular within the last years. Based on these two perspectives, we extract deficits, issues, and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by practitioners, and explainability of AI results.
Digitalization and the use of information and communication technologies (ICT) have, alongside higher productivity, also led to new forms of psychological stress in work and private life. The stress experience associated with ICT use is referred to in the literature as technostress. Research on this topic shows that the emergence of technostress depends on individual factors. The personality of ICT users not only determines the occurrence of technostress but also influences its health- and performance-related consequences. This literature review systematically summarizes the state of research on the role of personality differences in the emergence of technostress and its consequences. The relevant research articles are analysed with respect to the variables used, samples and study designs, statistical methods, theories, and frameworks. Finally, the current state of research is contextualized and research gaps are identified.
The accurate forecasting of solar radiation plays an important role in predictive control applications for energy systems with a high share of photovoltaic (PV) energy. Especially off-grid microgrid applications using predictive control can benefit from forecasts with a high temporal resolution to address sudden fluctuations of PV power. However, cloud formation processes and movements are subject to ongoing research. For now-casting applications, all-sky imagers (ASI) are used to provide appropriate forecasts for the aforementioned applications. Recent research aims to achieve these forecasts via deep learning approaches, either as an image segmentation task to generate a DNI forecast through a cloud vectoring approach that translates the DNI to a GHI with ground-based measurement (Fabel et al., 2022; Nouri et al., 2021), or as an end-to-end regression task to generate a GHI forecast directly from the images (Paletta et al., 2021; Yang et al., 2021). While end-to-end regression might be the more attractive approach for off-grid scenarios, the literature reports increased performance compared to smart persistence but does not show satisfactory forecasting patterns (Paletta et al., 2021). This work takes a step back and investigates the possibility of translating ASI images to current GHI in order to deploy the neural network as a feature extractor. An ImageNet-pretrained deep learning model is used to achieve this translation on an openly available dataset from the University of California San Diego (Pedro et al., 2019). The images and measurements were collected in Folsom, California. Results show that the neural network can successfully translate ASI images to GHI for a variety of cloud situations without the need for any external variables. Extending the neural network to a forecasting task also shows promising forecasting patterns, indicating that the neural network extracts both temporal and momentary features from the images to generate GHI forecasts.
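The smart-persistence baseline mentioned above is commonly defined as persisting the currently observed clear-sky index into the future and scaling it by the future clear-sky irradiance. A minimal sketch (the values are hypothetical, and this is the baseline, not the paper's neural-network approach):

```python
def smart_persistence(ghi_now, clearsky_now, clearsky_future):
    """Persist the clear-sky index k = GHI / GHI_clearsky into the future."""
    k = ghi_now / clearsky_now
    return k * clearsky_future

# Hypothetical measurements in W/m^2:
forecast = smart_persistence(ghi_now=600.0, clearsky_now=800.0,
                             clearsky_future=750.0)
print(forecast)  # 562.5
```

Because this baseline already captures the deterministic diurnal cycle, a learned forecaster only adds value if it anticipates cloud-driven deviations from the persisted clear-sky index.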
Risk-based authentication (RBA) aims to protect users against attacks involving stolen passwords. RBA monitors features during login and requests re-authentication when feature values differ widely from those previously observed. It is recommended by various national security organizations, and users perceive it as more usable than, and as secure as, equivalent two-factor authentication. Despite that, RBA is still used by very few online services. Reasons for this include a lack of validated open resources on RBA properties, implementation, and configuration. This effectively hinders RBA research, development, and adoption.
To close this gap, we provide the first long-term RBA analysis on a real-world large-scale online service. We collected feature data of 3.3 million users and 31.3 million login attempts over more than 1 year. Based on the data, we provide (i) studies on RBA’s real-world characteristics plus its configurations and enhancements to balance usability, security, and privacy; (ii) a machine learning–based RBA parameter optimization method to support administrators finding an optimal configuration for their own use case scenario; (iii) an evaluation of the round-trip time feature’s potential to replace the IP address for enhanced user privacy; and (iv) a synthesized RBA dataset to reproduce this research and to foster future RBA research. Our results provide insights on selecting an optimized RBA configuration so that users profit from RBA after just a few logins. The open dataset enables researchers to study, test, and improve RBA for widespread deployment in the wild.
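The abstracts above do not specify how the risk score itself is computed; a common formulation in the RBA literature compares how likely the observed feature values are for the global population versus for the individual user, and flags a login as risky when the ratio is high. A minimal sketch, with made-up feature names and login histories:

```python
from collections import Counter

def risk_score(login, user_history, global_history, smoothing=1e-3):
    """Multiplicative feature-based risk score: higher values mean the
    login looks unlike the user's previous logins relative to the
    global population."""
    score = 1.0
    for feature, value in login.items():
        user_counts = Counter(h[feature] for h in user_history)
        global_counts = Counter(h[feature] for h in global_history)
        p_user = (user_counts[value] + smoothing) / (len(user_history) + smoothing)
        p_global = (global_counts[value] + smoothing) / (len(global_history) + smoothing)
        score *= p_global / p_user
    return score

# Hypothetical histories: the user logs in from Germany, mostly with Firefox.
history = [{"country": "DE", "browser": "Firefox"}] * 9 + [{"country": "DE", "browser": "Chrome"}]
population = [{"country": "DE", "browser": "Firefox"}] * 50 + [{"country": "US", "browser": "Chrome"}] * 50

typical = risk_score({"country": "DE", "browser": "Firefox"}, history, population)
unusual = risk_score({"country": "US", "browser": "Chrome"}, history, population)
assert unusual > typical  # a never-seen country raises the score sharply
```

In a deployment, the score would be compared against thresholds that trigger re-authentication; tuning those thresholds and feature weightings is exactly the configuration problem the study above addresses.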
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding the vehicle's surroundings is a basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system comprises various subsystems such as localization, detection and avoidance of (static and dynamic) obstacles, mapping, and others. To perceive the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, LiDARs, and others. These systems use deep learning techniques that transform the huge amount of sensor data into semantic information on which object detection and localization tasks are performed. For numerous driving tasks, accurate results require the location and depth information of a particular object. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data, as well as sensor fusion techniques, show significant improvements in performance. In this work, we present a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement achieved by sensor fusion techniques. We discuss various state-of-the-art methods in both cases, perform an experimental analysis, and provide future research directions.
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates and classification probabilities for each object detected. State-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze detector failures.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of both, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs extends our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge-graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, obtaining F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) on several tasks, such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and carries a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. To investigate further, this work uses a graph convolutional network (GCN) in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Since complete point clouds are scarcely available in real-world scenarios, such approaches are impractical, and this work therefore also proposes approaches to deal with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed, based on the point completion network (PCN), which takes advantage of a graph-convolution-based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network comes within 5% of the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
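The mIoU figures quoted above follow the standard mean intersection-over-union metric for part segmentation; a minimal sketch of its per-part computation, on toy label arrays invented for illustration:

```python
def mean_iou(pred, target, num_parts):
    """Mean intersection-over-union across object parts.

    Parts absent from both prediction and ground truth are skipped,
    so they neither reward nor penalize the score."""
    ious = []
    for part in range(num_parts):
        inter = sum(1 for p, t in zip(pred, target) if p == part and t == part)
        union = sum(1 for p, t in zip(pred, target) if p == part or t == part)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# 8 points of a toy object with two parts (0 = body, 1 = handle);
# one boundary point is mislabeled.
target = [0, 0, 0, 0, 1, 1, 1, 1]
pred   = [0, 0, 0, 1, 1, 1, 1, 1]
print(mean_iou(pred, target, num_parts=2))
```

Benchmark variants differ in how they average (per shape, per category, or per class), so reported numbers are only comparable within the same protocol.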
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
We benchmark the robustness of maximum-likelihood-based uncertainty estimation methods to outliers in training data for regression tasks. Outliers or noisy labels in training data result in degraded performance as well as incorrect estimation of uncertainty. We propose the use of a heavy-tailed distribution (the Laplace distribution) to improve robustness to outliers. This property is evaluated on standard regression benchmarks and on a high-dimensional regression task, monocular depth estimation, both containing outliers. In particular, heavy-tailed-distribution-based maximum likelihood provides better uncertainty estimates, better separation in uncertainty for out-of-distribution data, and better detection of adversarial attacks in the presence of outliers.
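The robustness argument can be illustrated with the negative log-likelihoods themselves: the Gaussian NLL grows quadratically with the residual while the Laplace NLL grows only linearly, so a single outlier dominates the Gaussian training loss far more. A minimal sketch with illustrative residuals (not data from the paper):

```python
import math

def gaussian_nll(residual, sigma):
    # -log N(y | mu, sigma): quadratic in the residual
    return 0.5 * math.log(2 * math.pi * sigma**2) + residual**2 / (2 * sigma**2)

def laplace_nll(residual, b):
    # -log Laplace(y | mu, b): linear in the residual
    return math.log(2 * b) + abs(residual) / b

inlier, outlier = 0.5, 10.0
ratio_gauss = gaussian_nll(outlier, 1.0) / gaussian_nll(inlier, 1.0)
ratio_laplace = laplace_nll(outlier, 1.0) / laplace_nll(inlier, 1.0)
assert ratio_laplace < ratio_gauss  # the outlier dominates the Gaussian loss far more
```

In practice a network would predict both the mean and the scale (sigma or b) per input, and the same linear-versus-quadratic tail behavior keeps outliers from distorting the learned uncertainty.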
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. Both the source code as well as the code used to implement STonKGs are made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
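The combined input sequence described above, text tokens followed by flattened KG triples, processed by a single cross-encoder, can be sketched generically. The separator convention and the example tokens below are illustrative assumptions, not the actual STonKGs tokenization:

```python
def build_multimodal_sequence(text_tokens, triples, cls="[CLS]", sep="[SEP]"):
    """Concatenate text tokens and flattened (head, relation, tail)
    triples into one input sequence, so that a single cross-encoder's
    attention can span both modalities."""
    sequence = [cls] + list(text_tokens) + [sep]
    for head, relation, tail in triples:
        sequence += [head, relation, tail]
    return sequence + [sep]

# Hypothetical text-triple pair in the style of an INDRA statement
seq = build_multimodal_sequence(
    ["TP53", "inhibits", "MDM2", "signaling"],
    [("TP53", "decreases", "MDM2")],
)
print(seq)
```

The pre-training objective then learns joint representations over such sequences, which is what lets the model outperform single-modality baselines on the downstream classification tasks.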
It is challenging to provide users with a haptic weight sensation of virtual objects in VR since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Therefore, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants for sensing change in trigger resistance and thus for discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research and the pros and cons of this novel technology.
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) that allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modeling the traces and allows the calculation of several indicators at the same time. Moreover, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can allow better sharing of trace databases between the various research institutions in the field of CEHL.
Blockchain technology has been one of the major drivers of innovation in recent years. With an underlying blockchain, the operation of distributed applications, so-called Decentralized Applications (DApps), is already technically feasible. This contribution aims to examine design options for digital consumer participation in blockchain applications. To this end, it provides an introduction to digital consumer participation and to the technical foundations and properties of blockchain technology, including DApps built on top of it. Finally, technical, ethical-organizational, legal, and other requirement areas for implementing digital consumer participation in blockchain applications are addressed.
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during, and after 21 days of 6° HDBR in 10 participants. The methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
Mebendazole Mediates Proteasomal Degradation of GLI Transcription Factors in Acute Myeloid Leukemia
(2021)
The prognosis of elderly AML patients is still poor due to chemotherapy resistance. The Hedgehog (HH) pathway is important for leukemic transformation because of aberrant activation of GLI transcription factors. Mebendazole (MBZ) is a well-tolerated anthelmintic that exhibits strong antitumor effects. Herein, we show that MBZ induced strong, dose-dependent anti-leukemic effects on AML cells, including the sensitization of AML cells to chemotherapy with cytarabine. MBZ strongly reduced intracellular protein levels of the GLI1/GLI2 transcription factors. Consequently, MBZ reduced GLI promoter activity, as observed in luciferase-based reporter assays in AML cell lines. Further analysis revealed that MBZ mediates its anti-leukemic effects by promoting the proteasomal degradation of GLI transcription factors via inhibition of HSP70/90 chaperone activity. Extensive molecular dynamics simulations were performed on the MBZ-HSP90 complex, showing a stable binding interaction at the ATP binding site. Importantly, two patients with refractory AML were treated with MBZ in an off-label setting, and MBZ effectively reduced GLI signaling activity in a modified plasma inhibitory assay, resulting in a decrease in peripheral blood blast counts in one patient. Our data prove that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.