H-BRS Bibliography
Central Archiving and Distributed Communication of Digital Image Data in Pneumoconiosis Screening
(2010)
Pneumoconiosis screening examinations require the reading of a chest X-ray according to the ILO classification of pneumoconioses. The required images are now largely acquired and communicated digitally. This creates new demands on the technology and workflow mechanisms used, in order to ensure an efficient process of examination, reading, and documentation.
YAWL (Yet Another Workflow Language) is an open source Business Process Management System, first released in 2003. YAWL grew out of a university research environment into a unique system that has been deployed worldwide, both as a laboratory environment for research in Business Process Management and as a production system in other scientific domains.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Selecting the best view from multi-camera streams raises the question of which objective metrics to consider, and existing work on view selection lacks consensus: the literature describes diverse possible metrics, and individual strategies such as information-theoretic, instructional-design, or aesthetics-motivated approaches fail to incorporate them all. In this work, we propose a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for example in 3D rendering. We adapted one such measure, viewpoint entropy, to real-world 2D images, and added a similarity penalization to obtain a more accurate entropy estimate for a view, which serves as one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case; a limitation of these scenarios is that they feature only a single trainer and no collaborative training. To incorporate instructional-design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics; to incorporate domain knowledge, we included the visibility of predetermined regions as a further metric. All of these metrics feed into a parameterized view recommendation approach for demonstration-based training, which we validated in an online study using recorded multi-camera video streams from a simulation environment.
Furthermore, the responses from the online study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with users' choices.
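The two measures named above can be sketched as follows. This is a generic formulation of viewpoint entropy and NDCG, not the paper's exact implementation; the similarity penalization and the weighting of the instructional-design metrics are omitted:

```python
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy of normalized visible-region areas in one view.

    region_areas: projected areas (e.g., in pixels) of the visible regions.
    A higher value means the view distributes attention over more regions.
    """
    total = sum(region_areas)
    probs = [a / total for a in region_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)

def ndcg(relevances):
    """Normalized discounted cumulative gain for one ranked list.

    relevances: graded relevance of items in the order the system ranked
    them; the ideal ordering is the same values sorted descending.
    """
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A ranking that already matches the user-preferred order yields an NDCG of 1.0; the reported 0.912 therefore means the recommended views were close to, but not identical with, the participants' choices.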
Risk-based authentication (RBA) is an adaptive security measure that strengthens password-based authentication against account-takeover attacks. Our study with 65 participants shows that users find RBA more usable than comparable two-factor authentication schemes and more secure than password-only authentication. We identify pitfalls and provide guidelines for putting RBA into practice.
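The adaptive idea behind RBA can be sketched with a toy feature-mismatch score. The feature set, scoring rule, and threshold below are illustrative assumptions; production deployments use far richer signals (IP reputation, device fingerprints, login-time statistics):

```python
def risk_score(login, history):
    """Toy risk score: fraction of context features that differ from
    the user's previously seen values (illustrative only)."""
    features = ("ip", "device", "country")
    mismatches = sum(1 for f in features
                     if login.get(f) not in history.get(f, set()))
    return mismatches / len(features)

def authenticate(login, history, threshold=0.34):
    """Below the threshold the password alone suffices; above it the
    system escalates, e.g., to a second factor."""
    if risk_score(login, history) < threshold:
        return "password-only"
    return "ask-second-factor"
```

The usability benefit reported in the study comes from exactly this behavior: the extra factor is requested only for logins whose context looks risky, not on every login.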
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
Improving the entry phase of a degree program supports students in a decisive phase of their university education. Implementing improvements is a change process that can only succeed if the relevant stakeholders are addressed and convinced. In the Teaching Quality Pact project described here, evaluation data is used as a means to discuss the situation of the study programs within the university. Because these discussions were based on empirical data rather than opinion, an open discussion about the measures to be implemented could be achieved. This open discussion is maintained throughout the project as the results of the measures taken are analyzed.
To achieve the goal of confidentiality, data (documents, files, e-mails, etc.) are frequently encrypted by end users before being stored or transmitted in local networks, intranets, and on the Internet. The keys used must remain available on demand to the end user, and equally to the company when the end user is unavailable. Company-internal key archives store the keys that were used, store references to the keys used, or otherwise make the keys available again to authorized parties on request. Key archives originated from the United States' Clipper initiative, which was based on the Escrowed Encryption Standard (EES), and are known there as key recovery centers.
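The archive mechanism described above can be illustrated with a toy sketch: the data key used for encryption is deposited in an archive so that authorized parties can recover the plaintext later. The hash-based stream cipher here is deliberately simplistic and not a secure construction; all names are illustrative:

```python
import hashlib
import secrets

ARCHIVE = {}  # key_id -> data key, held by the key recovery center

def keystream(key, n):
    """Toy keystream: SHA-256 of key || counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext):
    """Encrypt with a fresh data key and escrow that key in the archive."""
    key = secrets.token_bytes(32)
    key_id = hashlib.sha256(key).hexdigest()[:16]
    ARCHIVE[key_id] = key  # escrow step: the company keeps the key
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return key_id, ct

def recover(key_id, ciphertext):
    """Authorized recovery: fetch the escrowed key and decrypt."""
    key = ARCHIVE[key_id]
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
```

In practice the archive would store the key wrapped under a master key and gate `recover` behind an authorization process; the sketch only shows the data flow.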
The non-filarial and non-communicable disease podoconiosis affects around 4 million people and is characterized by severe leg lymphedema accompanied by painful intermittent acute inflammatory episodes, called acute dermatolymphangioadenitis (ADLA) attacks. Risk factors have been associated with the disease, but the mechanisms of pathophysiology remain uncertain. Lymphedema can lead to skin lesions, which can serve as entry points for bacteria that may cause ADLA attacks, leading to progression of the lymphedema. However, the skin microbiome of affected legs of podoconiosis individuals remains poorly characterized. We therefore analysed the skin microbiome of podoconiosis legs using next generation sequencing. We revealed a positive correlation between increasing lymphedema severity and non-commensal anaerobic bacteria, especially Anaerococcus provencensis, as well as a negative correlation with the presence of Corynebacterium, a constituent of normal skin flora. Disease symptoms were generally linked to higher microbial diversity and richness, which deviated from the normal composition of the skin. These findings show an association of distinct bacterial taxa with lymphedema stages, highlighting the important role of bacteria in the pathogenesis of podoconiosis, and may enable selection of better treatment regimens to manage ADLA attacks and disease progression.
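The diversity and richness statistics referred to above are standard community-ecology measures computed from per-sample taxon count tables. A minimal sketch (not the study's actual pipeline):

```python
import math

def richness(counts):
    """Observed richness: number of taxa seen at least once."""
    return sum(1 for c in counts if c > 0)

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over relative abundances."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)
```

An evenly distributed community maximizes H' for a given richness, which is why samples dominated by a few taxa (such as healthy skin dominated by Corynebacterium) score lower than the more diverse lesional samples.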
It is challenging to provide users with a haptic weight sensation of virtual objects in VR since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Therefore, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants for sensing change in trigger resistance and thus for discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research and the pros and cons of this novel technology.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques to the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information which can be derived from the different sensors, including artificial neural networks and tree-based machine learning methods. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores of up to 79 % using only the RGB bands of the aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved only marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains strong class imbalances.
We found that this dataset attribute negatively affects model performance for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
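Weighted precision, the headline metric in these experiments, averages per-class precision with class support (number of true instances per class) as weights, so frequent species dominate the score while rare species can still perform poorly. A single-label sketch of the idea (the benchmark itself is multi-label):

```python
from collections import Counter

def weighted_precision(y_true, y_pred):
    """Per-class precision averaged with weights proportional to class
    support, in the spirit of scikit-learn's average="weighted"."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, sup in support.items():
        predicted = sum(1 for p in y_pred if p == cls)
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
        precision = correct / predicted if predicted else 0.0
        score += (sup / total) * precision
    return score
```

This weighting explains why strong overall scores (79 %) can coexist with weak class-wise precision for scarce species such as Acer.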
In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology and chemistry. The steps include the analysis of system requirements in general, the analysis of school experiments and the analysis of input and output device demands. Based on the results of these steps we show a taxonomy of school experiments and provide a comparison between several currently available devices which can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.
The Virtual Memory Palace
(2006)
The intention of the Virtual Memory Palace is to help people memorize information by addressing their visual memory. The concept is based on the “Memory Palace”, an ancient Greek memorization technique in which symbols are placed in a certain way within an imaginative building in order to remember the original information whenever the mind goes through the vision of this building again. The goal of this work was to create such a Memory Palace in a virtual environment, so that it requires less creative effort from the contemporary learner than was necessary in ancient Greece. The Virtual Memory Palace offers the possibility to freely explore a virtual 3D architectural model and to place icons at various locations within this model. Specific behaviors were assigned to these locations to make them more memorable. To test the benefit of this concept, an experiment with 15 subjects was conducted. The results show a higher remembrance rate for items learned in the Virtual Memory Palace compared to a word list. The observations made during the test showed that most of the subjects enjoyed the memorization environment and were astonished at how well the Virtual Memory Palace worked for them.
Maintaining orientation in an environment with non-Earth gravity (≠ 1 g) is critical for an astronaut's operational performance. Such environments present a number of complexities for balance and motion. For example, when an astronaut tilts while ascending or descending an inclined plane on the moon, the gravity vector will be tilted correctly, but its magnitude will differ from that on Earth. If this results in a mis-perceived tilt, it may lead to postural and perceptual errors, such as mis-perceiving the orientation of oneself or the ground plane, and corresponding errors in task judgment.
The relative contributions of radial and laminar optic flow to the perception of linear self-motion
(2012)
When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40° (h) × 24° (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.
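The leaky spatial integrator model cited here (Lappe, Jenkin, & Harris, 2007) can be illustrated with a short sketch. This is a hedged simplification: one common formulation drives perceived distance p with a constant sensory gain and a leak proportional to p, i.e. dp/dx = gain − leak · p. The parameter names and the closed-form inverse below are for illustration only, not values or code from the study.

```python
import math

def perceived_distance(x, gain, leak):
    """Closed-form output of a leaky integrator driven at a constant rate:
    dp/dx = gain - leak * p, with p(0) = 0."""
    return (gain / leak) * (1.0 - math.exp(-leak * x))

def distance_at_button_press(target, gain, leak):
    """Distance actually traveled when perceived distance first reaches the
    remembered target distance (inverse of the model above).
    Returns None if the target lies beyond the integrator's saturation level."""
    if target >= gain / leak:
        return None  # perceived distance saturates below the target
    return -math.log(1.0 - leak * target / gain) / leak
```

With gain > 1, the integrator reaches the remembered target distance before the true distance has been covered, reproducing the "arriving too early" pattern described in the abstract.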
The reciprocal translocation t(12;21)(p13;q22), the most common structural genomic alteration in B-cell precursor acute lymphoblastic leukaemia in children, results in the chimeric transcription factor TEL-AML1 (ETV6-RUNX1). We identified directly and indirectly regulated target genes utilizing an inducible TEL-AML1 system derived from the murine pro B-cell line BA/F3 and a monoclonal antibody directed against TEL-AML1. By integrating promoter binding identified with chromatin immunoprecipitation (ChIP)-on-chip, gene expression and protein output through microarray technology and stable labelling of amino acids in cell culture, we identified 217 directly and 118 indirectly regulated targets of the TEL-AML1 fusion protein. Directly, but not indirectly, regulated promoters were enriched in AML1-binding sites. The majority of promoter regions were specific for the fusion protein and not bound by native AML1 or TEL. Comparison with gene expression profiles from TEL-AML1-positive patients identified 56 concordantly misregulated genes with negative effects on proliferation and cellular transport mechanisms and positive effects on cellular migration and stress responses, including immunological responses. In summary, this work for the first time gives a comprehensive insight into how TEL-AML1 expression may directly and indirectly alter cells so that they become prone to leukemic transformation.
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When lowering the precision of the vestibular cue by, for example, lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
Sparse matrix-vector multiplication is an important operation on sparse matrices. It is the most time-consuming operation in iterative solvers, and therefore its efficient execution is of great importance for many applications. Numerous storage formats that store sparse matrices efficiently have already been established. Often, these storage formats exploit the sparsity pattern of a matrix in an appropriate manner. For one class of sparse matrices the nonzero values occur in small dense blocks, and appropriate block storage formats are well suited for such patterns. On the other hand, these formats often perform poorly on general matrices without an explicit, regular block structure. In this paper, the newly developed sparse matrix format DynB is introduced. The aim is to efficiently use several optimization approaches and vectorization with current processors, even for matrices without an explicit block structure of nonzero elements. The DynB matrix format uses 2D rectangular blocks of variable size, allowing fill-in of explicit zero values per block up to a user-controllable threshold. We give a simple and fast heuristic to detect such 2D blocks in a sparse matrix. The performance of sparse matrix-vector multiplication for a selection of different block formats and matrices with different sparsity structures is compared. Results show that the benefit of blocking formats depends – as is to be expected – on the structure of the matrix, and that variable-sized block formats like DynB can have advantages over fixed-size formats and deliver good performance even for general sparse matrices.
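As a rough illustration of SpMV over variable-sized 2D blocks, the sketch below stores each block as a tuple (row0, col0, h, w, values). This layout and the scalar loop nest are hypothetical stand-ins for illustration; the actual DynB data structure and its vectorized kernels are described in the paper.

```python
def spmv_dynb(n_rows, blocks, x):
    """y = A @ x for a matrix stored as variable-sized dense blocks.
    Each block is (row0, col0, h, w, values), where `values` holds the
    h*w block entries in row-major order; explicit zero fill-in is
    simply stored and multiplied through."""
    y = [0.0] * n_rows
    for row0, col0, h, w, values in blocks:
        for i in range(h):
            acc = 0.0
            for j in range(w):
                acc += values[i * w + j] * x[col0 + j]
            y[row0 + i] += acc
    return y
```

Because each block is dense, the inner loop runs over contiguous values and contiguous entries of x, which is what makes block formats amenable to vectorization.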
Software testing in a web services environment faces different challenges compared with testing in traditional software environments. Regression testing activities are triggered by software changes or evolution. In web services, evolution is not a choice for service clients: they always have to use the current, updated version of the software. In addition, test execution or invocation is expensive in web services, and hence providing algorithms to optimize test case generation and execution is vital. In this environment, we proposed several approaches for test case selection in web service regression testing. Testing in this new environment should evolve to become part of the service contract. Service providers should provide data or usage sessions that can help service clients reduce testing expenses by optimizing the selected and executed test cases.
This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities, (B) the camera and vision system with a focus on real-time feature extraction, e.g. for tracking the UAV, and (C) the architecture and hardware of the UAV.
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting - in terms of visualization and interaction - for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details. Here, the users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space. Therefore, the user has to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task in order to approach it in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also detected and described multiple user roles, as well as a new coupling style that fits neither of the established categories, loosely or tightly coupled. Moreover, we observed a territory type that has not been mentioned previously in research. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics. We found that users perceived some regions as less comfortable for long periods of work.
The application of Raman and infrared (IR) microspectroscopy leads to hyperspectral data containing complementary information on the molecular composition of a sample. The classification of hyperspectral data from the individual spectroscopic approaches is already state-of-the-art in several fields of research. However, more complex structured samples and difficult measuring conditions might negatively affect the accuracy of classification results and could make a successful classification of the sample components challenging. This contribution presents a comprehensive comparison in supervised pixel classification of hyperspectral microscopic images, showing that a combined approach of Raman and IR microspectroscopy has a high potential to improve classification rates through a meaningful extension of the feature space. It shows that the complementary information in spatially co-registered hyperspectral images of polymer samples can be accessed using different feature extraction methods and, once fused at the feature level, is in general more accurately classifiable in a pattern recognition task than the data derived from the individual spectroscopic approaches.
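Feature-level fusion of co-registered images, as described here, amounts to extending each pixel's feature vector with the features from the second modality before classification. A minimal sketch (plain per-pixel concatenation; the paper's actual feature extraction methods are more sophisticated):

```python
def fuse_features(raman_pixels, ir_pixels):
    """Feature-level fusion of spatially co-registered hyperspectral data:
    each pixel's Raman feature vector is concatenated with the IR feature
    vector of the same pixel, yielding an extended feature space on which
    a single classifier can be trained."""
    assert len(raman_pixels) == len(ir_pixels), "images must be co-registered"
    return [list(r) + list(i) for r, i in zip(raman_pixels, ir_pixels)]
```

The fused vectors can then be fed to any standard supervised classifier in place of the single-modality features.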
Structure-activity relationships of thiostrepton derivatives: implications for rational drug design
(2014)
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
SpMV Runtime Improvements with Program Optimization Techniques on Different Abstraction Levels
(2016)
The multiplication of a sparse matrix with a dense vector is a performance-critical computational kernel in many applications, especially in the natural and engineering sciences. To speed up this operation, many optimization techniques have been developed in the past, mainly focusing on the data layout for the sparse matrix. Strongly related to the data layout is the program code for the multiplication. But even for a fixed data layout with an accommodated kernel, there are several alternatives for program optimization. This paper discusses a spectrum of program optimization techniques on different abstraction layers for six different sparse matrix data formats and kernels. At one end of the spectrum, compiler options can be used that hide from the programmer all optimizations done internally by the compiler. At the other end of the spectrum, a multiplication kernel can be programmed that uses highly sophisticated intrinsics at the assembly level, which demands a programmer with a deep understanding of processor architectures. These special instructions can be used to efficiently utilize hardware features of processors, like vector units, that have the potential to speed up sparse matrix computations. The paper compares the programming effort and required knowledge level for certain program optimizations in relation to the gained runtime improvements.
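For orientation, the sketch below shows an unoptimized baseline kernel for the common CSR format; the inner loop is the hot spot that the spectrum of optimizations described above (from compiler options down to vector intrinsics) targets. CSR is only one of the six formats discussed, and this scalar version is illustrative, not the paper's code.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Baseline sparse matrix-vector product for the CSR format:
    `values` holds the nonzeros row by row, `col_idx` their column
    indices, and `row_ptr[i]:row_ptr[i+1]` delimits row i."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        # Hot inner loop: indirect access x[col_idx[k]] limits
        # straightforward vectorization and is the usual optimization target.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y
```
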
The combination of Software-Defined Networking (SDN) and Wireless Mesh Networks (WMNs) is challenging due to the different natures of the two concepts. SDN describes networks with homogeneous, static and centrally controlled topologies. In contrast, a WMN is characterized by dynamic and distributed network control, and adds new challenges with respect to time-critical operation. However, SDN and WMN are both associated with decreasing the operational costs of communication networks, which is especially beneficial for internet provisioning in rural areas. This work surveys the current status of Software-Defined Wireless Mesh Networking. Besides a general overview of the domain of wireless SDN, this work focuses on several identified aspects: representing and controlling wireless interfaces, control-plane connection and topology discovery, modulation and coding, routing and load-balancing, and client handling. A complete overview of surveyed solutions, open issues and new research directions is provided with regard to each aspect.
The elucidation of conformations and relative potential energies (rPEs) of small molecules has a long history across a diverse range of fields. Periodically, it is helpful to revisit what conformations have been investigated and to provide a consistent theoretical framework for which clear comparisons can be made. In this paper, we compute the minima, first- and second-order saddle points, and torsion-coupled surfaces for methanol, ethanol, propan-2-ol, and propanol using consistent high-level MP2 and CCSD(T) methods. While for certain molecules more rigorous methods were employed, the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pV5Z theory level was used throughout to provide relative energies of all minima and first-order saddle points. The rPE surfaces were uniformly computed at the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pVTZ level. To the best of our knowledge, this represents the most extensive study for alcohols of this kind, revealing some new aspects. Especially for propanol, we report several new conformations that were previously not investigated. Moreover, two metrics are included in our analysis that quantify how the selected surfaces are similar to one another and hence improve our understanding of the relationship between these alcohols.
Simultaneous detection of cyanide and heavy metals for environmental analysis by means of µISEs
(2010)
Bond graph software can simulate bond graph models without the user needing to manually derive equations. This offers the power to model larger and more complex systems than in the past. Multibond graphs (those with vector bonds) offer a compact model which further eases handling multibody systems. Although multibond graphs can be simulated successfully, the use of vector bonds can present difficulties. In addition, most qualitative, bond graph–based exploitation relies on the use of scalar bonds. This article discusses the main methods for simulating bond graphs of multibody systems, using a graphical software platform. The transformation between models with vector and scalar bonds is presented. The methods are then compared with respect to both time and accuracy, through simulation of two benchmark models. This article is a tutorial on the existing methods for simulating three-dimensional rigid and holonomic multibody systems using bond graphs and discusses the difficulties encountered. It then proposes and adapts methods for simulating this type of system directly from its bond graph within a software package. The value of this study is in giving practical guidance to modellers, so that they can implement the adapted method in software.
Treatment options for acute myeloid leukemia (AML) remain extremely limited and associated with significant toxicity. Nicotinamide phosphoribosyltransferase (NAMPT) is involved in the generation of NAD+ and a potential therapeutic target in AML. We evaluated the effect of KPT-9274, a p21-activated kinase 4/NAMPT inhibitor that possesses a unique NAMPT-binding profile based on in silico modeling compared with earlier compounds pursued against this target. KPT-9274 elicited loss of mitochondrial respiration and glycolysis and induced apoptosis in AML subtypes independent of mutations and genomic abnormalities. These actions occurred mainly through the depletion of NAD+, whereas genetic knockdown of p21-activated kinase 4 did not induce cytotoxicity in AML cell lines or influence the cytotoxic effect of KPT-9274. KPT-9274 exposure reduced colony formation, increased blast differentiation, and diminished the frequency of leukemia-initiating cells from primary AML samples; KPT-9274 was minimally cytotoxic toward normal hematopoietic or immune cells. In addition, KPT-9274 improved overall survival in vivo in 2 different mouse models of AML and reduced tumor development in a patient-derived xenograft model of AML. Overall, KPT-9274 exhibited broad preclinical activity across a variety of AML subtypes and warrants further investigation as a potential therapeutic agent for AML.
With the rising interest in vehicular communication systems many proposals for secure vehicle-to-vehicle communication were made in recent years. Also, several standardization activities concerning the security and privacy measures in these communication systems were initiated in Europe and in the US. Here, we discuss some limitations for secure vehicle-to-infrastructure communication in the existing standards of the European Telecommunications Standards Institute. Next, a vulnerability analysis for roadside stations on one side and security and privacy requirements for roadside stations on the other side are given. Afterwards, a proposal for a multi-domain public key architecture for intelligent transport systems, which considers the necessities of road infrastructure authorities and vehicle manufacturers, is introduced. The domains of the public key infrastructure are cryptographically linked based on local trust lists. In addition, a crypto agility concept is suggested, which takes adaptation of key length and cryptographic algorithms during PKI operation into account.
Robust Identification and Segmentation of the Outer Skin Layers in Volumetric Fingerprint Data
(2022)
Despite the long history of fingerprint biometrics and its use to authenticate individuals, there are still some unsolved challenges with fingerprint acquisition and presentation attack detection (PAD). Currently available commercial fingerprint capture devices struggle with non-ideal skin conditions, including soft skin in infants. They are also susceptible to presentation attacks, which limits their applicability in unsupervised scenarios such as border control. Optical coherence tomography (OCT) could be a promising solution to these problems. In this work, we propose a digital signal processing chain for segmenting two complementary fingerprints from the same OCT fingertip scan: One fingerprint is captured as usual from the epidermis (“outer fingerprint”), whereas the other is taken from inside the skin, at the junction between the epidermis and the underlying dermis (“inner fingerprint”). The resulting 3D fingerprints are then converted to a conventional 2D grayscale representation from which minutiae points can be extracted using existing methods. Our approach is device-independent and has been proven to work with two different time domain OCT scanners. Using efficient GPGPU computing, it took less than a second to process an entire gigabyte of OCT data. To validate the results, we captured OCT fingerprints of 130 individual fingers and compared them with conventional 2D fingerprints of the same fingers. We found that both the outer and inner OCT fingerprints were backward compatible with conventional 2D fingerprints, with the inner fingerprint generally being less damaged and, therefore, more reliable.
RFID security
(2004)
In order to achieve the highest possible performance, the ray traversal and intersection routines at the core of every high-performance ray tracer are usually hand-coded, heavily optimized, and implemented separately for each hardware platform—even though they share most of their algorithmic core. The results are implementations that heavily mix algorithmic aspects with hardware and implementation details, making the code non-portable and difficult to change and maintain.
In this paper, we present a new approach that offers the ability to define in a functional language a set of conceptual, high-level language abstractions that are optimized away by a special compiler in order to maximize performance. Using this abstraction mechanism we separate a generic ray traversal and intersection algorithm from its low-level aspects that are specific to the target hardware. We demonstrate that our code is not only significantly more flexible, simpler to write, and more concise but also that the compiled results perform as well as state-of-the-art implementations on any of the tested CPU and GPU platforms.
Risk-based authentication (RBA) aims to protect users against attacks involving stolen passwords. RBA monitors features during login and requests re-authentication when feature values differ widely from those previously observed. It is recommended by various national security organizations, and users perceive it as more usable than, and equally secure to, equivalent two-factor authentication. Despite that, RBA is still used by very few online services. Reasons for this include a lack of validated open resources on RBA properties, implementation, and configuration. This effectively hinders RBA research, development, and adoption progress.
To close this gap, we provide the first long-term RBA analysis on a real-world large-scale online service. We collected feature data of 3.3 million users and 31.3 million login attempts over more than 1 year. Based on the data, we provide (i) studies on RBA’s real-world characteristics plus its configurations and enhancements to balance usability, security, and privacy; (ii) a machine learning–based RBA parameter optimization method to support administrators finding an optimal configuration for their own use case scenario; (iii) an evaluation of the round-trip time feature’s potential to replace the IP address for enhanced user privacy; and (iv) a synthesized RBA dataset to reproduce this research and to foster future RBA research. Our results provide insights on selecting an optimized RBA configuration so that users profit from RBA after just a few logins. The open dataset enables researchers to study, test, and improve RBA for widespread deployment in the wild.
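To make the RBA mechanism concrete, here is a toy risk score: a login is scored by how rare its feature values are in the user's own login history, and re-authentication is requested above a threshold. The features, smoothing, and threshold are invented for illustration and are not the scoring model or parameters from this study.

```python
def rba_risk_score(login, history):
    """Toy risk score: product over features of the inverse relative
    frequency of the observed value in the user's login history.
    Unseen values get an add-one-smoothed count; a higher score means
    a more unusual login."""
    score = 1.0
    n = len(history)
    for feature, value in login.items():
        seen = sum(1 for h in history if h.get(feature) == value)
        p = (seen + 1) / (n + 1)  # add-one smoothing for unseen values
        score *= 1.0 / p
    return score

def requires_reauth(login, history, threshold=10.0):
    """Request re-authentication when the login looks too unusual."""
    return rba_risk_score(login, history) > threshold
```

A real deployment would use calibrated feature weights and far richer features (e.g. IP geolocation, user agent, round-trip time, as discussed above).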
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance, as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m−2, respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time, while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces forecast performance.
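The skill score against smart persistence reported here is conventionally defined as SS = 1 − RMSE_model / RMSE_reference, so the reported 18.6 % corresponds to SS = 0.186. A small sketch assuming that standard definition, which the abstract itself does not spell out:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(model_pred, reference_pred, obs):
    """Forecast skill relative to a reference forecast (e.g. smart
    persistence): SS = 1 - RMSE_model / RMSE_reference.
    Positive values mean the model beats the reference."""
    return 1.0 - rmse(model_pred, obs) / rmse(reference_pred, obs)
```
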
Microbiome analyses are essential for understanding microorganism composition and diversity, but interpretation is often challenging due to biological and technical variables. DNA extraction is a critical step that can significantly bias results, particularly in samples containing a high abundance of difficult-to-lyse microorganisms. Taking into consideration the distinctive microenvironments observed in different bodily locations, our study sought to assess the extent of bias introduced by suboptimal bead-beating during DNA extraction across diverse clinical sample types. The question was whether complex targeted extraction methods are always necessary for reliable taxonomic abundance estimation through amplicon sequencing, or if simpler alternatives are effective for some sample types. Hence, for four different clinical sample types (stool, cervical swab, skin swab, and hospital surface swab samples), we compared the results achieved with the targeted manual extraction protocols routinely used in our research lab for each sample type to those achieved with automated protocols not specifically designed for that purpose. Unsurprisingly, we found that for the stool samples, manual extraction protocols with vigorous bead-beating were necessary in order to avoid erroneous taxa proportions on all investigated taxonomic levels and, in particular, false under- or overrepresentation of important genera such as Blautia, Faecalibacterium, and Parabacteroides. Interestingly, however, we found that the skin and cervical swab samples yielded similar results with all tested protocols. Our results suggest that the level of practical automation largely depends on the expected microenvironment, with skin and cervical swabs being much easier to process than stool samples. Prudent consideration is necessary when extending the conclusions of this study to applications beyond rough estimations of taxonomic abundance.
In this paper, a set of micro-benchmarks is proposed to determine basic performance parameters of single-node mainstream hardware architectures for High Performance Computing. Performance parameters of recent processors, including those of accelerators, are determined. The investigated systems are Intel server processor architectures and the two accelerator lines Intel Xeon Phi and Nvidia graphics processors. Additionally, the performance impact of thread mapping on multiprocessors and Intel Xeon Phi is shown. The results show similarities between all architectures for some parameters, but significant differences for others.
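To illustrate the micro-benchmark idea in general terms (this is a hypothetical sketch, not the paper's benchmark suite, which targets compiled code and thread pinning): a minimal best-of-N timing loop estimating memory-copy throughput.

```python
import time

def copy_bandwidth(n_bytes=16 * 1024 * 1024, repeats=5):
    """Estimate memory-copy bandwidth in GB/s by timing buffer copies.
    Micro-benchmarks typically report the best of several repeats to
    suppress OS noise; a real suite would also control thread mapping."""
    src = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)          # forces a full copy of n_bytes
        best = min(best, time.perf_counter() - t0)
    assert len(dst) == n_bytes    # guard against the copy being optimized away
    return n_bytes / best / 1e9   # bytes / seconds -> GB/s

print(f"copy bandwidth ≈ {copy_bandwidth():.2f} GB/s")
```

Results from such a loop vary with buffer size relative to cache sizes, which is exactly the kind of basic performance parameter the benchmarks probe.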
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments pose significant unsolved technical challenges, owing to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research directions targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
Survival of patients with pediatric acute lymphoblastic leukemia (ALL) after allogeneic hematopoietic stem cell transplantation (allo-SCT) is mainly compromised by leukemia relapse, which carries a dismal prognosis. As novel individualized therapeutic approaches are urgently needed, we performed whole-exome sequencing of leukemic blasts of 10 children with post–allo-SCT relapses with the aim of thoroughly characterizing the mutational landscape and identifying druggable mutations. We found that post–allo-SCT ALL relapses display highly diverse and mostly patient-individual genetic lesions. Moreover, mutational cluster analysis showed substantial clonal dynamics during leukemia progression from initial diagnosis to relapse after allo-SCT. Only very few alterations stayed constant over time. This dynamic clonality was exemplified by the detection of thiopurine resistance-mediating mutations in the nucleotidase NT5C2 in 3 patients' first relapses, which disappeared in the post–allo-SCT relapses on relief of the selective pressure of maintenance chemotherapy. Moreover, we identified TP53 mutations in 4 of 10 patients after allo-SCT, reflecting acquired chemoresistance associated with the selective pressure of prior antineoplastic treatment. Finally, in 9 of 10 children's post–allo-SCT relapses, we found alterations in genes for which targeted therapies with novel agents are readily available. We could show efficient targeting of leukemic blasts by APR-246 in 2 patients carrying TP53 mutations. Our findings shed light on the genetic basis of post–allo-SCT relapse and may pave the way for unraveling novel therapeutic strategies in this challenging situation.
Current computer architectures are multi-threaded and make use of multiple CPU cores. Most garbage collection policies for the Java Virtual Machine include a stop-the-world phase, during which all threads are suspended. A considerable portion of the execution time of Java programs is spent in these stop-the-world garbage collections. To improve this behavior, thread-local allocation and garbage collection, which affect only single threads, have been proposed. Unfortunately, only objects that are not accessible by other threads ("do not escape") are eligible for this kind of allocation. It is therefore necessary to reliably predict whether objects escape. The work presented in this paper analyzes the escaping of objects based on the line of code (program counter, PC) at which the object was allocated. The results show that on average 60–80% of objects do not escape and can therefore be allocated locally.
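The allocation-site idea can be sketched as a frequency table keyed by program counter: sites whose objects rarely escape become candidates for thread-local allocation. This is a hypothetical illustration of the prediction scheme only (class and method names are my own, not the authors' JVM implementation):

```python
from collections import defaultdict

class EscapePredictor:
    """Predict whether objects escape their allocating thread, keyed by
    allocation site (program counter). Sites whose observed escape ratio
    stays below a threshold are candidates for thread-local allocation."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.counts = defaultdict(lambda: [0, 0])  # pc -> [allocations, escapes]

    def record(self, pc, escaped):
        """Profile one allocation at site `pc` and whether it escaped."""
        stats = self.counts[pc]
        stats[0] += 1
        stats[1] += int(escaped)

    def allocate_locally(self, pc):
        """True if the site's escape ratio is low enough for local allocation."""
        allocs, escapes = self.counts[pc]
        if allocs == 0:
            return False  # no profile yet: be conservative, use the shared heap
        return escapes / allocs <= self.threshold

p = EscapePredictor()
for _ in range(95):
    p.record(0x42, escaped=False)
for _ in range(5):
    p.record(0x42, escaped=True)
print(p.allocate_locally(0x42))  # 5% escape rate at this site → True
```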
This paper proposes a process- and service-oriented framework model that provides structural and conceptual orientation for the field of electronic payment. The framework allows a holistic view that goes beyond the characteristics of an individual payment system. The system-specific view of electronic payment is generalized into a process-oriented phase model, which makes it possible to systematically compile and describe the supporting services for electronic payment from the customer's and the merchant's perspectives. The organizational implementation of the service processes leads to the role of the payment service provider, which acts as an intermediary between providers and users of electronic payment systems.
Open Source ERP Systems
(2012)
Free and open source software can reduce IT costs considerably. Because of their high degree of penetration in enterprises and the associated cost block, this applies in particular to free and open source (FOS) ERP systems. Although the adoption and acceptance of FOS ERP systems have grown strongly in recent years, improved market transparency could unlock further potential. Existing market overviews of FOS ERP systems, however, are not very comprehensive. Against this background, a market survey with detailed information on the various FOS ERP systems was compiled.
ERP systems are used throughout the whole enterprise and are therefore responsible for a high percentage of IT expenses. The use of free and open source ERP systems (FOS ERP systems) can help to reduce these IT costs. Though the acceptance of FOS ERP systems has increased enormously in recent years, even more enterprises would use FOS ERP systems to support their order processing if the FOS ERP market were more transparent. Existing market surveys are not very comprehensive. Therefore, a detailed market guide was developed.
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) that allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modeling traces and allows several indicators to be calculated at the same time. Also, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators that have already been calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can enable better sharing of trace databases between the various research institutions in the field of CEHL.
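The key design point, calculated indicators that are themselves stored back as traces via inheritance, can be sketched as a small class hierarchy. This is a hypothetical Python illustration of the idea, not the paper's UML model (all class names are my own):

```python
class Trace:
    """Base class: any recorded learning event in the trace database."""
    def __init__(self, learner, value):
        self.learner = learner
        self.value = value

class PageView(Trace):
    """A raw collected trace: the learner viewed a resource."""

class Indicator(Trace):
    """A calculated indicator; inheriting from Trace means indicators can
    be stored in the same database and reused as input for further ones."""

class ActivityRate(Indicator):
    @classmethod
    def compute(cls, traces, learner):
        # count the learner's page views among all stored traces
        views = [t for t in traces
                 if isinstance(t, PageView) and t.learner == learner]
        return cls(learner, len(views))

db = [PageView("alice", "ch1"), PageView("alice", "ch2"), PageView("bob", "ch1")]
rate = ActivityRate.compute(db, "alice")
db.append(rate)        # the indicator is stored back as a trace for reuse
print(rate.value)      # → 2
```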
Novel Automated Three-Dimensional Genome Scanning Based on the Nuclear Architecture of Telomeres
(2011)
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity, as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the most important of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" and leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT), which yields the perceptual upright (PU), under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land, but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences, compared to long-duration head-down bed rest.