006 Spezielle Computerverfahren (Special Computer Methods)
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often tied to perception mechanisms in our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but their applicability and feasibility can be limited by cost and operating effort, as well as by hygienic considerations such as those raised by Covid-19. Informed by the results of a prior study that used low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception that supports 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Users could rather easily judge the different cues at a reasonably high granularity, which may be sufficient to support common navigation tasks in a 3DUI.
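As a rough illustration of the cue design described in this abstract, the sketch below maps object distance to a continuous vibrotactor drive level and impact force to a bass-shaker pulse. All function names, ranges, and thresholds are hypothetical assumptions; the study itself reports perceptual results, not an implementation.

```python
# Hypothetical sketch of the cue mapping: continuous vibrotactor amplitude
# encodes proximity, a bass-shaker pulse encodes collision force. Names and
# ranges are illustrative assumptions, not taken from the study.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackCue:
    channel: str        # "vibrotactor" (proximity) or "bass_shaker" (collision)
    amplitude: float    # normalized drive level in [0, 1]
    azimuth_deg: float  # direction of the object/impact, 0 = straight ahead

def proximity_cue(distance_m: float, azimuth_deg: float,
                  max_range_m: float = 3.0) -> Optional[FeedbackCue]:
    """Continuous cue whose amplitude grows as an object approaches."""
    if distance_m >= max_range_m:
        return None  # beyond the monitored peripersonal/extrapersonal range
    return FeedbackCue("vibrotactor", 1.0 - distance_m / max_range_m, azimuth_deg)

def collision_cue(impact_force_n: float, azimuth_deg: float,
                  max_force_n: float = 50.0) -> FeedbackCue:
    """Bass-shaker pulse whose amplitude encodes the force of impact."""
    return FeedbackCue("bass_shaker", min(impact_force_n / max_force_n, 1.0),
                       azimuth_deg)

# An obstacle 0.9 m away, 30 degrees to the right:
print(proximity_cue(0.9, 30.0))  # amplitude = 0.7
```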
We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations on GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space at high detail; in particular, we focus on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft consisting of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
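The abstract does not spell out the streaming strategy; the sketch below only illustrates the general double-buffering idea of loading time step t+1 on a worker thread while time step t is being rendered. The file naming scheme and the upload/draw placeholders are assumptions, not the paper's API.

```python
# Double-buffered streaming sketch: prefetch the next time step while the
# current one renders. Paths and GPU calls are hypothetical stubs.
import threading

NUM_TIMESTEPS = 423  # number of time steps in the data set described above

def load_timestep(t: int) -> bytes:
    """Read one scalar field for time step t (hypothetical file layout)."""
    with open(f"scalar_field_{t:04d}.bin", "rb") as f:
        return f.read()

def upload_to_gpu(data: bytes) -> None:
    """Placeholder for copying the time step into GPU memory."""

def draw_frame() -> None:
    """Placeholder for rendering the fixed AMR hierarchy with current data."""

def render_loop() -> None:
    staging: dict = {}
    current = load_timestep(0)
    for t in range(NUM_TIMESTEPS):
        prefetch = threading.Thread(
            target=lambda n: staging.update(data=load_timestep(n)),
            args=((t + 1) % NUM_TIMESTEPS,))
        prefetch.start()   # load t+1 while we render t
        upload_to_gpu(current)
        draw_frame()
        prefetch.join()    # next step must be resident before swapping
        current = staging["data"]
```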
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. First, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of Machine Perception, Machine Learning and Modeling, Planning and Optimization, as well as Interaction and Intervention, that hold potential for improving training and competition. Second, we assess the present status of AI use in elite sports. To this end, in addition to a further literature review, we interviewed leading sports scientists who are closely connected to the main national service institutes for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing; however, projects in the field of modeling & planning have become increasingly popular in recent years. Based on these two perspectives, we extract deficits, issues, and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by practitioners, and explainability of AI results.
Collaborative industrial robots are becoming ever more cost-efficient for manufacturing companies. While these systems can be a great help to human workers, they also pose a serious health risk if the strictly required safety measures are implemented inadequately. Conventional safety devices such as fences or light curtains offer good protection, but such static safeguards are problematic in new, highly dynamic work scenarios.
In the research project BeyondSPAI, a functional prototype of a multi-sensor system for safeguarding such dynamic work scenarios was designed, implemented, and field-tested. The core of the system is a robust optical material classification that uses an intelligent InGaAs camera system to distinguish skin from other typical workpiece surfaces (e.g., wood, metals, or plastics). This unique capability is used to reliably detect human workers, so that a conventional robot can subsequently operate as a person-aware cobot.
The system is modular and can easily be extended with additional sensors of various kinds. It can be adapted to different brands of industrial robots and integrated quickly into existing robot systems. The four safety outputs provided by the system can be used, depending on which monitoring zone has been breached, to either issue a warning, slow the robot's motion to a safe speed, or bring the robot to a safe stop. As soon as all zones are again identified as "clearly free of persons", the robot can accelerate again, resume its original motion, and continue working.
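To make the described zone behaviour concrete, here is a minimal sketch of the logic: the most restrictive action among the breached monitoring zones wins, and normal operation resumes once all zones are clearly free of persons. Zone names and the action set are illustrative assumptions; the actual system exposes four dedicated safety outputs.

```python
# Illustrative zone logic: inner zones trigger more restrictive actions.
from enum import Enum

class Action(Enum):
    RUN = "run at programmed speed"
    WARN = "issue a warning"
    SLOW = "reduce to a safe speed"
    STOP = "safe stop"

# Ordered from outermost to innermost monitoring zone (hypothetical names).
ZONES = [("warning_zone", Action.WARN),
         ("slow_zone", Action.SLOW),
         ("stop_zone", Action.STOP)]

def safety_action(breached: set) -> Action:
    """Return the most restrictive action for the currently breached zones."""
    action = Action.RUN
    for zone, zone_action in ZONES:
        if zone in breached:
            action = zone_action  # later (inner) zones override outer ones
    return action

print(safety_action({"warning_zone", "slow_zone"}))  # Action.SLOW
print(safety_action(set()))  # Action.RUN -> resume original motion
```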
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit either unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, combining the two can leverage the advantages of both, ultimately improving representations of biology. Multimodal transformers can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs extends our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge-graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, obtaining F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) on several tasks, such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Vection underwater
(2022)
Contextual information is widely considered for NLP and knowledge discovery in the life sciences, since it strongly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store it for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multi-step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery, and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data, enriched with text mining data and domain-specific language data using the Biological Expression Language; here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. We demonstrate how our data model supports the understanding and interpretation of biomedical data, present several real-world use cases that utilize our massive, generated knowledge graph, and show a working example in the context of biologically relevant information using SCAIView.
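The abstract does not reproduce its 27 graph queries; the hedged sketch below only shows what a context-restricted query against a labeled property graph could look like using the Neo4j Python driver. The node labels, relationship types, and property names are invented for illustration; the actual schema of the PubMed/SCAIView graph is defined in the paper.

```python
# Sketch of a context-aware query against a labeled property graph.
# Schema elements (Entity, COOCCURS_WITH, context, document_id) are
# hypothetical placeholders, not the paper's actual data model.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# Find documents in which two entities co-occur, restricted by a context
# property on the relationship (e.g., a disease context from text mining).
QUERY = """
MATCH (a:Entity {name: $gene})-[r:COOCCURS_WITH]->(b:Entity {name: $drug})
WHERE r.context = $context
RETURN a.name, b.name, r.document_id
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(QUERY, gene="APP", drug="Donepezil",
                              context="Alzheimer's disease"):
        print(record["a.name"], record["b.name"], record["r.document_id"])

driver.close()
```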
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA), consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on one of the modalities each (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both kinds of baseline, especially on the tasks that are more challenging with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model, as well as the model architecture, can be adapted to various other transfer learning applications.
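As a schematic illustration of the combined-input idea (not the actual STonKGs architecture, which is specified in the paper and its released code), the toy encoder below concatenates text tokens and the tokens of a KG triple into one sequence and encodes them in a shared embedding space. All token ids, sizes, and layer counts are arbitrary.

```python
# Toy illustration of a combined text + KG-triple input sequence encoded
# in one shared embedding space. Not the real STonKGs model.
import torch
import torch.nn as nn

VOCAB_SIZE, DIM = 1000, 64
SEP = 0  # hypothetical separator token id

embed = nn.Embedding(VOCAB_SIZE, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)

# Toy ids: a tokenized evidence sentence and a (head, relation, tail) triple.
text_ids = torch.tensor([[12, 45, 7, 88, 3]])
triple_ids = torch.tensor([[101, 55, 102]])

# One combined sequence, separated by SEP, as in a text-triple pair.
combined = torch.cat([text_ids, torch.tensor([[SEP]]), triple_ids], dim=1)

hidden = encoder(embed(combined))  # joint representation, shape (1, 9, 64)
pooled = hidden[:, 0]              # e.g., fed to a task-specific classifier
print(pooled.shape)                # torch.Size([1, 64])
```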
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.