006 Special computer methods
Departments, institutes and facilities
- Fachbereich Informatik (7)
- Fachbereich Wirtschaftswissenschaften (3)
- Institut für Verbraucherinformatik (IVI) (3)
- Fachbereich Ingenieurwissenschaften und Kommunikation (1)
- Institut für Sicherheitsforschung (ISF) (1)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (1)
- Institute of Visual Computing (IVC) (1)
Document Type
- Article (6)
- Conference Object (4)
- Part of a Book (1)
- Contribution to a Periodical (1)
- Preprint (1)
- Report (1)
Year of publication
- 2021 (14)
Keywords
- Augmented Reality (3)
- Machine Learning (3)
- 3D navigation (1)
- AR design (1)
- AR development (1)
- AR/VR (1)
- Auditory Cueing (1)
- Automatic Differentiation (1)
- Automatic pain detection (1)
- Bioinformatics (1)
- Classifiers (1)
- Compliant fingers (1)
- Computational fluid dynamics (1)
- Design Recommendations (1)
- Design Theory and Practice (1)
- Flow control (1)
- Fluency (1)
- Guidelines (1)
- Head-mounted Display (1)
- Knowledge Graphs (1)
- Language learning (1)
- Lattice Boltzmann Method (1)
- MR (1)
- Machine learning (1)
- Mixed Reality (1)
- Natural Language Processing (1)
- Out-of-view Objects (1)
- Pronunciation (1)
- Pytorch (1)
- Robust grasping (1)
- Slippage detection (1)
- Transformers (1)
- User Interface Design (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- XR (1)
- authoring tools (1)
- facial expression analysis (1)
- facial expressions of pain (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- navigational search (1)
- neural networks (1)
- pain datasets (1)
- pain feature representation (1)
- practitioners (1)
- spatial orientation (1)
- spatial updating (1)
- survey (1)
- virtual reality (1)
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
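The abstract describes combining structured KG triples and unstructured text into one joint input sequence for a multimodal Transformer. A minimal sketch of that idea, assuming a simple `[CLS] … [SEP] head relation tail [SEP]` layout (the special tokens and ordering are illustrative assumptions, not the published STonKGs input format):

```python
# Hypothetical sketch: flatten a text-triple pair into a single token
# sequence, as a multimodal Transformer like STonKGs might consume it.
# Token layout is an assumption for illustration only.

def build_joint_sequence(text_tokens, triple):
    """Concatenate text tokens and a (head, relation, tail) KG triple
    into one combined input sequence."""
    head, relation, tail = triple
    return ["[CLS]", *text_tokens, "[SEP]", head, relation, tail, "[SEP]"]

seq = build_joint_sequence(
    ["BRCA1", "inhibits", "tumor", "growth"],
    ("BRCA1", "inhibits", "TP53"),
)
```

In the real model, each position would additionally carry a modality embedding marking whether it came from text or from the knowledge graph.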
This document presents a summary of the author's dissertation. In this dissertation [Ha20], a novel hybrid approach for estimating the intensity of facial muscle movements (Action Units (AUs)) was proposed and validated. The approach is based on Gaussian state estimation and combines a deformable, AU-based facial shape model, a viscoelastic model of facial muscle movement, several appearance-based AU classifiers, and a facial landmark detection method. Several extensions were proposed and integrated into the state estimation framework to handle person-specific characteristics as well as technical and practical challenges. The AU intensity estimates produced with the proposed method were applied to automatic pain detection and to the analysis of driver distraction.
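The Gaussian state estimation at the core of this approach can be illustrated with a scalar Kalman filter that fuses noisy appearance-based classifier readings into a smoothed AU intensity estimate. This is a minimal sketch of the general technique, not the dissertation's actual model; all noise constants are assumed for illustration:

```python
# Minimal scalar Kalman filter: fuse a noisy AU-classifier measurement z
# with a smooth prior (x, p) to track Action Unit intensity over time.
# Process noise q and measurement noise r are illustrative assumptions.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for a scalar state.
    x, p: prior mean and variance; z: classifier measurement."""
    # Predict: constant-intensity model, uncertainty grows by q.
    p = p + q
    # Update: Kalman gain blends prediction and measurement.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0  # uninformative prior
for z in [0.2, 0.8, 0.7, 0.75]:  # noisy per-frame classifier outputs
    x, p = kalman_step(x, p, z)
```

The dissertation's framework extends this basic recursion with a deformable shape model and a viscoelastic muscle dynamics model in the state transition.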
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. We analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and documentation from six industry practitioners in a literature review. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants rated the HeadJoystick as potentially more physically fatiguing over prolonged use. Overall, participants felt more engaged with HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
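The core of a leaning-based interface like the one studied here is a mapping from seated lean displacement to simulated velocity. A hedged sketch of one common design, with a dead zone to absorb postural sway and a nonlinear response curve for fine control near rest (all constants are illustrative assumptions, not values from the paper):

```python
# Illustrative leaning-to-velocity mapping for one axis of a seated
# leaning-based flying interface. Dead zone, lean range, and speed cap
# are assumed example values, not parameters from the HeadJoystick study.

def lean_to_velocity(lean_cm, deadzone_cm=1.0, max_lean_cm=10.0, max_speed=5.0):
    """Convert lean displacement (cm) on one axis to a velocity (m/s)."""
    sign = 1.0 if lean_cm >= 0 else -1.0
    mag = abs(lean_cm)
    if mag <= deadzone_cm:  # ignore small postural sway
        return 0.0
    # Normalize lean beyond the dead zone to [0, 1], clamped at max lean.
    t = min((mag - deadzone_cm) / (max_lean_cm - deadzone_cm), 1.0)
    # Quadratic curve: precise control near rest, fast travel at full lean.
    return sign * max_speed * t * t
```

Applying the same mapping on all three axes (forward/backward, left/right, up/down) yields the kind of full 3D translation control the study describes, while physical chair rotation handles simulated rotations directly.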