We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. In particular, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
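The key to smooth animation in such a streaming renderer is overlapping disk I/O with rendering. The following Python sketch illustrates the idea of double-buffered time-step prefetching; the class and all names are our own illustration, not the paper's actual API:

```python
import threading
import queue

class TimeStepStreamer:
    """Double-buffered prefetch of per-time-step data (illustrative sketch).

    While the renderer displays time step t, a worker thread loads step t+1
    into a spare buffer; advance() swaps the buffers and starts the next load.
    """
    def __init__(self, load_fn, num_steps):
        self.load_fn = load_fn          # e.g. reads one scalar field from disk
        self.num_steps = num_steps
        self.current = load_fn(0)       # front buffer: the data being rendered
        self.step = 0
        self._next = queue.Queue(maxsize=1)
        self._prefetch(1)

    def _prefetch(self, step):
        if step < self.num_steps:
            t = threading.Thread(
                target=lambda: self._next.put(self.load_fn(step)))
            t.daemon = True
            t.start()

    def advance(self):
        """Swap in the prefetched step and start loading the one after it."""
        if self.step + 1 >= self.num_steps:
            return self.current
        self.current = self._next.get()  # blocks only if I/O lags rendering
        self.step += 1
        self._prefetch(self.step + 1)
        return self.current
```

In a real renderer, `load_fn` would stream a 2.5 GB scalar field into GPU-visible staging memory; the swap on `advance()` is what keeps the animation from stalling on disk reads.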
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
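As an illustration of such a general-purpose query, the following CPU sketch (our own code, not the article's) tests point containment by counting ray/triangle intersections along a single ray; an odd count means the point lies inside the mesh. The per-triangle loop is exactly the part that the GPU's BVH traversal and ray/triangle hardware accelerates:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection (counts only t > eps)."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > eps      # hit strictly in front of the origin

def point_inside(point, triangles, direction=(1.0, 0.0, 0.0)):
    """Odd number of hits along an arbitrary ray -> point is inside.
    A robust version would jitter the direction to dodge edge cases."""
    hits = sum(ray_hits_triangle(point, direction, *tri) for tri in triangles)
    return hits % 2 == 1
```

For particle tracing over an unstructured mesh, this containment test locates the cell holding each particle; issuing it as a hardware ray query turns the BVH units into a general spatial-lookup engine.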
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (an FPGA) synchronises and combines several graphics inputs. The application area is computer graphics, especially the rendering of large 3D models, which is a computing-intensive task. Complex scenes are therefore generated on parallel systems and merged to produce the requested output image. So far, the transport of intermediate results has often been handled by a local area network. However, as this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals with an FPGA.
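The merge itself amounts to per-pixel compositing of the partial images. A minimal software sketch of the depth-compositing rule such a hardware combiner would apply (function and parameter names are our own) is:

```python
def depth_composite(color_a, depth_a, color_b, depth_b):
    """Per-pixel depth compositing of two renderers' partial images:
    for each pixel, keep the color of the fragment closer to the camera.
    Lists stand in for the pixel streams an FPGA would merge on the fly."""
    return [ca if da <= db else cb
            for ca, da, cb, db in zip(color_a, depth_a, color_b, depth_b)]
```

In hardware this comparison runs per pixel clock on the incoming video streams, which is why it avoids the bandwidth limits of shipping intermediate images over a network.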
Low power dissipation is a current topic in digital design, and therefore, it should be covered in a state-of-the-art electrical engineering curriculum. This paper describes how low-power design can be addressed within a digital design course. Doing so would be beneficial for both topics because low-power design is not detached from the systems perspective, and the digital design course would be enriched by references to current challenges and applications. Thus, the presented course should serve as an example of how a course can be developed to also teach students about sustainable engineering.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, known as foveated rendering. As increasingly complex shading is requested and display resolutions grow, maintaining low latencies is challenging when rendering in a virtual reality context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, reduced visual acuity is not the eye's only relevant property: the eye is an optical system that filters radiance through its lenses. The lenses create depth-of-field (DoF) effects when accommodated to objects at varying distances. The central idea of this article is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while shaded samples are reduced by more than 69%.
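The foveation step boils down to mapping each pixel's angular distance from the gaze point to a shading rate. A minimal sketch of such a falloff (band widths, pixels-per-degree value, and the three-level rate are illustrative assumptions, not the article's calibrated model) could look like this:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, ppd=40.0,
                 inner_deg=5.0, outer_deg=15.0):
    """Map a pixel's eccentricity (angular distance from gaze) to a rate.

    ppd: display pixels per degree, assumed constant across the screen
    (a simplification). Returns 1 (full rate) inside inner_deg, 4
    (quarter rate) beyond outer_deg, and 2 in the transition band.
    """
    ecc_deg = math.hypot(px - gaze_x, py - gaze_y) / ppd
    if ecc_deg <= inner_deg:
        return 1
    if ecc_deg >= outer_deg:
        return 4
    return 2
```

In the article's setting, the gaze-contingent DoF filter then blurs exactly the coarsely shaded periphery, which is what conceals the undersampling artifacts.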
Rendering techniques for design evaluation and review, or for visualizing large volume data, often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates. A view-direction-based rendering technique renders the user's central field of view in high quality, whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view, thus offering an opportunity to increase rendering efficiency. We propose a prototype implementation and evaluation of a focus-based rendering technique built on a hybrid ray tracing/sparse voxel octree rendering approach.
In contrast to projection-based systems, large, high-resolution multi-display systems offer a high pixel density on a large visualization area. This enables users to step up to the displays and see a small but highly detailed area. If the users move back a few steps, they do not perceive details at pixel level but instead get an overview of the whole visualization. Rendering techniques for design evaluation and review or for visualizing large volume data (e.g. Big Data applications) often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates.
A view-direction-based (VDB) rendering technique renders the user's central field of view in high quality, whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view. This approach mimics the physiology of the human eye and preserves the advantage of highly detailed information when standing close to the multi-display system as well as the general overview of the whole scene. In this paper, we propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
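A hybrid traversal needs a rule for when to stop descending into polygons and use the voxel representation instead. One plausible heuristic, sketched below with assumed parameter names and a one-pixel threshold (the paper's actual criterion may differ), switches when a node's projected screen footprint becomes sub-pixel:

```python
def use_voxel_lod(node_extent, distance, fov_rad, screen_px):
    """Heuristic LOD switch for a hybrid voxel/polygon octree: fall back
    to the voxel representation once a node's projected footprint drops
    to about one pixel. Uses a small-angle approximation for projection."""
    pixels_per_radian = screen_px / fov_rad
    projected_px = (node_extent / distance) * pixels_per_radian
    return projected_px <= 1.0
```

Below this threshold a voxel sample cannot be visually distinguished from the exact triangles, so the cheaper representation wins without a fidelity loss.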
Generating and visualizing large areas of vegetation that look natural makes terrain surfaces much more realistic. However, this is a challenging field in computer graphics, because ecological systems are complex and visually appealing plant models are geometrically detailed. This work presents Silva (System for the Instantiation of Large Vegetated Areas), a system to generate and visualize large vegetated areas based on their ecological surrounding. Silva generates vegetation on Wang-tiles with associated reusable distributions, enabling multi-level instantiation. This paper presents a method to generate Poisson Disc Distributions (PDDs) with variable radii on Wang-tile sets (without a global optimization) that is able to produce seamless tilings. Because Silva has a freely configurable generation pipeline and can consider plant neighborhoods, it is able to incorporate arbitrary abiotic and biotic components during generation. Based on multi-level instancing and nested kd-trees, the distributions on the Wang-tiles allow their acceleration structures to be reused during visualization. This enables Silva to visualize large vegetated areas of several hundred square kilometers with low render times and a small memory footprint.
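The acceptance criterion behind a variable-radius Poisson disc distribution can be shown with naive dart throwing. This is only an illustration of the spacing rule; the paper's tile-based construction is far more efficient and additionally guarantees seamless borders across Wang-tiles:

```python
import math
import random

def poisson_disc_variable(width, height, r_min, r_max,
                          attempts=2000, seed=1):
    """Naive dart-throwing Poisson disc sampling with per-point radii.

    A candidate (x, y, r) is accepted only if it keeps at least
    max(r, r_other) distance to every previously accepted point, so
    larger plants claim larger exclusion zones.
    """
    rng = random.Random(seed)
    points = []
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = rng.uniform(r_min, r_max)
        if all(math.hypot(x - px, y - py) >= max(r, pr)
               for px, py, pr in points):
            points.append((x, y, r))
    return points
```

In Silva's setting, each accepted point becomes a plant instance, and the radius can be driven by abiotic/biotic factors rather than drawn uniformly as in this sketch.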
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches enable the creation of digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and of computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. Dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems enable a larger visual field to be addressed by covering a wider field of view, due to either their size or their form as head-mounted devices. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload - a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. Using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capture of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perceived sense of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, which likely contributed to an enhanced sense of presence.
A Low-Cost Based 6 DoF Head Tracker for Usability Application Studies in Virtual Environments (2008)
Simultaneous detection of cyanide and heavy metals for environmental analysis by means of µISEs (2010)
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. We show that our optimizations increase the performance of RHF by two orders of magnitude compared to the original CPU implementation and by one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times relates to unfiltered path tracing, and how much time is needed to achieve identical quality compared to an unfiltered path-traced result. Finally, we summarize our work and describe possible future applications and research based on it.
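At the core of RHF is a per-pixel similarity test on sample histograms: a pixel is only averaged with neighbors whose radiance histograms are close. The following sketch shows that idea with a symmetric chi-square distance; bin layout, normalization, and the threshold are simplified assumptions, not the exact formulation of the original paper:

```python
def chi2_distance(h1, h2, eps=1e-8):
    """Symmetric chi-square distance between two per-pixel sample
    histograms (the per-bin eps avoids division by zero)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def rhf_filter_pixel(center_hist, center_val, neighbors, threshold):
    """Average a pixel only with neighbors whose histogram distance
    falls below the threshold; neighbors is a list of (hist, value)."""
    acc, n = center_val, 1
    for hist, val in neighbors:
        if chi2_distance(center_hist, hist) < threshold:
            acc += val
            n += 1
    return acc / n
```

Because every pixel's test against its neighborhood is independent, this loop maps naturally onto one GPU thread per pixel, which is what the optimized implementation exploits.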
The work at hand outlines a recording setup for capturing hand and finger movements of musicians. The focus is on a series of baseline experiments on the detectability of coloured markers under different lighting conditions. With the goal of capturing and recording hand and finger movements of musicians in mind, requirements for such a system and existing approaches are analysed and compared. The results of the experiments and the analysis of related work show that the envisioned setup is suited for the expected scenario.
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of the European and American populations suffer from back pain at some point in their lives. One successful approach to addressing lower back pain is postural training, which can be supported by wearable devices that provide real-time feedback about the user's posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes, which are meaningful to professional posture trainers.
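Unsupervised clustering of such before/after feature vectors can be illustrated with a tiny k-means; this is a generic sketch of the clustering step, not the study's actual feature extraction or algorithm choice:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: points are feature vectors (e.g. differences
    between before- and after-training posture snapshots); returns the
    final centers and the grouping of points around them."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[nearest].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups
```

Each resulting group would then be inspected by posture trainers to see whether it corresponds to a meaningful pattern of postural change.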
Motion capture, often abbreviated mocap, generally aims at recording any kind of motion -- be it from a person or an object -- and transforming it into a computer-readable format. In particular, data recorded from (professional and non-professional) human actors are typically used for analysis in, e.g., medicine, sports science, or biomechanics to evaluate human motion across various factors. Motion capture is also widely used in the entertainment industry: in video games and films, realistic motion sequences and animations are generated through data-driven motion synthesis based on recorded motion (capture) data.
Although the amount of publicly available full-body motion capture data is growing, the research community still lacks a comparable corpus of specialty motion data such as prehensile movements for everyday actions. On the one hand, such data can be used to enrich full-body motion capture data (via hand-over animation), which are usually captured without hand motion due to the drastic difference in articulation detail. On the other hand, it provides the means to classify and analyse prehensile movements, with or without respect to the concrete object manipulated, and to transfer the acquired knowledge to other fields of research (e.g. from 'pure' motion analysis to robotics or biomechanics).
Therefore, the objective of this motion capture database is to provide well-documented, free motion capture data for research purposes.
The presented database, GraspDB14, contains in total over 2000 prehensile movements of ten different non-professional actors interacting with 15 different objects. Each grasp was realised five times by each actor. The motions are systematically named, containing an (anonymous) identifier for each actor as well as one for the object grasped or interacted with.
The data were recorded as joint angles (and raw 8-bit sensor data), which can be transformed into positional 3D data (3D trajectories of each joint).
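The angle-to-position conversion mentioned above is a forward-kinematics pass along each finger's joint chain. A planar (2D) sketch of the principle, with illustrative names and not the database tools' actual 3D implementation, looks like this:

```python
import math

def forward_kinematics(joint_angles, bone_lengths):
    """Planar forward kinematics: accumulate joint angles along a chain
    (angles in radians, relative to the previous bone) and emit the 2D
    position of each joint, starting from the chain root at the origin."""
    x, y, theta = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for angle, length in zip(joint_angles, bone_lengths):
        theta += angle                  # orientation accumulates down the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        positions.append((x, y))
    return positions
```

The real conversion does the same accumulation with 3D rotations per joint, yielding the 3D trajectory of each joint over the recorded motion.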
In this document, we provide a detailed description of the GraspDB14 database as well as of its creation (for reproducibility).
Chapter 2 gives a brief overview of motion capture techniques, freely available motion capture databases for both full-body motions and hand motions, and a short section on how such data are made useful and re-used. Chapter 3 describes the database recording process and details the recording setup and the recorded scenarios, including a list of objects and the performed types of interaction. Chapter 4 covers the file formats used, their contents, and naming patterns. We provide various tools for parsing, conversion, and visualisation of the recorded motion sequences and document their usage in chapter 5.
It is challenging to provide users with a haptic weight sensation of virtual objects in VR, since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Users therefore need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants in sensing the change in trigger resistance and thus in discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants, while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research, and the pros and cons of this novel technology.