H-BRS Bibliography
Gone But Not Forgotten: Evaluating Performance and Scalability of Real-Time Mesoscopic Agents
(2020)
Simulating eye movements for virtual humans or avatars can improve social experiences in virtual reality (VR) games, especially when wearing head-mounted displays. While other researchers have already demonstrated the importance of simulating meaningful eye movements, we compare three gaze models with different levels of fidelity regarding realism: (1) a base model with static fixations and saccadic movements, (2) a proposed simulation model that extends the saccadic model with gaze shifts based on a neural network, and (3) a user's real eye movements recorded by a proprietary eye tracker. Our between-groups design study with 42 subjects evaluates the impact of eye movements on social VR user experience in terms of perceived quality of communication and presence. The tasks include free conversation and two guessing games in a co-located setting. Results indicate that a high quality of communication in co-located VR can be achieved without gaze behavior models extending beyond saccadic simulation. Users might have to gain more experience with VR technology before being able to notice subtle details in gaze animation. Remote VR collaboration involving different tasks will require further investigation in the future.
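The base gaze model mentioned above (static fixations plus saccadic movements) can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration and not the authors' implementation: it alternates fixation and saccade phases and interpolates toward a new gaze target during each saccade; the class name, parameter values, and target-selection cone are assumptions.

import random

class SaccadicGazeModel:
    """Minimal sketch of a fixation/saccade gaze model (hypothetical, not the paper's code)."""

    def __init__(self, fixation_range=(0.2, 1.0), saccade_duration=0.05):
        self.fixation_range = fixation_range      # assumed fixation durations in seconds
        self.saccade_duration = saccade_duration  # assumed saccade duration in seconds
        self.current = (0.0, 0.0)                 # gaze direction as (yaw, pitch) in degrees
        self.target = (0.0, 0.0)
        self.timer = 0.0
        self.in_saccade = False

    def _pick_target(self):
        # Pick a new fixation point within a small cone around the forward direction.
        return (random.uniform(-15.0, 15.0), random.uniform(-10.0, 10.0))

    def update(self, dt):
        """Advance the model by dt seconds and return the current gaze direction."""
        self.timer -= dt
        if self.timer <= 0.0:
            if self.in_saccade:
                # Saccade finished: snap to the target and start a new fixation.
                self.current = self.target
                self.in_saccade = False
                self.timer = random.uniform(*self.fixation_range)
            else:
                # Fixation finished: start a saccade toward a new target.
                self.target = self._pick_target()
                self.in_saccade = True
                self.timer = self.saccade_duration
        if self.in_saccade:
            # Interpolate quickly from the current direction toward the target during the saccade.
            t = 1.0 - max(self.timer, 0.0) / self.saccade_duration
            yaw = self.current[0] + (self.target[0] - self.current[0]) * t
            pitch = self.current[1] + (self.target[1] - self.current[1]) * t
            return (yaw, pitch)
        return self.current

# Example usage: sample the gaze direction at 90 Hz for one second.
model = SaccadicGazeModel()
for _ in range(90):
    yaw, pitch = model.update(1.0 / 90.0)

The proposed extended model in the paper additionally drives gaze shifts with a neural network; that component is not sketched here.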
Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior
(2017)
Populating virtual worlds with intelligent agents can drastically improve a user's sense of presence. Applying these worlds to virtual training, simulations, or (serious) games often requires multiple agents to be simulated in real time. The process of generating believable agent behavior starts with providing a plausible perception and attention process that is both efficient and controllable. We describe a conceptual framework for synthetic perception that specifically considers the mentioned requirements: plausibility, real-time performance, and controllability. A sample implementation focuses on sensing, attention, and memory to demonstrate the framework's capabilities in a real-time game engine scenario. A combination of dynamic geometric sensing and false coloring with static saliency information is provided to exemplify the collection of environmental stimuli. The subsequent attention process handles both bottom-up processing and task-oriented, top-down factors. Behavioral results can be influenced by controlling memory and attention. The example case is demonstrated and discussed alongside future extensions.
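As a rough illustration of how bottom-up saliency and top-down task relevance might be combined into a single attention score, the following Python sketch assumes a simple weighted sum over perceived stimuli with a memory-based inhibition term; the field names, weights, and scoring function are assumptions for illustration, not the framework's actual API.

from dataclasses import dataclass

@dataclass
class Stimulus:
    """A perceived environmental stimulus (hypothetical structure for illustration)."""
    name: str
    saliency: float        # bottom-up saliency from sensing / false coloring, in [0, 1]
    task_relevance: float  # top-down relevance to the agent's current task, in [0, 1]
    distance: float        # distance to the agent in meters

def attention_score(s, w_bottom_up=0.4, w_top_down=0.6):
    """Combine bottom-up and top-down factors; nearer stimuli are weighted up slightly."""
    proximity = 1.0 / (1.0 + s.distance)
    return (w_bottom_up * s.saliency + w_top_down * s.task_relevance) * (0.5 + 0.5 * proximity)

def select_attended(stimuli, memory, inhibition=0.5):
    """Pick the highest-scoring stimulus, damping recently attended ones (inhibition of return)."""
    def score(s):
        penalty = inhibition if s.name in memory else 0.0
        return attention_score(s) - penalty
    attended = max(stimuli, key=score)
    memory.add(attended.name)  # remember what was attended to influence later selections
    return attended

# Example: a conversation partner competes with a salient but task-irrelevant object.
stimuli = [
    Stimulus("conversation_partner", saliency=0.3, task_relevance=0.9, distance=1.5),
    Stimulus("flashing_sign", saliency=0.9, task_relevance=0.1, distance=6.0),
]
memory = set()
print(select_attended(stimuli, memory).name)  # prints "conversation_partner"

Adjusting the weights or the memory penalty is one way such a process could be kept controllable, in the sense the abstract describes.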
A cost-efficient alternative to outside-in tracking systems for pointing interaction with large displays is to equip the pointing device with a camera whose images are matched to the display content. This work presents the Dynamic Marker Camera Tracking (DMCT) framework for display-based camera tracking. It accounts for typical display characteristics and uses dynamic on-screen markers, overlaid on the display content, that follow the camera. An example marker implementation and a tracking recovery method are presented. DMCT can measure pointing locations with sub-millimeter precision in large tracking volumes and computes 6-DoF camera poses for 3D interaction. An update rate of 60 Hz and a latency of 24 ms were achieved. DMCT's main limitation is the visible marker interfering with the display content. In pointing efficiency, the prototype is comparable to an OptiTrack system.
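Display-based camera tracking of this kind typically reduces to a perspective-n-point problem: the on-screen marker's corner positions are known in display coordinates, their projections are detected in the camera image, and a 6-DoF pose is solved from the correspondences. The sketch below, using OpenCV's generic solvePnP in Python, illustrates that principle under assumptions; it is not the DMCT implementation, and the marker size, corner pixel coordinates, and camera intrinsics are placeholders.

import numpy as np
import cv2

# Known 3D positions (in meters) of the on-screen marker's corners on the display plane (z = 0).
# A 10 cm square marker is assumed for illustration.
marker_size = 0.10
object_points = np.array([
    [-marker_size / 2, -marker_size / 2, 0.0],
    [ marker_size / 2, -marker_size / 2, 0.0],
    [ marker_size / 2,  marker_size / 2, 0.0],
    [-marker_size / 2,  marker_size / 2, 0.0],
], dtype=np.float64)

# 2D pixel coordinates of the detected marker corners in the camera image (placeholder values).
image_points = np.array([
    [310.0, 228.0],
    [402.0, 231.0],
    [399.0, 322.0],
    [308.0, 319.0],
], dtype=np.float64)

# Pinhole camera intrinsics (placeholder focal length and principal point, no lens distortion).
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(5)

# Solve the perspective-n-point problem for the camera's 6-DoF pose relative to the marker.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix of the camera pose
    print("camera translation (m):", tvec.ravel())

Because DMCT moves the marker to follow the camera, the object_points would be updated each frame from the marker's current on-screen position; that dynamic aspect is omitted here.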