A Stereo Active Vision Interface (SAVI) is introduced which detects frontal faces in real-world environments and performs particular active control tasks in response to changes in the visual field. Firstly, connected skin-colour regions in the visual scene are detected by applying a radial scanline algorithm. Secondly, facial features are searched for in the most salient skin-colour region while the blob is tracked by the camera system. The facial features are evaluated and, based on the obtained results and the current state of the system, particular actions are performed. The SAVI system is intended as a smart user interface for teleconferencing, telemedicine, and distance learning. The system is designed as a Perception-Action-Cycle (PAC), processing sensory data of different kinds and qualities. Both the vision module and the head motion control module work at frame rate. Hence, the system is able to react instantaneously to changing conditions in the visual scene.
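The radial scanline step can be illustrated with a short sketch. This is a minimal reading of the idea, not the published SAVI implementation: the RGB skin rule, the seed point, and all thresholds are assumptions chosen for illustration.

```python
# Minimal sketch of a radial scanline skin-colour detector, assuming an RGB
# skin rule and a known seed pixel inside the region; the rule, the seed,
# and all thresholds are illustrative, not the published SAVI implementation.
import numpy as np

def is_skin(pixel):
    """Crude RGB skin test (illustrative thresholds only)."""
    r, g, b = (int(c) for c in pixel[:3])
    return r > 95 and g > 40 and b > 20 and r > g and r > b and r - min(g, b) > 15

def radial_scan(image, seed, n_rays=36, max_len=200):
    """Walk outward from `seed` along n_rays directions and stop each ray at
    the first non-skin pixel, yielding boundary points of the connected blob."""
    h, w = image.shape[:2]
    boundary = []
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(angle), np.cos(angle)
        y, x = float(seed[0]), float(seed[1])
        for _ in range(max_len):
            yi, xi = int(round(y)), int(round(x))
            if not (0 <= yi < h and 0 <= xi < w) or not is_skin(image[yi, xi]):
                break
            y, x = y + dy, x + dx
        boundary.append((int(round(y)), int(round(x))))
    return boundary
```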
Facial keypoints such as eye corners are important features for a number of different tasks in automatic face processing. The problem is that facial keypoints have an anatomical, high-level definition rather than a low-level one. Therefore, they cannot be detected reliably by purely data-driven methods such as corner detectors, which are based only on the image data of a local neighborhood. In this contribution we introduce a method for the automatic detection of facial keypoints. The method integrates model knowledge to guarantee a consistent interpretation of the abundance of local features. The detection is based on a selective search and sequential tracking of edges controlled by model knowledge. This requires very flexible edge detection; therefore, we apply a powerful filtering scheme based on steerable filters.
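The steering property that makes such a filtering scheme flexible can be shown in a few lines. The sketch below assumes first-order Gaussian derivatives as the basis set; the paper's actual filter bank may differ, but the principle (an arbitrarily oriented filter obtained as a fixed linear combination of basis responses) is the same.

```python
# Sketch of steerable filtering with first-order Gaussian derivatives: the
# response of a derivative filter at any orientation theta is the combination
# cos(theta)*G_x + sin(theta)*G_y of two fixed basis responses. Illustrates
# the steering property only, not the paper's specific filter bank.
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_edge_response(image, theta, sigma=2.0):
    """Response of a Gaussian first-derivative filter steered to angle theta."""
    gx = gaussian_filter(image, sigma, order=(0, 1))  # d/dx basis response
    gy = gaussian_filter(image, sigma, order=(1, 0))  # d/dy basis response
    return np.cos(theta) * gx + np.sin(theta) * gy
```

Because the two basis responses need only be computed once, responses at arbitrary orientations follow at negligible extra cost, which is what makes a selective, model-controlled search over edge directions cheap.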
In the past decade computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies will be introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. Individualising a model requires a chain of tools comprising medical imaging, image acquisition and processing, mesh generation, material modelling, and finite element simulation (possibly on parallel computer architectures). The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.
This contribution describes the FIVIS project. The project’s goal is the development of an immersive bicycle simulation platform for several applications in the areas of biomechanics, sports, traffic education, road safety and entertainment. To take physical, optical and acoustical characteristics of cycling into account, FIVIS uses a special immersive visualization system, a motion platform and a standard bicycle with sensors and actuators, as well as a surround sound system. First experimental results have shown that the FIVIS simulator provides a realistic training and exercising environment for traffic education and stress research.
The perceived direction of “up” is determined by gravity, visual information, and an internal estimate of body orientation (Mittelstaedt, 1983; Dyde et al., 2006). Is the gravity level found on other worlds sufficient to maintain gravity’s contribution to this perception? Difficulties with stability reported anecdotally by astronauts on the lunar surface (NASA, 1972) suggest that the moon’s gravity may not be, despite this value being far above the threshold for detecting linear acceleration. Knowing how much gravity is needed to provide a reliable orientation cue is essential for training and preparing astronauts for future missions to the moon, Mars, and beyond.
The objective of the presented approach is to develop a 3D-reconstruction method for microorganisms from sequences of microscopic images acquired by varying the level of focus. The approach is limited to translucent, silicate-based marine and freshwater organisms (e.g. radiolarians). The proposed 3D-reconstruction method exploits the connectivity of similarly oriented and spatially adjacent edge elements in consecutive image layers. This yields a 3D mesh representing the global shape of the objects together with details of the inner structure. Possible applications can be found in comparative morphology or hydrobiology, where, for example, deficiencies in growth and structure during incubation in toxic water, or the effects of gravity on metabolism, have to be determined.
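A minimal sketch of the layer-linking step is given below, assuming edge elements have already been extracted per focus layer as (y, x, orientation) tuples; the distance and orientation tolerances are illustrative placeholders, not the published values.

```python
# Sketch of linking similarly oriented, spatially adjacent edge elements in
# consecutive focus layers into 3D chains (z = layer index); tolerances are
# placeholders, not the published values.
import math

def link_layers(layers, max_dist=3.0, max_dtheta=math.radians(20)):
    """layers: list (per focus level) of (y, x, orientation) edge elements.
    Returns chains of (z, y, x, orientation) points through the stack."""
    chains = [[(0, y, x, t)] for (y, x, t) in layers[0]]
    for z in range(1, len(layers)):
        for chain in chains:
            _, y, x, t = chain[-1]
            best, best_d = None, max_dist
            for (y2, x2, t2) in layers[z]:
                d = math.hypot(y2 - y, x2 - x)
                dt = abs(math.atan2(math.sin(t2 - t), math.cos(t2 - t)))
                if d <= best_d and dt <= max_dtheta:   # adjacent and similar
                    best, best_d = (z, y2, x2, t2), d
            if best is not None:
                chain.append(best)
    return chains
```

The resulting chains are 3D point sequences (with the layer index as depth) from which a mesh of the global shape and inner structure can be built.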
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real-life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close-to-reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualization environment which provides visual data relative to the motion and activity of the test bicycle. This implies the rider has to pedal and steer the bicycle as they would a traditional bicycle, while the forward motion is recorded and processed to control the visualization. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, a feedback system makes the movements of the platform reflect both the virtual environment and the reactions of the rider (e.g. steering angle, step rate).
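The rider-input-to-visualization loop can be sketched schematically as below; the field names, the cadence-to-speed gain, and the kinematics are simple stand-ins, not the FIVIS interfaces.

```python
# Schematic of one frame of the simulation loop: read rider input, update the
# virtual bicycle state, hand the new pose to the visualization. All names
# and the simplified turning model are illustrative, not the FIVIS APIs.
import math
from dataclasses import dataclass

@dataclass
class RiderInput:
    steering_angle: float  # rad, from the handlebar sensor
    step_rate: float       # pedal cadence in 1/s, from the crank sensor

def advance(pos, heading, rider, dt, wheel_gain=2.0):
    """Advance the virtual bicycle by one frame from the rider's input."""
    speed = wheel_gain * rider.step_rate     # forward motion from cadence
    heading += rider.steering_angle * dt     # simplified turning model
    pos = (pos[0] + speed * dt * math.cos(heading),
           pos[1] + speed * dt * math.sin(heading))
    return pos, heading                      # drives the multi-screen view
```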
The relative contributions of radial and laminar optic flow to the perception of linear self-motion (2012)
When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40° (h) × 24° (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.
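For reference, the leaky spatial integrator can be written in the general form used by Lappe, Jenkin, and Harris (2007): the represented distance to the target D decays with leak rate α while travelled distance x is accumulated with sensory gain g (the notation here is generic; the article's fitted parameter values are not reproduced).

```latex
% Leaky spatial integrator (general form after Lappe, Jenkin, & Harris, 2007):
% D is the represented distance to the target, x the distance travelled,
% g the sensory gain, and \alpha the leak rate.
\[
  \frac{dD}{dx} = -\alpha D - g, \qquad D(0) = D_0,
\]
\[
  D(x) = \left(D_0 + \frac{g}{\alpha}\right) e^{-\alpha x} - \frac{g}{\alpha},
\]
% The subject signals arrival when D = 0, i.e. after travelling
\[
  x_{\mathrm{stop}} = \frac{1}{\alpha}\,\ln\!\left(1 + \frac{\alpha D_0}{g}\right).
\]
```

In this form a larger sensory gain g shortens the stopping distance (earlier arrival), while the leak α produces the characteristic overestimation of distance travelled, so eccentricity-dependent gains with a fixed leak yield the large but predictable errors described above.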
Might the gravity levels found on other planets and on the moon be sufficient to provide an adequate perception of upright for astronauts? Can the amount of gravity required be predicted from the physiological threshold for linear acceleration? The perception of upright is determined not only by gravity but also visual information when available and assumptions about the orientation of the body. Here, we used a human centrifuge to simulate gravity levels from zero to earth gravity along the long-axis of the body and measured observers' perception of upright using the Oriented Character Recognition Test (OCHART) with and without visual cues arranged to indicate a direction of gravity that differed from the body's long axis. This procedure allowed us to assess the relative contribution of the added gravity in determining the perceptual upright. Control experiments off the centrifuge allowed us to measure the relative contributions of normal gravity, vision, and body orientation for each participant. We found that the influence of 1 g in determining the perceptual upright did not depend on whether the acceleration was created by lying on the centrifuge or by normal gravity. The 50% threshold for centrifuge-simulated gravity's ability to influence the perceptual upright was at around 0.15 g, close to the level of moon gravity but much higher than the threshold for detecting linear acceleration along the long axis of the body. This observation may partially explain the instability of moonwalkers but is good news for future missions to Mars.
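The reported 50% threshold is the midpoint of a psychometric function relating the simulated gravity level to its influence on the perceptual upright. The sketch below shows how such a threshold can be extracted by fitting a cumulative Gaussian; the data points are made-up placeholders for illustration only, not values from the study, and the fit shape is an assumption.

```python
# Sketch of extracting a 50% threshold from influence-vs-gravity data by
# fitting a cumulative Gaussian; the arrays below are placeholder values for
# illustration only, not data from the study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

g_levels = np.array([0.0, 0.1, 0.15, 0.25, 0.5, 1.0])   # simulated g along body axis
influence = np.array([0.05, 0.3, 0.5, 0.7, 0.9, 0.98])  # placeholder responses

def psychometric(g, mu, sigma):
    """Cumulative Gaussian: probability the added gravity defines 'up'."""
    return norm.cdf(g, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, g_levels, influence, p0=[0.2, 0.1])
print(f"50% threshold ~ {mu:.2f} g")  # mu is the 50% point of the fitted curve
```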