Facial keypoints such as eye corners are important features for a number of different tasks in automatic face processing. The problem is that facial keypoints have a high-level, anatomical definition rather than a low-level one. Therefore, they cannot be detected reliably by purely data-driven methods like corner detectors, which rely only on the image data of the local neighborhood. In this contribution we introduce a method for the automatic detection of facial keypoints. The method integrates model knowledge to guarantee a consistent interpretation of the abundance of local features. The detection is based on a selective search and sequential tracking of edges controlled by model knowledge. For this, the edge detection has to be very flexible. Therefore, we apply a powerful filtering scheme based on steerable filters.
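As a minimal illustration of the steerable-filter idea referred to above (not the authors' implementation), a first-derivative-of-Gaussian edge filter can be steered to any orientation as a linear combination of just two basis responses; the sketch below assumes an illustrative sigma and a synthetic test image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_edge_response(image, theta, sigma=2.0):
    """Response of a first-derivative-of-Gaussian filter steered to angle theta.

    Uses the steerability property G1^theta = cos(theta)*G1^0 + sin(theta)*G1^90,
    so only the two basis responses (x- and y-derivative filters) are computed.
    """
    gx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along x (columns)
    gy = gaussian_filter(image, sigma, order=(1, 0))  # derivative along y (rows)
    return np.cos(theta) * gx + np.sin(theta) * gy

# Example: a vertical step edge responds strongly when steered to theta = 0.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
resp = steered_edge_response(img, theta=0.0)
```

Because the filter bank is only evaluated once, the steering angle can be varied cheaply during the model-guided edge search.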
In the past decade computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies will be introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. To individualise the model, a chain of tools including medical imaging, image acquisition and processing, mesh generation, material modelling and finite element simulation (possibly on parallel computer architectures) is necessary. The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.
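As a generic illustration of the finite element concept mentioned above (not one of the chapter's human-body models), the sketch below assembles and solves a one-dimensional linear-elastic bar under a tip load; element count, material constants and load are arbitrary assumptions.

```python
import numpy as np

def bar_fe(n_elem=10, length=1.0, E=210e9, A=1e-4, tip_force=1000.0):
    """1D linear-elastic bar, fixed at x=0, axial point load at the free end.

    Assembles the global stiffness matrix from identical two-node elements
    (k_e = EA/L_e * [[1,-1],[-1,1]]) and solves K u = f for nodal displacements.
    """
    n_nodes = n_elem + 1
    le = length / n_elem
    k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):          # add each element's stiffness contribution
        K[e:e + 2, e:e + 2] += k_e

    f = np.zeros(n_nodes)
    f[-1] = tip_force                # axial load at the free end

    u = np.zeros(n_nodes)            # fixed boundary condition at node 0
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u

u = bar_fe()
# The tip displacement matches the analytic value F*L/(E*A) for this simple case.
```

Real biomechanical models follow the same assemble-and-solve pattern, but with three-dimensional meshes generated from medical scan data and far more complex material laws.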
A Stereo Active Vision Interface is introduced which detects frontal faces in real world environments and performs particular active control tasks dependent on changes in the visual field. Firstly, connected skin colour regions in the visual scene are detected by applying a radial scanline algorithm. Secondly, facial features are searched for in the most salient skin colour region while the blob is tracked by the camera system. The facial features are evaluated and, based on the obtained results and the current state of the system, particular actions are performed. The SAVI system is intended as a smart user interface for teleconferencing, telemedicine, and distance learning. The system is designed as a Perception-Action-Cycle (PAC), processing sensory data of different kinds and qualities. Both the vision module and the head motion control module work at frame rate. Hence, the system is able to react instantaneously to changing conditions in the visual scene.
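The sketch below illustrates the radial-scanline idea in a heavily simplified form: rays are walked outwards from a seed pixel and stopped once the skin-colour test fails repeatedly. Both the normalised-RGB skin test and its thresholds are assumptions for illustration, not the paper's actual criteria.

```python
import numpy as np

def is_skin(pixel_rgb):
    """Very rough skin test in normalised RGB; thresholds are illustrative only."""
    r, g, b = pixel_rgb.astype(float)
    s = r + g + b + 1e-6
    rn, gn = r / s, g / s
    return 0.35 < rn < 0.55 and 0.25 < gn < 0.40

def radial_scan(image, seed, n_rays=36, max_gap=3):
    """Walk rays outwards from a seed pixel and return the skin boundary per ray.

    Each ray stops after `max_gap` consecutive non-skin pixels, which keeps the
    contour tolerant of small holes inside the skin-colour region.
    """
    h, w, _ = image.shape
    boundary = []
    for k in range(n_rays):
        angle = 2.0 * np.pi * k / n_rays
        dy, dx = np.sin(angle), np.cos(angle)
        y, x = float(seed[0]), float(seed[1])
        last_skin, gap = seed, 0
        while 0 <= int(y) < h and 0 <= int(x) < w and gap <= max_gap:
            if is_skin(image[int(y), int(x)]):
                last_skin, gap = (int(y), int(x)), 0
            else:
                gap += 1
            y, x = y + dy, x + dx
        boundary.append(last_skin)
    return boundary  # polygon approximating the outline of the skin-colour blob
```

The returned polygon gives a coarse blob outline that a tracker and a facial-feature search can then operate on.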
This contribution describes the FIVIS project. The project’s goal is the development of an immersive bicycle simulation platform for several applications in the areas of biomechanics, sports, traffic education, road safety and entertainment. To take physical, optical and acoustical characteristics of cycling into account, FIVIS uses a special immersive visualization system, a motion platform and a standard bicycle with sensors and actuators, as well as a surround sound system. First experimental results have shown that the FIVIS simulator provides a realistic training and exercising environment for traffic education and stress research.
The perceived direction of "up" is determined by gravity, visual information, and an internal estimate of body orientation (Mittelstaedt, 1983; Dyde et al., 2006). Is the gravity level found on other worlds sufficient to maintain gravity's contribution to this perception? Difficulties in stability reported anecdotally by astronauts on the lunar surface (NASA, 1972) suggest that the Moon's gravity may not be sufficient, despite this value being far above the threshold for detecting linear acceleration. Knowing how much gravity is needed to provide a reliable orientation cue is required for training and preparing astronauts for future missions to the Moon, Mars and beyond.
The objective of the presented approach is to develop a 3D-reconstruction method for microorganisms from sequences of microscopic images obtained by varying the level of focus. The approach is limited to translucent, silicate-based marine and freshwater organisms (e.g. radiolarians). The proposed 3D-reconstruction method exploits the connectivity of similarly oriented and spatially adjacent edge elements in consecutive image layers. This yields a 3D-mesh representing the global shape of the objects together with details of the inner structure. Possible applications can be found in comparative morphology or hydrobiology, where e.g. deficiencies in growth and structure during incubation in toxic water or gravity effects on metabolism have to be determined.
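A minimal sketch of the layer-linking step described above, assuming each focus layer has already been reduced to a list of edge elements (position plus orientation); the distance and angle thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def link_edge_elements(layers, max_dist=3.0, max_angle=np.radians(20)):
    """Link edge elements across consecutive focus layers.

    Each layer is a list of (y, x, orientation) tuples.  Two elements in
    neighbouring layers are linked when they are spatially close and have a
    similar orientation; the resulting chains approximate surface points.
    """
    links = []
    for z in range(len(layers) - 1):
        for i, (y0, x0, a0) in enumerate(layers[z]):
            for j, (y1, x1, a1) in enumerate(layers[z + 1]):
                dist = np.hypot(y1 - y0, x1 - x0)
                # Orientation difference modulo pi, since edges are undirected.
                dang = abs(a1 - a0) % np.pi
                dang = min(dang, np.pi - dang)
                if dist <= max_dist and dang <= max_angle:
                    links.append(((z, i), (z + 1, j)))
    return links  # edges of a 3D graph that a meshing step could triangulate
```

The links form a layer-to-layer graph from which a 3D mesh of the outer shape and inner structures can be built.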
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close-to-reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualization environment which provides visual data relative to the motion and activity of the test bicycle. This implies the bike rider has to pedal and steer the bicycle as they would a traditional bicycle, while forward motion is recorded and processed to control the visualization. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, a feedback system makes the movements of the platform reflect the virtual environment and the reactions of the rider (e.g. steering angle, step rate).