This dissertation presents a probabilistic state estimation framework that integrates data-driven machine learning models with a deformable facial shape model to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AUs), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied to model the dynamics of facial muscle movements. Both facial shape and appearance information are used to estimate AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework deal with the multiple challenges involved in automatic AU detection.

The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively on three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produces temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performs reasonably well, even though it simultaneously estimates the intensities of 22 AUs, some of which are subtle in expression or closely resemble one another. The estimated AU intensities tended toward the lower range of values and were often accompanied by a small delay in onset, showing that the proposed method is conservative. To further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
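The core idea of the framework — tracking an AU intensity with mass-spring-damper dynamics while treating classifier probability scores as noisy pseudo-measurements — can be sketched as a small Kalman filter. The sketch below is illustrative only and is not the dissertation's implementation: the parameter values (`m`, `c`, `k`, the frame rate, the score-to-intensity mapping, and the variance heuristic) are all assumptions, and it tracks a single AU rather than 22.

```python
import numpy as np

# Hypothetical sketch: Kalman filtering of one AU intensity with
# mass-spring-damper dynamics. All parameter values are assumed.
m, c, k = 1.0, 4.0, 20.0   # mass, damping, stiffness (illustrative)
dt = 1.0 / 30.0            # frame interval at an assumed 30 fps

# Continuous dynamics x' = A_c x for state x = [intensity, velocity],
# discretized with a forward Euler step.
A_c = np.array([[0.0, 1.0],
                [-k / m, -c / m]])
A = np.eye(2) + dt * A_c            # discrete state transition
H = np.array([[1.0, 0.0]])          # only the intensity is observed
Q = 1e-3 * np.eye(2)                # process noise covariance (assumed)

def kalman_step(x, P, z, r):
    """One predict/update cycle; z is a scalar pseudo-measurement,
    r its variance."""
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + r             # innovation variance (1x1)
    K = P @ H.T / S                 # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).reshape(2)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# A class-wise probability score from a machine learning model can be
# turned into a pseudo-measurement: confident scores (near 0 or 1) get
# low variance, uncertain scores (near 0.5) get high variance.
x, P = np.zeros(2), np.eye(2)
for p in [0.1, 0.6, 0.9, 0.95, 0.8]:        # per-frame AU probabilities
    z = 5.0 * p                              # map score to a 0-5 intensity scale (assumption)
    r = 0.05 + (1.0 - abs(2.0 * p - 1.0))    # larger variance when p is near 0.5
    x, P = kalman_step(x, P, z, r)
print(float(x[0]))                           # smoothed intensity estimate
```

Because the spring-damper prior pulls the state back toward rest and uncertain scores are down-weighted, estimates of this kind are smooth but lag abrupt onsets slightly, which is consistent with the conservative behaviour described in the abstract.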