Lichtlaufzeitsensor [time-of-flight sensor] (2012)
Lichtlaufzeitkamera [time-of-flight camera] (2012)
Abstandsmeßsystem [distance measuring system] (2017)
Time of flight camera system (2017)
Time of flight camera system (2017)
Lichtlaufzeitkamerasystem [time-of-flight camera system] (2017)
3D time of flight distance measurement with custom solid state image sensors in CMOS, CCD technology (2000)
Since we live in a three-dimensional world, an adequate description of our environment for many applications includes the relative position and motion of the different objects in a scene. Nature has satisfied this need for spatial perception by providing most animals with at least two eyes. This stereo vision ability is the basis on which the brain computes qualitative depth information for the observed scene. Another important factor in complex human depth perception is experience and memory. Although it is far more difficult, a human being can even recognize depth without stereo vision: we can qualitatively deduce the 3D scene from most photos, provided they contain known objects [COR]. The acquisition, storage, processing and comparison of such a huge amount of information requires enormous computational power, which nature fortunately provides. For a technical implementation, however, one should resort to simpler measurement principles. In addition, the qualitative distance estimates of such knowledge-based passive vision systems can be replaced by accurate range measurements.
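As a brief illustrative aside (generic time-of-flight relations, not formulas taken from the cited work): a pulsed ToF sensor recovers distance from the measured round-trip time $t$ of a light pulse as $d = \frac{c\,t}{2}$, while a continuous-wave sensor recovers it from the phase shift $\Delta\varphi$ of a modulation signal with frequency $f_{\mathrm{mod}}$ as $d = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}}$, which is unambiguous only up to a range of $\frac{c}{2 f_{\mathrm{mod}}}$.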
3D-Imaging (2009)
3D Time-of-Flight (ToF) (2012)
3D Time-of-Flight (ToF) (2015)
Entfernungsmesssystem [distance measuring system] (2019)
A Fourier scatterometry setup is evaluated to recover the key parameters of optical phase gratings. Based on these parameters, systematic errors in the printing process of two-photon polymerization (TPP) gray-scale lithography 3D printers can be compensated, namely tilt and curvature deviations. The proposed setup is significantly cheaper than a confocal microscope, which is usually used to determine calibration parameters for compensation of the TPP printing process. The grating parameters recovered this way are compared to those obtained with a confocal microscope. A clear correlation between confocal and scatterometric measurements is first shown for structures containing either tilt or curvature. The correlation is also shown for structures containing a mixture of tilt and curvature errors (squared Pearson coefficient $r^2$ = 0.92). This new compensation method is demonstrated on a TPP printer: a diffractive optical element (DOE) printed with correction parameters obtained from Fourier scatterometry shows a significant reduction in noise as compared to the uncompensated system. This verifies the successful reduction of tilt and curvature errors. Further improvements of the method are proposed, which may enable the measurements to become more precise than confocal measurements in the future, since scatterometry is not affected by the diffraction limit.
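As a minimal sketch of the correlation check quoted above (hypothetical arrays standing in for paired parameter estimates from the two instruments; not the authors' evaluation code):

```python
import numpy as np

# Hypothetical paired estimates of the same grating parameter (e.g. tilt),
# one value per printed test structure from each instrument.
confocal       = np.array([0.12, 0.35, 0.48, 0.61, 0.80, 0.95])
scatterometric = np.array([0.10, 0.33, 0.50, 0.58, 0.84, 0.97])

# Squared Pearson correlation coefficient between the two measurement series.
r = np.corrcoef(confocal, scatterometric)[0, 1]
print(f"r^2 = {r**2:.2f}")
```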
Biometric authentication plays a vital role in various everyday applications with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach has emerged that uses synthetic biometric data to address the resulting unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, and the quality metrics needed to prove that synthetic data are comparable to real data remain open research tasks. The generation of 3D synthetic face data using game engines’ ability to create varied, realistic virtual characters is explored as an alternative that, unlike other creation methods, maintains reproducibility and ground truth. While synthetic data offer several benefits, including improved resilience against data privacy concerns, the limitations and challenges associated with their use are addressed. Our work shows consistent behavior when semi-synthetic data, as digital representations of real identities, are compared with the corresponding real datasets. Despite slightly asymmetrical performance against a larger database of real samples, promising face authentication performance is shown, laying the foundation for further investigations with digital avatars and for the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and their impact on advancing biometrics research are discussed.
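The kind of comparison described above can be illustrated with a generic verification-error sketch (hypothetical similarity scores and threshold; not the evaluation pipeline of the cited work):

```python
import numpy as np

def error_rates(genuine, impostor, threshold):
    """False non-match rate and false match rate of a matcher whose
    similarity scores are higher for better matches."""
    fnmr = float(np.mean(np.asarray(genuine) < threshold))   # genuine pairs rejected
    fmr = float(np.mean(np.asarray(impostor) >= threshold))  # impostor pairs accepted
    return fnmr, fmr

rng = np.random.default_rng(0)
# Hypothetical face-matcher scores for real probes and for semi-synthetic
# avatars of the same identities, compared against the same gallery.
scores = {
    "real":           (rng.normal(0.80, 0.05, 1000), rng.normal(0.30, 0.05, 1000)),
    "semi-synthetic": (rng.normal(0.76, 0.06, 1000), rng.normal(0.32, 0.06, 1000)),
}
for name, (genuine, impostor) in scores.items():
    fnmr, fmr = error_rates(genuine, impostor, threshold=0.55)
    print(f"{name}: FNMR={fnmr:.3f}  FMR={fmr:.3f}")
```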
Trends of environmental awareness, combined with a focus on personal fitness and health, motivate many people to switch from cars and public transport to micromobility solutions, namely bicycles, electric bicycles, cargo bikes, or scooters. To accommodate urban planning for these changes, cities and communities need to know how many micromobility vehicles are on the road. In previous work, we proposed a concept for a compact, mobile, and energy-efficient system to classify and count micromobility vehicles utilizing uncooled long-wave infrared (LWIR) image sensors and a neuromorphic co-processor. In this work, we elaborate on this concept by focusing on the feature extraction process with the goal of increasing the classification accuracy. We demonstrate that even with a reduced feature list compared with our earlier concept, we manage to increase the detection precision to more than 90%. This is achieved by reducing the images of 160 × 120 pixels to only 12 × 18 pixels and combining them with contour moments into a feature vector of only 247 bytes.
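A rough sketch of the kind of feature extraction outlined above, assuming OpenCV and a single-channel LWIR frame (the hypothetical helper below downscales to 12 × 18 pixels and appends contour moments; it does not reproduce the published system's exact 247-byte feature layout):

```python
import cv2
import numpy as np

def extract_features(frame):
    """Compact feature vector from an 8-bit 160x120 LWIR frame: a 12x18
    thumbnail plus moments of the largest contour (illustrative only)."""
    # Downscale to 18x12 (width x height) and flatten: 216 pixel values.
    thumb = cv2.resize(frame, (18, 12), interpolation=cv2.INTER_AREA)
    features = thumb.flatten().astype(np.float32).tolist()

    # Segment the warmest region and append moments of its largest contour.
    _, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        features += [m["m00"], m["m10"], m["m01"], m["m20"], m["m11"], m["m02"]]
    return features

# Example with a synthetic 8-bit frame standing in for real LWIR data.
frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
print(len(extract_features(frame)))
```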
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
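As a deliberately generic illustration of the class of skin/artifact discrimination referred to above (a two-wavelength remission ratio combined with a distance gate; hypothetical thresholds, not the NIR-PAD signal processing chain):

```python
def looks_like_skin(remission_a, remission_b, distance_mm,
                    ratio_range=(0.6, 0.9), max_distance_mm=300.0):
    """Generic two-band NIR check (illustrative only): human skin shows a
    characteristic remission ratio between two NIR wavelengths, and the
    distance gate rejects readings taken outside the calibrated range.
    All thresholds are hypothetical placeholders."""
    if distance_mm > max_distance_mm or remission_b <= 0.0:
        return False
    ratio = remission_a / remission_b
    return ratio_range[0] <= ratio <= ratio_range[1]

# Hypothetical readings: (band-A remission, band-B remission, distance in mm)
print(looks_like_skin(0.42, 0.55, 120))  # skin-like ratio within range -> True
print(looks_like_skin(0.90, 0.35, 120))  # artifact-like ratio -> False
```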