006 Special computer methods
Research on the controversially discussed use of robotics in the care and support of people with dementia is still in its early stages, even though the first systems are already on the market. Drawing on exemplary, case-based excerpts, this contribution offers insights into the ongoing multidisciplinary project EmoRobot, which explores, in an explorative and interpretative manner, the use of robotics in emotion-oriented care and support of people with dementia. The focus lies on the individual relevances of the people with dementia themselves.
UX professionals face the task of continuously expanding their skills and knowledge. One way to do so is through communities of practice: groups of people with similar tasks and focus areas and a shared interest in solutions. They operate largely in a self-organized way and serve mutual exchange and support. This builds up a shared body of knowledge as well as a network among everyone interested in UX. The establishment of a community of practice for UX professionals in a medium-sized company was accompanied and evaluated over 18 months. The results led to recommendations for action to reduce obstacles during the build-up and to create added value for all participants.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
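The dynamic wind-direction compensation mentioned in the abstract can be sketched as follows: a world-fixed wind source is mapped into the head's local frame as the head rotates. This is a minimal, hypothetical helper, not the published FaceHaptics controller; function and parameter names are assumptions.

```python
def head_local_wind_angle(wind_world_deg, head_yaw_deg):
    """Map a world-fixed wind direction into the head's local frame,
    so the actuator can keep the perceived wind source stable while
    the user rotates their head (illustrative sketch only)."""
    local = (wind_world_deg - head_yaw_deg) % 360.0
    # Normalize to (-180, 180] so the arm takes the shortest arc.
    if local > 180.0:
        local -= 360.0
    return local
```

With the head turned 30 degrees toward a wind source at 90 degrees, the cue is rendered at 60 degrees in head-local coordinates.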
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
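One widely used way to exploit the perceptual limitations mentioned above is foveated rendering: sample density follows the acuity falloff of the human visual system, which declines roughly hyperbolically with eccentricity. The sketch below uses that standard falloff model; the half-acuity eccentricity of 2.3 degrees is one commonly cited fit and is an assumption here, not a value taken from this report.

```python
def relative_sampling_rate(eccentricity_deg, e2=2.3):
    """Relative sample density justified by visual acuity falloff.
    Acuity is modeled as 1 / (1 + e / e2), where e2 is the
    eccentricity (in degrees) at which acuity halves (assumed 2.3)."""
    return 1.0 / (1.0 + eccentricity_deg / e2)
```

At the gaze point the full rate is spent; 30 degrees into the periphery, well under a tenth of the samples suffice, which is where the "compute only the necessary pixels" savings come from.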
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field of view, due either to their size or to their form as head-mounted devices. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time-frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality but also to reduce workload, a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrates that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
In this paper, we provide a participatory design study of a mobile health platform for older adults that provides an integrative perspective on health data collected from different devices and apps. We illustrate the diversity and complexity of older adults' perspectives in the context of health and technology use, the challenges that follow for the design of mobile health platforms that support active and healthy ageing (AHA), and our approach to addressing these challenges through a participatory design (PD) process. Interviews were conducted with older adults aged 65+ in a two-month study with the goal of understanding perspectives on health and technologies for AHA support. We identified challenges and derived design ideas for a mobile health platform called "My-AHA". For researchers in this field, the structured documentation of our procedures and results, as well as the implications derived, provides valuable insights for the design of mobile health platforms for older adults.
The detection of human skin in images is a very desirable feature for applications such as biometric face recognition, which is becoming more frequently used for, e.g., automated border or access control. However, distinguishing real skin from other materials based on imagery captured in the visual spectrum alone, and in spite of varying skin types and lighting conditions, can be difficult and unreliable. Therefore, spoofing attacks with facial disguises or masks are still a serious problem for state-of-the-art face recognition algorithms. This dissertation presents a novel approach for reliable skin detection based on spectral remission properties in the short-wave infrared (SWIR) spectrum and proposes a cross-modal method that enhances existing solutions for face verification to ensure the authenticity of a face even in the presence of partial disguises or masks. Furthermore, it presents a reference design and the necessary building blocks for an active multispectral camera system that implements this approach, as well as an in-depth evaluation. The system acquires four-band multispectral images within T = 50 ms. Using a machine-learning-based classifier, it achieves unprecedented skin detection accuracy, even in the presence of skin-like materials used for spoofing attacks. Paired with a commercial face recognition software, the system successfully rejected all evaluated attempts to counterfeit a foreign face.
In this paper, we introduce an optical sensor system that is integrated into an industrial push-button. When the button is pressed, the sensor classifies the type of material in contact with it into different material categories on the basis of the material's so-called "spectral signature". An approach for a safety sensor system at circular table saws based on the same principle was previously introduced at SIAS 2007. This contactless sensor reliably distinguishes between skin, textiles, leather and various other kinds of materials. A typical application for this intelligent push-button is its use at potentially dangerous machines whose operating instructions either prohibit or require wearing gloves while working at the machine. One example of machines at which no gloves are allowed is the pillar drilling machine, because of the risk of getting caught in the drill chuck and being drawn in by the machine; in many cases this causes very serious hand injuries. Depending on the application's needs, the sensor system integrated into the push-button can be flexibly configured by software to prevent the operator from accidentally starting a machine with or without gloves, which can significantly decrease the risk of severe accidents. Two-hand controls in particular invite manipulation for easier handling. By equipping both push-buttons of a two-hand control with material classification capabilities, the user is forced to operate the controls with bare fingers. That limitation prevents the manipulation of a two-hand control by a simple rodding device.
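The software-configurable interlock described above amounts to a small policy check on the classifier's output. This is an illustrative sketch; the material labels and policy names are assumptions, not the sensor's real interface.

```python
def may_start_machine(detected_material, policy):
    """Gate machine start on the material touching the push-button.
    'gloves_forbidden' (e.g. pillar drills) requires bare skin;
    'gloves_required' requires anything but bare skin.
    Labels and policy names are illustrative only."""
    if policy == "gloves_forbidden":
        return detected_material == "skin"
    if policy == "gloves_required":
        return detected_material != "skin"
    raise ValueError(f"unknown policy: {policy}")
```

For a two-hand control, the same check would simply be applied to both buttons, so a rodding device (classified as neither skin nor an approved glove material under the right policy) cannot defeat it.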
Information and communication technology (ICT) in the fields of smart home and smart living is shaped by the increasing interconnection of the domestic domain with the digitalization of the power grid, alternative means of energy generation and storage, and new mobility concepts, and it has become an indispensable part of both private and business activity.
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper, we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics exhibit strange behaviors for this task, due to the multiple classes that can be considered correct, which motivates future work. We believe our work will motivate other researchers to move away from classical and toward Bayesian Neural Networks.
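The MC-Dropout approximation mentioned above keeps dropout active at test time and averages the softmax outputs of several stochastic forward passes; the spread across passes carries the uncertainty information. The sketch below demonstrates this on a single illustrative linear layer in NumPy, which is an assumption for brevity, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def mc_dropout_predict(W, x, n_samples=100, p_drop=0.5):
    """MC-Dropout prediction: run several forward passes with dropout
    still active and average the softmax outputs. Here the 'network'
    is one linear layer W for illustration."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= p_drop            # Bernoulli keep-mask
        x_d = np.where(mask, x, 0.0) / (1.0 - p_drop)   # inverted-dropout scaling
        probs.append(softmax(W @ x_d))
    return np.mean(probs, axis=0)
```

Averaging over passes tends to spread probability mass across plausible emotion classes instead of committing to a single overconfident label, which is the behavior the paper argues matches human expectations better.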
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
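Head-based audio-tactile guidance of the kind described above can be reduced to a mapping from the target's head-relative azimuth to cue parameters, e.g. a stereo pan and a cue intensity that grows as the target moves off-axis. The mapping below is purely illustrative and is not the exact scheme evaluated in the thesis.

```python
import math

def guidance_cues(target_azimuth_deg):
    """Map the head-relative azimuth of an AR target to a stereo pan
    in [-1, 1] and a tactile intensity in [0, 1] (illustrative only)."""
    a = max(-180.0, min(180.0, target_azimuth_deg))
    pan = math.sin(math.radians(a))   # left/right localization cue
    intensity = abs(a) / 180.0        # stronger cue when far off-axis
    return pan, intensity
```

As the user turns toward the target both values fall toward zero, giving a closed feedback loop that does not depend on the target being inside the restricted visual field of view.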