006 Spezielle Computerverfahren
Most VE frameworks try to support many different input and output devices. They do not concentrate much on rendering, because rendering is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers, including sound renderers, physics renderers, and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and Mac OS X.
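The small-kernel, pluggable-renderer idea can be illustrated with a minimal sketch. All class and method names below are invented for illustration; the abstract does not describe basho's actual API.

```python
# Hypothetical sketch of a small VE kernel with pluggable renderers,
# in the spirit of the basho design described above (names are invented).

class Renderer:
    """Base class: graphics, sound and physics renderers share one interface."""
    def render(self, scene, dt):
        raise NotImplementedError

class SoftwareGraphicsRenderer(Renderer):
    def render(self, scene, dt):
        # A real software renderer would rasterize here.
        return f"drew {len(scene['objects'])} objects"

class SoundRenderer(Renderer):
    def render(self, scene, dt):
        # A real sound renderer would mix and output audio here.
        return f"mixed {len(scene['sources'])} source(s)"

class Kernel:
    """Minimal kernel: owns the scene, delegates all work to renderers."""
    def __init__(self):
        self.renderers = []

    def attach(self, renderer):
        self.renderers.append(renderer)

    def tick(self, scene, dt):
        # One simulation step: every attached renderer processes the scene.
        return [r.render(scene, dt) for r in self.renderers]

kernel = Kernel()
kernel.attach(SoftwareGraphicsRenderer())
kernel.attach(SoundRenderer())
out = kernel.tick({"objects": [1, 2], "sources": [1]}, 0.016)
```

The point of the design is that the kernel never knows which concrete renderers exist; new renderer types can be attached without changing the kernel.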
In this paper, we introduce an optical sensor system that is integrated into an industrial push-button. When the button is pressed, the sensor classifies the material in contact with it into different material categories on the basis of the material's so-called "spectral signature". An approach for a safety sensor system at circular table saws based on the same principle was introduced previously at SIAS 2007. This contactless sensor reliably distinguishes between skin, textiles, leather, and various other kinds of materials. A typical application for this intelligent push-button is its use at potentially dangerous machines whose operating instructions either prohibit or require wearing gloves while working at the machine. An example of machines at which no gloves are allowed are pillar drilling machines, because of the risk of getting caught in the drill chuck and being drawn in by the machine, which in many cases causes very serious hand injuries. Depending on the application, the sensor system integrated into the push-button can be configured flexibly in software to prevent the operator from accidentally starting a machine with or without gloves, which can decrease the risk of severe accidents significantly. Two-hand controls in particular invite manipulation for easier handling. By equipping both push-buttons of a two-hand control with material classification capabilities, the user is forced to operate the controls with bare fingers; this limitation prevents defeating a two-hand control with a simple rod.
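The classification principle can be sketched as follows. The abstract discloses neither the classifier nor the band layout, so the four wavelength bands, the reference signatures, and the interlock logic below are invented for illustration.

```python
# Illustrative sketch only: nearest-centroid classification of a material's
# "spectral signature" (per-band remission values, normalized 0..1).
# The band count and all numbers are invented, not taken from the paper.
import math

# Reference spectral signatures: mean remission per band.
SIGNATURES = {
    "skin":    (0.55, 0.60, 0.35, 0.20),
    "textile": (0.70, 0.68, 0.66, 0.64),
    "leather": (0.30, 0.32, 0.33, 0.31),
}

def classify(remission):
    """Return the material whose reference signature is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGNATURES, key=lambda m: dist(SIGNATURES[m], remission))

def button_enables_machine(remission, gloves_required):
    """Glove-policy interlock: enable the machine only when the detected
    material matches the policy (bare skin vs. gloved hand)."""
    is_skin = classify(remission) == "skin"
    return is_skin != gloves_required
```

With `gloves_required=False` (e.g. a pillar drill), a bare finger enables the machine and a gloved one does not; with `gloves_required=True` the logic inverts.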
Research on the controversially discussed use of robotics in the care and support of people with dementia is still in its infancy, even though the first systems are already on the market. Drawing on exemplary, case-based excerpts, this contribution gives insights into the ongoing multidisciplinary project EmoRobot, which explores, in an explorative and interpretative manner, the use of robotics in emotion-oriented care for people with dementia. The focus is on the individual relevances of the people with dementia themselves.
Smaller, cheaper, and more efficient sensors, actuators, and radio protocols have made smart home products increasingly affordable for the private mass market. Manufacturers and providers therefore face the challenge of making complex cyber-physical systems manageable for everyone. However, empirical findings on the role of the smart home in everyday life are lacking. We present results from a living lab study in which 14 households were equipped with a commercially available smart home retrofit solution and accompanied empirically over nine months. Based on the analysis of interviews, observations, and co-design workshops during the phases of product selection, installation, configuration, and long-term use, we identify challenges and potentials of smart home systems. Our findings indicate that the smart home is still dominated by technical details. At the same time, users lack adequate steering and control mechanisms to retain decision-making authority in their own homes.
"Industrie 4.0" and related buzzwords such as "big data", "Internet of Things", or "cyber-physical systems" are currently taken up frequently in industry. The starting point is the networking of IT technologies and pervasive digitalization. Not only the business areas and business models of companies themselves are undergoing a correspondingly radical change; this change also extends to employees' working environments as well as to private and public spaces (Botthof, 2015; Hartmann, 2015).
Information and communication technology (ICT) in the areas of smart home and smart living is shaped by the increasing networking of the domestic domain with the digitalization of the power grid, alternative means of energy generation and storage, and new mobility concepts, and has become an indispensable part of both private and entrepreneurial activity.
UX professionals face the task of continuously expanding their skills and knowledge. One way to do so are communities of practice: groups of people with similar tasks and focus areas and a shared interest in solutions. They operate largely self-organized and serve exchange and mutual support, giving rise to a shared body of knowledge and a network among all those interested in UX. The establishment of a community of practice for UX professionals in a medium-sized company was accompanied and evaluated over 18 months. The results led to recommendations for action to reduce obstacles during the setup and to create added value for all participants.
The development of intelligent technologies to support everyday life and the home has accompanied our society since the era of the personal computer. With the advent of the Internet of Things, and favored by ever smaller and cheaper hardware, new potentials emerge that make the smart home more attractive than ever. A large share of the solutions currently available on the market addresses the needs of comfort, security, and efficient energy use. The promised intelligence (the smartness the term itself suggests) is, especially in retrofit solutions for private households, predominantly produced through the users' own interaction and corresponding rule-based configurations. This necessary kind of interaction and the effort associated with it, however, strongly affect the overall smart home user experience and not infrequently lead to frustration or even resignation in use.
Driven by ever smaller and cheaper sensors, and by the resulting measurability of ever larger parts of everyday life, the design of consumption visualizations and consumption feedback systems to support sustainable behavior has developed into a very active research field.
Noncooperative Game Theory (2016)
The detection of human skin in images is a very desirable feature for applications such as biometric face recognition, which is increasingly used for, e.g., automated border or access control. However, distinguishing real skin from other materials based on imagery captured in the visual spectrum alone, and in spite of varying skin types and lighting conditions, can be difficult and unreliable. Therefore, spoofing attacks with facial disguises or masks are still a serious problem for state-of-the-art face recognition algorithms. This dissertation presents a novel approach for reliable skin detection based on spectral remission properties in the short-wave infrared (SWIR) spectrum and proposes a cross-modal method that enhances existing solutions for face verification to ensure the authenticity of a face even in the presence of partial disguises or masks. Furthermore, it presents a reference design and the necessary building blocks for an active multispectral camera system that implements this approach, as well as an in-depth evaluation. The system acquires four-band multispectral images within T = 50 ms. Using a machine-learning-based classifier, it achieves unprecedented skin detection accuracy, even in the presence of skin-like materials used for spoofing attacks. Paired with a commercial face recognition software, the system successfully rejected all evaluated attempts to counterfeit a foreign face.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Females are influenced more than males by visual cues during many spatial orientation tasks, but females rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity, and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART). OCHART uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity, and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was smaller in females than in males (t=-18.48, p≤0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1 g) in this and other studies (e.g., Barnett-Cowan et al., 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight females allocate to vision in simulated low-gravity conditions, compared to when upright under normal gravity, may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case, and at which point the perceptual change happens, requires further research.
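A common way to interpret OCHART data is a weighted vector-sum model: the perceptual upright is taken as the direction of the weighted sum of the vision, gravity, and body cue vectors. The sketch below uses the female weights quoted in the abstract (with the vision weight filled in as the remainder); the cue directions are illustrative, not the experiment's.

```python
# Hedged sketch of the weighted vector-sum model of the perceptual upright.
# Weights: body 63.0%, gravity 28.4% (female means from the abstract);
# vision is assumed to take the remaining 8.6%. Directions are invented.
import math

def perceptual_upright(cues):
    """cues: list of (weight, angle_deg). Returns the PU angle in degrees,
    i.e. the direction of the weighted vector sum of all cues."""
    x = sum(w * math.cos(math.radians(a)) for w, a in cues)
    y = sum(w * math.sin(math.radians(a)) for w, a in cues)
    return math.degrees(math.atan2(y, x))

# Body and gravity aligned at 0 deg, visual scene tilted to 112 deg:
cues = [(0.086, 112.0),   # vision (assumed remainder of the weights)
        (0.284, 0.0),     # gravity
        (0.630, 0.0)]     # body
pu = perceptual_upright(cues)
```

Because the vision weight is small, the tilted scene pulls the PU only a few degrees away from the body/gravity axis, which is the qualitative effect the study measures.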
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency with respect to ball motion of 30 ± 8 ms.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is based on off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions (such as ball surface texture, lighting conditions, or ball size) as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
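The geometry behind such a tracker can be sketched as a flow-to-rotation conversion: optical flow at the ball surface, in pixels per frame, is scaled by the camera calibration and the ball radius to obtain a rotation rate. The calibration numbers below are invented; the paper's actual geometry and parameters may differ.

```python
# Assumed flow-to-rotation conversion for camera-based ball tracking.
# flow_px: optical flow at the ball surface in pixels per frame;
# px_per_mm: spatial calibration of the camera; fps: frame rate.
import math

def ball_rotation_deg_per_s(flow_px, px_per_mm, ball_radius_mm, fps):
    """Convert per-frame surface displacement into a rotation rate (deg/s)."""
    surface_mm_per_frame = flow_px / px_per_mm      # arc length moved per frame
    rad_per_frame = surface_mm_per_frame / ball_radius_mm  # angle = arc / radius
    return math.degrees(rad_per_frame) * fps

# Example: 2 px/frame of flow at 500 Hz, 10 px/mm calibration, 50 mm ball.
rate = ball_rotation_deg_per_s(2.0, 10.0, 50.0, 500.0)
```

At the 500 Hz tracking rate quoted in the abstract, even sub-pixel flow per frame resolves slow rotations, which is why a high frame rate matters more than large per-frame displacements.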
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches enable the creation of digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field of view, due to either their size or their form as head-mounted devices. Current research prototypes range from stereo and multi-view systems, over head-mounted devices with adaptable lenses, up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality but also to reduce workload alike, a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems including benchmarks, comparative examination with image metrics as well as user studies and experiments demonstrated that the methods introduced are visually superior or on the same qualitative level as ground truth, whilst having a significantly reduced computational complexity.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data usually shows useful results once the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of former training history can be analyzed separately (we call those terms preload), it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; the prediction error of the preload-extended FF-Model is around 54% of the error of the FF-Model without preloads.
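The FF-Model and the preload idea can be sketched as follows. This uses the classical discrete Banister-style formulation with two extra decaying terms standing in for the unrecorded training history before day 0; the formulation details and all parameter values are illustrative, not taken from the paper.

```python
# Hedged sketch of a Banister-style Fitness-Fatigue Model with preloads.
# p(t) = p0 + k1 * fitness(t) - k2 * fatigue(t), where both terms are
# exponentially weighted sums of past daily training loads.
import math

def ff_model(loads, p0, k1, tau1, k2, tau2, fitness_pre=0.0, fatigue_pre=0.0):
    """Predicted performance for each day t = 1..len(loads).

    fitness_pre / fatigue_pre are the preload terms: accumulated effects
    of training before day 0, decaying with the same time constants."""
    perf = []
    for t in range(1, len(loads) + 1):
        fitness = fitness_pre * math.exp(-t / tau1) + sum(
            loads[d - 1] * math.exp(-(t - d) / tau1) for d in range(1, t))
        fatigue = fatigue_pre * math.exp(-t / tau2) + sum(
            loads[d - 1] * math.exp(-(t - d) / tau2) for d in range(1, t))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# Single training impulse on day 1 with illustrative parameters: fatigue
# decays faster than fitness, so performance dips, then supercompensates.
perf = ff_model([1.0] + [0.0] * 59, p0=100.0, k1=1.0, tau1=45.0,
                k2=2.0, tau2=15.0)
```

Setting `fitness_pre`/`fatigue_pre` nonzero raises or lowers the early predictions without touching `p0`, which is the mechanism by which the preload extension separates basic performance from training history.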