Energy Profiles of the Ring Puckering of Cyclopentane, Methylcyclopentane and Ethylcyclopentane (2019)
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos, and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
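The abstract does not spell out how the convolutional modules achieve their parameter reduction; one common technique for this (an assumption here, not confirmed by the abstract) is replacing standard convolutions with depthwise-separable ones. A minimal sketch of the parameter count comparison for a single hypothetical layer:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernels, 64 -> 128 channels.
standard = conv_params(3, 64, 128)             # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 weights
print(standard, separable)  # roughly 8.4x fewer parameters for the separable layer
```

The layer sizes above are illustrative only; the actual architecture in the cited repository may use different module designs and dimensions.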
The perception of perceptual upright (PU) varies between contexts and across individuals, depending on how various gravity-related and body-based cues are weighted. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity level of approximately 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically considered in order to quantify more precisely the gravity threshold above which a perceptible influence is observable, and thus to confirm the hypothesis stated above. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing in normal gravity for eliciting the sense of perceptual upright. The data obtained further suggest that a gravitational field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g present on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
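The threshold figures quoted above can be cross-checked with a short calculation; the sketch below simply converts the stated values between units of g and m/s² (standard gravity taken as 9.81 m/s²) and confirms that lunar gravity exceeds the vestibular threshold:

```python
G = 9.81          # standard gravity, m/s^2

threshold_g = 0.15  # minimum effective gravity for perceptual upright, in g
lunar = 1.6         # lunar surface gravity as stated in the abstract, m/s^2
vestibular = 0.1    # vestibular detection threshold for linear acceleration, m/s^2

print(threshold_g * G)   # 0.15 g expressed in m/s^2: ~1.47
print(lunar / G)         # lunar gravity expressed in g: ~0.163, i.e. roughly 0.17 g
print(lunar > vestibular)  # True: lunar gravity lies well above the vestibular threshold
```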
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which comprises many low-power devices, often battery-powered. These devices mainly communicate with the manufacturer's back-end and deliver personal data and secrets such as passwords.
Computer graphics research strives to synthesize images of such high visual realism that they are indistinguishable from real visual experiences. While modern image synthesis approaches enable the creation of digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. Dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field of view, due either to their size or to their head-mounted form. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed by higher-level structures in the brain. Knowledge of these processes helps in designing novel rendering approaches that allow images to be created at higher quality and within a reduced time frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload - a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The measurement of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrates that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, while having a significantly reduced computational complexity.
Modern Monte Carlo-based rendering systems still suffer from the computational complexity involved in generating noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground-truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
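A linkless (pointerless) octree is commonly addressed by hashing locational codes of its cells rather than following child pointers, which is one way to obtain the constant-time lookup the abstract mentions. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the class and function names are invented for this example:

```python
def locational_code(x, y, z, level):
    # Interleave the bits of the integer voxel coordinates (Morton order);
    # pairing the code with the level keeps codes from different octree
    # depths from colliding.
    code = 0
    for i in range(level):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return (level, code)

class IrradianceCache:
    """Hash map keyed by octree locational codes: O(1) average store/lookup,
    with memory spent only on occupied cells (no pointer overhead)."""

    def __init__(self):
        self._cells = {}

    def store(self, x, y, z, level, irradiance):
        self._cells[locational_code(x, y, z, level)] = irradiance

    def lookup(self, x, y, z, level):
        # Returns None on a cache miss.
        return self._cells.get(locational_code(x, y, z, level))

cache = IrradianceCache()
cache.store(5, 3, 7, 3, (0.8, 0.6, 0.4))  # cache diffuse irradiance at a hitpoint voxel
print(cache.lookup(5, 3, 7, 3))           # (0.8, 0.6, 0.4)
```

In a real renderer the cached values would be accumulated and filtered across samples, and view-dependent (non-diffuse) illumination would be handled separately, as the abstract describes.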