006 Special Computer Methods
We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations on GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space at high detail. In particular, we focus on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
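The paper's streaming strategies are not spelled out here; as a minimal illustration of the double-buffering idea behind fast streaming updates, the Python sketch below overlaps loading of the next time step with consumption of the current one. The file layout, cell count, and the stand-in for the GPU upload are placeholders, not the authors' implementation.

```python
# Minimal sketch of double-buffered time-step streaming (hypothetical data
# layout). While the renderer consumes time step t, a background thread
# prefetches time step t+1 into a staging buffer, hiding I/O behind rendering.
import threading
import queue
import numpy as np

NUM_STEPS = 423                    # time steps in the Exajet series
CELLS = 1_000_000                  # stand-in for the AMR cell count

def load_step(t: int) -> np.ndarray:
    """Stand-in for reading one scalar field of time step t from disk."""
    rng = np.random.default_rng(t)
    return rng.random(CELLS, dtype=np.float32)

def prefetcher(out: queue.Queue, num_steps: int) -> None:
    """Producer: reads time steps ahead of the renderer (depth-2 pipeline)."""
    for t in range(num_steps):
        out.put((t, load_step(t)))  # blocks when the renderer falls behind
    out.put(None)                   # sentinel: no more steps

staging: queue.Queue = queue.Queue(maxsize=2)  # double buffer
threading.Thread(target=prefetcher, args=(staging, NUM_STEPS), daemon=True).start()

while (item := staging.get()) is not None:
    t, field = item
    # A real system would upload `field` to the GPU and draw the frame here;
    # we just simulate consumption.
    _ = field.mean()
```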
Research on the controversially discussed use of robotics in the care and support of people with dementia is still in its infancy, although the first systems are already on the market. Using exemplary, case-based excerpts, this contribution provides insights into the ongoing multidisciplinary project EmoRobot, which takes an explorative and interpretative approach to studying the use of robotics in emotion-oriented care for people with dementia. The focus is on the individual relevances of the people with dementia themselves.
Current research in augmented, virtual, and mixed reality (XR) reveals a lack of tool support for designing and, in particular, prototyping XR applications. While recent tools research is often motivated by studying the requirements of non-technical designers and end-user developers, the perspective of industry practitioners is less well understood. In an interview study with 17 practitioners from different industry sectors working on professional XR projects, we establish the design practices in industry, from early project stages to the final product. To better understand XR design challenges, we characterize the different methods and tools used for prototyping and describe the role and use of key prototypes in the different projects. We extract common elements of XR prototyping, elaborating on the tools and materials used for prototyping and establishing different views on the notion of fidelity. Finally, we highlight key issues for future XR tools research.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, however, in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
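As a toy illustration of the dynamic head-rotation compensation mentioned above, the hedged Python sketch below re-expresses a world-fixed wind azimuth in head coordinates (yaw only); the function name and the yaw-only simplification are ours, not part of the FaceHaptics implementation.

```python
# Hypothetical head-rotation compensation: keep the perceived (world-fixed)
# wind direction constant by re-aiming the nozzle relative to the face.
import math

def head_relative_wind(wind_azimuth_deg: float, head_yaw_deg: float) -> float:
    """Angle at which the nozzle must blow, relative to the face (yaw only)."""
    rel = (wind_azimuth_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel  # wrap to (-180, 180]

# Wind from 30 deg east of north; user turns head 45 deg to the right:
print(head_relative_wind(30.0, 45.0))  # -> -15.0 (wind now slightly to the left)
```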
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such awareness can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often associated with perception mechanisms tied to our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. Providing proximity and collision cues, however, can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can be limited in applicability and feasibility due to their cost and operating effort, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximity, and collision cues. Specifically, we assess the potential of foot-sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collisions. Results show that users could judge the different cues rather easily at a reasonably high granularity, which may be sufficient to support common navigation tasks in a 3DUI.
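To make the favored cue combination concrete, here is a hedged Python sketch that maps object distance to a continuous vibrotactor amplitude and impact force to a bass-shaker pulse gain; the ranges and the linear curves are illustrative assumptions, not the study's parameters.

```python
# Illustrative mapping from proximity to continuous vibrotactor amplitude
# and from collision force to a single bass-shaker pulse, following the cue
# combination the study favors. Ranges/curves are assumptions.
def proximity_amplitude(distance_m: float, max_range_m: float = 2.0) -> float:
    """Continuous vibrotactor drive in [0, 1]; stronger as the object nears."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m  # linear ramp-up inside range

def collision_pulse(impact_force: float, max_force: float = 10.0) -> float:
    """Single bass-shaker pulse gain in [0, 1], proportional to impact."""
    return min(max(impact_force / max_force, 0.0), 1.0)

for d in (2.5, 1.0, 0.25):
    print(f"distance {d} m -> vibrotactor {proximity_amplitude(d):.2f}")
print(f"impact 7.5 N -> bass shaker {collision_pulse(7.5):.2f}")
```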
Foreword to the Special Section on the Symposium on Virtual and Augmented Reality 2019 (SVR 2019) (2020)
Collaborative industrial robots are becoming increasingly cost-efficient for manufacturing companies. While these systems can be a great help to human workers, they also pose a serious health risk if the mandatory safety measures are implemented inadequately. Conventional safety devices such as fences or light curtains provide good protection, but such static safeguards are problematic in new, highly dynamic work scenarios.
In the research project BeyondSPAI, a functional prototype of a multi-sensor system for safeguarding such dynamic work scenarios was designed, implemented, and field-tested. The core of the system is a robust optical material classification that uses an intelligent InGaAs camera system to distinguish skin from other typical workpiece surfaces (e.g., wood, metals, or plastics). This unique capability is used to reliably detect human workers, so that a conventional robot can subsequently operate as a person-aware cobot.
The system is modular and can easily be extended with additional sensors of various kinds. It can be adapted to different brands of industrial robots and quickly integrated into existing robot systems. The four safety outputs provided by the system can be used, depending on the monitoring zone that has been breached, either to issue a warning, to slow the robot's motion to a safe speed, or to bring the robot to a safe stop. As soon as all zones are again identified as "clearly free of persons", the robot can accelerate again, resume its original motion, and continue working.
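As an illustration of the zone-dependent reaction logic described above, the Python sketch below maps the innermost breached monitoring zone to a reaction; the zone names and the enum are hypothetical, and a real system of course drives certified safety outputs rather than Python code.

```python
# Sketch of zone-based reaction logic: the innermost breached zone wins.
from enum import Enum

class Reaction(Enum):
    RUN = "full speed"
    WARN = "issue warning"
    SLOW = "safe reduced speed"
    STOP = "safe stop"

# Zones ordered from outermost to innermost (names are illustrative).
ZONE_ORDER = ["outer", "middle", "inner"]
ZONE_REACTIONS = {"outer": Reaction.WARN, "middle": Reaction.SLOW,
                  "inner": Reaction.STOP}

def react(breached: set[str]) -> Reaction:
    """Return the reaction for the innermost breached zone (RUN if none)."""
    for zone in reversed(ZONE_ORDER):
        if zone in breached:
            return ZONE_REACTIONS[zone]
    return Reaction.RUN  # all zones clearly free of persons -> resume

print(react({"outer"}))           # Reaction.WARN
print(react({"outer", "inner"}))  # Reaction.STOP
print(react(set()))               # Reaction.RUN
```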
Females are influenced more than males by visual cues during many spatial orientation tasks, yet rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity, and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART), which uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity, and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was smaller in females than in males (t=-18.48, p≤0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1 g), in this and other studies (e.g., Barnett-Cowan et al. 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight females allocate to vision in simulated low-gravity conditions, compared to when upright under normal gravity, may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case, and at which point the perceptual change happens, requires further research.
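For readers unfamiliar with OCHART analyses, the hedged Python sketch below evaluates a weighted vector-sum model of the perceptual upright of the kind commonly used with OCHART (e.g., Dyde et al. 2006), using the female cue weights reported above; the cue angles and the supine simplification are ours, not the study's analysis.

```python
# Weighted vector-sum model of the perceptual upright (PU): the PU points
# along the sum of the vision, gravity, and body unit vectors, each scaled
# by its weight. Angles and setup below are illustrative assumptions.
import math

def perceptual_upright(angles_deg: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Direction (deg) of the weighted vector sum of the three cues."""
    x = sum(w * math.cos(math.radians(angles_deg[c])) for c, w in weights.items())
    y = sum(w * math.sin(math.radians(angles_deg[c])) for c, w in weights.items())
    return math.degrees(math.atan2(y, x))

angles = {"vision": 112.0, "gravity": 0.0, "body": 0.0}      # tilted scene
weights = {"vision": 0.086, "gravity": 0.284, "body": 0.630}  # female means
print(f"PU deviates {perceptual_upright(angles, weights):.1f} deg toward the scene")
```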
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the task's inherent uncertainty and ambiguity. In this paper we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics behave oddly for this task, because multiple classes can be considered correct, which motivates future work. We believe our work will motivate other researchers to move from classical to Bayesian Neural Networks.
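Here is a minimal PyTorch sketch of one of the approximations named above, MC-Dropout: dropout stays active at test time, and the softmax outputs of T stochastic forward passes are averaged into a predictive distribution. The tiny CNN and the 7-class setup are placeholders, not the paper's architecture.

```python
# MC-Dropout: sample T stochastic forward passes with dropout enabled and
# average the softmax outputs into a predictive distribution.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(p=0.25),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(p=0.5),
                                  nn.Linear(32 * 12 * 12, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 30) -> torch.Tensor:
    model.train()  # keep dropout sampling active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0)  # predictive distribution over emotions

model = EmotionNet()
faces = torch.randn(4, 1, 48, 48)  # e.g., 48x48 grayscale face crops
print(mc_dropout_predict(model, faces))
```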
The increasing ubiquity of Artificial Intelligence (AI) has significant political consequences. The rapid proliferation of AI over the past decade has prompted legislators and regulators to attempt to contain its technological consequences. For Germany, relevant design requirements have been expressed by the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at the national level, by the German government's Data Ethics Commission (DEK) as well as the German Bundestag's Commission of Inquiry on Artificial Intelligence (EKKI).
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is a resource-poor language: few data sets are available, which limits the availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation of this task in Arabic, as well as the first application of multi-task pre-training in this context.
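As a hedged illustration of such a fine-tuning setup, the sketch below trains an AraBERT checkpoint for 3-way NLI with the Hugging Face transformers Trainer; the checkpoint name, toy data, and hyperparameters are assumptions, and the paper's NER pre-training and multi-task stages are not reproduced.

```python
# Fine-tuning an Arabic BERT for 3-way NLI (premise/hypothesis pairs).
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

checkpoint = "aubmindlab/bert-base-arabertv2"   # an AraBERT variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Toy pairs; 0 = entailment, 1 = neutral, 2 = contradiction.
pairs = Dataset.from_dict({
    "premise": ["الجو مشمس اليوم", "القطة نائمة"],
    "hypothesis": ["الجو ممطر اليوم", "الحيوان يستريح"],
    "label": [2, 0],
})

def encode(batch):
    # Sentence-pair encoding: premise and hypothesis are joined with [SEP].
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nli-arabert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=pairs.map(encode, batched=True),
)
trainer.train()
```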
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). Through model calibration, parameters are adapted to the subject's individual physical response to training load. Although simulation of the recorded training data usually gives useful results once the model is calibrated and all parameters are adjusted, the method has two major difficulties. First, a fitted value for basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model so that the effects of former training history can be analyzed separately (we call these terms preload), it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; its prediction error is around 54% of the error of the FF-Model without preloads.
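For reference, below is the classical impulse-response form of the FF-Model (after Banister), together with our reading of the preload extension sketched above; the paper's exact formulation may differ.

```latex
% Classical FF-Model: performance p(t) = basic level + fitness - fatigue,
% driven by training loads w(s) with gains k_i and time constants \tau_i.
p(t) = p_0
  + k_1 \sum_{s=0}^{t-1} w(s)\, e^{-(t-s)/\tau_1}   % fitness term
  - k_2 \sum_{s=0}^{t-1} w(s)\, e^{-(t-s)/\tau_2}   % fatigue term

% Preload extension (our reading): prior training history enters as two
% decaying initial terms, so p_0 can stay at a realistic basic level.
p(t) = p_0
  + \underbrace{F_0\, e^{-t/\tau_1} - G_0\, e^{-t/\tau_2}}_{\text{preload}}
  + k_1 \sum_{s=0}^{t-1} w(s)\, e^{-(t-s)/\tau_1}
  - k_2 \sum_{s=0}^{t-1} w(s)\, e^{-(t-s)/\tau_2}
```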