006 Special computer methods
H-BRS Bibliography
- yes (63)
Departments, institutes and facilities
- Fachbereich Informatik (43)
- Institute of Visual Computing (IVC) (17)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Verbraucherinformatik (IVI) (11)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (10)
- Institut für Sicherheitsforschung (ISF) (7)
- Fachbereich Ingenieurwissenschaften und Kommunikation (4)
- Graduierteninstitut (3)
- Institut für KI und Autonome Systeme (A2S) (2)
- Zentrum für Ethik und Verantwortung (ZEV) (2)
Document Type
- Conference Object (31)
- Article (10)
- Report (5)
- Part of a Book (4)
- Contribution to a Periodical (4)
- Preprint (4)
- Doctoral Thesis (3)
- Book (monograph, edited volume) (1)
- Research Data (1)
Has Fulltext
- no (63)
Keywords
- Augmented Reality (3)
- Machine Learning (3)
- Robotics (2)
- Virtual Reality (2)
- guidance (2)
- 3D user interface (1)
- 450 MHz (1)
- Agile software development (1)
- Algorithmik (1)
- Altenhilfe (1)
- Aneignungsstudie (1)
- Applications in Energy Transport (1)
- Auditory Cueing (1)
- Automatic Differentiation (1)
- Bayesian Deep Learning (1)
- Behaviour-Driven Development (1)
- Bioinformatics (1)
- Blasendiagramm (1)
- Business Process Intelligence (1)
- Case study (1)
- Classifiers (1)
- Collaborating industrial robots (1)
- Community of Practice (1)
- Complex Systems Modeling and Simulation (1)
- Compliant fingers (1)
- Computational fluid dynamics (1)
- Computergrafik (1)
- Concurrent repeated failure prognosis (1)
- Conformation (1)
- Crossmedia (1)
- Crystal structure (1)
- Curriculum (1)
- Cybersickness (1)
- Data Fusion (1)
- Datenanalyse (1)
- Dementia (1)
- Demenz (1)
- Design (1)
- Diagnostic bond graph-based online fault diagnosis (1)
- Disco (1)
- Distance Perception (1)
- Educational Data Mining (1)
- Educational Process Mining (1)
- Embedded system (1)
- Emotion (1)
- Exergame (1)
- Experten (1)
- Facial Emotion Recognition (1)
- Fallbeschreibung (1)
- Flow control (1)
- Fluency (1)
- Forests (1)
- Functional safety (1)
- Fuzzy Mining (1)
- Games and Simulations for Learning (1)
- Geschäftsprozess (1)
- HCI (1)
- Head-mounted Display (1)
- Higher education (1)
- Human factors (1)
- Human-Food-Interaction (1)
- Hyperspectral image (1)
- ICT (1)
- IEC 104 (1)
- IEC 61850 (1)
- Increasing fault magnitude (1)
- Inductive Logic Programming (1)
- Inductive Visual Mining (1)
- Information Security (1)
- Intermittent faults (1)
- Kinect (1)
- Knowledge Graphs (1)
- Kollektiventscheidung (1)
- Komplexitätstheorie (1)
- LTE-M (1)
- Language learning (1)
- Langzeitbehandlung (1)
- Lattice Boltzmann Method (1)
- Ligands (1)
- Living Lab (1)
- Locomotion (1)
- MQTT (1)
- Mathematical methods (1)
- Microgravity (1)
- Model-driven engineering (1)
- Molecular structure (1)
- Motion Sickness (1)
- NIR-point sensor (1)
- Natural Language Processing (1)
- Negotiation of Taste (1)
- Neural representations (1)
- Non-linear systems (1)
- Object-Based Image Analysis (OBIA) (1)
- Out-of-view Objects (1)
- Pflegepersonal (1)
- ProM (1)
- Process Mining (1)
- Pronunciation (1)
- Pytorch (1)
- Qualitative study (1)
- Raman microscopy (1)
- RapidMiner (1)
- Ray tracing (1)
- Reasoning (1)
- Remaining Useful Life (RUL) estimates (1)
- Requirements (1)
- Review (1)
- Robust grasping (1)
- Serious Games (1)
- Skin detection (1)
- Slippage detection (1)
- Smart Grid (1)
- Smart Home (1)
- Smart InGaAs camera-system (1)
- Social-Choice-Theorie (1)
- Spieltheorie (1)
- Studenten (1)
- Studienverlauf (1)
- Survey (1)
- Taste (1)
- Technologie (1)
- Traffic Simulations (1)
- Transformers (1)
- Travel Techniques (1)
- Tree Stumps (1)
- UAV (1)
- Ultrasonic array (1)
- Uncertainty Quantification (1)
- Underwater (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Unterstützung (1)
- User Experience (1)
- User centered design (1)
- User feedback (1)
- User-Centered Design (1)
- Videogame (1)
- Virtual Agents (1)
- Virtuelle Realität (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visuelle Wahrnehmung (1)
- Vulnerable Groups (1)
- Wissensaustausch (1)
- aerodynamics (1)
- audio-tactile feedback (1)
- authoring tools (1)
- brightfield microscopy (1)
- co-design (1)
- component analyses (1)
- depth perception (1)
- dynamic vector fields (1)
- flight zone (1)
- geofence (1)
- haptics (1)
- image fusion (1)
- multisensory (1)
- neural networks (1)
- neutral buoyancy (1)
- optic flow (1)
- pansharpening (1)
- path tracing (1)
- prototyping (1)
- real-time (1)
- remote sensing (1)
- self-motion perception (1)
- sensory perception (1)
- vection (1)
- virtual reality (1)
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
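The abstract above describes pairing unstructured text with KG triples in a single input sequence. A minimal sketch of that pairing idea, purely illustrative: the marker tokens and the `build_joint_sequence` helper are invented here and do not reflect the actual STonKGs tokenization.

```python
# Illustrative sketch only: build a joint input sequence from a text
# sentence and a KG triple, mirroring the text-triple pairing the
# abstract describes. Special tokens are invented; the real STonKGs
# tokenization differs.

def build_joint_sequence(text_tokens, triple):
    """Concatenate text tokens and a linearized (head, relation, tail)
    triple into one sequence, separated by marker tokens."""
    head, relation, tail = triple
    return (["[CLS]"] + list(text_tokens) + ["[SEP]"]
            + [head, relation, tail] + ["[SEP]"])

seq = build_joint_sequence(
    ["aspirin", "inhibits", "inflammation"],
    ("aspirin", "inhibits", "COX-2"),
)
print(seq)
```

A multimodal Transformer would then attend jointly over both halves of such a sequence; the pre-trained models themselves are available at the URLs given in the abstract.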
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two co-design workshops. Based on the participants' results from the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalization features leads to reflection on the individual taste experience. This self-reflection enables consumers to develop their taste competencies and thus strengthens their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a purely sensory phenomenon.
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
"Industrie 4.0" and related buzzwords such as "Big Data", the "Internet of Things", and "cyber-physical systems" are currently widely discussed in industry. The starting point is the interconnection of IT technologies and pervasive digitalization. Not only are the business areas and business models of companies themselves undergoing a correspondingly radical transformation; this change also affects employees' working environments as well as private and public spaces (Botthof, 2015; Hartmann, 2015).
Vection underwater
(2022)
Driven by sensors that are becoming ever smaller and cheaper, and by the resulting measurability of ever larger parts of everyday life, the design of consumption visualizations and consumption feedback systems to support sustainable behavior has developed into a very active field of research.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is considered a resource-poor language, meaning that few data sets are available, which in turn limits the availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Subsequently, transformer-based machine learning models are trained and evaluated. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation for this task in Arabic, as well as the first application of multi-task pre-training in this context.
This work addresses the problem of finding an optimal flight zone for a side-by-side tracking and following Unmanned Aerial Vehicle (UAV) that must adhere to space-restricting factors imposed by a dynamic Vector Field Extraction (VFE) algorithm. The VFE algorithm demands a field of view roughly perpendicular to the tracked vehicle, thereby enforcing the space-restricting factors of distance, angle, and altitude. The objective of the UAV is to track and follow a lightweight ground vehicle side-by-side while acquiring high-quality video of tufts attached to the side of the tracked vehicle. The recorded video is supplied to the VFE algorithm, which produces the positions and deformations of the tufts over time as they interact with the surrounding air, resulting in an airflow model of the tracked vehicle. The present limitations of wind tunnel tests and computational fluid dynamics simulation motivate the use of a UAV for real-world evaluation of the aerodynamic properties of the vehicle's exterior. The novelty of the proposed approach lies in defining the specific flight-zone restricting factors while adhering to the VFE algorithm; as a result, we were able to formalize a locally static and globally dynamic geofence that is attached to the tracked vehicle and encloses the UAV.
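The three space-restricting factors named in the abstract can be sketched as a simple geometric membership test. This is a toy illustration only, not the paper's geofence formalization; all limit values and the vehicle-frame convention below are invented.

```python
import math

# Toy sketch (not the paper's algorithm): check whether a UAV position
# satisfies the three space-restricting factors from the abstract --
# distance, viewing angle, and altitude -- relative to the tracked
# vehicle. All limits are invented example values; the vehicle's
# filmed side is assumed to face +y.

def inside_flight_zone(uav, vehicle,
                       d_min=3.0, d_max=8.0,
                       max_angle_deg=20.0,
                       alt_min=1.0, alt_max=4.0):
    """uav and vehicle are (x, y, z) positions in a shared frame."""
    dx = uav[0] - vehicle[0]
    dy = uav[1] - vehicle[1]
    dist = math.hypot(dx, dy)                       # horizontal stand-off
    # deviation of the camera axis from the perpendicular to the side
    angle = abs(math.degrees(math.atan2(dx, dy)))
    alt = uav[2] - vehicle[2]
    return (d_min <= dist <= d_max
            and angle <= max_angle_deg
            and alt_min <= alt <= alt_max)

print(inside_flight_zone((0.5, 5.0, 2.0), (0.0, 0.0, 0.0)))  # True
print(inside_flight_zone((6.0, 5.0, 2.0), (0.0, 0.0, 0.0)))  # False: view no longer perpendicular
```

Because the constraints are expressed relative to the vehicle's pose, the resulting zone is static in the vehicle frame but moves through the world as the vehicle drives, matching the "locally static, globally dynamic" geofence described above.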
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome, but only individually; addressing them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real time with plenty of room for future optimizations and extensions.
Noncooperative Game Theory
(2016)
Collaborative industrial robots are becoming increasingly cost-efficient for manufacturing companies. While these systems can be a great help to human workers, they simultaneously pose a serious health risk if the mandatory safety measures are implemented only inadequately. Conventional safety devices such as fences or light curtains offer good protection, but such static safeguards are problematic in new, highly dynamic work scenarios.
In the BeyondSPAI research project, a functional prototype of a multisensor system for safeguarding such dynamic work scenarios was designed, implemented, and field-tested. The core of the system is a robust optical material classification that uses a smart InGaAs camera system to distinguish skin from other typical workpiece surfaces (e.g., wood, metals, or plastics). This unique capability is used to reliably detect human workers, so that a conventional robot can subsequently operate as a person-aware cobot.
The system is modular and can easily be extended with additional sensors of various kinds. It can be adapted to different brands of industrial robots and quickly integrated into existing robot systems. The four safety outputs provided by the system can be used, depending on which monitoring zone has been penetrated, either to issue a warning, to slow the robot's motion to a safe speed, or to bring the robot to a safe stop. As soon as all zones are again identified as "clearly free of persons", the robot can accelerate again, resume its original motion, and continue working.
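The zone-dependent reaction described above (warn, slow down, stop, resume) amounts to picking the most restrictive action among the penetrated zones. A toy sketch of that logic; the zone names, their ordering, and the action strings are invented for illustration and are not the project's actual safety-output interface.

```python
# Illustrative sketch of zone-based reaction logic as described in the
# abstract. Zone names and actions are invented; the real system
# exposes four dedicated safety outputs.

ZONE_ACTIONS = {
    "warning_zone": "issue warning",
    "slow_zone": "reduce to safe speed",
    "stop_zone": "safe stop",
}
# Ordered from outermost to innermost (i.e., least to most restrictive).
ZONE_ORDER = ["warning_zone", "slow_zone", "stop_zone"]

def robot_action(penetrated_zones):
    """Return the most restrictive action for the set of zones a person
    has entered; resume normal operation once every zone is clear."""
    for zone in reversed(ZONE_ORDER):          # check innermost first
        if zone in penetrated_zones:
            return ZONE_ACTIONS[zone]
    return "resume normal operation"

print(robot_action({"warning_zone", "slow_zone"}))  # reduce to safe speed
print(robot_action(set()))                          # resume normal operation
```

Checking the innermost zone first guarantees that the robot always takes the safest applicable action, regardless of how many zones are penetrated at once.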