H-BRS Bibliography (80)
Departments, institutes and facilities
- Fachbereich Informatik (80)
Document Type
- Conference Object (38)
- Article (19)
- Preprint (7)
- Doctoral Thesis (4)
- Part of a Book (3)
- Research Data (3)
- Report (3)
- Book (monograph, edited volume) (1)
- Conference Proceedings (1)
- Contribution to a Periodical (1)
Year of publication
- 2021 (80)
Keywords
- Usable Security (4)
- Big Data Analysis (3)
- Machine Learning (3)
- AML (2)
- Augmented Reality (2)
- Authentication features (2)
- Cognitive robot control (2)
- Explainable robotics (2)
- Generative Models (2)
- HSP90 (2)
- Human-Computer Interaction (2)
- Learning from experience (2)
- LoRa (2)
- LoRaWAN (2)
- Low-Power Wide Area Network (LP-WAN) (2)
- Measurement (2)
- Path Loss (2)
- Quality diversity (2)
- Risk-based Authentication (RBA) (2)
- Robotics (2)
- Urban (2)
- 3D navigation (1)
- AD (1)
- AES (1)
- API Documentation (1)
- API Gebrauchstauglichkeit (1)
- API usability (1)
- Adaptive Control (1)
- Artificial Intelligence (1)
- Assistive robots (1)
- Auditory Cueing (1)
- BPMS (1)
- Benchmarking (1)
- Bioinformatics (1)
- Block cipher (1)
- Bond Graph Modelling (1)
- Branch and cut (1)
- CC (1)
- CEHL (1)
- Cache line fingerprinting (1)
- Classifiers (1)
- Clustering (1)
- Co-creative processes (1)
- Cognitive informatics (1)
- Cognitive robotics (1)
- Compliant fingers (1)
- Computational creativity (1)
- Computing methodologies (1)
- Content Security Policies (1)
- Continual robot learning (1)
- Correlative Microscopy (1)
- Cortex-M3 (1)
- Creative Commons (1)
- DPA (1)
- Datenbanksysteme (1)
- Developer Centered Security (1)
- Differential analysis (1)
- Digitale Lehre (1)
- Dimensionality reduction (1)
- Divergent optimization (1)
- Drug (1)
- E-Health (1)
- ELM (1)
- Earth Observation (1)
- Employee data protection (1)
- Evolutionary optimization (1)
- Explainable Machine Learning (1)
- Failure Prognosis (1)
- Fault Detection & Diagnosis (1)
- Fault Diagnosis (1)
- Feature extraction (1)
- Fluency (1)
- Foveated rendering (1)
- GDPR (1)
- GLI (1)
- Gabor filter (1)
- Global illumination (1)
- HSP70 (1)
- HTTP (1)
- Head-mounted Display (1)
- Header whitelisting (1)
- Heat Shock Protein (1)
- Hochleistungssport (1)
- Hochschullehre (1)
- Human centered computing (1)
- Human computer interaction (1)
- Human factors (1)
- Hybrid Failure Prognosis (1)
- Hybrid Systems (1)
- Hyperspectral image (1)
- Inductive Logic Programming (1)
- Informationsflüsse (1)
- Informationsgewinnung (1)
- Informationsverarbeitung (1)
- Integer programming (1)
- Intelligent Autonomous Systems (1)
- Intermediaries (1)
- Knowledge Graphs (1)
- Künstliche Intelligenz (1)
- Language learning (1)
- Large high-resolution displays (1)
- Leistungsdiagnostik (1)
- Leistungssport (1)
- Leukemia (1)
- MBZ (1)
- Machine-learning (1)
- Mebendazole (1)
- Memory-Constrained Devices (1)
- Methodik (1)
- Microarchitectural Data Sampling (MDS) (1)
- Mixed (1)
- Model-based Fault Diagnosis (1)
- Modelling (1)
- Molecular dynamics (1)
- Multimodal Microspectroscopy (1)
- NISTPQC (1)
- Natural Language Processing (1)
- OER (1)
- Object detection (1)
- Ontology (1)
- Open Educational Resources (1)
- Out-of-view Objects (1)
- PDSTSP (1)
- PHR (1)
- Parallel drone scheduling traveling salesman problem (1)
- Password (1)
- Personal Health Record (1)
- Post-Quantum Signatures (1)
- Privacy engineering (1)
- Process Models (1)
- Process views (1)
- Pronunciation (1)
- QoS (1)
- Quality control (1)
- Quantum mechanics (1)
- Radiance caching (1)
- Reflectance modeling (1)
- Registration Refinement (1)
- Risk-based Authentication (1)
- Robot failure diagnosis (1)
- Robot learning (1)
- Robot software (1)
- Robotics competitions (1)
- Robust grasping (1)
- SAML (1)
- SOAP (1)
- SVM (1)
- Secure Coding Practices (1)
- Semantic gap (1)
- Separation algorithm (1)
- Sicherheits-APIs (1)
- Side channel attack (1)
- Signature Verification (1)
- Slippage detection (1)
- Smartphone (1)
- Softwareentwicklung (1)
- Spielanalyse (1)
- Streaming (1)
- Support Vector Machine (1)
- Surrogate-assistance (1)
- Synergetik (1)
- Tautomers (1)
- Touchscreens (1)
- Trainingssteuerung (1)
- Transformers (1)
- Unidirectional thermoplastic composites (1)
- Usable Privacy (1)
- Usable Security and Privacy (1)
- User interface (1)
- Variational Autoencoder (1)
- Virtual Reality (1)
- Virtual reality (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visualization design and evaluation methods (1)
- Visualization systems and tools (1)
- Web (1)
- Wettkampfanalyse (1)
- XML Signature (1)
- XML Signature Wrapping (1)
- YAWL (1)
- ZombieLoad (1)
- architectural distortion (1)
- breast cancer (1)
- component based (1)
- convolutional neural networks (1)
- developer centered security (1)
- domain adaptation (1)
- entwicklerzentrierte Sicherheit (1)
- extreme learning machine (1)
- indicators calculation (1)
- information flows (1)
- informational self-determination (1)
- leaning-based interfaces (1)
- learning traces (1)
- locomotion interface (1)
- mebendazole (1)
- mental models (1)
- multi robot systems (1)
- navigational search (1)
- privacy at work (1)
- property-based testing for robots (1)
- reuse of indicators (1)
- security (1)
- security APIs (1)
- simulation-based robot testing (1)
- software development (1)
- spatial orientation (1)
- spatial updating (1)
- trace model (1)
- trace-based system (1)
- transfer learning (1)
- unsupervised learning (1)
- usable privacy controls (1)
- verification and validation of robot action execution (1)
- virtual reality (1)
It is well established that deep networks are effective at extracting features from a given labeled (source) dataset. However, they do not always generalize well to other (target) datasets, which often have a different underlying distribution. In this report, we evaluate four domain adaptation techniques for image classification tasks: Deep CORAL, Deep Domain Confusion, CDAN, and CDAN+E. These techniques are unsupervised, since the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository accompanying this report can be found here: https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
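To illustrate what one of these unsupervised criteria optimises: Deep CORAL aligns the second-order statistics (feature covariances) of the source and target batches. A minimal NumPy sketch of the CORAL loss follows; the function name and array shapes are our own, not taken from the report:

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between the feature
    covariance matrices of source and target batches, normalised by
    4 * d^2 as in the original formulation.

    source, target: (n_samples, d) feature matrices.
    """
    d = source.shape[1]
    cov_s = np.cov(source, rowvar=False)  # d x d source covariance
    cov_t = np.cov(target, rowvar=False)  # d x d target covariance
    return np.sum((cov_s - cov_t) ** 2) / (4 * d ** 2)
```

During training, this loss would be added to the classification loss on source labels, pulling the two feature distributions together without needing target labels.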
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) that allows indicators of various kinds to be computed automatically, independently of the computer environment for human learning (CEHL). The model is based on a trace-based system that encompasses all the logic of trace collection and indicator calculation, and it is implemented in the form of a trace database. It is an important contribution to the exploitation of learning traces in a CEHL because it provides a general formalism for modeling traces and for calculating several indicators at the same time. Moreover, by including calculated indicators as potential learning traces, the model provides a formalism for classifying the various indicators through inheritance relationships, which promotes the reuse of indicators that have already been calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can enable better sharing of trace databases between research institutions in the field of CEHL.
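The key inheritance idea, that calculated indicators are themselves traces and can therefore be stored and reused as inputs for further indicators, can be sketched as a small class model. All class, field, and indicator names below are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """Base class for everything stored in the trace database."""
    learner_id: str

@dataclass
class CollectedTrace(Trace):
    """A raw trace collected from a learning environment (CEHL)."""
    action: str

@dataclass
class Indicator(Trace):
    """A calculated indicator. Because it inherits from Trace, it can
    be stored alongside raw traces and reused by other indicators."""
    name: str
    value: float

def activity_count(traces, learner_id):
    """Hypothetical indicator: number of actions a learner performed."""
    n = sum(1 for t in traces
            if isinstance(t, CollectedTrace) and t.learner_id == learner_id)
    return Indicator(learner_id=learner_id, name="activity_count", value=float(n))
```

Since `activity_count` returns a `Trace` subclass, its result can be written back to the same database and consumed by a higher-level indicator, which is exactly the reuse mechanism the paper argues for.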
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process to generate synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated on the problem of handle grasping: we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can help improve the robot's execution success.
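A toy version of the first step, checking a failed execution's action parameters against learned precondition models and flagging the violated ones as diagnosis candidates, might look like this. The predicates and parameter names are hypothetical, not the paper's learned models:

```python
def diagnose_failure(params, preconditions):
    """Return the names of learned preconditions that the failed
    execution's action parameters violate. In the paper's framing,
    these violations are the candidate explanations for the failure."""
    return [name for name, holds in preconditions.items() if not holds(params)]

# Hypothetical learned preconditions for a handle-grasping action
# (in practice these would be models learned from experience):
grasp_preconditions = {
    "gripper_open_enough": lambda p: p["gripper_width"] >= p["handle_width"],
    "close_enough": lambda p: p["distance_to_handle"] <= 0.05,
}
```

The second step, experience correction, would then perturb the violating parameters toward values that satisfy the flagged preconditions and store the result as a synthetic, likely-successful experience.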
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
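The core input construction, a single token sequence that combines free text with a linearised KG triple and uses segment ids to distinguish the two modalities, can be sketched as follows. The special tokens and padding scheme mirror common BERT-style practice, not the exact STonKGs implementation:

```python
def build_multimodal_input(text_tokens, triple, max_len=16):
    """Concatenate a text token sequence and a linearised (head,
    relation, tail) triple into one transformer input sequence,
    with segment id 0 for the text part and 1 for the KG part."""
    text_part = ["[CLS]"] + list(text_tokens) + ["[SEP]"]
    kg_part = list(triple) + ["[SEP]"]
    tokens = (text_part + kg_part)[:max_len]
    segments = ([0] * len(text_part) + [1] * len(kg_part))[:max_len]
    padding = max_len - len(tokens)
    tokens += ["[PAD]"] * padding
    segments += [0] * padding
    return tokens, segments
```

Each text-triple pair from the INDRA-derived corpus would be encoded this way, letting the Transformer's self-attention attend jointly across the textual evidence and the structured triple.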