H-BRS Bibliography: Fachbereich Informatik, 2020, English. 76 publications: 31 conference objects, 18 articles, 12 preprints, 4 reports, 3 research data, 3 doctoral theses, 2 contributions to a periodical, 2 master's theses, 1 working paper.
Computers can help trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model: they are inherently vague, depend on the user's background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive, and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer that triggers the user's intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user's preferences, discovering the preference hypervolume within large search spaces.
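A minimal sketch of how such a combination can work, assuming a MAP-Elites-style quality diversity loop; encode and fitness below are hypothetical stand-ins for the trained VAE encoder and the user-derived preference score, not the paper's implementation:

# Illustrative MAP-Elites-style loop: a latent behaviour descriptor bins an
# archive of solutions, keeping the best solution per niche.
import numpy as np

rng = np.random.default_rng(0)
DIM, BINS = 8, 10                          # solution dimension, bins per descriptor axis

def encode(x):                             # hypothetical stand-in for a trained VAE encoder
    return np.tanh(x[:2])                  # 2-D descriptor in [-1, 1]^2

def fitness(x):                            # hypothetical stand-in for a preference score
    return -float(np.sum(x ** 2))

archive = {}                               # descriptor cell -> (fitness, solution)
for _ in range(10_000):
    if archive and rng.random() < 0.9:     # mutate a random elite ...
        elite = list(archive.values())[rng.integers(len(archive))][1]
        x = elite + 0.1 * rng.normal(size=DIM)
    else:                                  # ... or sample a fresh solution
        x = rng.normal(size=DIM)
    z = encode(x)
    cell = tuple(np.clip(((z + 1) / 2 * BINS).astype(int), 0, BINS - 1))
    if cell not in archive or fitness(x) > archive[cell][0]:
        archive[cell] = (fitness(x), x)    # replace the niche's elite if better
print(len(archive), "niches filled")

In the cyclic setting described above, the user's selections would then update the preference model and the loop would be re-run, progressively narrowing in on the preference hypervolume.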
Short summary
This dataset accompanies our paper:
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Representation and Experience-Based Learning of Explainable Models for Robot Action Execution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Contents
There are three zip archives included, each of them a dump of a MongoDB database corresponding to one of the three experiments in the paper:
Grasping a drawer handle (handle_drawer_logs.zip)
Grasping a fridge handle (handle_fridge_logs.zip)
Pulling an object (pull_logs.zip)
All three experiments were performed with a Toyota HSR. Only the data necessary for learning the models used in our experiments are included here.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [directory_name]
This will create a MongoDB database with the name of the directory (handle_drawer_logs, handle_fridge_logs, and pull_logs).
Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
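Once restored, the data can be inspected with any MongoDB client. A small example using pymongo (the collection names are not documented here, so the snippet simply lists them):

# List the collections of a restored database and peek at one document.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)   # default local MongoDB instance
db = client["handle_drawer_logs"]          # or handle_fridge_logs / pull_logs
names = db.list_collection_names()
print(names)
if names:
    print(db[names[0]].find_one())         # sample document from the first collection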
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground-truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity that allows for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
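The following is a hedged sketch of one common way to realize such a constant-time cache lookup, hashing level-quantized world-space coordinates; it illustrates the general idea, not necessarily the paper's exact linkless-octree scheme, and the scene extent is an assumed parameter:

# Hash-based cache over level-quantized positions: store() records diffuse
# illumination at a hitpoint; lookup() probes fine-to-coarse, each probe O(1).
cache = {}  # (level, ix, iy, iz) -> cached diffuse radiance

def cell(point, level, extent=1.0):
    size = extent / (2 ** level)           # octree cell size at this level
    return (level,) + tuple(int(c // size) for c in point)

def store(point, level, radiance):
    cache[cell(point, level)] = radiance

def lookup(point, finest_level):
    for level in range(finest_level, -1, -1):
        key = cell(point, level)
        if key in cache:
            return cache[key]
    return None                            # miss: fall back to path tracing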
The ongoing digitisation of everyday working life means that employers process ever larger amounts of their employees' personal data. This development is particularly problematic with regard to employee data protection and the right to informational self-determination. We advocate the use of company Privacy Dashboards as a means of compensating for the missing transparency and control. For the conceptual design we use, among other things, the method of mental models. We present the methodology and first results of our research, and highlight the opportunities that such an approach offers for the user-centred development of Privacy Dashboards.
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly used to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic must be highly scalable and are therefore denoted as Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems: in his doctoral thesis, he introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Owing to the large number of users and the significance of software for society and the economy, the security of ULS systems is, besides high scalability, another crucial quality factor.
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down as it approaches obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study, we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants as they navigated through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be offered as an option, especially in situations involving social interactions.
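A plausible form of such a speed control method is a linear cap on the commanded speed between a stop distance and a full-speed distance; the thresholds below are assumptions for illustration, not the values used in the studies:

# Scale the operator's commanded speed by proximity to the nearest obstacle.
def capped_speed(commanded, min_obstacle_dist,
                 stop_dist=0.3, full_speed_dist=1.5, max_speed=1.0):
    if min_obstacle_dist <= stop_dist:
        return 0.0                                       # too close: stop
    if min_obstacle_dist >= full_speed_dist:
        return min(commanded, max_speed)                 # clear: no limiting
    scale = (min_obstacle_dist - stop_dist) / (full_speed_dist - stop_dist)
    return min(commanded, max_speed) * scale             # linear slow-down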
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that produce high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternative regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of these boundary-adhering object instances is warranted; current object detection techniques, however, ignore this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 exhibit a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering when generating the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space whose two coordinate axes are the natural log transformations of the width and height of the ground-truth bounding boxes, as sketched below.
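A minimal sketch of this logarithmic anchor extraction, using synthetic box shapes in place of the real Cityscapes annotations:

# Cluster ground-truth (w, h) pairs in (ln w, ln h) space, then exponentiate
# the centroids to obtain pixel-space anchor shapes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
wh = rng.lognormal(mean=3.0, sigma=0.8, size=(5000, 2))  # synthetic (w, h) pairs

km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(np.log(wh))
anchors = np.exp(km.cluster_centers_)                    # anchor (w, h) shapes
print(np.round(anchors, 1))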
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind SqueezeDet by a substantial ≈6.19 mAP. Further analysis revealed that the sparsity of the annotated data caused this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈0.77 mAP. In addition, the forward-pass latencies of SqueezeDet and SqueezeDetOcta are similar, at ≈19 ms. Accounting for boundary adhesion during training improved the baseline SqueezeDet network by ≈2.62 mAP, and a SqueezeDet network paired with logarithmically extracted anchors improved on the baseline by ≈1.85 mAP.
In summary, this work demonstrates that, given sufficient fine-grained instance annotations, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations while retaining the forward-pass latency of the bounding-box-predicting network. The results confirm the merits of logarithmically extracted anchors for boosting the performance of any anchor-based object detection network, and show that special handling of image-boundary-adhering object instances produces more performant object detectors.