Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions performed by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable for the robot to be able to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of the robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, in a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also that the reproduction quality is affected by noise in the skeleton observations.
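The final step of the pipeline above — driving a robot joint toward a target position derived from the skeleton observation — can be sketched with a textbook PID position controller. This is a minimal illustration, not the paper's implementation; the gains, the velocity-command interface, and the `simulate` helper are all illustrative assumptions.

```python
# Minimal sketch of a PID position controller for a single robot joint.
# Gains and the integration loop are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class PIDPositionController:
    kp: float = 2.0   # proportional gain
    ki: float = 0.1   # integral gain
    kd: float = 0.05  # derivative gain
    _integral: float = field(default=0.0, init=False)
    _prev_error: float = field(default=0.0, init=False)

    def step(self, target: float, current: float, dt: float) -> float:
        """Return a velocity command driving `current` toward `target`."""
        error = target - current
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

def simulate(target: float, steps: int = 200, dt: float = 0.05) -> float:
    """Integrate the commanded velocity; returns the final joint position."""
    ctrl = PIDPositionController()
    pos = 0.0
    for _ in range(steps):
        pos += ctrl.step(target, pos, dt) * dt
    return pos
```

In practice each observed joint angle from the mapped skeleton would be fed as `target` at every control tick, one controller instance per joint.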
Using a life-cycle approach, we identify key gaps for social reform in Georgia. The reduction of informal work is the most pressing of these, since formal employment is the backbone of any robust and reliable social insurance scheme. At the same time, greater financial resources are required through taxation in order to enable systematic social reform in Georgia. Both interventions are needed in order to fill the gaps in the current social protection system, which include the limited scope of pension and health insurance, as well as the lack of permanent unemployment insurance and universal child benefits.
Against the background of Germany’s long experience with social protection, we outline the main principles of the German welfare state and present the design of three main social insurance branches (pensions, health and unemployment). Based on the mixed experience that has emerged in Germany, in particular due to path dependencies and political deadlock, we derive lessons that inform a clear and coherent vision for social reform in Georgia.
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to give rise to peculiar, often unrealistic, attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. In order to develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and that form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after receiving an explanation. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions that enables explaining and elaborating according to users' needs for explanation as they emerge within an interaction. Consequently, we introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot and integrated with its dialog processing modules.
We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
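The core idea described above — internal needs building up over time, the most urgent one triggering an intention, and the decision being traced so it can later be verbalized — can be illustrated with a toy agent. All class names, thresholds, and growth rates below are invented for illustration; they are not part of the presented architecture.

```python
# Hypothetical sketch of need-driven behavior with a decision trace
# that a robot could draw on for self-explanations.
from dataclasses import dataclass, field

@dataclass
class Need:
    name: str
    level: float = 0.0
    growth: float = 0.1  # how fast the need builds up per tick

@dataclass
class NeedDrivenAgent:
    needs: list
    threshold: float = 0.5
    log: list = field(default_factory=list)  # decision trace for explanations

    def tick(self):
        """Advance all needs; act on the most urgent one if it crosses the threshold."""
        for n in self.needs:
            n.level = min(1.0, n.level + n.growth)
        urgent = max(self.needs, key=lambda n: n.level)
        if urgent.level >= self.threshold:
            self.log.append(f"acted on '{urgent.name}' (level {urgent.level:.1f})")
            urgent.level = 0.0  # acting satisfies the need
            return urgent.name
        return None

    def explain_last(self) -> str:
        """Verbalize the most recent decision from the trace."""
        return self.log[-1] if self.log else "no behavior triggered yet"
```

The trace in `log` is what distinguishes this pattern from a plain behavior selector: because every triggered intention is recorded together with the need that caused it, an explanation can be generated on demand rather than reconstructed after the fact.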
Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have proven to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. Using videos of 40 participants (20 younger, mean age 25.7 years; 20 older, mean age 52.1 years) undergoing experimentally induced heat pain, we compared the performance of FaceReader7 against manually coded AUs as the gold-standard labels. Percentages of correctly and falsely classified AUs were calculated, and sensitivity/recall, precision, and overall agreement (F1) were computed as indicators of congruency. Results. The automatic coding of AUs showed only poor to moderate sensitivity/recall, precision, and F1. Congruency was better for younger than for older faces, and better for pain-indicative AUs than for other AUs. Conclusion. At present, automatic analyses of genuine facial expressions of pain qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
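The three congruency measures named above follow directly from treating the manual coding as ground truth and the automatic coding as predictions. A minimal sketch, with made-up AU sets rather than data from the study:

```python
# Illustrative computation of sensitivity/recall, precision, and F1 for one
# comparison of manually coded vs. automatically detected AUs.
def au_agreement(manual: set, automatic: set):
    tp = len(manual & automatic)  # AUs found by both coder and software
    fp = len(automatic - manual)  # AUs only the software reported
    fn = len(manual - automatic)  # AUs the software missed
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1

# Hypothetical example: 5 manually coded AUs, 3 automatic detections,
# of which 2 match.
recall, precision, f1 = au_agreement({4, 6, 7, 10, 43}, {4, 6, 12})
```

Here the example yields recall 0.4, precision 2/3, and F1 0.5 — the kind of "poor to moderate" agreement the study reports.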
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, so most detections rely on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage this fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thereby reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated from characteristic code fragments of the clusters and subsequently used to scan the whole npm registry for unreported malicious packages. We were able to identify and report six malicious packages, which were consequently removed from npm. Our approach can thus reduce the manual labor involved in detection and may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
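The clustering idea — packages from one campaign share structurally similar code, so structural similarity of ASTs can recover the campaign clusters — can be sketched as follows. ACME itself uses Markov Clustering (MCL); to keep this example dependency-free, threshold-linked connected components stand in for MCL, and the fingerprint, the threshold, and the toy snippets are all invented for illustration.

```python
# Simplified sketch: fingerprint each package's code by its AST structure,
# link packages whose Jaccard similarity exceeds a threshold, and take
# connected components as clusters (MCL in the real approach).
import ast
from itertools import combinations

def ast_profile(source: str) -> set:
    """Set of (node type, child count) pairs — a crude structural fingerprint."""
    tree = ast.parse(source)
    return {(type(n).__name__, len(list(ast.iter_child_nodes(n))))
            for n in ast.walk(tree)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(sources: dict, threshold: float = 0.8) -> list:
    """Group source snippets by structural similarity via union-find."""
    profiles = {name: ast_profile(src) for name, src in sources.items()}
    parent = {name: name for name in sources}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in combinations(sources, 2):
        if jaccard(profiles[a], profiles[b]) >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for name in sources:
        groups.setdefault(find(name), set()).add(name)
    return list(groups.values())
```

Note that the fingerprint deliberately ignores identifiers and string constants, so two packages that differ only in renamed variables or swapped payload URLs still land in the same cluster — the property that makes code-sharing across a campaign detectable.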
Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take the initiative to do open science in a way that is compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions of open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.