Refine
Departments, institutes and facilities
- Fachbereich Wirtschaftswissenschaften (58)
- Fachbereich Informatik (43)
- Fachbereich Sozialpolitik und Soziale Sicherung (37)
- Fachbereich Ingenieurwissenschaften und Kommunikation (32)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (28)
- Fachbereich Angewandte Naturwissenschaften (26)
- Institut für Verbraucherinformatik (IVI) (12)
- Institut für Cyber Security & Privacy (ICSP) (9)
- Institute of Visual Computing (IVC) (8)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (7)
Document Type
- Article (54)
- Conference Object (53)
- Part of a Book (51)
- Book (monograph, edited volume) (19)
- Preprint (12)
- Contribution to a Periodical (8)
- Research Data (6)
- Doctoral Thesis (6)
- Master's Thesis (5)
- Report (4)
Year of publication
- 2022 (227)
Has Fulltext
- no (227)
Keywords
- Lehrbuch (4)
- Medienästhetik (4)
- Medien (3)
- Medienwissenschaft (3)
- usable privacy (3)
- Cathepsin K (2)
- Chemometrics (2)
- Control Systems and Automation (2)
- Design (2)
- Electrical Machines and Power Electronics (2)
Self-supervised learning has proved to be a powerful approach to learning image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms that improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create public large labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
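A minimal sketch of the RotNet pretext task named above, assuming a generic PyTorch backbone; the shapes and architecture are illustrative placeholders, not the released code at the linked repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor):
    """Create the pretext task: rotate each image by 0/90/180/270 degrees
    and use the rotation index as a free, human-label-free target."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

class RotNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone             # learns the representation
        self.head = nn.Linear(feat_dim, 4)   # 4 rotation classes

    def forward(self, x):
        return self.head(self.backbone(x))

# One pre-training step on an unlabeled "sonar" batch (placeholder shapes).
backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = RotNet(backbone, feat_dim=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 1, 96, 96)
rotated, labels = rotate_batch(images)
loss = F.cross_entropy(model(rotated), labels)
loss.backward()
optimizer.step()
```

After pre-training, the backbone is kept and a fresh classification head is fine-tuned on the few labeled sonar images, which corresponds to the few-shot transfer setup the abstract evaluates.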
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN datasets. Moreover, the formulation simplifies network compression: once training has converged, the canonical form allows convenient model compression by truncating the parameter sums.
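A minimal sketch of how such a decomposed convolution could look in PyTorch, assuming one plausible reading of the abstract (a rank-R sum of outer products with per-term scales and a global scale); the paper's exact per-mode scaling and initialization may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CanonicalConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, rank):
        super().__init__()
        self.a = nn.Parameter(torch.randn(rank, out_ch) * 0.1)  # mode: out channels
        self.b = nn.Parameter(torch.randn(rank, in_ch) * 0.1)   # mode: in channels
        self.c = nn.Parameter(torch.randn(rank, k) * 0.1)       # mode: kernel height
        self.d = nn.Parameter(torch.randn(rank, k) * 0.1)       # mode: kernel width
        self.scale = nn.Parameter(torch.ones(rank))             # per-term scales
        self.g = nn.Parameter(torch.tensor(1.0))                # global scale

    def weight(self) -> torch.Tensor:
        # W[o,i,h,w] = g * sum_r scale[r] * a[r,o] * b[r,i] * c[r,h] * d[r,w]
        w = torch.einsum('r,ro,ri,rh,rw->oihw',
                         self.scale, self.a, self.b, self.c, self.d)
        return self.g * w

    def forward(self, x):
        return F.conv2d(x, self.weight(), padding='same')

layer = CanonicalConv2d(3, 16, k=3, rank=8)
out = layer(torch.randn(2, 3, 32, 32))          # -> (2, 16, 32, 32)

# Compression after training: keep only the terms with the largest |scale|.
keep = torch.topk(layer.scale.abs(), k=4).indices
```

Truncating the sum over the rank terms after convergence is the compression step the abstract mentions.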
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding a vehicle's surroundings is a basic and crucial task for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system comprises various subsystems such as localization, obstacle detection (static and dynamic) and avoidance, and mapping. To perceive the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, and LiDARs. These systems employ deep learning techniques that transform the huge amount of sensor data into semantic information on which the object detection and localization tasks are performed. Many driving tasks require the location and depth information of a particular object to produce accurate results. 3D object detection methods, by utilizing additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of objects. Recent research shows that 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques achieve significant improvements in performance. In this work, we conduct a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement gained through sensor fusion techniques. We discuss various state-of-the-art methods in both cases, perform an experimental analysis, and provide future research directions.
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. They are pricey, yet homeowners use them to get local air data without evaluating the causal effect. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller. The 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storing air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The device allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
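A minimal sketch of the data path described above, assuming a Python-capable minicomputer: parse one PMS7003 frame from a serial port and push a JSON record to Azure Blob storage. The port, container, and connection string are placeholders, and frame alignment, the BME280, and the camera are omitted for brevity:

```python
import json, struct, time
import serial                                    # pyserial
from azure.storage.blob import BlobServiceClient

def read_pms7003(port: str = "/dev/ttyUSB0") -> dict:
    """Read one 32-byte PMS7003 frame (frame alignment ignored for brevity)."""
    with serial.Serial(port, baudrate=9600, timeout=2) as s:
        frame = s.read(32)
    assert frame[:2] == b"\x42\x4d", "bad frame header"
    pm25 = struct.unpack(">H", frame[12:14])[0]  # atmospheric PM2.5, ug/m3
    return {"pm2_5": pm25, "ts": time.time()}

record = read_pms7003()
client = BlobServiceClient.from_connection_string("<connection-string>")
blob = client.get_blob_client(container="air-quality",
                              blob=f"reading-{int(record['ts'])}.json")
blob.upload_blob(json.dumps(record))
```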
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad combined with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
21 pages, with supplementary material
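Of the explanation methods named above, SmoothGrad is simple enough to sketch: average the input gradient over several noisy copies of the image. The toy model and scalar target below are placeholders; DExT combines this with Guided Backpropagation and applies it to real detector outputs:

```python
import torch

def smoothgrad(model, image, target_fn, n_samples: int = 25, sigma: float = 0.1):
    """Average input gradients over noisy copies of the image."""
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        target_fn(model(noisy)).backward()   # scalar, e.g. one box's class logit
        grads += noisy.grad
    return grads / n_samples                 # saliency map, same shape as input

# Toy usage with a placeholder classifier and an arbitrary class score:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
saliency = smoothgrad(model, torch.randn(1, 3, 32, 32), lambda y: y[0, 3])
```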
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
Integrated solar water splitting devices that produce hydrogen without the use of power inverters operate outdoors and are hence exposed to varying weather conditions. As a result, they might sometimes work at non-optimal operation points below or above the maximum power point of the photovoltaic component, which would directly translate into efficiency losses. Up until now, however, no common parameter has existed in the community to describe and quantify this and other real-life operation-related losses (e.g. spectral mismatch). Therefore, the annual-hydrogen-yield-climatic-response (AHYCR) ratio is introduced as a figure of merit to evaluate the outdoor performance of integrated solar water splitting devices. This value is defined as the ratio between the real annual hydrogen yield and the theoretical yield assuming the solar-to-hydrogen device efficiency at standard conditions. This parameter is derived for an exemplary system based on state-of-the-art AlGaAs//Si dual-junction solar cells and an anion exchange membrane electrolyzer using hourly resolved climate data from a location in southern California and from reanalysis data of Antarctica. This work will help to evaluate, compare and optimize the climatic response of solar water splitting devices in different climate zones.
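In formula form, the definition given above reads (the symbol names are ours, not necessarily the paper's notation):

```latex
\[
  \mathrm{AHYCR} \;=\; \frac{Y_{\mathrm{H_2}}^{\text{real}}}{Y_{\mathrm{H_2}}^{\text{STC}}}
\]
% Y^real: measured annual hydrogen yield; Y^STC: theoretical annual yield
% assuming the device's solar-to-hydrogen efficiency at standard conditions.
```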
Research data management (RDM) plays an increasingly large role in the everyday research of universities of applied sciences (HAW) and confronts many researchers with previously unfamiliar demands. The task is to practice FAIR and sustainable data management in the spirit of the DFG code of conduct "Guidelines for Safeguarding Good Research Practice": for oneself, for one's own research group, and for the research community. In this event, held on the occasion of the Research Data Day in NRW on 15 November 2022, the essential principles of research data management were presented along the research data lifecycle, together with practical examples and useful links.
50 Jahre: Von der FH zur HAW
(2022)
Graph databases employ graph structures such as nodes, attributes, and edges to model and store relationships among data. To access this data, graph query languages (GQL) such as Cypher are typically used, which might be difficult to master for end-users. In the context of relational databases, sequence-to-SQL models, which translate natural language questions to SQL queries, have been proposed. While these Neural Machine Translation (NMT) models increase the accessibility of relational databases, NMT models for graph databases are not yet available, mainly due to the lack of suitable parallel training data. In this short paper we sketch an architecture which enables the generation of synthetic training data for the graph query language Cypher.
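A minimal sketch of what such synthetic parallel-data generation could look like, using invented templates and a toy movie schema rather than the paper's actual grammar:

```python
import random

# Each template pairs a natural-language question with its Cypher query;
# doubled braces escape Cypher's property-map syntax for str.format.
TEMPLATES = [
    ("Who acted in {title}?",
     "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {{title: '{title}'}}) RETURN p.name"),
    ("Which movies did {name} act in?",
     "MATCH (p:Person {{name: '{name}'}})-[:ACTED_IN]->(m:Movie) RETURN m.title"),
]

def generate_pairs(values: dict, n: int = 1000):
    """Sample (question, Cypher) training pairs by filling template slots."""
    pairs = []
    for _ in range(n):
        question, query = random.choice(TEMPLATES)
        slots = {k: random.choice(v) for k, v in values.items()}
        pairs.append((question.format(**slots), query.format(**slots)))
    return pairs

pairs = generate_pairs({"title": ["The Matrix", "Alien"],
                        "name": ["Keanu Reeves", "Sigourney Weaver"]})
```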
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. In order to develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after receiving an explanation. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior, by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture, designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions that enables explaining and elaborating according to users' needs for explanation that emerge within an interaction. Consequently, we introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot, and integrated with its dialog processing modules. We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
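A deliberately reduced sketch of the need-driven loop with explanation tracing described above; the need names, decay model, thresholds, and explanation wording are invented for illustration and are far simpler than the robot's actual architecture:

```python
class Need:
    def __init__(self, name: str, decay: float, threshold: float = 0.3):
        self.name, self.level = name, 1.0
        self.decay, self.threshold = decay, threshold

    def update(self, dt: float):
        self.level = max(0.0, self.level - self.decay * dt)

class Robot:
    def __init__(self, needs):
        self.needs = needs
        self.trace = []                 # decision log used for explanations

    def step(self, dt: float) -> str:
        for need in self.needs:
            need.update(dt)
            if need.level < need.threshold:
                self.trace.append((need.name, need.level))
                need.level = 1.0        # the triggered behavior satisfies it
                return f"<behavior satisfying '{need.name}'>"
        return "<idle behavior>"

    def explain(self) -> str:
        if not self.trace:
            return "I had no pressing need."
        name, level = self.trace[-1]
        return f"I did that because my '{name}' need dropped to {level:.2f}."

robot = Robot([Need("social contact", decay=0.2), Need("rest", decay=0.05)])
for _ in range(5):
    robot.step(dt=1.0)
print(robot.explain())
```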
Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive in clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. We compared the performance of FaceReader7 using videos of 40 participants (20 younger with a mean age of 25.7 years and 20 older with a mean age of 52.1 years) undergoing experimentally induced heat pain to manually coded AUs as gold standard labeling. Percentages of correctly and falsely classified AUs were calculated, and we computed "sensitivity/recall", "precision", and "overall agreement (F1)" as indicators of congruency. Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. The congruency was better for younger compared to older faces and better for pain-indicative AUs compared to other AUs. Conclusion. At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
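A minimal sketch of the congruency measures named above, computed per action unit from binary manual vs. automatic codings; the toy labels are invented, the formulas are the standard ones:

```python
def agreement(manual: list[int], automatic: list[int]) -> dict:
    """Recall, precision, and F1 of automatic AU coding against manual coding."""
    tp = sum(m and a for m, a in zip(manual, automatic))
    fp = sum((not m) and a for m, a in zip(manual, automatic))
    fn = sum(m and (not a) for m, a in zip(manual, automatic))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"recall": recall, "precision": precision, "F1": f1}

# AU present/absent per video frame, manual coder vs. automatic detector:
print(agreement(manual=[1, 1, 0, 1, 0], automatic=[1, 0, 0, 1, 1]))
```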
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per single frame. These limitations have recently been overcome individually, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
This project focuses on object detection in dense volume data. There are several types of dense volume data, namely Computed Tomography (CT) scans, Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI); this work focuses on CT scans. CT scans are not limited to the medical domain; they are also used in industry. CT scans are used in airport baggage screening and on assembly lines, and the object detection systems in these places should be able to detect objects fast. One of the ways to address the issue of computational complexity and make object detection systems fast is to use low-resolution images, since low-resolution CT scanning is fast. The entire process of scanning and detection can thus be accelerated. Even in the medical domain, the exposure time of the patient should be reduced to lower the radiation dose, which low-resolution CT scans allow. Hence it is essential to find out which object detection model has better accuracy as well as speed on low-resolution CT scans. However, existing approaches do not provide details about how a model performs when the resolution of CT scans is varied. Hence, the goal of this project is to analyze the impact of varying the resolution of CT scans on both the speed and accuracy of the model. Three object detection models, namely RetinaNet, YOLOv3, and YOLOv5, were trained at various resolutions. Among the three models, YOLOv5 had the best mAP and F1 score at multiple resolutions on the DeepLesion dataset, while the RetinaNet model had the least inference time. From the experiments, it can be asserted that sacrificing mean average precision (mAP) to improve inference time by reducing resolution is feasible.
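A minimal sketch of the timing side of such an experiment, using torchvision's stock RetinaNet as a stand-in for the models trained on DeepLesion; the class count and resolutions are placeholder assumptions:

```python
import time
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

for size in (256, 384, 512):
    # min_size/max_size pin the model's internal resize to the tested
    # resolution; num_classes=2 (lesion vs. background) is an assumption.
    model = retinanet_resnet50_fpn(weights=None, num_classes=2,
                                   min_size=size, max_size=size).eval()
    image = [torch.randn(3, size, size)]   # detection models take lists of CHW tensors
    with torch.no_grad():
        start = time.perf_counter()
        model(image)
    print(f"{size}x{size}: {time.perf_counter() - start:.3f} s")
```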
In (dynamic) adaptive mesh refinement (AMR), an input mesh is refined or coarsened according to the needs of the numerical application. This refinement happens without respect to the originally meshed domain and is therefore limited to the geometrical accuracy of the original input mesh. We presented a novel approach to equip this input mesh with additional geometry information, to allow refinement and high-order cells based on the geometry of the original domain, and we already showed a limited implementation of this algorithm. Now we evaluate this prototype with a numerical application and demonstrate its influence on the accuracy of certain numerical results. To be as practical as possible, we implement the ability to import meshes generated by Gmsh and equip them with the needed geometry information. Furthermore, we improve the mapping algorithm, which maps the geometry information of the boundary of a cell into the cell's volume. With these preliminary steps done, we use our new approach in a simulation of the advection of a concentration along the boundary of a sphere shell and past the boundary of a rotating cylinder. We evaluate the accuracy of our approach in comparison to the conventional refinement of cells to answer our research question: How do the performance and accuracy of the hexahedral curved domain AMR algorithm compare to linear AMR when solving the advection equation with the linear finite volume method? To answer this question, we show the influence of curved AMR on our simulation results and see that it is even able to outperform far finer linear meshes in terms of accuracy. We also see that the current implementation of this approach is too slow for practical usage. We can therefore demonstrate the benefits of curved AMR in certain geometry-related application scenarios and show possible improvements to make it more feasible and practical in the future.
In the field of autonomous robotics, sensors have played a major role in defining the scope of the technology and, to a great extent, its limitations as well. This cycle of constant updates and technological advancement has given birth to serious industries that were once inconceivable. Industries like autonomous driving, which have a serious impact on people's safety and security, also have equally harsh implications for the dynamics and economics of the market. With sensors like LiDAR and RADAR delivering 3D measurements as point clouds, there is a necessity to process the raw measurements directly, and many research groups are working on this. Considerable research has gone into solving the task of object detection on 2D images. In this thesis we aim to develop a LiDAR-based 3D object detection scheme. We combine the ideas of PointPillars and feature pyramid networks from 2D vision to propose Pillar-FPN. The proposed method directly takes 3D point clouds as input and outputs a 3D bounding box. Our pipeline consists of multiple variations of the proposed Pillar-FPN at the feature fusion level that are described in the results section. We trained our model on the KITTI train dataset and evaluated it on the KITTI validation dataset.
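A minimal sketch of the FPN-style top-down fusion that Pillar-FPN borrows from 2D vision, applied to pseudo-image feature maps such as those a PointPillars-style encoder produces; channel sizes and strides are illustrative, not the thesis configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFusion(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1)
                                     for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3,
                                              padding=1)
                                    for _ in in_channels)

    def forward(self, feats):                 # feats ordered fine -> coarse
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):    # top-down pathway
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]

# Pseudo-image features at three strides of the bird's-eye-view grid:
feats = [torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64),
         torch.randn(1, 256, 32, 32)]
fused = FPNFusion()(feats)                    # three maps, 128 channels each
```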
Entspannung im Arbeitsalltag – Einsatz von Mentalsystemen für die betriebliche Gesundheitsförderung
(2022)
This formula collection contains and explains formulas of financial mathematics within the financial contexts in which they are fundamentally necessary in economics and in business practice. Understanding the formulas and their practical application is supported by useful aids and comprehensible examples, so that the context of financial formulas is presented clearly and intelligibly. This formula collection is an indispensable tool for students of economics and business, but also a useful reference work for decision-makers in business, politics, and teaching. (Publisher's description)
Controlling
(2022)
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention: "outdated technologies." In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
This edited volume on “Recent Advances in Renewable Energy” presents a selection of refereed papers presented at the 1st International Conference on Electrical Systems and Automation. The book provides rigorous discussions, the state of the art, and recent developments in the field of renewable energy sources supported by examples and case studies, making it an educational tool for relevant undergraduate and graduate courses. The book will be a valuable reference for beginners, researchers, and professionals interested in renewable energy.
This book, the second of two volumes on "Control of Electrical and Electronic Systems", presents a compilation of selected contributions to the 1st International Conference on Electrical Systems & Automation. The book provides rigorous discussions, the state of the art, and recent developments in the modelling, simulation and control of power electronics, industrial systems, and embedded systems. It will be a valuable reference for beginners, researchers, and professionals interested in the control of electrical and electronic systems.
Telogen single hairs are a common type of trace at crime scenes. At present they are mostly excluded from STR typing because, owing to small DNA amounts and strong DNA degradation, their STR profiles are in many cases incomplete and difficult to interpret. In the present work, a systematic approach was applied to reveal correlations between DNA quantity and DNA degradation and the success of STR typing and, based on this, to be able to predict the typing success of DNA from hairs.
For this purpose, a human-specific (RiboD) and a canine-specific (RiboDog) qPCR-based assay was developed for measuring the DNA quantity and assessing DNA integrity by means of a degradation value (D-value). Owing to the position of the primers used, which target ubiquitously occurring ribosomal DNA sequences, the operating principle can be transferred quickly and inexpensively to different species. The functioning of the assays was confirmed using serially degraded DNA, and the human assay was validated against the commercial Quantifiler™ Trio DNA Quantification Kit. Finally, the assays were applied to DNA from telogen and catagen single hairs of humans and dogs to examine the relationship between DNA quantity and DNA integrity and the completeness of the STR alleles (allele recovery) of DNA profiles obtained with capillary electrophoresis (CE) based STR kits. It was found that for human single hairs the allele recovery depends on both the DNA quantity and the DNA integrity. In contrast, DNA degradation was consistently lower for individual dog hairs, and allele recovery depended solely on the amount of DNA extracted.
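For orientation, a qPCR degradation value is commonly defined as the ratio of the quantities measured from a short and a long amplicon target; that RiboD/RiboDog use exactly this form is our assumption based on standard forensic practice, not stated in the abstract:

```latex
% Assumed form of the degradation value (D-value):
\[
  D \;=\; \frac{c_{\text{short amplicon}}}{c_{\text{long amplicon}}},
  \qquad D \approx 1 \text{ for intact DNA}, \qquad D \gg 1 \text{ for degraded DNA}
\]
```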
To further improve STR analysis of degraded human DNA samples, a novel NGS-based assay (maSTR, mini-amplicon STR) was established that amplifies in parallel the 16 forensic STR loci of the European Standard Set plus amelogenin as very short amplicons (76-296 bp). With intact DNA, the maSTR assay generated reproducible, complete profiles without allelic drop-ins from around 200 pg of input DNA. At lower DNA amounts, occasional allelic drop-ins occurred, although complete profiles were still obtained with at least 43 pg of DNA.
The combined strategy of RiboD measurements of DNA quantity and integrity and the resulting STR typing success of the maSTR assay was validated on degraded DNA. The strategy was then applied to DNA from telogen and catagen single hairs and compared with the results of the CE-based PowerPlex® ESX 17 kit, which analyzes the same STR marker set. This showed that the STR typing success of both STR assays depends on the optimal amount of template DNA as well as on the DNA integrity. With the maSTR assay, complete profiles were demonstrated with approximately 50 pg of input DNA for slightly degraded DNA from single hairs, and with approximately 500 pg of strongly degraded DNA. Owing to the small DNA amounts from telogen single hairs, the reproducibility of the maSTR results fluctuated, but it was consistently superior to the PowerPlex® ESX 17 kit in terms of allele recovery.
A comparison with two CE-based STR kits that are complementary with respect to amplicon length distribution (PowerPlex® ESX 17 and ESI 17 Fast), as well as with a commercial NGS kit (ForenSeq™ DNA Signature Prep), showed that it is not the NGS technique but the shortness of the amplicons that is the most important factor for typing degraded DNA. In all comparisons with the commercial kits, however, the maSTR assay exhibited a higher number of allelic drop-ins. These occurred more frequently the smaller the amount of DNA used and the more strongly it was degraded.
Because profiles with allelic drop-ins correspond to mixed profiles, the STR profiles generated with the maSTR assay were examined using procedures for interpreting mixed traces. In composite interpretation, all alleles occurring across replicates are counted; in consensus interpretation, only the reproducible alleles. It turned out that composite interpretation is best suited in the case of few allelic drop-ins (PowerPlex® ESX 17 generated profiles), and consensus interpretation for profiles containing allelic drop-ins (maSTR-generated profiles).
Finally, the GenoProof Mixture 3 software was used to examine to what extent semi-continuous and fully continuous probabilistic methods are suitable for the biostatistical evaluation of DNA profiles from single hairs. It emerged that, owing to the high number of allelic drop-ins, the maSTR assay is only slightly superior to the CE-based methods in cases where the DNA is present in sufficient quantity and with little degradation. In this range, however, assigning the profile from hairs to the reference profile also succeeds with CE-based methods.
From all results, a recommendation was derived for handling DNA from shed single hairs, based on the degree of DNA degradation in combination with the DNA quantity. The present work thus creates a basis for making shed single hairs usable in the routine work of forensic investigations and, where appropriate, for applying the approach to other trace types with small amounts of degraded DNA. This could increase the usability of such trace types for forensic casework, in particular when the standard CE-based methods fail. Looking ahead, the NGS technique, owing to the high multiplexing capability of uniform, short markers, is generally superior to the CE-based technique for typing degraded DNA.
Social protection has been increasingly recognized by experts from different fields as a key instrument for social, economic, political, and environmental development. It is also known for tackling multiple goals related to the reduction of risk, poverty and inequality at once. Yet, its instruments are often seen in isolation, programmes are still managed in silos and the systemic aspect is often overlooked. Engaging in critical discussions about the systemic aspect of social protection and outlining what it really takes to pursue a systemic approach has motivated the two editors, Prof. Dr. Esther Schüring from H-BRS and Dr. Markus Loewe from the German Institute of Development and Sustainability (IDOS) to launch the very first Handbook on Social Protection Systems in late 2021.