H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (86)
- Fachbereich Wirtschaftswissenschaften (69)
- Fachbereich Angewandte Naturwissenschaften (57)
- Fachbereich Ingenieurwissenschaften und Kommunikation (55)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (46)
- Fachbereich Sozialpolitik und Soziale Sicherung (40)
- Institute of Visual Computing (IVC) (18)
- Institut für Verbraucherinformatik (IVI) (16)
- Institut für funktionale Gen-Analytik (IFGA) (16)
- Graduierteninstitut (15)
Document Type
- Article (122)
- Conference Object (65)
- Part of a Book (43)
- Book (monograph, edited volume) (23)
- Preprint (18)
- Doctoral Thesis (15)
- Report (10)
- Contribution to a Periodical (6)
- Master's Thesis (6)
- Working Paper (5)
Year of publication
- 2020 (325)
Keywords
- Digitalisierung (5)
- Inborn error of metabolism (3)
- Lehrbuch (3)
- Organic aciduria (3)
- Quality diversity (3)
- Usable Security (3)
- post-buckling (3)
- ARIMA (2)
- Artificial Intelligence (2)
- Autoencoder (2)
This study aims to validate an occupational-health self-efficacy assessment instrument developed in a preliminary study phase. The Self-Efficacy Scale for Occupational Health (SEDKK) is grounded in the self-efficacy concept of social cognitive theory and measures four factors that influence the health of every working individual: eating and drinking behaviour, sleep, occupational safety and health, and recovery activities from work stress. Exploratory factor analysis showed that four factors were reflected in the SEDKK items. The construct validity of the SEDKK was demonstrated by a highly significant positive correlation between the SEDKK and the General Self-Efficacy scale. Criterion validity was examined via the effect of the SEDKK on general health status, satisfaction with personal health, work-life balance, healthy behaviour, and risk behaviour. However, the assumption of test-retest reliability was rejected in this study. Implications and suggestions for future research are discussed in this article.
Any political phenomenon can only be properly understood in its broader context. Questions of international cooperation are thus necessarily framed by historical processes and relations of power. We therefore start our first discussion with an examination of the global ‘status quo’ and embed the topic of this publication, ODA graduation, into the shifting world order, analysing current roles and settings in international relations and identifying changes in positions, status and categories. What are the overarching issues determining world politics and who are the old and the new actors driving them? What is the impact of these global shifts on international cooperation, especially development cooperation? Of what relevance are roles, status and categories and what is the impact of changes in positions and relations? What challenges face multilateralism and what ways exist to maintain and renew strategic partnerships and shared values?
Using the example of a course taught face-to-face for many years, with lectures, exercises and laboratory practicals, we show how exam-relevant competencies could also be taught successfully online. A suitable "setting" for the teaching and learning process, following recommended practices, remains relevant for the future.
Driven by ongoing digitalization and the big-data trend, ever more data are available, opening up considerable potential, especially for companies. The ability to handle and analyze these data is reflected in the role of the data scientist, currently one of the most sought-after professions. However, integrating data into corporate strategy and culture is a major challenge: complex data and analysis results must also be communicated to stakeholders without a data background. This is where data storytelling plays a decisive role, because to bring about change with data, understanding of and motivation for the subject matter must first be created in a way tailored to the target group. Data storytelling, however, is still a niche topic. Using a systematic literature review, this thesis derives the success factors of data storytelling for effective and efficient communication of data, in order to support data scientists in research and practice in communicating their data and results.
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
Deep learning models are extensively used in various safety-critical applications. Hence these models, besides being accurate, need to be highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in an ensemble, the number of forward passes, and the drop probability, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, by a margin of 2.4% in mIoU and 1.3% in accuracy, while providing reliable uncertainty estimates for decision making.
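A minimal sketch of the common core of such methods: several stochastic predictions (ensemble members, or dropout-enabled forward passes) are averaged, and their disagreement is summarized as predictive entropy. The function name and toy numbers are illustrative, not taken from the paper:

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Combine per-member class probabilities (members x points x classes)
    into a mean prediction and a per-point predictive entropy."""
    mean_probs = member_probs.mean(axis=0)  # average over members / passes
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    return mean_probs, entropy

# Toy example: 3 "members" classifying 2 points over 2 classes.
probs = np.array([
    [[0.9, 0.1], [0.6, 0.4]],
    [[0.8, 0.2], [0.4, 0.6]],
    [[0.9, 0.1], [0.5, 0.5]],
])
mean_probs, entropy = ensemble_uncertainty(probs)
# Point 0: members agree -> low entropy; point 1: they disagree -> high entropy.
```

The same aggregation applies whether the members come from independently trained networks (Deep Ensembles) or repeated stochastic passes through one network (MC-Dropout, MC-DropConnect); only the source of the sampled predictions differs.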
Short summary
This dataset accompanies our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Representation and Experience-Based Learning of Explainable Models for Robot Action Execution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Contents
There are three zip archives included, each of them a dump of a MongoDB database corresponding to one of the three experiments in the paper:
Grasping a drawer handle (handle_drawer_logs.zip)
Grasping a fridge handle (handle_fridge_logs.zip)
Pulling an object (pull_logs.zip)
All three experiments were performed with a Toyota HSR. Only the data necessary for learning the models used in our experiments are included here.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [directory_name]
This will create a MongoDB database with the name of the directory (handle_drawer_logs, handle_fridge_logs, and pull_logs).
Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are, and whether they can be consolidated into a model or framework that aligns previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined yet is refined enough to do justice to the complexities of human behaviour. It also raises issues concerning how subjective well-being is, and should be, measured. Further avenues of research and ways to continue building this model and framework are proposed.
The ability to finely segment different instances of various objects in an environment is a critical capability in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that are capable of producing high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternative regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of boundary-adhering object instances is warranted. However, current object detection techniques seem to ignore this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering to generate the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space whose two coordinate axes represent the natural log transformations of the width and height of the ground-truth bounding boxes.
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by a sizeable ≈ 6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈ 0.77 mAP. In addition, the forward-pass latencies of both SqueezeDet and SqueezeDetOcta are close to ≈ 19 ms. Accounting for boundary adhesion during training improved the baseline SqueezeDet network by ≈ 2.62 mAP. A SqueezeDet network paired with logarithmically extracted anchors improved on the baseline SqueezeDet network by ≈ 1.85 mAP.
In summary, this work demonstrates that if given sufficient fine instance annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations, whilst having the same forward pass latency as that of the bounding box predicting network. The results justify the merits of logarithmically extracted anchors to boost the performance of any anchor-based object detection network. The results also showed that the special handling of image boundary adhering object instances produces more performant object detectors.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity, allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based multi-layer 2.5D display consisting of three layers of images, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head-tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported better or equal spatial perception, while cognitive load stayed the same.
Graph drawing with spring embedders employs an all-pairs computation phase over the graph's vertex set to compute repulsive forces, with O(|V|²) complexity. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices within a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
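A minimal sketch of the naive all-pairs repulsion phase and the cutoff radius that makes a search structure (or hardware tree traversal) pay off. The function, force law, and coordinates are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def repulsive_forces(pos, radius, strength=1.0):
    """Naive O(|V|^2) repulsive-force pass of a spring embedder.
    Vertices farther apart than `radius` exert no force, which is exactly
    the property that a spatial search structure can exploit."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = pos[i] - pos[j]
            dist = float(np.linalg.norm(delta))
            if 0.0 < dist < radius:
                # inverse-distance repulsion pushing i away from j
                forces[i] += strength * delta / (dist * dist)
    return forces

# Two close vertices repel each other; the distant third feels nothing.
pos = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0]])
f = repulsive_forces(pos, radius=1.0)
```

Replacing the inner loop with a radius query (a grid, k-d tree, or BVH traversed by RT cores) only visits the neighbours inside `radius`, which is what reduces the effective cost below quadratic.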
Media play a key role in public opinion and the acceptance of new technologies. Using a qualitative content analysis of journalistic articles on the electric bicycle, actors and their attitudes and actions relating to the e-bike were examined. The analysis covered 444 articles from selected German quality media published in 2018. The study maps the socially relevant discourse on electric bicycles and offers starting points for promoting individual mobility and developing sustainable mobility concepts.
Improperly discarded cigarette butts pose a relevant ecological problem because of the toxins they contain. This research investigates the use of nudging to combat the problem. A quantitative online survey first examined the reasons for the environmentally harmful behaviour (N = 96). Present bias emerged as the statistically significant main reason: many respondents stated that they ignore the long-term ecological costs of improper disposal because of the short-term personal benefit of conveniently flicking away a cigarette butt. A nudge targeting present bias was then developed and tested for effectiveness in a field experiment, documenting the ratio of improperly to properly disposed cigarette butts before and after the nudge was deployed. Without the nudge (N = 92), 64.1 percent of cigarette butts at the study site were disposed of improperly; with the nudge (N = 142), only 38.0 percent. In the field experiment, the nudge was thus used effectively to promote sustainable behaviour.
OSC data
(2020)
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
This established formula collection contains and explains mathematical formulas in their economic context, as they are indispensable in economics and in business practice. Useful aids and illustrative examples support the understanding of the formulas and their practical application, so that the context of business-mathematics formulas is presented clearly and comprehensibly. The collection is an indispensable tool for students of economics and business, as well as a useful reference for decision-makers in business, politics and teaching.
This established formula collection contains and explains statistical formulas as they are indispensable in economics and in business practice. Useful aids and comprehensible examples support the understanding of the formulas and their practical application, so that the context of business-statistics formulas is presented clearly and comprehensibly. The collection is an indispensable tool for students of economics and business, as well as a useful reference for decision-makers in business, politics and teaching.
The development of metals tailored to the metallurgical conditions of laser-based additive manufacturing is crucial to advance the maturity of these materials for use in structural applications. While efforts in this regard are being carried out around the globe, the use of high-strength eutectic alloys has so far received little attention, although previous work showed that rapid solidification techniques can result in ultrafine microstructures with excellent mechanical performance, albeit for small sample sizes. In the present work, a eutectic Ti-32.5Fe alloy has been produced by laser powder bed fusion, aiming to exploit rapid solidification and the capability of this processing technique to produce bulk ultrafine microstructures.
Process energy densities between 160 J/mm³ and 180 J/mm³ resulted in a dense and crack-free material with an oxygen content of ~ 0.45 wt.% in which a hierarchical microstructure is formed by µm-sized η-Ti4Fe2Ox dendrites embedded in an ultrafine eutectic β-Ti/TiFe matrix. The microstructure was studied three-dimensionally using near-field synchrotron ptychographic X-ray computed tomography with an actual spatial resolution down to 39 nm to analyse the morphology of the eutectic and dendritic structures as well as to quantify their mass density, size and distribution. Inter-lamellar spacings down to ~ 30–50 nm were achieved, revealing the potential of laser-based additive manufacturing to generate microstructures smaller than those obtained by classical rapid solidification techniques for bulk materials. The alloy was deformed at 600 °C under compressive loading up to a strain of ~ 30% without damage formation, resulting in a compressive yield stress of ~ 800 MPa.
This study provides a first demonstration of the feasibility to produce eutectic Ti-Fe alloys with ultrafine microstructures by laser powder bed fusion that are suitable for structural applications at elevated temperature.
In this research, a practice-oriented method was developed that allows soil samples to be prepared after field sampling and analyzed for their microplastic content. The extraction method has been validated for two polymers, PA 12 and PE (mulch-film particles), with recovery rates of 100% each for particles larger than 0.5 mm. For particles larger than 63 μm, the recovery rate is 97% for PE mulch-film particles and 86% for PA particles. Furthermore, various spectroscopic detection methods were investigated and compared with regard to their potential and limitations. It was found that digital microscopy is very well suited for determining the colour, size, shape and number of particles, but depends strongly on subjective judgement; it should therefore always be combined with a further detection method. In this work, ATR-FTIR spectroscopy was used for this purpose, which additionally allows the polymer type of individual particles to be determined, with a lower detection limit of 500 μm. The method was applied to five agricultural fields, two of which are farmed conventionally and three organically. To obtain a first impression of the current microplastic contamination of agricultural soils, the results obtained with the method developed in this work were extrapolated and reported as emission coefficients in various units.
This article explores the opportunities, challenges, as well as the activities of the Chinese governmental and commercial stakeholders to promote cross-border e-commerce trade between China and Africa, based on the classification and correlation analysis of the literature from 2011 to 2019. The results show that the biggest driver for the development of China-Africa cross-border e-commerce trade is the gap between the rapid growth of the African population, especially the middle class, and the limited local capability to satisfy their demand. The rapid development of the internet and mobile internet is another driving factor. The biggest challenge is the last mile delivery of logistics, and online payment issues in Africa. At the macro-level the Chinese government has promoted measures such as infrastructure investment, e-commerce test zones and the establishment of pilot projects. At the firm level, Chinese companies have focused on solving practical micro-level local operational problems such as logistics, online payment, and talent training. The results also show that the referred literature is still in its infancy, mostly theoretical and less practical, and requires more in-depth domain specific analysis in the future.
Due to the popularity of the Internet and the networked services that it facilitates, networked devices have become increasingly common in both the workplace and everyday life in recent years—following the trail blazed by smartphones. The data provided by these devices allow for the creation of rich user profiles. As a result, the collection, processing and exchange of such personal data have become drivers of economic growth. History shows that the adoption of new technologies is likely to influence both individual and societal concepts of privacy. Research into privacy has therefore been confronted with continuously changing concepts due to technological progress. From a legal perspective, privacy laws that reflect social values are sought. Privacy enhancing technologies are developed or adapted to take account of technological development. Organizations must also identify protective measures that are effective in terms of scalability and automation. Similarly, research is being conducted from the perspective of Human-Computer Interaction (HCI) to explore design spaces that empower individuals to manage their protection needs with regard to novel data, which they may perceive as sensitive. Taking such an HCI perspective with regard to understanding privacy management on the Internet of Things (IoT), this research mainly focuses on three interrelated goals across the fields of application: 1. Exploring and analyzing how people make sense of data, especially when managing privacy and data disclosure; 2. Identifying, framing and evaluating potential resources for designing sense-making processes; and 3. Exploring the fitness of the identified concepts for inclusion in legal and technical perspectives on supporting decisions regarding privacy on the IoT. Although this work's point of departure is the HCI perspective, it emphasizes the importance of the interrelationships among seemingly independent perspectives. 
Their interdependence is therefore also emphasized and taken into account by subscribing to a user-centered design process throughout this study. More specifically, this thesis adopts a design case study approach. This approach makes it possible to conduct full user-centered design lifecycles in a concrete application case with participants in the context of everyday life. Based on this approach, it was possible to investigate several domains of the IoT that are currently relevant, namely smart metering, smartphones, smart homes and connected cars. The results show that the participants were less concerned about (raw) data than about the information that could potentially be derived from it. Against the background of the constant collection of highly technical and abstract data, the content of which only becomes visible through the application of complex algorithms, this study indicates that people should learn to explore and understand these data flexibly, and provides insights in how to design for supporting this aim. From the point of view of design for usable privacy protection measures, the information that is provided to users about data disclosure should be focused on the consequences thereof for users' environments and life. A related concept from law is “informed consent,” which I propose should be further developed in order to implement usable mechanisms for individual privacy protection in the era of the IoT. Finally, this thesis demonstrates how research on HCI can be methodologically embedded in a regulative process that will inform both the development of technology and the drafting of legislation.
Long-term variability of solar irradiance and its implications for photovoltaic power in West Africa
(2020)
This paper addresses long-term changes in solar irradiance over West Africa (3° N to 20° N and 20° W to 16° E) and their implications for photovoltaic power systems. We use satellite irradiance data (Surface Solar Radiation Data Set-Heliosat, Edition 2.1, SARAH-2.1) to derive photovoltaic yields. Based on 35 years of data (1983–2017), the temporal and regional variability as well as long-term trends of global and direct horizontal irradiance are analyzed. Furthermore, a detailed time series analysis is undertaken at four locations. The dry and wet seasons are considered separately.
This work presents the development of a measuring system for the quality control of ultrapure water. The new system combines ozonation and UV radiation for the oxidation of organic substances. The change in conductivity caused by the oxidation is then correlated with the TOC of the solution.
Today, the importance of lean and effective processes in companies is steadily increasing against the background of competition and cost pressure. To meet this challenge, companies focus on identifying new innovative potentials. Because monotonous, rule-based processes can be automated by software robots, interest in Robotic Process Automation (RPA) has grown steadily in recent years. Before companies decide for or against the use of RPA, however, decision-makers first need to gain an understanding of RPA and be able to assess its potential applications and risks. This article addresses this need by identifying and evaluating both on the basis of a literature review. In the outlook, the future potential of RPA is assessed.
Digitale Güter
(2020)
When a robotic agent experiences a failure while acting in the world, it should be possible to discover why that failure has occurred, namely to diagnose the failure. In this paper, we argue that the diagnosability of robot actions, at least in a classical sense, is a feature that cannot be taken for granted, since it strongly depends on the underlying action representation. We specifically define criteria that determine the diagnosability of robot actions. The diagnosability question is then analysed in the context of a handle manipulation action, for which we discuss two different representations – a composite policy with a learned success model for the action parameters, and a neural network-based monolithic policy – that lie on different sides of the diagnosability spectrum. Through this comparison, we conclude that composite actions are more suited to explicit diagnosis, but representations with less prior knowledge are more flexible. This suggests that model learning may provide a balance between flexibility and diagnosability; however, data-driven diagnosis methods also need to be enhanced in order to deal with the complexity of modern robots.
Object detectors have improved considerably in recent years through advanced CNN architectures. However, many detector hyper-parameters are generally tuned manually or used with the values set by the detector authors. Automatic hyper-parameter optimization has not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose the use of black-box optimization methods – Bayesian Optimization, SMAC, and CMA-ES – to tune the prior/default box scales in Faster R-CNN and SSD. We show that tuning the input image size and prior box anchor scale increases mAP by 2% for Faster R-CNN on PASCAL VOC 2007, and by 3% for SSD. On the COCO dataset with SSD, mAP improves for medium and large objects, but decreases by 1% for small objects. We also perform a regression analysis to identify the significant hyper-parameters to tune.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method: We performed a systematic literature search to identify all published cases. Two hundred eleven patients of whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview on all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview on all published cases with HMGCLD including a list of all known HMGCL mutations.
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model, as they are by nature vague, depend on the user’s background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms – a divergent optimization method that can produce many diverse solutions – with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
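The quality diversity component mentioned above can be illustrated with a minimal MAP-Elites-style loop: instead of converging on a single optimum, the algorithm keeps the best solution found in each behaviour-descriptor bin, yielding an archive of many diverse elites. This is a generic sketch on a toy one-dimensional problem, not the Hyper-Pref system itself; the fitness and descriptor functions below are illustrative assumptions.

```python
import random

def map_elites(fitness, descriptor, n_bins=10, iters=1000, seed=1):
    """Minimal MAP-Elites-style quality diversity loop (a sketch):
    keep the best solution per behaviour-descriptor bin, so the
    archive holds many diverse elites instead of a single optimum."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.5:
            # mutate a randomly chosen elite, clamped to [0, 1]
            parent = rng.choice(list(archive.values()))[1]
            x = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        else:
            x = rng.random()  # random exploration
        b = min(int(descriptor(x) * n_bins), n_bins - 1)
        f = fitness(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)
    return archive

# Toy problem: the descriptor is the solution itself,
# the fitness peaks at x = 0.3 (both chosen for illustration).
archive = map_elites(fitness=lambda x: -(x - 0.3) ** 2,
                     descriptor=lambda x: x)
```

In Hyper-Pref, a variational autoencoder would additionally model the archive and the user's choices; here the archive alone already shows the "many diverse solutions" property that the abstract refers to.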
Until recently, studies of e-banking transactions have focused more on the motivational factors that trigger the intention to accept and use e-banking than on the de-motivational factors that constrain it. In developing countries such as the sub-Saharan economies, however, the latter factors have hardly been explored and remain rudimentary in the literature. Drawing on the Technology Threat Avoidance Theory (TTAT), this study examines the impact of online identity theft on customers’ willingness to engage in e-banking transactions in Ghana. A quantitative survey yielded 393 valid responses from retail bank customers of two leading commercial banks in Ghana for the analyses. Results from PLS-SEM showed that perceived online identity theft positively and significantly predicts “fear of financial loss”, “fear of reputational damage”, and “security and privacy concern”, which in turn negatively mediate the relationship between perceived online identity theft and the intention to engage in e-banking transactions. This study is the first of its kind to extend the TTAT framework to the study of e-banking transactions. It serves as a practical tool for banks seeking to assess customers’ aversion towards the use of Fintech while ensuring the sustainable growth of e-banking transactions in an emerging-economy context. The study is limited to banking institutions in Ghana and does not consider other players in the financial sub-sector. Directions for future research are suggested in the concluding part of the paper.
Do socio-economic factors impede the engagement in online banking transactions? Evidence from Ghana
(2020)
Researchers have long pondered the adoption of online banking transactions. Most studies focus primarily on the motivating factors that affect customers’ intention to adopt and accept these services (technologies). Research into the constraining factors, in particular socio-economic factors, however, barely exists in the literature, especially in the context of sub-Saharan Africa. Against this background, this paper seeks to fill this gap by: (1) assessing the socio-economic factors impeding the engagement in e-banking transactions among retail bank customers in Ghana, and (2) examining the moderating effect of customer experience of the Internet on the identified factors that inhibit the engagement in online banking in Ghana. The paper used a quantitative research approach to obtain data from two leading Ghanaian banks. Of the 450 questionnaires distributed, 393 were valid for analysis. Data were analyzed with the aid of PLS-SEM (partial least squares structural equation modeling). Findings revealed that the perceived knowledge gap and the price of digital devices directly influence the intention to abstain from e-banking transactions among Ghanaian bank customers, while customer experience (frequent use of the Internet), as a moderator variable, significantly affects both the relationship between the perceived knowledge gap and the intent to abstain from e-banking transactions, and the relationship between finance charges and that intent. Study implications and directions for future research are discussed in the paper.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2020)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. Two hundred forty-four patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first 2 years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% (157 of 204) of patients showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome when compared with many other organic acidurias.
Intelligent dialogue systems – chatbots – are increasingly deployed as virtual points of contact by companies and institutions. Drawing on a knowledge base, chatbots can answer a large share of customer enquiries automatically. An analogous use of chatbots as digital points of contact for public administrations is conceivable: they could help citizens find their way through administrative structures and make use of government services efficiently and effectively.
This thesis examines the use of a chatbot in public administration with regard to the costs incurred and the benefits to be expected. Based on an extensive literature review and the prototypical implementation of a chatbot for a city portal, it identifies the challenges of this application domain, discusses the concrete functioning and implementation strategies of chatbots, and formulates several success factors that form the core of a set of recommendations for decision-makers in public administrations.
Object detectors have improved considerably in recent years through advanced Convolutional Neural Network (CNN) architectures. However, many detector hyper-parameters are generally not tuned and are used with the values set by the detector authors. Black-box optimization methods have gained attention in recent years because of their ability to optimize the hyper-parameters of various machine learning algorithms and deep learning models, yet they have not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose the use of black-box optimization methods such as Gaussian Process based Bayesian Optimization (BOGP), Sequential Model-based Algorithm Configuration (SMAC), and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to tune the hyper-parameters of Faster R-CNN and the Single Shot MultiBox Detector (SSD). For Faster R-CNN, tuning the input image size and the prior box anchor scales and ratios with BOGP, SMAC, and CMA-ES increased performance by around 1.5% in terms of Mean Average Precision (mAP) on PASCAL VOC. Tuning the anchor scales of SSD increased mAP by 3% on the PASCAL VOC and marine debris datasets. On the COCO dataset with SSD, mAP improves for medium and large objects, but decreases by 1% for small objects. The experimental results show that black-box optimization methods increase mAP by optimizing the object detectors, and that they achieve better results than the hand-tuned configurations in most cases.
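The tuning loop described in the abstract above – treat the detector's mAP as a black-box function of a few hyper-parameters and search for the best configuration – can be sketched in a few lines. The sketch below uses plain random search as a stand-in for BOGP, SMAC, or CMA-ES, and a synthetic surrogate objective in place of a real detector evaluation; the function names, bounds, and the "good" scale values are illustrative assumptions, not part of the original work.

```python
import random

def evaluate_map(anchor_scales):
    """Hypothetical surrogate for detector mAP as a function of
    SSD-style anchor scales. A real run would train and evaluate
    the detector on e.g. PASCAL VOC here instead."""
    target = [0.1, 0.35, 0.65]  # assumed "good" scales for this toy objective
    return 1.0 - sum((a - t) ** 2 for a, t in zip(anchor_scales, target))

def random_search(n_trials=200, seed=0):
    """Minimal black-box loop: sample anchor-scale candidates within
    bounds and keep the best-scoring configuration. BOGP, SMAC, or
    CMA-ES would replace the uniform sampling with model-guided
    proposals, but the evaluate-and-keep-best structure is the same."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # three anchor scales, sorted so they stay ordered small -> large
        cfg = sorted(rng.uniform(0.05, 0.95) for _ in range(3))
        score = evaluate_map(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Because each evaluation is expensive in the real setting (a full training/validation cycle), the model-guided methods named in the abstract aim to reach a good configuration in far fewer trials than this naive loop.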
The motor protein myosin drives a wide range of cellular and muscular functions by generating directed movement and force, fueled through adenosine triphosphate (ATP) hydrolysis. Release of the hydrolysis product adenosine diphosphate (ADP) is a fundamental and regulatory process during force production. However, details about the molecular mechanism accompanying ADP release are scarce due to the lack of representative structures. Here we solved a novel blebbistatin-bound myosin conformation with critical structural elements in positions between the myosin pre-power stroke and rigor states. ADP in this structure is repositioned towards the surface by the phosphate-sensing P-loop, and stabilized in a partially unbound conformation via a salt-bridge between Arg131 and Glu187. A 5 Å rotation separates the mechanical converter in this conformation from the rigor position. The crystallized myosin structure thus resembles a conformation towards the end of the two-step power stroke, associated with ADP release. Computationally reconstructing ADP release from myosin by means of molecular dynamics simulations further supported the existence of an equivalent conformation along the power stroke that shows the same major characteristics in the myosin motor domain as the resolved blebbistatin-bound myosin-II·ADP crystal structure, and identified a communication hub centered on Arg232 that mediates chemomechanical energy transduction.
In recent years, a plethora of observations with high spectral resolution of sub-millimetre and far-infrared transitions of methylidene (CH), conducted with Herschel and SOFIA, have demonstrated this radical to be a valuable proxy for molecular hydrogen that can be used for characterising molecular gas within the interstellar medium on a Galactic scale, including the CO-dark component. We report the discovery of the 13CH isotopologue in the interstellar medium using the upGREAT receiver on board SOFIA. We have detected the three hyperfine structure components of the ≈2 THz frequency transition from its X2Π1∕2 ground-state towards the high-mass star-forming regions Sgr B2(M), G34.26+0.15, W49(N), and W51E and determined 13CH column densities. The ubiquity of molecules containing carbon in the interstellar medium has turned the determination of the ratio between the abundances of the two stable isotopes of carbon, 12C/13C, into a cornerstone for Galactic chemical evolution studies. Whilst displaying a rising gradient with galactocentric distance, this ratio, when measured using observations of different molecules (CO, H2CO, and others), shows systematic variations depending on the tracer used. These observed inconsistencies may arise from optical depth effects, chemical fractionation, or isotope-selective photo-dissociation. Formed from C+ either through UV-driven or turbulence-driven chemistry, CH reflects the fractionation of C+, and does not show any significant fractionation effects, unlike other molecules that were previously used to determine the 12C/13C isotopic ratio. This makes it an ideal tracer for the 12C/13C ratio throughout the Galaxy. By comparing the derived column densities of 13CH with previously obtained SOFIA data of the corresponding transitions of the main isotopologue 12CH, we therefore derive 12C/13C isotopic ratios toward Sgr B2(M), G34.26+0.15, W49(N) and W51E. 
Adding our values derived from 12/13CH to previous calculations of the Galactic isotopic gradient, we derive a revised value of 12C/13C = 5.87(0.45)RGC + 13.25(2.94).
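The revised gradient in the last sentence is a simple linear relation in galactocentric distance RGC (in kiloparsec, with the quoted 1-sigma uncertainties of 0.45 on the slope and 2.94 on the intercept). As a quick worked example, it can be evaluated as follows; the function name is ours, and the units are an assumption based on the usual convention for such gradients.

```python
def carbon_isotope_ratio(r_gc_kpc):
    """Evaluate the revised Galactic gradient from the abstract:
    12C/13C = 5.87 * R_GC + 13.25, with R_GC in kpc (assumed).
    Returns the predicted 12C/13C abundance ratio."""
    return 5.87 * r_gc_kpc + 13.25
```

For example, at the Galactic centre (RGC = 0) the relation gives 12C/13C = 13.25, and the ratio rises linearly with galactocentric distance, matching the rising gradient described in the abstract.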
Germany has recently seen a growing number of diesel driving bans in major cities. At the same time, large cities are becoming increasingly popular places to live. Transport companies must offer the population sustainable mobility solutions that allow a maximum of flexibility. Modern mobility-as-a-service concepts and innovations in mobility call into question classical, schedule-oriented public transport and, with it, the existence of bus stops. Qualitative expert interviews show that inner-city bus stops will change against the background of the increasing digital networking of mobility providers and the resulting modern mobility-as-a-service concepts. The results suggest that inner-city bus stops will continue to exist and will be complemented by on-demand services. A radical change, such as the large-scale introduction of autonomously driving buses, could lead to a complete overhaul of the bus stop in the long term.
Green infrastructure improves environmental health in cities, benefits human health, and provides habitat for wildlife. Increasing urbanization has demanded the expansion of urban areas and the transformation of existing cities. The adoption of compact design in urban planning is a recommended strategy to minimize environmental impacts; however, it may undermine green infrastructure networks within cities as it sets a battleground for urban space. Under this scenario, multifunctionality of green spaces is highly desirable, but reconciling human needs and biodiversity conservation in a limited space is still a challenge. Through a systematic review, we first compiled the characteristics of urban green spaces that affect mental health and urban wildlife support, and then identified potential synergies and trade-offs between these dimensions. A framework based on the One Health approach is proposed, synthesizing the interlinkages between green space quality, mental health, and wildlife support, and providing a new holistic perspective on the topic. Looking at the human-wildlife-environment relationships simultaneously may contribute to practical guidance on more effective green space design and management that benefits all dimensions.