Refine
Departments, institutes and facilities
- Fachbereich Wirtschaftswissenschaften (89)
- Fachbereich Informatik (65)
- Fachbereich Angewandte Naturwissenschaften (58)
- Fachbereich Ingenieurwissenschaften und Kommunikation (58)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (50)
- Fachbereich Sozialpolitik und Soziale Sicherung (44)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (23)
- Institut für Verbraucherinformatik (IVI) (17)
- Institut für Medienentwicklung und -analyse (IMEA) (16)
- Institut für funktionale Gen-Analytik (IFGA) (16)
Document Type
- Article (142)
- Part of a Book (68)
- Conference Object (65)
- Book (monograph, edited volume) (20)
- Preprint (12)
- Contribution to a Periodical (8)
- Report (8)
- Research Data (6)
- Doctoral Thesis (6)
- Master's Thesis (6)
Year of publication
- 2022 (362)
Keywords
- Machine Learning (5)
- Textbook (4)
- Media aesthetics (4)
- virtual reality (4)
- Cathepsin K (3)
- GDPR (3)
- Knowledge Graphs (3)
- Lignin (3)
- Media (3)
- Media studies (3)
Auswirkungen einer anhaltenden, inflationären Geldpolitik in der Eurozone auf den privaten Sparer
(2022)
This bachelor's thesis critically examines the effects of a persistently inflationary monetary policy in the eurozone on private savers. The thesis shows how the strong expansion of the money supply influences savers' options and decisions, and to what extent such a monetary policy is compatible with savers' interests.
The aim of this master's thesis was to probe the views of Bonn's citizens on the smart city project of the German city. A literature review helped define the term smart city and identify the smart city concept most commonly used in Germany. It can be summarized as an urban planning concept that uses information and communication technology to build citizen-centric, sustainable cities. Accordingly, a smart city should include transparent communication and the participation of its citizens. The websites and various publications of Bonn were examined to understand its smart city strategy and vision. This revealed inconsistencies. To resolve these inconsistencies, three representatives of the city were interviewed. Based on the knowledge gained up to this point, two groups of Bonn's inhabitants discussed the Smart City Bonn and presented their perception of it. This methodology yielded the following results. The city's communication and participation are in many cases in line with current recommendations for a smart city. Bonn has apparently recognized the relevance of these aspects in theory but should also implement them more consistently in practice. Currently, the city council publishes contradictory information and does not plan to incorporate the views of Bonn's citizens in developing the smart city strategy in the first place, as recommended in the literature.
Self-supervised learning has proved to be a powerful approach to learning image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms to improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create large public labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
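To illustrate how one of the three pretext tasks mentioned above (RotNet) obtains labels for free, the following sketch rotates each unlabeled patch by 0/90/180/270 degrees and uses the rotation index as the classification target. The data, shapes, and function name are made up for the example and are not taken from the paper's pipeline:

```python
import numpy as np

def make_rotation_pretext_batch(images):
    """Build a self-supervised pretext batch: every image is rotated by
    k quarter-turns (k = 0..3) and k becomes the free classification label."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# toy 8x8 "sonar patches" standing in for real unlabeled data
rng = np.random.default_rng(0)
patches = rng.random((3, 8, 8))
x, y = make_rotation_pretext_batch(patches)
```

A network trained to predict `y` from `x` must learn orientation-sensitive features, which is the representation-learning signal RotNet exploits.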
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model compression by truncating the parameter sums.
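A minimal NumPy sketch of the canonical re-parametrization described above, assuming a third-order weight tensor; function and variable names are illustrative, not the paper's implementation:

```python
import numpy as np

def canonical_weight(factors, mode_scales, gamma):
    """Reassemble a weight tensor from its canonical (CP) form:
    W = gamma * sum_r (sa[r]*a_r) x (sb[r]*b_r) x (sc[r]*c_r),
    where factors = (A, B, C) hold the columns a_r, b_r, c_r and
    mode_scales hold per-mode scale weights optimized separately."""
    A, B, C = factors
    sa, sb, sc = mode_scales
    W = np.einsum('ir,jr,kr,r,r,r->ijk', A, B, C, sa, sb, sc)
    return gamma * W

def truncate(factors, mode_scales, keep):
    """Compression after training: keep only the `keep` terms of the
    parameter sum with the largest combined scale, drop the rest."""
    sa, sb, sc = mode_scales
    order = np.argsort(-np.abs(sa * sb * sc))[:keep]
    A, B, C = (F[:, order] for F in factors)
    return (A, B, C), (sa[order], sb[order], sc[order])

# rank-2 demo with orthonormal factor columns
A = np.eye(2); B = np.eye(2); C = np.eye(2)
scales = (np.array([3.0, 0.1]), np.ones(2), np.ones(2))
W = canonical_weight((A, B, C), scales, gamma=2.0)
```

Because the scales enter multiplicatively per term, ranking terms by their combined scale gives the convenient post-training truncation the abstract mentions.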
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Estimating and understanding the surroundings of the vehicle precisely is the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system comprises various subsystems, such as localization, detection and avoidance of (static and dynamic) obstacles, and mapping. To perceive the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, and LiDARs. These systems rely on deep learning techniques that transform the huge amount of sensor data into semantic information on which object detection and localization tasks are performed. For numerous driving tasks, accurate results require the location and depth information of a particular object. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of objects. Recent research shows that 3D object detection frameworks performing object detection and localization on LiDAR data, as well as sensor fusion techniques, achieve significant performance improvements. In this work, we present a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement obtained through sensor fusion techniques, discuss various state-of-the-art methods in both cases, perform experimental analysis, and provide future research directions.
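One common ingredient of LiDAR/camera fusion is projecting LiDAR points into the image plane so point-wise depth can be attached to image detections. A minimal pinhole-camera sketch, with an assumed extrinsic matrix `T_cam_lidar` and intrinsics `K` that are not tied to any particular framework in the study:

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project 3D LiDAR points (N, 3) into pixel coordinates:
    rigid transform into the camera frame, then pinhole intrinsics;
    points behind the camera are flagged via the returned mask."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])        # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # camera frame
    in_front = pts_cam[:, 2] > 0                        # positive depth only
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                       # perspective divide
    return uv, in_front

# toy calibration: frames coincide, simple intrinsics
K = np.array([[100., 0., 64.], [0., 100., 48.], [0., 0., 1.]])
T = np.eye(4)
pts = np.array([[0., 0., 2.], [1., 0., 2.], [0., 0., -1.]])
uv, ok = project_lidar_to_image(pts, T, K)
```

In a fusion pipeline, `uv` indexes image features (or 2D boxes) that are then combined with the LiDAR point's depth.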
Where individual utility calculations and shared expectations end, the terrain of craftsmanship begins. On the assumption that political decision-making is a pragmatic process of problem-solving and weighing alternatives, the contributions to this interdisciplinary volume reopen the question of how courses of action in politics are devised, negotiated and selected under conditions of bounded rationality. The pandemic has lent this undertaking an unforeseen urgency: established principles, decision-making arenas and practices of politics are more than ever up for debate. It is time to survey them anew.
(Publisher's description)
Fatigue strength estimation is a costly manual material characterization process in which state-of-the-art approaches follow a standardized experiment and analysis procedure. In this paper, we examine a modular, Machine Learning-based approach for fatigue strength estimation that is likely to reduce the number of experiments and, thus, the overall experimental costs. Despite its high potential, deployment of a new approach in a real-life lab requires more than the theoretical definition and simulation. Therefore, we study the robustness of the approach against misspecification of the prior and discretization of the specified loads. We identify its applicability and its advantageous behavior over the state-of-the-art methods, potentially reducing the number of costly experiments.
This book shows in concrete terms what business process management is and how it can be used. The central aspects are explained, and practical tools are presented with the help of examples. Make your daily work of analysing and optimising business processes easier! Contents: a running case study; an overview of practice-relevant modelling methods; modelling of process maps, swimlanes, BPMN and eEPC diagrams; analysis and optimisation of processes; process controlling with key performance indicators.
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. They are pricey, yet homeowners use them to get local air data without evaluating causal effects. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller processor. A 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storing air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The device allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
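The PMS7003 mentioned above streams fixed 32-byte frames over a serial line. The following parsing sketch is based on the commonly documented frame layout (0x42 0x4D header, big-endian uint16 fields, byte-sum checksum); the field offsets are assumptions taken from public datasheets, not from this system's firmware:

```python
import struct

def parse_pms7003_frame(frame: bytes):
    """Parse one 32-byte PMS7003 frame: 0x42 0x4D header, a length
    field, 13 big-endian uint16 data fields, and a trailing checksum
    equal to the byte sum of everything before it. Returns the
    atmospheric PM1.0 / PM2.5 / PM10 readings in ug/m3."""
    if len(frame) != 32 or frame[:2] != b'\x42\x4d':
        raise ValueError('not a PMS7003 frame')
    checksum = struct.unpack('>H', frame[30:])[0]
    if checksum != sum(frame[:30]):
        raise ValueError('checksum mismatch')
    fields = struct.unpack('>13H', frame[4:30])
    return {'pm1_0': fields[3], 'pm2_5': fields[4], 'pm10': fields[5]}

# build a synthetic frame: CF=1 values, then atmospheric values, then padding
data = (10, 15, 20, 8, 12, 18) + (0,) * 7
body = b'\x42\x4d' + struct.pack('>H', 28) + struct.pack('>13H', *data)
frame = body + struct.pack('>H', sum(body))
reading = parse_pms7003_frame(frame)
```

Validating the checksum before use is worthwhile on noisy serial links like the microcontroller-to-minicomputer hop described above.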
Using a life-cycle approach, we identify key gaps for social reform in Georgia. The reduction of informal work is the most pressing of these, since formal employment is the backbone of any robust and reliable social insurance scheme. At the same time, greater financial resources are required through taxation in order to enable systematic social reform in Georgia. Both interventions are needed in order to fill the gaps in the current social protection system, which include the limited scope of pension and health insurance, as well as the lack of permanent unemployment insurance and universal child benefits.
Against the background of Germany’s long experience with social protection, we outline the main principles of the German welfare state and present the design of three main social insurance branches (pensions, health and unemployment). Based on the mixed experience that has emerged in Germany, in particular due to path dependencies and political deadlock, we derive lessons that inform a clear and coherent vision for social reform in Georgia.
Buchbesprechung
(2022)
Europäische Sozialpolitik
(2022)
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
The aim of the eighth edition of the scientific workshop “Usable Security and Privacy” at Mensch und Computer 2022 is to present current research and practice contributions and then discuss them with the participants. The workshop is intended to continue and further develop an established forum in which experts from various fields, e.g. usability and security engineering, can exchange ideas across disciplinary boundaries.
Zum Geleit
(2022)
Eintreten und abschalten
(2022)
Background There is a lack of cardiac magnetic resonance (CMR) data regarding mid- to long-term myocardial damage due to Covid-19 in elite athletes. Objective This study investigated the mid- to long-term consequences of myocardial involvement after a Covid-19 infection in elite athletes.
Methods Between January 2020 and October 2021, 27 athletes of the German Olympic centre Rhineland with confirmed Covid-19 infection were analyzed. Nine healthy non-athlete volunteers served as controls. CMR was performed a mean of 182 days (SD 99) after the initial positive test result.
Results CMR did not reveal any signs of acute myocarditis according to the current Lake Louise criteria, or of myocardial damage, in any of the 26 elite athletes with previous Covid-19 infection. Nevertheless, 92 % of the athletes experienced a symptomatic course, and 54 % reported symptoms lasting more than 4 weeks. In one male athlete, CMR revealed an arrhythmogenic right ventricular cardiomyopathy (ARVC), and this athlete was excluded from the study. Athletes had significantly enlarged left and right ventricular volumes and increased left ventricular myocardial mass in comparison to the healthy control group (LVEDVi 103.4 vs. 91.1 ml/m², p=0.031; RVEDVi 104.1 vs. 86.6 ml/m², p=0.007; LVMi 59.0 vs. 46.2 g/m², p=0.002).
Conclusion Our findings suggest that the risk of mid- to long-term myocardial damage in elite athletes is very low to negligible. No conclusions can be drawn regarding myocardial injury in the acute phase of infection, nor about possible long-term myocardial effects in the general population.
Bionik
(2022)
How do they do that? one might ask in view of the astonishing abilities of some living creatures. Biomimetics asks further: and how can it be imitated? This is a focus of this textbook, which not only explains biomimetics through numerous examples but also conveys a procedure for identifying biological solutions and transferring them to technical applications. Basic information on biology and fundamentals of engineering design ensure easy access to the material. With 3D printing as a key technology and its treatment of sustainability, the book also addresses current developments. This holistic view of biomimetics is intended to enable and motivate readers to carry out biomimetic projects of their own. The present edition has been revised and supplemented with current research findings and developments. (Publisher's description)
From Conclusion to Coda
(2022)
Medienkulturwissenschaft
(2022)
This volume explains the origins, development and thematic breadth of media culture studies. Its fields are demonstrated, and research questions are developed. Particular attention is paid to interdisciplinary relationships, for instance with communication studies and literary studies. From this perspective, the history of individual media is also presented, and selected phenomena are sketched through the “lens” of media culture studies. This allows the history and philosophy of media culture studies to be discussed, along with its applications and its place within a media degree programme with a strong practical orientation in which the theories and aesthetics of media are not neglected. (Publisher's description)
Liebe Leserinnen und Leser
(2022)
Statistik im Fokus
(2022)
论“数字化大学”的内涵及发展
(2022)
Qualität der Qualitätsprüfung: Testberichte im klassischen und modernen Videospieljournalismus
(2022)
The heyday of print video game journalism around the turn of the millennium is over. The circulation of classic video game magazines such as Gamestar and PC Games has been declining for more than 15 years. Other magazines have since been discontinued for economic reasons, including PC Action and the one-time market leader Computer Bild Spiele. Nevertheless, a journalistic countermovement developed, founded by Kieron Gillen in his 2004 manifesto "The New Games Journalism". In Germany, video game magazines such as GAIN and WASD emerged, whose reporting treats video games less as a product and increasingly as an artistic object, placing them in a social and cultural context.
Regardless of this, editorial teams still review video games technically and in terms of content, and present the result to the audience as a review. Since, from a historical perspective, reviews are the core content of video game magazines, they serve as the object of analysis of this thesis and as an indicator of the quality of the magazines as a whole. In view of the divergent developments in video game journalism, the following question is to be answered: do modern video game magazines offer higher quality than classic magazines? To this end, a qualitative content analysis of the reviews is carried out, together with a comparison against established quality criteria from general journalism as well as from specialist, service and video game journalism.
Jahresbericht 2021
(2022)
Breaking new ground and setting new accents in research, teaching and transfer: the Hochschule Bonn-Rhein-Sieg (H-BRS) achieved this last year despite the Corona pandemic. Talents, ideas and cooperations came to fruition in a variety of ways, always in close exchange between applied science, society and business. „Entfalten“ (unfold) is therefore the motto of the H-BRS annual report for 2021, which has now been published.
In the course of the migration movement of 2015 and 2016, the humane accommodation of refugees in German municipalities gained attention. The rise in the number of asylum seekers in the municipalities, together with the federal initiative “Protection of refugees in refugee accommodation”, brought about changes with regard to protection standards in municipal refugee accommodation. This article explains these changes by means of an actor-centred, organisational-sociological approach. It draws on empirical findings from two German municipalities gathered in the project “Organisational Perspectives on Human Security Standards for Refugees in Germany”.
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate between gravity and radiation exposure on single-gene deletion/overexpressing strains' ability to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. 
The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
Haut und Design
(2022)
The white ground crater by the Phiale Painter (450–440 BC) exhibited in the “Pietro Griffo” Archaeological Museum in Agrigento (Italy) depicts two scenes from the Perseus myth. The vase is of utmost importance to archaeologists because the figures are drawn on a white background with remarkable daintiness and attention to detail. Although white ground ceramics are well documented from an archaeological and historical point of view, doubts concerning the composition of pigments and binders and the production technique remain unresolved. This kind of vase is a valuable rarity, the use of which is documented in elitist funeral rituals. The study aims to investigate the constituent materials and the execution technique of this magnificent crater. The investigation was carried out in situ using non-destructive and non-invasive techniques. Portable X-ray fluorescence and Fourier-transform total reflection infrared spectroscopy complemented visible and ultraviolet light photography to obtain both an overview of and specific information on the vase. The XRF data were used to produce false colour maps showing the location of the various elements detected, using the program SmART_scan. The use of gypsum as the material for the white ground is an important result that deserves to be investigated further in similar vases.
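The false colour maps mentioned above combine per-element XRF intensities into a single image. The following is a minimal sketch of the general idea only (not of the SmART_scan program itself), mapping up to three min-max-normalised element maps onto RGB channels:

```python
import numpy as np

def false_colour_map(elem_maps):
    """Combine up to three per-element XRF intensity maps into one
    false-colour RGB image: each element's normalised counts drive one
    colour channel, so co-located elements show up as mixed colours."""
    rgb = np.zeros(elem_maps[0].shape + (3,))
    for ch, m in enumerate(elem_maps[:3]):
        lo, hi = m.min(), m.max()
        rgb[..., ch] = (m - lo) / (hi - lo) if hi > lo else 0.0
    return rgb

# toy 2x2 count maps for three hypothetical elements (e.g. Ca, Fe, S)
ca = np.array([[0., 1.], [2., 4.]])
fe = np.array([[4., 0.], [0., 0.]])
s = np.zeros((2, 2))
rgb = false_colour_map([ca, fe, s])
```

Per-channel normalisation makes weak trace elements visible alongside dominant ones, which is the usual motivation for false colour rendering of elemental maps.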
The Poverty Reduction Effect of Social Protection: The Pros and Cons of a Multidisciplinary Approach
(2022)
There is a growing body of knowledge on the complex effects of social protection on poverty in Africa. This article explores the pros and cons of a multidisciplinary approach to studying social protection policies. Our research aimed to study the interaction between cash transfers and social health protection policies in terms of their impact on inclusive growth in Ghana and Kenya. It also explored the policy reform context over time to unravel programme dynamics and outcomes. The analysis combined econometric and qualitative impact assessments with national- and local-level political economy analyses. Dynamic effects and an improved understanding of processes in particular are well captured by this approach, thus pushing the understanding of implementation challenges over and beyond a ‘technological fix,’ as has been argued before by Niño-Zarazúa et al. (World Dev 40:163–176, 2012). However, multidisciplinary research puts considerable demands on data and data handling. Finally, some poverty reduction effects play out over a longer time, requiring longitudinal, consistent data that are still scarce.
Cooperation between researchers and practitioners during the different stages of the research process is promoted because it can benefit both society and research, supporting processes of ‘transformation’. While acknowledging the important potential of research–practice collaborations (RPCs), this paper reflects on RPCs from a political-economy perspective in order to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and even help produce or exacerbate existing inequalities. The potential merits and limitations of RPCs therefore need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, attempts to contribute to societal change and the ethical concern for equity lie at the heart of international development research; on the other hand, the risk of encountering such asymmetries there is comparatively high.
In young adulthood, important foundations are laid for health later in life. Hence, more attention should be paid to health measures concerning students. A research field that is relevant to health but hitherto somewhat neglected in the student context is the phenomenon of presenteeism. Presenteeism refers to working despite illness and is associated with negative health and work-related effects. This study attempts to bridge the research gap regarding students and examines the effects of and reasons for this behavior. The consequences of digital learning for presenteeism behavior are also considered. A student survey (N = 1036) and qualitative interviews (N = 11) were conducted. The results of the quantitative study show significant negative relationships between presenteeism and health status, well-being, and ability to study. An increased experience of stress and a low level of detachment, as characteristics of digital learning, also show significant relationships with presenteeism. The qualitative interviews highlighted not wanting to miss anything as the most important reason for presenteeism. The results provide useful insights for developing countermeasures that can easily be integrated into university life, such as establishing fixed learning partners or using additional digital learning material.
Background: Since presenteeism is related to numerous negative health and work-related effects, measures are required to reduce it. There are initial indications that how an organization deals with health has a decisive influence on employees’ presenteeism behavior.
Aims: The concept of health-promoting collaboration was developed on the basis of these indications. As an extension of healthy leadership, it includes not only the leader but also co-workers. In modern forms of collaboration, leaders cannot be assigned sole responsibility for employees’ health, since the leader is often hardly visible (digital leadership) or there is no longer a clear leader (shared leadership). The study examines the concept of health-promoting collaboration in relation to presenteeism. Relationships between health-promoting collaboration, well-being and work ability are also in focus, with presenteeism regarded as a mediator.
Methods: The data comprise the findings of a quantitative survey of 308 employees at a German university of applied sciences. Correlation and mediator analyses were conducted.
Results: The results show a significant negative relationship between health-promoting collaboration and presenteeism. Significant positive relationships were found between health-promoting collaboration and both well-being and work ability. Presenteeism was identified as a mediator of these relationships.
Conclusion: The relevance of health-promoting collaboration in reducing presenteeism was demonstrated and various starting points for practice were proposed. Future studies should investigate further this newly developed concept in relation to presenteeism.
Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns, properties that preclude their deployment as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies to then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: The former ones might profit from the broad range of machine learning topics covered and discussions on limitations of recent methods. The latter ones might gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and strategies on how to help to advance existing approaches accordingly.
Breaking new ground and setting new trends in research, teaching and transfer: this is what the Hochschule Bonn-Rhein-Sieg (H-BRS) managed to do last year despite the coronavirus pandemic. Talents, ideas and collaborations have come to fruition in various ways, always in close exchange between applied science, society and business. "expand" is therefore the motto of the H-BRS annual report for the year 2021, which has now been published.
We analyze short-term effects of free hospitalization insurance for the poorest quintile of the population in the province of Khyber Pakhtunkhwa, Pakistan. First, we exploit that eligibility is based on an exogenous poverty score threshold and apply a regression discontinuity design. Second, we exploit imperfect rollout and compare insured and uninsured households using propensity score matching. With both methods we fail to detect significant effects on the incidence of hospitalization. Whereas the program did not meaningfully increase the quantity of health care consumed, insured households more often choose private hospitals, indicating a shift towards higher perceived quality of care.
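The regression discontinuity step described above can be sketched numerically: fit separate local linear regressions on each side of the poverty score cutoff and take the jump in predicted outcomes at the threshold as the treatment effect. The cutoff value, bandwidth, and data below are synthetic illustrations, not figures from the study.

```python
import numpy as np

def rdd_effect(score, outcome, cutoff, bandwidth):
    """Sharp regression discontinuity: fit a local linear regression
    on each side of the cutoff and return the difference of their
    predictions at the cutoff (below-threshold minus above)."""
    mask = np.abs(score - cutoff) <= bandwidth
    s, y = score[mask] - cutoff, outcome[mask]
    left, right = s < 0, s >= 0
    # np.polyfit(deg=1) returns [slope, intercept]; the intercept is
    # the predicted outcome exactly at the cutoff.
    a_left = np.polyfit(s[left], y[left], 1)[1]
    a_right = np.polyfit(s[right], y[right], 1)[1]
    return a_left - a_right

# Synthetic illustration with a true jump of 0.25 at the cutoff:
# households below the (hypothetical) score of 40 are eligible.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 5000)
treated = score < 40.0
outcome = 0.01 * score + 0.25 * treated + rng.normal(0, 0.05, 5000)
effect = rdd_effect(score, outcome, cutoff=40.0, bandwidth=10.0)
```

The estimate recovers the simulated jump of 0.25 because, near the threshold, assignment is as good as random conditional on the score.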
Research has identified nudging as a promising and effective tool to improve healthy eating behavior in a cafeteria setting. However, it remains unclear who is and who is not “nudgeable” (susceptible to nudges). An important influencing factor at the individual level is nudge acceptance. While some progress has been made in determining influences on the acceptance of healthy eating nudges, research on how personal characteristics (such as the perception of social norms) affect nudge acceptance remains scarce. We conducted a survey of 1032 university students to assess the acceptance of nine different types of healthy eating nudges in a cafeteria setting, together with four potentially influential factors (social norms, health-promoting collaboration, responsibility to promote healthy eating, and procrastination). These factors are likely to play a role within a university and a cafeteria setting. The present study showed that key influential factors of nudge acceptance were the perceived responsibility to promote healthy eating and health-promoting collaboration. We also identified three different student clusters with respect to nudge acceptance, demonstrating that not all nudges were accepted equally. In particular, default, salience, and priming nudges were at least moderately accepted regardless of the degree of nudgeability. Our findings provide useful policy implications for nudge development by university, cafeteria, and public health officials. Recommendations are formulated for strengthening the theoretical background of nudge acceptance and the susceptibility to nudges.
Modern PCR-based analytical techniques have reached sensitivity levels that allow for obtaining complete forensic DNA profiles from even tiny traces containing genomic DNA amounts as small as 125 pg. Yet these techniques have reached their limits when it comes to the analysis of traces such as fingerprints or single cells. One suggestion to overcome these limits has been the usage of whole genome amplification (WGA) methods. These methods aim at increasing the copy number of genomic DNA and by this means generate more template DNA for subsequent analyses. Their application in forensic contexts has so far remained mostly an academic exercise, and results have not shown significant improvements and have even raised additional analytical problems. Based on these disappointments, the forensic application of WGA seems, until very recently, to have largely been abandoned. In the meantime, however, novel improved methods are pointing towards a perspective for WGA in specific forensic applications. This review article tries to summarize current knowledge about WGA in forensics and suggests the forensic analysis of single-donor bioparticles and of single cells as promising applications.
A main factor hampering life in space is represented by high atomic number nuclei and energy (HZE) ions that constitute about 1% of the galactic cosmic rays. In the frame of the “STARLIFE” project, we accessed the Heavy Ion Medical Accelerator (HIMAC) facility of the National Institute of Radiological Sciences (NIRS) in Chiba, Japan. By means of this facility, the extremophilic species Haloterrigena hispanica and Parageobacillus thermantarcticus were irradiated with high LET ions (i.e., Fe, Ar, and He ions) at doses corresponding to long permanence in the space environment. The survivability of HZE-treated cells depended on both the storage time and the hydration state during irradiation; indeed, dry samples were shown to be more resistant than hydrated ones. With particular regard to spores of the species P. thermantarcticus, they were the most resistant to irradiation in a water medium: an analysis of the changes in their biochemical fingerprinting during irradiation showed that, below the survivability threshold, the spores undergo a germination-like process, while for higher doses, inactivation takes place as a consequence of the concomitant release of the core’s content and a loss of integrity of the main cellular components. Overall, the results reported here suggest that the selected extremophilic microorganisms could serve as biological models for space simulation and/or real space condition exposure, since they showed good resistance to ionizing radiation exposure and were able to resume cellular growth after long-term storage.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
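The cross-encoder input described above (a concatenation of text tokens and KG-triple tokens, with a marker for each token's modality) can be sketched as follows. All token names and ids here are illustrative, not the actual STonKGs vocabulary or preprocessing.

```python
# Hypothetical sketch of assembling one text-triple pair for a
# single-stream cross encoder: the sentence tokens and one
# (head, relation, tail) triple share a single input sequence, and a
# token-type id records each position's modality.

CLS, SEP = "[CLS]", "[SEP]"

def build_text_triple_input(text_tokens, triple):
    """Concatenate a sentence and one (head, relation, tail) triple."""
    head, relation, tail = triple
    tokens = [CLS] + text_tokens + [SEP] + [head, relation, tail] + [SEP]
    # 0 = text modality, 1 = KG modality; each special token is
    # counted with the segment it closes or opens.
    type_ids = [0] * (len(text_tokens) + 2) + [1] * 4
    assert len(tokens) == len(type_ids)
    return tokens, type_ids

tokens, type_ids = build_text_triple_input(
    ["BRCA1", "regulates", "DNA", "repair"],
    ("HGNC:BRCA1", "increases", "GO:DNA_repair"),
)
```

Joint self-attention over both segments is what lets the model relate text mentions to KG entities without an explicit entity-linking step.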
In March 2020, the world was hit by the coronavirus disease (COVID‐19) pandemic which led to all‐embracing measures to contain its spread. Most employees were forced to work from home and take care of their children because schools and daycares were closed. We present data from a research project in a large multinational organisation in the Netherlands with monthly quantitative measurements from January to May 2020 (N = 253–516), enriched with qualitative data from participants' comments before and after telework had started. Growth curve modelling showed major changes in employees' work‐related well‐being reflected in decreasing work engagement and increasing job satisfaction. For work‐non‐work balance, workload and autonomy, cubic trends over time were found, reflecting initial declines during crisis onset (March/April) and recovery in May. Participants' additional remarks exemplify that employees struggled with fulfilling different roles simultaneously, developing new routines and managing boundaries between life domains. Moderation analyses demonstrated that demographic variables shaped time trends. The diverging trends in well‐being indicators raise intriguing questions and show that close monitoring and fine‐grained analyses are needed to arrive at a better understanding of the impact of the crisis across time and among different groups of employees.
Unlimited paid time off policies are currently fashionable and widely discussed by HR professionals around the globe. While on the one hand, paid time off is considered a key benefit by employees and unlimited paid time off policies (UPTO) are seen as a major perk which may help in recruiting and retaining talented employees, on the other hand, early adopters reported that employees took less time off than previously, presumably leading to higher burnout rates. In this conceptual review, we discuss the theoretical and empirical evidence regarding the potential effects of UPTO on leave utilization, well-being and performance outcomes. We start out by defining UPTO and placing it in a historical and international perspective. Next, we discuss the key role of leave utilization in translating UPTO into concrete actions. The core of our article constitutes the description of the effects of UPTO and the two pathways through which these effects are assumed to unfold: autonomy need satisfaction and detrimental social processes. We moreover discuss the boundary conditions which facilitate or inhibit the successful utilization of UPTO on individual, team, and organizational level. In reviewing the literature from different fields and integrating existing theories, we arrive at a conceptual model and five propositions, which can guide future research on UPTO. We conclude with a discussion of the theoretical and societal implications of UPTO.
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
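The contrast drawn above between RDF triples and labeled property graphs comes down to where context lives: in a property graph, context can sit directly on a relationship as key-value properties and be filtered in queries. The following is a minimal in-memory sketch of that idea; the schema (labels, property names) is illustrative, not the actual PubMed/SCAIView data model.

```python
# A minimal labeled property graph: nodes carry labels and properties,
# relationships carry a type plus arbitrary context properties that
# queries can filter on directly.

class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # id -> {"labels": set, "props": dict}
        self.rels = []    # (src, type, dst, props)

    def add_node(self, node_id, labels, **props):
        self.nodes[node_id] = {"labels": set(labels), "props": props}

    def add_rel(self, src, rel_type, dst, **props):
        self.rels.append((src, rel_type, dst, props))

    def match_rels(self, rel_type=None, **prop_filter):
        """Yield relationships matching a type and context properties,
        the property-graph analogue of filtering on edge context."""
        for src, typ, dst, props in self.rels:
            if rel_type is not None and typ != rel_type:
                continue
            if all(props.get(k) == v for k, v in prop_filter.items()):
                yield src, dst, props

g = PropertyGraph()
g.add_node("d1", {"Document"}, pmid="123")
g.add_node("e1", {"Entity"}, name="APP")
g.add_rel("d1", "MENTIONS", "e1", section="abstract", species="human")
g.add_rel("d1", "MENTIONS", "e1", section="methods", species="mouse")

human_mentions = list(g.match_rels("MENTIONS", species="human"))
```

In an RDF triple store, attaching such per-edge context typically requires reification or named graphs, which is one of the limitations the approach above avoids.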
Recovery Across Different Temporal Settings: How Lunchtime Activities Influence Evening Activities
(2022)
Recovery from work stress during workday breaks, free evenings, weekends, and vacations is known to benefit employee health and well-being. However, how recovery at different temporal settings is interconnected is not well understood. We hypothesized that on days when employees engage in recovery-enhancing lunchtime activities, they will experience higher resources when leaving home from work (i.e., low fatigue and high positive affect) and consequently spend more time on recovery-enhancing activities in the evening, thus creating a positive recovery cycle. In this study, 97 employees were randomized into lunchtime park walk and relaxation groups. As evening activities, we measured time spent on physical exercise, physical activity in natural surroundings, and social activities. Afternoon resources and time spent on evening activities were assessed twice a week before, during, and after the intervention, for five weeks. Our results based on multilevel analyses showed that on days when employees completed the lunchtime park walk, they spent more time on evening physical exercise and physical activity in natural surroundings compared to days when the lunch break was spent as usual. However, neither lunchtime relaxation exercises nor afternoon resources were associated with any of the evening activities. Our findings suggest that other factors than afternoon resources are more important in determining how much time employees spend on various evening activities. Fifteen-minute lunchtime park walks inspired employees to engage in similar health-benefitting activities during their free time.
For research in audiovisual interview archives, it is often of interest not only what is said but also how it is said. Sentiment analysis and emotion recognition can help capture, categorize, and make these different facets searchable. Such indexing technologies can be of particular interest for oral history archives, where they can help in understanding the role of emotions in historical remembering. However, humans often perceive sentiments and emotions ambiguously and subjectively. Moreover, oral history interviews contain multi-layered, complex, sometimes contradictory, and sometimes very subtle facets of emotion. This raises the question of how well machines and humans can capture these facets and assign them to predefined categories. This paper investigates the ambiguity in human perception of emotions and sentiment in German oral history interviews and its impact on machine learning systems. Our experiments reveal substantial differences in human perception of different emotions. Furthermore, we report on ongoing machine learning experiments with different modalities. We show that human perceptual ambiguity, together with other challenges such as class imbalance and lack of training data, currently limits the opportunities of these technologies for oral history archives. Nonetheless, our work uncovers promising observations and possibilities for further research.
For decades, the liquid used in commercials to symbolize menstrual blood was blue; only in September 2021 did a manufacturer for the first time show a liquid realistically depicted in red (1). In Germany, hygiene products that menstruating people urgently need are, with few exceptions, not available in public toilets: even in 2021, this invisibility revealed the taboo surrounding the natural biological processes of the female body. The consequences are shame and restrictions that could be avoided. The well-being of menstruating people is limited, and negative experiences mean that those affected are impaired in their social, school, and professional activities not only by menstruation itself but also by norms and patterns of upbringing, as numerous international studies have shown (2). For the German higher-education context, such studies have so far been lacking.
The non-scientific questioning of scientific research during the COVID-19 pandemic, the unwillingness of a president of the United States of America to accept the result of a democratically held election: recent times alone have offered quite a few striking examples of long-held certainties turning out to be nothing more than illusions. This essay reflects on the severe consequences of the loss of such certainties in the spheres of democratic politics on the one hand and of science, especially for highly differentiated societies, on the other, as well as on their interdependencies. Furthermore, the author tries to make the case that this disillusionment could prove to be a salutary shock, reminding us that we need to take a stand for the things we hold as certainties, oftentimes even as calming ones, if we want them to stay how we always thought they were.
Despite the increasing interest in single family offices (SFOs) as an investment owned by an entrepreneurial family, research on SFOs is still in its infancy. In particular, little is known about the capital structures of SFOs or the roots of SFO heterogeneity regarding financial decisions. By drawing on a hand-collected sample of 104 SFOs and private equity (PE) firms, we compare the financing choices of these two investor types in the context of direct entrepreneurial investments (DEIs). Our data thereby provide empirical evidence that SFOs are less likely to raise debt than PE firms, suggesting that SFOs follow pecking-order theory. Regarding the heterogeneity of the financial decisions of SFOs, our data indicate that the relationship between SFOs and debt financing is reinforced by the idiosyncrasies of entrepreneurial families, such as higher levels of owner management and a higher firm age. Surprisingly, our data do not support a moderating effect for the emphasis placed on socioemotional wealth (SEW).
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image as well as the corresponding detections in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
Was ist ein Labor?
(2022)
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable that the robot has the ability to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of a robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, by performing a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also indicate that the reproduction quality is affected by noise in the skeleton observations.
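The two stages described above, mapping observed joint positions to the robot's configuration and tracking them with a PID position controller, can be sketched as follows. The mapping function, gains, joint limits, and the first-order joint model are illustrative assumptions, not QTrobot's actual values.

```python
# Sketch: rescale a human joint angle into the robot's joint limits,
# then drive a simple first-order joint model to that target with PID.

def map_to_robot(human_angle, human_range=(-3.14, 3.14),
                 robot_range=(-1.5, 1.5)):
    """Linearly rescale and clamp a human joint angle to robot limits."""
    h_lo, h_hi = human_range
    r_lo, r_hi = robot_range
    scaled = r_lo + (human_angle - h_lo) * (r_hi - r_lo) / (h_hi - h_lo)
    return max(r_lo, min(r_hi, scaled))

def pid_track(target, steps=400, dt=0.01, kp=5.0, ki=5.0, kd=0.2):
    """Track the target with a PID controller commanding velocity."""
    pos, integral, prev_err = 0.0, 0.0, target
    for _ in range(steps):
        err = target - pos
        integral += err * dt
        deriv = (err - prev_err) / dt
        velocity = kp * err + ki * integral + kd * deriv
        pos += velocity * dt     # joint integrates commanded velocity
        prev_err = err
    return pos

target = map_to_robot(2.0)       # e.g. a human elbow angle in radians
final = pid_track(target)
```

With these gains the closed loop is well damped, so the joint settles on the mapped target within a few simulated seconds; out-of-range human angles are simply clamped to the robot's limits.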
Typically, plastic packaging materials are produced using additives such as stabilisers to introduce specific desired properties into the material or, in the case of stabilisers, to prolong the shelf life of such packaging materials. However, those stabilisers are typically fossil-based and can pose risks to both environmental and human health. The present study therefore presents more sustainable alternatives based on regional renewable resources which show the antioxidant, antimicrobial and UV-absorbing properties required to successfully serve as plastic stabilisers. In the study, all plants are extracted and characterised not only with regard to antioxidant, antimicrobial and UV-absorbing effects, but also with regard to additional relevant information such as chemical constituents, molar mass distribution and absorbance in the visible range. The extraction process is furthermore optimised and, where applicable, reasonable opportunities for waste valorisation are explored and analysed. Furthermore, interactions between the analysed plant extracts are described, and model films based on poly(lactic acid) are prepared incorporating the analysed extracts. Based on those model films, formulation tests and migration analyses according to EU legislation are conducted.
The well-known aromatic and medicinal plant thyme (Thymus vulgaris L.) includes phenolic terpenoids like thymol and carvacrol which have strong antioxidant, antimicrobial and UV-absorbing effects. Analyses show that those effects can be used in both lipophilic and hydrophilic surroundings, that the variant Varico 3 is a more potent cultivar than other analysed thyme variants, and that a passive extraction setup can be used for extract preparation, while distillation of the essential oils can be a more efficient approach.
Macromolecular antioxidant polyphenols, particularly proanthocyanidins, have been found in the seed coats of the European horse chestnut (Aesculus hippocastanum L.), which are regularly discarded in the phytopharmaceutical industry. In this study, such effects and compounds are reported for the first time, and a valorisation of these waste materials has been successfully analysed. Furthermore, a passive extraction setup for waste materials and whole seeds has been developed. In extracts of snowdrops, specifically Galanthus elwesii HOOK.F., high concentrations of tocopherol have been found, which promote a particularly high antioxidant capacity in lipophilic surroundings. Different coniferous woods (Abies div., Picea div.) in use as Christmas trees are extracted after separating the biomass into leaves and wood parts, and then analysed with regard to extraction optimisation and the drought resistance of the active substances. Antioxidant and UV-absorbing proanthocyanidins are found even in dried biomasses, allowing the circular use of discarded Christmas trees as bio-based stabilisers and the production of sustainable paper as a byproduct.
Due to the COVID-19 pandemic, health education programs and workplace health promotion (WHP) could only be offered under difficult conditions, if at all. In Germany for example, mandatory lockdowns, working from home, and physical distancing have led to a sharp decline in expenditure on prevention and health promotion from 2019 to 2020. At the same time, the pandemic has negatively affected many people’s mental health. Therefore, our goal was to examine audiovisual stimulation as a possible measure in the context of WHP, because its usage is contact-free, time flexible, and offers, additionally, voice-guided health education programs. In an online survey following a cross-sectional single case study design with 393 study participants, we examined the associations between audiovisual stimulation and mental health, work engagement, and burnout. Using multiple regression analyses, we could identify positive associations between audiovisual stimulation and mental health, burnout, and work engagement. However, longitudinal data are needed to further investigate causal mechanisms between mental health and the use of audiovisual stimulation. Nevertheless, especially with regard to the pandemic, audiovisual stimulation may represent a promising measure for improving mental health at the workplace.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and carries a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, subsequent methods proposed to exploit local geometry have not yielded significant improvements either. To investigate further, this work uses a graph convolutional network (GCN) in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Since complete point clouds are scarcely available in real-world scenarios, this work also proposes approaches for dealing with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed based on point completion network (PCN) which takes advantage of a graph convolution based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within 5% of the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
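The neighborhood feature exploitation discussed above can be illustrated with a minimal EdgeConv-style aggregation: build a kNN graph over the points and, for each point, take a max over learned edge features of its neighbors. The feature dimensions, single linear layer, and toy point cloud are illustrative; the networks in this work are of course far deeper.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (excluding self)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def edge_conv(points, feats, weight, k=2):
    """Per point: max over neighbors j of W @ [feat_i, feat_j - feat_i],
    so the layer sees both the point's feature and its local geometry."""
    idx = knn_indices(points, k)
    out = np.empty((len(points), weight.shape[0]))
    for i, nbrs in enumerate(idx):
        edges = [np.concatenate([feats[i], feats[j] - feats[i]])
                 for j in nbrs]
        out[i] = np.max([weight @ e for e in edges], axis=0)
    return out

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
feats = pts.copy()                     # use coordinates as input features
W = np.random.default_rng(0).normal(size=(8, 4))
h = edge_conv(pts, feats, W, k=2)      # (num_points, out_channels)
```

Stacking such layers, with the kNN graph optionally recomputed in feature space, is the standard way graph CNNs propagate neighborhood information across a point cloud.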
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
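The SmoothGrad idea evaluated above can be sketched on a toy differentiable score function: the saliency is the gradient of the score with respect to the input, averaged over several noisy copies of the input to suppress gradient noise. The toy model and its analytic gradient below stand in for a detector's classification score; in DExT, Guided Backpropagation would supply the per-sample gradients instead.

```python
import numpy as np

def score(x, w):
    """Toy stand-in for a detector's scalar score: squared linear response."""
    return float(w @ x) ** 2

def grad_score(x, w):
    """Analytic input gradient of the toy score."""
    return 2.0 * (w @ x) * w

def smoothgrad(x, w, n_samples=50, sigma=0.1, seed=0):
    """Average input gradients over Gaussian-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    grads = [grad_score(x + rng.normal(0.0, sigma, x.shape), w)
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([1.0, 0.5, -0.25])
w = np.array([0.8, -0.2, 0.4])
saliency = smoothgrad(x, w)
```

For small noise the averaged gradient stays close to the clean gradient while washing out high-frequency fluctuations, which is why SmoothGrad maps tend to look less speckled than raw gradient maps.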
We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations on GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. In particular, we focus on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
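The streaming-update strategy sketched above can be reduced, in much simplified form, to a double-buffered prefetch loop: while the current time step is being rendered, the next one is loaded in the background, and the buffers are swapped on advance. The class `TimeStepStreamer` and its methods are hypothetical names for illustration; the actual implementation streams GPU buffers through a graphics API rather than Python objects.

```python
class TimeStepStreamer:
    """Double-buffered prefetch: the renderer draws from the front
    buffer while the next time step is loaded into the back buffer;
    advancing swaps the two and prefetches the step after that."""

    def __init__(self, load_step):
        self.load_step = load_step  # I/O callback: step index -> payload
        self.front = None           # data currently being rendered
        self.back = None            # prefetched next time step

    def prime(self, t):
        """Load the first step and prefetch its successor."""
        self.front = self.load_step(t)
        self.back = self.load_step(t + 1)

    def advance(self, t_next):
        """Make the prefetched step current; prefetch the one after."""
        self.front, self.back = self.back, self.load_step(t_next + 1)
        return self.front

# Stand-in for reading one 2.5 GB scalar-field time step from disk.
load = lambda t: f"step-{t}"
streamer = TimeStepStreamer(load)
streamer.prime(0)
current = streamer.advance(1)
```

The design keeps the renderer from ever waiting on I/O for the common case of sequential playback, which is what enables smooth animation at interactive rates.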
E-Sport im Fernsehen - Eine Analyse der Chancen eines neuen Themenfelds bei deutschen Fernsehsendern
(2022)
In recent years, the media presence of e-sports in Germany has increased, prompting the general public to engage with the topic as well. As a result, television companies have become aware of what was originally a niche sport, one that is at home and deeply rooted on the internet, and are expanding their involvement in the field in order to share in the growing success forecast for e-sports. However, given the particular conditions involved, covering this trending topic successfully and appropriately does not appear to be as straightforward as it is for other traditional sports. This raises the question: what are the prospects of covering e-sports on German television channels when these special circumstances are considered together? The TV channels face the task of winning over an audience that is accustomed to consuming this topic on the internet, a medium to which television has in any case been losing ever more viewers since the start of the digital age. Besides the obstacles that must be overcome, however, e-sports also offers added value to television companies; both aspects are explored in this thesis.
This exploratory study offers a starting point for further research on e-sports in the German media, especially since existing work focuses mainly on live-streaming platforms or on the portrayal of e-sports in traditional media, and does not address the intentions and thinking of the television companies themselves. To close this gap, interviews were conducted with seven professionals at German TV channels or broadcasting groups who have already covered e-sports to varying degrees or have considered doing so. The thesis concludes with eight hypotheses, derived through qualitative content analysis, which shed light on the factors influencing the prospects of such coverage and which also serve as starting points for further research on the topic. The subject was examined with respect to a variety of specific aspects and their interactions, such as the different types of broadcasters, the circumstances within the e-sports industry, and the existing corporate structures.
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, so most detections rely on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage this fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated from characteristic code fragments of the clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We are able to identify and report six malicious packages, which were consequently removed from npm. Our approach can therefore support detection by reducing manual labor, and it may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
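The clustering step named in ACME uses the Markov Cluster algorithm (MCL), which finds groups in a similarity graph by alternating random-walk expansion with inflation. The following is a toy NumPy sketch of that iteration on a hand-made similarity matrix, not the ACME pipeline itself; the function name, parameters, and the `1e-6` threshold are illustrative. In the real approach, the matrix would hold AST similarities between packages.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=30):
    """Minimal Markov Cluster (MCL) iteration: alternate expansion
    (matrix power, spreading random-walk flow) and inflation
    (element-wise power plus column renormalization)."""
    M = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    M = M / M.sum(axis=0, keepdims=True)         # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)  # expansion step
        M = M ** inflation                        # inflation step
        M = M / M.sum(axis=0, keepdims=True)
    clusters = []
    for row in M:  # rows that kept flow mass define the clusters
        members = tuple(np.nonzero(row > 1e-6)[0])
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# Toy similarity graph with two obvious groups: {0, 1} and {2, 3}.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
clusters = mcl(A)
```

Inflation strengthens strong connections and prunes weak ones, so flow settles within natural groups; on package-similarity graphs this is what reproduces the manually identified malware clusters.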
The fact that the extensive commercial data surveillance practiced by the large internet companies is not merely a problem for the individual citizens affected, but ultimately also has far-reaching societal consequences, became a topic of discussion, at least in expert circles, with the rise of right-wing populism in the USA, Brazil, and Europe. Online hate and incitement, fake news, political campaign advertising, and manipulation on social media have become an unmistakable threat to Western-style liberal democracies.
Hydrogen is a versatile energy carrier. When produced with renewable energy by water splitting, it is a carbon-neutral alternative to fossil fuels. The industrialization of this technology is currently dominated by electrolyzers powered by solar or wind energy. For small-scale applications, however, more integrated device designs for solar-driven water splitting might optimize hydrogen production thanks to lower balance-of-system costs and smarter thermal management. Such devices offer the opportunity to thermally couple the solar cell and the electrochemical compartment. In this way, heat losses in the absorber can be turned into an efficiency boost for the device by simultaneously enhancing the catalytic performance of the water-splitting reactions, cooling the absorber, and decreasing the ohmic losses.[1,2] However, integrated devices (sometimes also referred to as “artificial leaves”) currently suffer from a lower technology readiness level (TRL) than the completely decoupled approach.