Refine
Departments, institutes and facilities
- Fachbereich Informatik (975)
- Fachbereich Angewandte Naturwissenschaften (627)
- Institut für funktionale Gen-Analytik (IFGA) (560)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (390)
- Fachbereich Ingenieurwissenschaften und Kommunikation (382)
- Fachbereich Wirtschaftswissenschaften (315)
- Institute of Visual Computing (IVC) (286)
- Institut für Cyber Security & Privacy (ICSP) (242)
- Institut für Verbraucherinformatik (IVI) (165)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (126)
Document Type
- Article (1640)
- Conference Object (1429)
- Part of a Book (247)
- Preprint (88)
- Report (73)
- Doctoral Thesis (64)
- Book (monograph, edited volume) (57)
- Master's Thesis (35)
- Working Paper (34)
- Conference Proceedings (27)
Year of publication
Language
- English (3783)
Keywords
- Machine Learning (15)
- Virtual Reality (15)
- FPGA (14)
- Robotics (14)
- Sustainability (14)
- ENaC (13)
- GC/MS (13)
- virtual reality (13)
- sustainability (12)
- ICT (11)
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
Background There is a lack of cardiac magnetic resonance (CMR) data regarding mid- to long-term myocardial damage due to Covid-19 in elite athletes. Objective This study investigated mid- to long-term consequences of myocardial involvement after a Covid-19 infection in elite athletes.
Methods Between January 2020 and October 2021, 27 athletes of the German Olympic centre Rhineland with confirmed Covid-19 infection were analyzed. Nine healthy non-athlete volunteers served as controls. CMR was performed a mean of 182 days (SD 99) after the initial positive test result.
Results CMR did not reveal any signs of acute myocarditis with regard to the current Lake Louise criteria, or of myocardial damage, in any of the 26 elite athletes with previous Covid-19 infection. Nevertheless, 92% of the athletes experienced a symptomatic course and 54% reported symptoms lasting for more than 4 weeks. In one male athlete, CMR revealed an arrhythmogenic right ventricular cardiomyopathy (ARVC), and this athlete was excluded from the study. Athletes had significantly enlarged left and right ventricular volumes and increased left ventricular myocardial mass in comparison to the healthy control group (LVEDVi 103.4 vs. 91.1 ml/m², p=0.031; RVEDVi 104.1 vs. 86.6 ml/m², p=0.007; and LVMi 59.0 vs. 46.2 g/m², p=0.002).
Conclusion Our findings suggest that the risk of mid- to long-term myocardial damage is very low to negligible in elite athletes. No conclusions can be drawn regarding myocardial injury in the acute phase of infection, nor about possible long-term myocardial effects in the general population.
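The indexed measures reported above (LVEDVi, RVEDVi, LVMi) are ventricular volumes and mass normalized to body surface area (BSA). As a minimal illustration of this normalization (the abstract does not state which BSA formula the study used; the Du Bois formula and the example values below are assumptions for illustration only):

```python
def bsa_du_bois(height_cm, weight_kg):
    """Body surface area (m^2) via the Du Bois formula (one common choice)."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def indexed_value(value, height_cm, weight_kg):
    """Index a volume (ml) or mass (g) to BSA, yielding ml/m^2 or g/m^2."""
    return value / bsa_du_bois(height_cm, weight_kg)

# Hypothetical athlete: 180 cm, 75 kg, LVEDV of 201 ml
bsa = bsa_du_bois(180, 75)            # roughly 1.94 m^2
lvedvi = indexed_value(201, 180, 75)  # roughly 103 ml/m^2
```

Indexing in this way makes volumes comparable between athletes and smaller-framed controls.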
From Conclusion to Coda
(2022)
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate between gravity and radiation exposure on single-gene deletion/overexpressing strains' ability to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. 
The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
The white-ground krater by the Phiale Painter (450–440 BC), exhibited in the “Pietro Griffo” Archaeological Museum in Agrigento (Italy), depicts two scenes from the Perseus myth. The vase is of utmost importance to archaeologists because the figures are drawn on a white background with remarkable daintiness and attention to detail. Although white-ground ceramics are well documented from an archaeological and historical point of view, questions concerning the composition of the pigments and binders and the production technique remain unresolved. This kind of vase is a valuable rarity, whose use is documented in elitist funeral rituals. The study aims to investigate the constituent materials and the execution technique of this magnificent krater. The investigation was carried out in situ using non-destructive and non-invasive techniques. Portable X-ray fluorescence (XRF) and Fourier-transform total reflection infrared spectroscopy complemented visible and ultraviolet light photography to obtain both an overview of and specific information on the vase. The XRF data were used to produce false-colour maps showing the location of the various elements detected, using the program SmART_scan. The use of gypsum as the material for the white ground is an important result that deserves further investigation in similar vases.
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding the vehicle's surroundings is the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real time. Generally, the perception system involves various subsystems such as localization, obstacle (static and dynamic) detection and avoidance, mapping systems, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, LiDARs, and others. These systems employ deep learning techniques that transform the huge amount of sensor data into semantic information on which the object detection and localization tasks are performed. For numerous driving tasks, accurate results require the location and depth information of a particular object. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, we present a comparative study of the effect of using LiDAR data for object detection frameworks and of the performance improvement achieved by using sensor fusion techniques. We also discuss various state-of-the-art methods for both cases, perform experimental analysis, and provide future research directions.
The Poverty Reduction Effect of Social Protection: The Pros and Cons of a Multidisciplinary Approach
(2022)
There is a growing body of knowledge on the complex effects of social protection on poverty in Africa. This article explores the pros and cons of a multidisciplinary approach to studying social protection policies. Our research aimed to study the interaction between cash transfers and social health protection policies in terms of their impact on inclusive growth in Ghana and Kenya. It also explored the policy reform context over time to unravel programme dynamics and outcomes. The analysis combined econometric and qualitative impact assessments with national- and local-level political economic analyses. In particular, dynamic effects and improved understanding of processes are well captured by this approach, thus pushing the understanding of implementation challenges over and beyond a ‘technological fix,’ as has been argued before by Niño-Zarazúa et al. (World Dev 40:163–176, 2012). However, multidisciplinary research puts considerable demands on data and data handling. Finally, some poverty reduction effects play out over a longer time, requiring consistent longitudinal data that are still scarce.
The cooperation between researchers and practitioners during the different stages of the research process is promoted because it can benefit both society and research, supporting processes of ‘transformation’. While acknowledging the important potential of research–practice collaborations (RPCs), this paper reflects on RPCs from a political-economic perspective in order to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information, or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and may even help produce or exacerbate existing inequalities. The potential merits and limitations of RPCs therefore need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, attempts to contribute to societal change and ethical concerns of equity are at the heart of international development research; on the other hand, the risk of encountering such asymmetries there is comparatively high.
In young adulthood, important foundations are laid for health later in life. Hence, more attention should be paid to health measures concerning students. A research field that is relevant to health but hitherto somewhat neglected in the student context is the phenomenon of presenteeism. Presenteeism refers to working despite illness and is associated with negative health and work-related effects. The study attempts to bridge the research gap regarding students and examines the effects of and reasons for this behavior. Moreover, the consequences of digital learning for presenteeism behavior are considered. A student survey (N = 1036) and qualitative interviews (N = 11) were conducted. The results of the quantitative study show significant negative relationships between presenteeism and health status, well-being, and ability to study. An increased experience of stress and a low level of detachment, as characteristics of digital learning, also show significant relationships with presenteeism. The qualitative interviews highlighted the aspect of not wanting to miss anything as the most important reason for presenteeism. The results provide useful insights for developing countermeasures that can be easily integrated into university life, such as establishing fixed learning partners or providing additional digital learning material.
Background: Since presenteeism is related to numerous negative health and work-related effects, measures are required to reduce it. There are initial indications that how an organization deals with health has a decisive influence on employees’ presenteeism behavior.
Aims: The concept of health-promoting collaboration was developed on the basis of these indications. As an extension of healthy leadership, it includes not only the leader but also co-workers. In modern forms of collaboration, leaders cannot be assigned sole responsibility for employees’ health, since the leader is often hardly visible (digital leadership) or there is no longer a clear leader (shared leadership). The study examines the concept of health-promoting collaboration in relation to presenteeism. Relationships between health-promoting collaboration, well-being, and work ability are also in focus, with presenteeism considered as a mediator.
Methods: The data comprise the findings of a quantitative survey of 308 employees at a German university of applied sciences. Correlation and mediator analyses were conducted.
Results: The results show a significant negative relationship between health-promoting collaboration and presenteeism. Significant positive relationships were found between health-promoting collaboration and both well-being and work ability. Presenteeism was identified as a mediator of these relationships.
Conclusion: The relevance of health-promoting collaboration in reducing presenteeism was demonstrated and various starting points for practice were proposed. Future studies should further investigate this newly developed concept in relation to presenteeism.
Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns, properties that preclude their deployment as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: the former might profit from the broad range of machine learning topics covered and discussions on limitations of recent methods; the latter might gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and strategies on how to help advance existing approaches accordingly.
We analyze short-term effects of free hospitalization insurance for the poorest quintile of the population in the province of Khyber Pakhtunkhwa, Pakistan. First, we exploit that eligibility is based on an exogenous poverty score threshold and apply a regression discontinuity design. Second, we exploit imperfect rollout and compare insured and uninsured households using propensity score matching. With both methods we fail to detect significant effects on the incidence of hospitalization. Whereas the program did not meaningfully increase the quantity of health care consumed, insured households more often choose private hospitals, indicating a shift towards higher perceived quality of care.
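The regression discontinuity design mentioned above compares households just below and just above the exogenous poverty score cutoff. A crude sketch of such an estimate on simulated data (the cutoff value, bandwidth, outcome model, and difference-in-means estimator below are illustrative assumptions, not the study's actual specification):

```python
import random

def rd_estimate(scores, outcomes, cutoff, bandwidth):
    """Crude sharp-RD estimate: difference in mean outcomes just below
    (eligible) vs. just above (ineligible) the poverty score cutoff."""
    below = [y for s, y in zip(scores, outcomes)
             if cutoff - bandwidth <= s < cutoff]
    above = [y for s, y in zip(scores, outcomes)
             if cutoff <= s <= cutoff + bandwidth]
    return sum(below) / len(below) - sum(above) / len(above)

# Simulated households: those with a score below the (hypothetical)
# cutoff of 32.5 are insured; insurance raises private-hospital use.
random.seed(0)
scores = [random.uniform(0, 65) for _ in range(5000)]
outcomes = [(0.4 if s < 32.5 else 0.1) + random.gauss(0, 0.05)
            for s in scores]
effect = rd_estimate(scores, outcomes, cutoff=32.5, bandwidth=5)
# effect recovers the simulated jump of about 0.3 at the cutoff
```

In practice, local linear regression with robust bandwidth selection would replace this difference in means, but the identifying idea, comparing observations just on either side of the threshold, is the same.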
Research has identified nudging as a promising and effective tool to improve healthy eating behavior in a cafeteria setting. However, it remains unclear who is and who is not “nudgeable” (susceptible to nudges). An important influencing factor at the individual level is nudge acceptance. While some progress has been made in determining influences on the acceptance of healthy eating nudges, research on how personal characteristics (such as the perception of social norms) affect nudge acceptance remains scarce. We conducted a survey of 1032 university students to assess the acceptance of nine different types of healthy eating nudges in a cafeteria setting, together with four influential factors (social norms, health-promoting collaboration, responsibility to promote healthy eating, and procrastination). These factors are likely to play a role within a university and cafeteria setting. The present study showed that the key influential factors of nudge acceptance were the perceived responsibility to promote healthy eating and health-promoting collaboration. We also identified three different student clusters with respect to nudge acceptance, demonstrating that not all nudges were accepted equally. In particular, default, salience, and priming nudges were at least moderately accepted regardless of the degree of nudgeability. Our findings provide useful policy implications for nudge development by university, cafeteria, and public health officials. Recommendations are formulated for strengthening the theoretical background of nudge acceptance and the susceptibility to nudges.
Modern PCR-based analytical techniques have reached sensitivity levels that allow complete forensic DNA profiles to be obtained from even tiny traces containing genomic DNA amounts as small as 125 pg. Yet these techniques have reached their limits when it comes to the analysis of traces such as fingerprints or single cells. One suggestion to overcome these limits has been the use of whole genome amplification (WGA) methods. These methods aim at increasing the copy number of genomic DNA and by this means generate more template DNA for subsequent analyses. Their application in forensic contexts has so far remained mostly an academic exercise; results have not shown significant improvements and have even raised additional analytical problems. Based on these disappointments, the forensic application of WGA until very recently seemed to have been largely abandoned. In the meantime, however, novel improved methods are pointing towards a perspective for WGA in specific forensic applications. This review article summarizes current knowledge about WGA in forensics and suggests the forensic analysis of single-donor bioparticles and of single cells as promising applications.
A main factor hampering life in space is represented by high atomic number and energy (HZE) ions, which constitute about 1% of the galactic cosmic rays. In the frame of the “STARLIFE” project, we accessed the Heavy Ion Medical Accelerator (HIMAC) facility of the National Institute of Radiological Sciences (NIRS) in Chiba, Japan. By means of this facility, the extremophilic species Haloterrigena hispanica and Parageobacillus thermantarcticus were irradiated with high-LET ions (i.e., Fe, Ar, and He ions) at doses corresponding to a long permanence in the space environment. The survivability of HZE-treated cells depended on both the storage time and the hydration state during irradiation; indeed, dry samples were shown to be more resistant than hydrated ones. Spores of the species P. thermantarcticus were the most resistant to irradiation in a water medium: an analysis of the changes in their biochemical fingerprinting during irradiation showed that, below the survivability threshold, the spores undergo a germination-like process, while at higher doses, inactivation takes place as a consequence of the concomitant release of the core’s content and a loss of integrity of the main cellular components. Overall, the results reported here suggest that the selected extremophilic microorganisms could serve as biological models for space simulation and/or real space condition exposure, since they showed good resistance to ionizing radiation exposure and were able to resume cellular growth after long-term storage.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
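STonKGs' cross encoder attends over a single concatenation of text tokens and KG-triple tokens. A schematic sketch of how one text-triple pair might be serialized into such an input sequence (the special-token layout and example tokens below are simplified assumptions, not the model's exact preprocessing):

```python
def build_cross_encoder_input(text_tokens, triple, cls="[CLS]", sep="[SEP]"):
    """Serialize a text-triple pair into one token sequence, so a
    single-stream (cross) encoder can attend across both modalities.
    The special-token scheme here is illustrative only."""
    head, relation, tail = triple
    return [cls] + list(text_tokens) + [sep] + [head, relation, tail] + [sep]

# Hypothetical text-triple pair (not from the INDRA-assembled corpus)
seq = build_cross_encoder_input(
    ["BRCA1", "regulates", "DNA", "repair"],
    ("BRCA1", "increases", "DNA_repair"),
)
# Self-attention over `seq` now spans text and KG entries jointly,
# which is what removes the need for explicit entity linking.
```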
In March 2020, the world was hit by the coronavirus disease (COVID‐19) pandemic which led to all‐embracing measures to contain its spread. Most employees were forced to work from home and take care of their children because schools and daycares were closed. We present data from a research project in a large multinational organisation in the Netherlands with monthly quantitative measurements from January to May 2020 (N = 253–516), enriched with qualitative data from participants' comments before and after telework had started. Growth curve modelling showed major changes in employees' work‐related well‐being reflected in decreasing work engagement and increasing job satisfaction. For work‐non‐work balance, workload and autonomy, cubic trends over time were found, reflecting initial declines during crisis onset (March/April) and recovery in May. Participants' additional remarks exemplify that employees struggled with fulfilling different roles simultaneously, developing new routines and managing boundaries between life domains. Moderation analyses demonstrated that demographic variables shaped time trends. The diverging trends in well‐being indicators raise intriguing questions and show that close monitoring and fine‐grained analyses are needed to arrive at a better understanding of the impact of the crisis across time and among different groups of employees.
Unlimited paid time off policies are currently fashionable and widely discussed by HR professionals around the globe. While on the one hand, paid time off is considered a key benefit by employees and unlimited paid time off policies (UPTO) are seen as a major perk which may help in recruiting and retaining talented employees, on the other hand, early adopters reported that employees took less time off than previously, presumably leading to higher burnout rates. In this conceptual review, we discuss the theoretical and empirical evidence regarding the potential effects of UPTO on leave utilization, well-being and performance outcomes. We start out by defining UPTO and placing it in a historical and international perspective. Next, we discuss the key role of leave utilization in translating UPTO into concrete actions. The core of our article constitutes the description of the effects of UPTO and the two pathways through which these effects are assumed to unfold: autonomy need satisfaction and detrimental social processes. We moreover discuss the boundary conditions which facilitate or inhibit the successful utilization of UPTO on individual, team, and organizational level. In reviewing the literature from different fields and integrating existing theories, we arrive at a conceptual model and five propositions, which can guide future research on UPTO. We conclude with a discussion of the theoretical and societal implications of UPTO.
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
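A labeled property graph, as used above, attaches labels and free-form property maps to nodes and relationships, so context can live directly on the relations rather than in RDF-style reification. A toy sketch of this data model and a context-filtered query (the entity names, edge types, and context keys are hypothetical, not drawn from the actual PubMed/SCAIView graph):

```python
# Minimal labeled property graph: nodes and relationships carry labels
# and property maps; context is stored as edge properties.
nodes = {
    1: {"labels": {"Entity"}, "props": {"name": "APP"}},
    2: {"labels": {"Entity"}, "props": {"name": "Abeta"}},
}
edges = [
    {"from": 1, "to": 2, "type": "INCREASES",
     "props": {"context": {"species": "human", "source_pmid": "123"}}},
    {"from": 1, "to": 2, "type": "INCREASES",
     "props": {"context": {"species": "mouse", "source_pmid": "456"}}},
]

def match_edges(edge_type, **context_filter):
    """Return edges of a type whose context matches all given key/value
    pairs -- mimicking a property-graph query over contextualized
    relations (in Neo4j this would be a Cypher MATCH ... WHERE)."""
    return [e for e in edges
            if e["type"] == edge_type
            and all(e["props"]["context"].get(k) == v
                    for k, v in context_filter.items())]

human_hits = match_edges("INCREASES", species="human")
```

The same relation can thus occur multiple times under different contexts, which is exactly what plain RDF triples struggle to express without reification.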
Recovery Across Different Temporal Settings: How Lunchtime Activities Influence Evening Activities
(2022)
Recovery from work stress during workday breaks, free evenings, weekends, and vacations is known to benefit employee health and well-being. However, how recovery at different temporal settings is interconnected is not well understood. We hypothesized that on days when employees engage in recovery-enhancing lunchtime activities, they will experience higher resources when leaving home from work (i.e., low fatigue and high positive affect) and consequently spend more time on recovery-enhancing activities in the evening, thus creating a positive recovery cycle. In this study, 97 employees were randomized into lunchtime park walk and relaxation groups. As evening activities, we measured time spent on physical exercise, physical activity in natural surroundings, and social activities. Afternoon resources and time spent on evening activities were assessed twice a week before, during, and after the intervention, for five weeks. Our results based on multilevel analyses showed that on days when employees completed the lunchtime park walk, they spent more time on evening physical exercise and physical activity in natural surroundings compared to days when the lunch break was spent as usual. However, neither lunchtime relaxation exercises nor afternoon resources were associated with any of the evening activities. Our findings suggest that other factors than afternoon resources are more important in determining how much time employees spend on various evening activities. Fifteen-minute lunchtime park walks inspired employees to engage in similar health-benefitting activities during their free time.
For research in audiovisual interview archives, it is often of interest not only what is said but also how it is said. Sentiment analysis and emotion recognition can help capture, categorize, and make these different facets searchable. Such indexing technologies can be of great interest for oral history archives in particular, as they can help in understanding the role of emotions in historical remembering. However, humans often perceive sentiments and emotions ambiguously and subjectively. Moreover, oral history interviews have multi-layered levels of complex, sometimes contradictory, sometimes very subtle facets of emotions. The question therefore arises of how well machines and humans can capture these facets and assign them to predefined categories. This paper investigates the ambiguity in the human perception of emotions and sentiment in German oral history interviews and its impact on machine learning systems. Our experiments reveal substantial differences in human perception of different emotions. Furthermore, we report on ongoing machine learning experiments with different modalities. We show that human perceptual ambiguity and other challenges, such as class imbalance and lack of training data, currently limit the opportunities of these technologies for oral history archives. Nonetheless, our work uncovers promising observations and possibilities for further research.
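One standard way to quantify the human perceptual ambiguity discussed above is chance-corrected inter-annotator agreement. The paper does not specify its agreement metric; Cohen's kappa below is merely one common choice, shown with made-up labels:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators corrected for
    chance agreement. 1.0 means perfect agreement, 0 means chance level."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling six interview segments
a = ["neutral", "sad", "happy", "neutral", "sad", "neutral"]
b = ["neutral", "sad", "neutral", "neutral", "happy", "neutral"]
kappa = cohens_kappa(a, b)  # well below 1.0, reflecting disagreement
```

Low kappa values on emotion labels would directly cap the performance any supervised classifier can meaningfully reach on such data.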
Despite the increasing interest in single family offices (SFOs) as an investment owned by an entrepreneurial family, research on SFOs is still in its infancy. In particular, little is known about the capital structures of SFOs or the roots of SFO heterogeneity regarding financial decisions. By drawing on a hand-collected sample of 104 SFOs and private equity (PE) firms, we compare the financing choices of these two investor types in the context of direct entrepreneurial investments (DEIs). Our data thereby provide empirical evidence that SFOs are less likely to raise debt than PE firms, suggesting that SFOs follow pecking-order theory. Regarding the heterogeneity of the financial decisions of SFOs, our data indicate that the relationship between SFOs and debt financing is reinforced by the idiosyncrasies of entrepreneurial families, such as higher levels of owner management and a higher firm age. Surprisingly, our data do not support a moderating effect for the emphasis placed on socioemotional wealth (SEW).
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally produces individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of both, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs extends our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, obtaining F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks, such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable that the robot has the ability to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of a robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, by performing a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also indicate that the reproduction quality is affected by noise in the skeleton observations.
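The mapping-and-control step described above can be sketched in a few lines; the gains, joint limits, and single-joint velocity-integrator simulation below are illustrative assumptions, not values or code from the study.

```python
def map_to_robot(human_angle, lo, hi):
    """Clamp an observed human joint angle to the robot's joint range."""
    return max(lo, min(hi, human_angle))

class PID:
    """Textbook PID position controller for a single joint."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, current):
        error = target - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive one joint (modeled as a pure velocity integrator) to a clamped target.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
target = map_to_robot(1.2, -1.0, 1.0)  # observed pose lies outside the robot's range
angle = 0.0
for _ in range(2000):  # 20 s of simulated tracking
    angle += pid.step(target, angle) * pid.dt
```

In the paper's setting, one such controller would run per mapped joint; clamping before control is one simple way to keep human poses within robot limits.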
Typically, plastic packaging materials are produced using additives such as stabilisers, which introduce specific desired properties into the material or, in the case of stabilisers, prolong the shelf life of the packaging. However, these stabilisers are typically fossil-based and can pose risks to both environmental and human health. The present study therefore presents more sustainable alternatives based on regional renewable resources which show the antioxidant, antimicrobial and UV absorbing properties required to serve as plastic stabilisers. In the study, all plants are extracted and characterised not only with regard to antioxidant, antimicrobial and UV absorbing effects, but also with regard to additional relevant information such as chemical constituents, molar mass distribution and absorbance in the visible range. The extraction process is furthermore optimised and, where applicable, reasonable opportunities for waste valorisation are explored and analysed. Furthermore, interactions between the analysed plant extracts are described, and model films based on Poly-Lactic Acid incorporating the analysed extracts are prepared. Based on these model films, formulation tests and migration analyses according to EU legislation are conducted.
The well-known aromatic and medicinal plant thyme (Thymus vulgaris L.) includes phenolic terpenoids like thymol and carvacrol which have strong antioxidant, antimicrobial and UV absorbing effects. Analyses show that those effects can be used in both lipophilic and hydrophilic surroundings, that the variant Varico 3 is a more potent cultivar than other analysed thyme variants, and that a passive extraction setup can be used for extract preparation while distillation of the Essential Oils can be a more efficient approach.
Macromolecular antioxidant polyphenols, particularly proanthocyanidins, have been found in the seed coats of the European horse chestnut (Aesculus hippocastanum L.), which are regularly discarded in the phytopharmaceutical industry. In this study, such effects and compounds are reported for the first time, and a valorisation of these waste materials has been analysed successfully. Furthermore, a passive extraction setup for waste materials and whole seeds has been developed. In extracts of snowdrops, specifically Galanthus elwesii HOOK.F., high concentrations of tocopherol have been found, which promote a particularly high antioxidant capacity in lipophilic surroundings. Different coniferous woods (Abies div., Picea div.) in use as Christmas trees are extracted after separating the biomass into leaves and wood parts, and are then analysed with regard to extraction optimisation and the drought resistance of the active substances. Antioxidant and UV absorbing proanthocyanidins are found even in dried biomasses, allowing the circular use of discarded Christmas trees as bio-based stabilisers and the production of sustainable paper as a byproduct.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and carries a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. To investigate further, this work uses a graph convolutional network (GCN) in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Considering real-world scenarios, in which complete point clouds are scarcely available, this work additionally proposes approaches to deal with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed based on point completion network (PCN) which takes advantage of a graph convolution based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within a margin of 5% to the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
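The neighborhood feature exploitation that graph CNNs on point clouds build on can be illustrated with a minimal sketch: for each point, gather its k nearest neighbors and aggregate a simple edge feature over them (here the per-dimension maximum of absolute offsets, an EdgeConv-like aggregation). This toy example is an assumption for illustration, not the thesis implementation.

```python
def knn(points, k):
    """Indices of the k nearest neighbors (excluding self) for each point."""
    idx = []
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        idx.append([j for _, j in dists[:k]])
    return idx

def edge_features(points, k):
    """Per point, max over neighbors of the absolute offset (neighbor - center)."""
    neigh = knn(points, k)
    feats = []
    for i, p in enumerate(points):
        offsets = [[q - c for q, c in zip(points[j], p)] for j in neigh[i]]
        feats.append([max(abs(o[d]) for o in offsets) for d in range(len(p))])
    return feats

# Three close points and one outlier: the aggregated feature reflects
# how far each point's local neighborhood extends.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
f = edge_features(pts, k=2)
```

In an actual graph CNN, the offsets would be passed through a learned MLP before the max-aggregation; the fixed identity "feature" above only demonstrates the neighborhood-gathering structure.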
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, and thus most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage this fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated based on characteristic code fragments from the clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We identified and reported six malicious packages, which have consequently been removed from npm. Our approach can therefore support detection by reducing manual labor and may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
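The core idea of grouping packages by shared code can be sketched in miniature. Note the substitutions: ACME clusters AST-derived features with Markov Clustering (MCL), whereas this hypothetical sketch uses token shingles, Jaccard similarity, and simple threshold-based grouping via union-find.

```python
def shingles(code, n=3):
    """Set of n-token shingles of a (pre-tokenized) code string."""
    toks = code.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(packages, threshold=0.3):
    """Group packages whose pairwise shingle similarity exceeds threshold."""
    names = list(packages)
    sigs = {name: shingles(packages[name]) for name in names}
    parent = {name: name for name in names}
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if jaccard(sigs[a], sigs[b]) > threshold:
                parent[find(a)] = find(b)
    groups = {}
    for name in names:
        groups.setdefault(find(name), set()).add(name)
    return list(groups.values())

# Two near-identical trojanized payloads and one unrelated benign package.
pkgs = {
    "evil-a": "const x = require ( 'http' ) ; x . post ( secret )",
    "evil-b": "const y = require ( 'http' ) ; y . post ( secret )",
    "benign": "module . exports = function add ( a , b ) { return a + b }",
}
clusters = cluster(pkgs)
```

Characteristic fragments shared within a cluster (here, the common shingles of the two "evil" packages) are the kind of material from which signatures could then be derived.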
Hydrogen is a versatile energy carrier. When produced with renewable energy by water splitting, it is a carbon-neutral alternative to fossil fuels. The industrialization of this technology is currently dominated by electrolyzers powered by solar or wind energy. For small-scale applications, however, more integrated device designs for water splitting using solar energy might optimize hydrogen production due to lower balance-of-system costs and smarter thermal management. Such devices offer the opportunity to thermally couple the solar cell and the electrochemical compartment. In this way, heat losses in the absorber can be turned into an efficiency boost for the device by simultaneously enhancing the catalytic performance of the water splitting reactions, cooling the absorber, and decreasing the ohmic losses.[1,2] However, integrated devices (sometimes also referred to as “artificial leaves”) currently suffer from a lower technology readiness level (TRL) than the completely decoupled approach.
Integrated solar water splitting devices that produce hydrogen without the use of power inverters operate outdoors and are hence exposed to varying weather conditions. As a result, they might sometimes work at non-optimal operating points below or above the maximum power point of the photovoltaic component, which directly translates into efficiency losses. Until now, however, no common parameter has existed in the community that describes and quantifies this and other real-life, operation-related losses (e.g. spectral mismatch). Therefore, the annual-hydrogen-yield-climatic-response (AHYCR) ratio is introduced as a figure of merit to evaluate the outdoor performance of integrated solar water splitting devices. This value is defined as the ratio between the real annual hydrogen yield and the theoretical yield assuming the solar-to-hydrogen device efficiency at standard conditions. The parameter is derived for an exemplary system based on state-of-the-art AlGaAs//Si dual-junction solar cells and an anion exchange membrane electrolyzer, using hourly resolved climate data from a location in southern California and reanalysis data from Antarctica. This work will help to evaluate, compare and optimize the climatic response of solar water splitting devices in different climate zones.
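The AHYCR definition above lends itself to a direct computation: real annual yield divided by the yield obtained by applying the standard-condition solar-to-hydrogen (STH) efficiency to the annual insolation. The efficiency, insolation, and yield numbers in this sketch are made up for illustration and are not from the paper.

```python
H2_LHV = 33.33  # kWh per kg of hydrogen (lower heating value)

def theoretical_yield_kg(annual_insolation_kwh, sth_efficiency_stc):
    """Hydrogen yield if the device always ran at its standard-condition STH efficiency."""
    return annual_insolation_kwh * sth_efficiency_stc / H2_LHV

def ahycr(real_yield_kg, annual_insolation_kwh, sth_efficiency_stc):
    """Annual-hydrogen-yield-climatic-response ratio: real over theoretical yield."""
    return real_yield_kg / theoretical_yield_kg(annual_insolation_kwh, sth_efficiency_stc)

# Hypothetical device: 15 % STH at standard conditions, 2000 kWh received
# per year on a 1 m^2 aperture, 8.1 kg of hydrogen actually produced.
r = ahycr(real_yield_kg=8.1, annual_insolation_kwh=2000.0, sth_efficiency_stc=0.15)
```

An AHYCR below 1 quantifies how much the varying outdoor conditions (off-maximum-power-point operation, spectral mismatch, temperature) cost relative to ideal standard-condition operation.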
This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence.
Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How to use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: How do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem particularly for DNNs employed in automated driving: What are useful validation techniques and how about safety?
This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at the core of it. This book is unique: In its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above.
Research-Practice-Collaborations Addressing One Health and Urban Transformation. A Case Study
(2022)
One Health is an integrative approach at the interface of humans, animals and the environment, which, given its interdisciplinarity and intersectoral focus on the co-production of knowledge, can be implemented as a Research-Practice-Collaboration (RPC). To exemplify this, the present commentary presents the Forschungskolleg “One Health and Urban Transformation”, funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia in Germany. The analysis identified that the factors enabling a better implementation of RPCs for One Health were those that allowed for constant communication and the reduction of power asymmetries between practitioners and academics in the co-production of knowledge. In this light, training a new generation of scientists at the boundaries of different disciplines, equipped with mediation skills between academia and practice, is an important contribution with great implications for societal change and can aid the further development of RPCs.
Shaping off-job life is becoming increasingly important for workers to increase and maintain their optimal functioning (i.e., feeling and performing well). Proactively shaping the job domain (referred to as job crafting) has been extensively studied, but crafting in the off-job domain has received markedly less research attention. Based on the Integrative Needs Model of Crafting, needs-based off-job crafting is defined as workers’ proactive and self-initiated changes in their off-job lives, which target psychological needs satisfaction. Off-job crafting is posited as a possible means for workers to fulfill their needs and enhance well-being and performance over time. We developed a new scale to measure off-job crafting and examined its relationships to optimal functioning in different work contexts in different regions around the world (the United States, Germany, Austria, Switzerland, Finland, Japan, and the United Kingdom). Furthermore, we examined the criterion, convergent, incremental, discriminant, and structural validity evidence of the Needs-based Off-job Crafting Scale using multiple methods (longitudinal and cross-sectional survey studies, and an “example generation” task). The results showed that off-job crafting was related to optimal functioning over time, especially in the off-job domain but also in the job domain. Moreover, the novel off-job crafting scale had good convergent and discriminant validity, internal consistency, and test–retest reliability. To conclude, our series of studies in various countries shows that off-job crafting can enhance optimal functioning in different life domains and support people in performing their duties sustainably. Therefore, shaping off-job life may be beneficial in an intensified and continually changing and challenging working life.
Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take the initiative to do open science in a way compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions of open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.
Introduction: Recovery experiences have thus far been portrayed as experiences that simply “happen” to people. However, recovery can also be understood from a crafting perspective; that is, individuals may proactively shape their work and non-work activities to recover from stress, satisfy their psychological needs, and achieve optimal functioning.
Materials and Methods: In my talk, I will present the theoretical basis of needs-based crafting based on a conceptual review of the literature. Moreover, I will present empirical findings on the validation of a newly developed off-job crafting scale.
Results: In five sub studies, we found that off-job crafting was related to optimal functioning over time. Moreover, the newly developed off-job crafting scale had good convergent and discriminant validity, internal consistency, and test-retest reliability.
Conclusions: Theoretical and empirical evidence suggests that needs-based crafting can enhance optimal functioning in different life domains and support people in performing their work duties sustainably. Proactive attempts to achieve better recovery through needs satisfaction may be beneficial in an intensified and continually changing and challenging working life. Our line of research provides important avenues for organizational research and practices regarding recovery and needs satisfaction occurring at work and outside work.
The processing of employee personal data is dramatically increasing. To protect employees' fundamental right to privacy, the law provides for the implementation of privacy controls, including transparency and intervention. At present, however, the stakeholders responsible for putting these obligations into action, such as employers and software engineers, simply lack the fundamental knowledge needed to design and implement the necessary controls. Indeed, privacy research has so far focused mainly on consumer relations in the private context. In contrast, privacy in the employment context is less well studied. However, since privacy is highly context-dependent, existing knowledge and privacy controls from other contexts cannot simply be transferred to the employment context. In particular, privacy in employment is subject to different legal and social norms, which require a different conceptualization of the right to privacy than is usual in other contexts. To adequately address these aspects, there is broad consensus that privacy must be regarded as a socio-technical concept in which human factors are considered alongside technical and legal factors. Today, however, there is a particular lack of knowledge about human factors in employee privacy. Disregarding the needs and concerns of individuals, or a lack of usability, are common reasons for the failure of privacy and security measures in practice. This dissertation addresses key knowledge gaps on human factors in employee privacy by presenting the results of three in-depth studies with employees in Germany. The results provide insights into employees' perceptions of the right to privacy, as well as their perceptions and expectations regarding the processing of employee personal data.
The insights gained provide a foundation for the human-centered design and implementation of employee-centric privacy controls, i.e., privacy controls that incorporate the views, expectations, and capabilities of employees. Specifically, this dissertation presents the first mental models of employees on the right to informational self-determination, the German equivalent of the right to privacy. The results provide insights into employees' (1) perceptions of categories of data, (2) familiarity and expectations of the right to privacy, and (3) perceptions of data processing, data flow, safeguards, and threat models. In addition, three major types of mental models are presented, each with a different conceptualization of the right to privacy and a different desire for control. Moreover, this dissertation provides multiple insights into employees' perceptions of data sensitivity and willingness to disclose personal data in employment. Specifically, it highlights the uniqueness of the employment context compared to other contexts and breaks down the multi-dimensionality of employees' perceptions of personal data. As a result, the dimensions in which employees perceive data are presented, and differences among employees are highlighted. This is complemented by identifying personal characteristics and attitudes toward employers, as well as toward the right to privacy, that influence these perceptions. Furthermore, this dissertation provides insights into practical aspects for the implementation of personal data management solutions to safeguard employee privacy. Specifically, it presents the results of a user-centered design study with employees who process personal data of other employees as part of their job. Based on the results obtained, a privacy pattern is presented that harmonizes privacy obligations with personal data processing activities. 
The pattern is useful for designing privacy controls that help these employees handle employee personal data in a privacy-compliant manner, taking into account their skills and knowledge, thus helping to protect employee privacy. The outcome of this dissertation benefits a wide range of stakeholders who are involved in the protection of employee privacy. For example, it highlights the challenges to be considered by employers and software engineers when conceptualizing and designing employee-centric privacy controls. Policymakers and researchers gain a better understanding of employees' perceptions of privacy and obtain fundamental knowledge for future research into theoretical and abstract concepts or practical issues of employee privacy. Employers, IT engineers, and researchers gain insights into ways to empower data processing employees to handle employee personal data in a privacy-compliant manner, enabling employers to improve and promote compliance. Since the basic principles underlying informational self-determination have been incorporated into European privacy legislation, we are confident that our results are also of relevance to stakeholders outside Germany.
The corporate landscape is experiencing increasing change in business models due to digitization. The increasing availability of data along business processes enhances the opportunities for process automation. Technologies such as Robotic Process Automation (RPA) are widely used for business process optimization, but, as a side effect, an increase in stand-alone solutions and a lack of holistic approaches can be observed. Intelligent Process Automation (IPA) is said to support more complex processes and enable automated decision-making, but the lack of connectors makes its implementation difficult. RPA marketplaces can serve as a bridging technology to help companies implement IPA. This paper explores the drivers of and challenges for the adoption of RPA marketplaces to realize IPA. For this purpose, we conducted ten expert interviews with decision-makers and IT staff from the process automation sector.
Silicon carbide and graphene possess extraordinary chemical and physical properties. Here, these different systems are linked and the changes in structural and dynamic properties are investigated. For the simulations, a classical molecular dynamics (MD) approach was used. In this approach, a graphene layer (N = 240 atoms) was grafted at different distances on top of a 6H-SiC structure (N = 2400 atoms) and onto a 3C-SiC structure (N = 1728 atoms). The distances between the graphene layer and the 6H-SiC are 1.0, 1.3 and 1.5 Å, and the distances between the graphene layer and the 3C-SiC are 2.0, 2.3 and 2.5 Å. Each system was equilibrated at room temperature until no further relaxation was observed. The 6H-SiC structure in combination with graphene proves to be more stable than the combination with 3C-SiC, as is clearly seen in the determined energies. The pair distribution functions are influenced slightly by the graphene layer due to steric and energetic changes, which becomes clear from the small shifts of the C-C distances. Interactions as well as bonds between graphene and SiC mean that small shoulders of the high-frequency SiC peaks are visible in the spectra, while the high-frequency peaks of graphene are completely absent.
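A pair distribution function of the kind analysed here is obtained by histogramming interatomic distances and normalizing each shell against an ideal-gas reference. The minimal 2D sketch below (no periodic boundaries, toy normalization) is an illustration of that procedure, not the simulation code used in the study.

```python
import math

def pair_distribution(points, r_max, bins):
    """Histogram pair distances, normalized by the ideal-gas shell count (2D toy)."""
    n = len(points)
    dr = r_max / bins
    counts = [0] * bins
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d < r_max:
                counts[int(d / dr)] += 1
    area = math.pi * r_max ** 2              # reference area (toy assumption)
    pair_density = n * (n - 1) / 2 / area    # pairs per unit area
    g = []
    for k in range(bins):
        # area of the annulus [k*dr, (k+1)*dr)
        shell = math.pi * ((k + 1) ** 2 - k ** 2) * dr ** 2
        g.append(counts[k] / (shell * pair_density))
    return g

# Four particles on a unit square: all pair distances are 1 or sqrt(2),
# so g(r) peaks in a single bin and vanishes elsewhere.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
g = pair_distribution(pts, r_max=2.0, bins=4)
```

In a real MD analysis, the normalization uses the 3D shell volume and periodic minimum-image distances; peak shifts in g(r), like the C-C shifts reported above, then indicate structural changes.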
Graph databases employ graph structures such as nodes, attributes and edges to model and store relationships among data. To access this data, graph query languages (GQL) such as Cypher are typically used, which might be difficult to master for end-users. In the context of relational databases, sequence to SQL models, which translate natural language questions to SQL queries, have been proposed. While these Neural Machine Translation (NMT) models increase the accessibility of relational databases, NMT models for graph databases are not yet available mainly due to the lack of suitable parallel training data. In this short paper we sketch an architecture which enables the generation of synthetic training data for the graph query language Cypher.
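One way to generate such synthetic parallel data is template instantiation over a graph schema: each template pairs a natural-language question pattern with a Cypher query pattern, and both are filled with schema labels, properties, and values. The tiny schema and templates below are assumptions for illustration, not the architecture sketched in the paper.

```python
# Hypothetical toy schema and templates for generating (question, Cypher) pairs.
SCHEMA = {"Person": ["name"], "Movie": ["title"]}
TEMPLATES = [
    ("Which {label}s have {prop} '{value}'?",
     "MATCH (n:{label}) WHERE n.{prop} = '{value}' RETURN n"),
    ("How many {label}s are there?",
     "MATCH (n:{label}) RETURN count(n)"),
]

def generate(values):
    """Instantiate every template for each label/property/value combination."""
    pairs = []
    for nl, cy in TEMPLATES:
        for label, props in SCHEMA.items():
            if "{prop}" in nl:
                for prop in props:
                    for value in values.get((label, prop), []):
                        slots = {"label": label, "prop": prop, "value": value}
                        pairs.append((nl.format(**slots), cy.format(**slots)))
            else:
                pairs.append((nl.format(label=label), cy.format(label=label)))
    return pairs

data = generate({("Person", "name"): ["Alice"], ("Movie", "title"): ["Alien"]})
```

Varying the surface wording of the question templates (paraphrases) while keeping the Cypher side fixed is the usual way to make such synthetic corpora useful for training an NMT model.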
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. In order to develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after receiving an explanation. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions that enables explaining and elaborating according to users' needs for explanation that emerge within an interaction. Consequently, we introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot and integrated with its dialog processing modules.
We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. Using videos of 40 participants (20 younger, mean age 25.7 years, and 20 older, mean age 52.1 years) undergoing experimentally induced heat pain, we compared the performance of FaceReader7 against manually coded AUs as the gold-standard labels. Percentages of correctly and falsely classified AUs were calculated, and we computed "sensitivity/recall," "precision," and "overall agreement (F1)" as indicators of congruency. Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. The congruency was better for younger than for older faces, and better for pain-indicative AUs than for other AUs. Conclusion. At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
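The congruency indicators named above can be computed from per-frame AU sets as follows; a minimal sketch that treats the manual coding as ground truth (the data layout is hypothetical):

```python
def au_agreement(manual, automatic):
    """Congruency of automatic vs. manually coded AUs.

    manual, automatic: per-frame sets of active action units,
    e.g. [{4, 6, 7}, {4, 43}, ...], with manual coding as ground truth.
    """
    tp = fp = fn = 0
    for gold, pred in zip(manual, automatic):
        tp += len(gold & pred)   # AUs both coders activated
        fp += len(pred - gold)   # AUs only the automatic coder activated
        fn += len(gold - pred)   # AUs only the human coder saw
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```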
Current research in augmented, virtual, and mixed reality (XR) reveals a lack of tool support for designing and, in particular, prototyping XR applications. While recent tools research is often motivated by studying the requirements of non-technical designers and end-user developers, the perspective of industry practitioners is less well understood. In an interview study with 17 practitioners from different industry sectors working on professional XR projects, we establish the design practices in industry, from early project stages to the final product. To better understand XR design challenges, we characterize the different methods and tools used for prototyping and describe the role and use of key prototypes in the different projects. We extract common elements of XR prototyping, elaborating on the tools and materials used for prototyping and establishing different views on the notion of fidelity. Finally, we highlight key issues for future XR tools research.
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
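The point containment query mentioned above can be reduced to counting ray/triangle intersections, which is exactly the operation the GPU ray-tracing hardware accelerates; a CPU-side sketch of the parity test using Möller–Trumbore intersection (the ray direction is an arbitrary illustrative choice):

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: parameter t of a ray/triangle hit, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def point_inside(point, triangles, d=np.array([1.0, 0.37, 0.29])):
    """Parity test: an odd number of crossings means the point is inside."""
    hits = sum(1 for v0, v1, v2 in triangles
               if ray_triangle(point, d, v0, v1, v2) is not None)
    return hits % 2 == 1
```

On RT-core hardware the loop over triangles is replaced by a single BVH-accelerated ray query, which is what makes this approach attractive for large simulation meshes.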
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
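The reported return to baseline with a time constant of about a week suggests a simple exponential recovery model; a sketch of how such a time constant could be fitted by linearised least squares (the data values and baseline below are hypothetical, not the study's):

```python
import numpy as np

def fit_time_constant(t, y, baseline):
    """Least-squares fit of y(t) = baseline + A * exp(-t / tau).

    Linearised: log(y - baseline) = log(A) - t / tau, solved with an
    ordinary polynomial fit. Assumes y > baseline at every time point.
    Returns (tau, A).
    """
    slope, intercept = np.polyfit(t, np.log(y - baseline), 1)
    return -1.0 / slope, np.exp(intercept)
```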
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real time with plenty of room for future optimizations and extensions.
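NeRF-style radiance fields are rendered with a volume rendering quadrature along each camera ray; a minimal sketch of that standard accumulation step (not KiloNeuS itself, and with illustrative array shapes):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Numerical volume-rendering quadrature used by NeRF-style models.

    sigmas: (S,) densities at S samples along the ray;
    colors: (S, 3) emitted RGB at those samples;
    deltas: (S,) distances between adjacent samples.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                    # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = trans * alpha                                   # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)
```

The "millions of queries per frame" drawback comes from evaluating the MLP once per sample per ray to obtain the sigmas and colors consumed here.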
This project focuses on object detection in dense volume data. There are several types of dense volume data, namely computed tomography (CT) scans, positron emission tomography (PET), and magnetic resonance imaging (MRI); this work focuses on CT scans. CT scans are not limited to the medical domain; they are also used in industry, for example in airport baggage screening and on assembly lines, where object detection systems should be able to detect objects fast. One way to address the issue of computational complexity and speed up object detection systems is to use low-resolution images. Low-resolution CT scanning is fast, so the entire process of scanning and detection can be accelerated by using low-resolution images. Even in the medical domain, the radiation dose should be reduced by shortening the patient's exposure time, which low-resolution CT scans allow. Hence it is essential to find out which object detection model offers better accuracy as well as speed on low-resolution CT scans. However, existing approaches do not provide details about how a model performs when the resolution of CT scans is varied. Hence, the goal of this project is to analyze the impact of varying CT scan resolution on both the speed and accuracy of the model. Three object detection models, namely RetinaNet, YOLOv3, and YOLOv5, were trained at various resolutions. Among the three models, YOLOv5 achieved the best mAP and F1 score at multiple resolutions on the DeepLesion dataset, while RetinaNet had the lowest inference time. From the experiments, it can be asserted that sacrificing mean average precision (mAP) to improve inference time by reducing resolution is feasible.
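The mAP scores compared above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal 2D sketch of that underlying measure:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (often 0.5), and mAP averages the resulting precision over recall levels and classes.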
In (dynamic) adaptive mesh refinement (AMR), an input mesh is refined or coarsened according to the needs of the numerical application. This refinement happens with no respect to the originally meshed domain and is therefore limited to the geometrical accuracy of the original input mesh. We previously presented a novel approach to equip this input mesh with additional geometry information, allowing refinement and high-order cells based on the geometry of the original domain, and showed a limited implementation of this algorithm. Here, we evaluate this prototype with a numerical application and demonstrate its influence on the accuracy of certain numerical results. To be as practical as possible, we implement the ability to import meshes generated by Gmsh and equip them with the needed geometry information. Furthermore, we improve the mapping algorithm, which maps the geometry information of the boundary of a cell into the cell's volume. With these preliminary steps done, we use our new approach in a simulation of the advection of a concentration along the boundary of a sphere shell and past the boundary of a rotating cylinder. We evaluate the accuracy of our approach in comparison to conventional cell refinement to answer our research question: how do the performance and accuracy of the hexahedral curved-domain AMR algorithm compare to linear AMR when solving the advection equation with the linear finite volume method? To answer this question, we show the influence of curved AMR on our simulation results and find that it is even able to outperform far finer linear meshes in terms of accuracy. We also see that the current implementation of this approach is too slow for practical usage. We can therefore demonstrate the benefits of curved AMR in certain geometry-related application scenarios and show possible improvements to make it more feasible and practical in the future.
In the field of autonomous robotics, sensors have played a major role in defining the scope of the technology and, to a great extent, its limitations as well. This cycle of constant updates and technological advancement has given birth to industries that were once inconceivable. Industries like autonomous driving, which have a serious impact on the safety and security of people, have an equally strong implication for the dynamics and economics of the market. With sensors like LiDAR and RADAR delivering 3D measurements as point clouds, there is a need to process the raw measurements directly, and many research groups are working on this. Sizable research has gone into solving the task of object detection on 2D images. In this thesis we aim to develop a LiDAR-based 3D object detection scheme. We combine the ideas of PointPillars and feature pyramid networks (FPNs) from 2D vision to propose Pillar-FPN. The proposed method directly takes 3D point clouds as input and outputs a 3D bounding box. Our pipeline consists of multiple variations of the proposed Pillar-FPN at the feature-fusion level, which are described in the results section. We trained our model on the KITTI training dataset and evaluated it on the KITTI validation dataset.
Modeling of Creep Behavior of Particulate Composites with Focus on Interfacial Adhesion Effect
(2022)
Evaluation of the creep compliance of particulate composites using empirical models always yields parameters that depend on initial stress and material composition. Efforts to connect model parameters with physical properties have not yet been successful. Further, during creep, delamination between matrix and filler may occur depending on time and initial stress, reducing interface adhesion and load transfer to the filler particles. In this paper, the creep compliance curves of glass-bead-reinforced poly(butylene terephthalate) composites were fitted with the Burgers and Findley models, providing different sets of time-dependent model parameters for each initial stress. While the Findley model performs well in primary creep and the Burgers model is more suitable once secondary creep comes into play, both allow only for a qualitative prediction of creep behavior because the interface adhesion and its time dependency is an implicit, hidden parameter. As Young's modulus is a parameter of these models (and of the majority of other creep models), it was introduced as a filler-content-dependent parameter with the help of Paul's cube-in-cube elementary volume approach. The analysis led to a time-dependent creep compliance that depends only on the time-dependent creep of the matrix and the normalized particle distance (or the filler volume content), and it allowed accounting for the adhesion effect. Comparison with the experimental data confirmed that the elementary-volume-based creep compliance function can be used to predict the realistic creep behavior of particulate composites.
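The two empirical models compared above have standard closed forms; a sketch of their compliance functions (parameter names are illustrative, and in practice they would be fitted to the measured compliance curves at each initial stress):

```python
import numpy as np

def findley(t, j0, a, n):
    """Findley power-law creep compliance: J(t) = J0 + A * t**n."""
    return j0 + a * np.power(t, n)

def burgers(t, e1, eta1, e2, eta2):
    """Burgers (four-element) creep compliance:
    J(t) = 1/E1 + t/eta1 + (1/E2) * (1 - exp(-E2 * t / eta2)),
    i.e. instantaneous elasticity, viscous flow, and delayed elasticity."""
    return 1.0 / e1 + t / eta1 + (1.0 / e2) * (1.0 - np.exp(-e2 * t / eta2))
```

The viscous term t/eta1 is what lets the Burgers model capture secondary (steady-state) creep, which the bounded-exponent Findley law represents less naturally.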
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention, “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
Contract-based nature protection schemes are a voluntary mechanism, with a limited contract duration, that aims to raise the acceptance of biodiversity conservation practices in agriculture among farmers and other land users. The purpose of this paper is to analyse the institutional settings of contract-based nature protection in the German Rhine-Sieg district based on the "Institutions of Sustainability" (IoS) framework, and to outline how policy measures should be designed to encourage farmers to participate in contract-based nature protection programmes. This was achieved by answering research questions to identify the challenges, potentials and obstacles of a contract-based nature protection scheme in the different "sub-arenas" defined in the IoS framework. Qualitative research methods were used. The analysis shows that the main constraints for sufficient implementation of contract-based nature protection schemes are the limited consideration of the impact of climate change during the contract period, the limited consideration of regional conditions as regards the measures taken on the ground, and an inflexible contract duration.
This edited volume on “Recent Advances in Renewable Energy” presents a selection of refereed papers presented at the 1st International Conference on Electrical Systems and Automation. The book provides rigorous discussions, the state of the art, and recent developments in the field of renewable energy sources supported by examples and case studies, making it an educational tool for relevant undergraduate and graduate courses. The book will be a valuable reference for beginners, researchers, and professionals interested in renewable energy.
This book, which is the second part of two volumes on "Control of Electrical and Electronic Systems", presents a compilation of selected contributions to the 1st International Conference on Electrical Systems & Automation. The book provides rigorous discussions, the state of the art, and recent developments in the modelling, simulation and control of power electronics, industrial systems, and embedded systems. It will be a valuable reference for beginners, researchers, and professionals interested in the control of electrical and electronic systems.
Social protection has been increasingly recognized by experts from different fields as a key instrument for social, economic, political, and environmental development. It is also known for tackling multiple goals related to the reduction of risk, poverty and inequality at once. Yet, its instruments are often seen in isolation, programmes are still managed in silos and the systemic aspect is often overlooked. Engaging in critical discussions about the systemic aspect of social protection and outlining what it really takes to pursue a systemic approach has motivated the two editors, Prof. Dr. Esther Schüring from H-BRS and Dr. Markus Loewe from the German Institute of Development and Sustainability (IDOS) to launch the very first Handbook on Social Protection Systems in late 2021.
The human enzymes GLYAT (glycine N-acyltransferase), GLYATL1 (glutamine N-phenylacetyltransferase) and GLYATL2 (glycine N-acyltransferase-like protein 2) are not only important in the detoxification of xenobiotics via the human liver, but are also involved in the elimination of acyl residues that accumulate in the form of their coenzyme A (CoA) esters in some rare inborn errors of metabolism. This concerns, for example, disorders in the degradation of branched-chain amino acids, such as isovaleric acidemia or propionic acidemia. In addition, these enzymes also assist in the elimination of ammonium, which is produced during the transamination of amino acids and accumulates in urea cycle defects. Sequence variants of the enzymes were also investigated, which may provide evidence of impaired enzyme activities, from which therapy adjustments can potentially be derived. A modified Escherichia coli strain was chosen for the overexpression and partial biochemical characterization of the enzymes, as it may allow solubility and proper folding. Since post-translational protein modifications are very limited in bacteria, we also attempted to overexpress the enzymes in HEK293 cells (human-derived). In addition to characterization via immunoblots and activity assays, intracellular localization of the enzymes was performed using GFP coupling and confocal laser scanning microscopy in transfected HEK293 cells. The GLYATL2 enzyme may have tasks beyond detoxification and metabolic defects; the preliminary molecular biology work was performed as part of this project, while the enzyme activity determinations were outsourced to a co-supervised bachelor thesis. The enzyme activity determinations with purified recombinant human enzyme from Escherichia coli revealed a threefold higher activity of the sequence variant p.(Asn156Ser) for GLYAT, which should be considered the probably authentic wild type of the enzyme.
In addition, a reduced activity was shown for the GLYAT variant p.(Gln61Leu), which is very common in South Africa; this could be of particular importance in the treatment of isovaleric acidemia, which is also common there. Intracellularly, GLYAT and GLYATL1 could be localized to mitochondria. As the analyses have shown, sequence variations of GLYAT and GLYATL1 influence their enzyme activity. In the case of reduced GLYAT activity, as with the p.(Gln61Leu) variant, patients could increasingly be treated with L-carnitine in the sense of an individualized therapy, since the conjugation of the toxic isovaleryl-CoA with glycine is restricted by the GLYAT sequence variation. Activity-reducing variants identified in this project are of particular interest, as they may influence the treatment of certain metabolic defects.
While the recent discussion on Art. 25 GDPR often considers the approach of data protection by design as an innovative idea, the notion of making data protection law more effective through requiring the data controller to implement the legal norms into the processing design is almost as old as the data protection debate. However, there is another, more recent shift in establishing the data protection by design approach through law, which is not yet understood to its fullest extent in the debate. Art. 25 GDPR requires the controller to not only implement the legal norms into the processing design but to do so in an effective manner. By explicitly declaring the effectiveness of the protection measures to be the legally required result, the legislator inevitably raises the question of which methods can be used to test and assure such efficacy. In our opinion, extending the legal compatibility assessment to the real effects of the required measures opens this approach to interdisciplinary methodologies. In this paper, we first summarise the current state of research on the methodology established in Art. 25 sect. 1 GDPR, and pinpoint some of the challenges of incorporating interdisciplinary research methodologies. On this premise, we present an empirical research methodology and first findings which offer one approach to answering the question on how to specify processing purposes effectively. Lastly, we discuss the implications of these findings for the legal interpretation of Art. 25 GDPR and related provisions, especially with respect to a more effective implementation of transparency and consent, and provide an outlook on possible next research steps.
SLC6A14 (ATB0,+) is unique among SLC proteins in its ability to transport 18 of the 20 proteinogenic (dipolar and cationic) amino acids and naturally occurring and synthetic analogues (including anti-viral prodrugs and nitric oxide synthase (NOS) inhibitors). SLC6A14 mediates amino acid uptake in multiple cell types where increased expression is associated with pathophysiological conditions including some cancers. Here, we investigated how a key position within the core LeuT-fold structure of SLC6A14 influences substrate specificity. Homology modelling and sequence analysis identified the transmembrane domain 3 residue V128 as equivalent to a position known to influence substrate specificity in distantly related SLC36 and SLC38 amino acid transporters. SLC6A14, with and without V128 mutations, was heterologously expressed and function determined by radiotracer solute uptake and electrophysiological measurement of transporter-associated current. Substituting the amino acid residue occupying the SLC6A14 128 position modified the binding pocket environment and selectively disrupted transport of cationic (but not dipolar) amino acids and related NOS inhibitors. By understanding the molecular basis of amino acid transporter substrate specificity we can improve knowledge of how this multi-functional transporter can be targeted and how the LeuT-fold facilitates such diversity in function among the SLC6 family and other SLC amino acid transporters.
While many proteins are known clients of heat shock protein 90 (Hsp90), it is unclear whether the transcription factor, thyroid hormone receptor beta (TRb), interacts with Hsp90 to control hormonal perception and signaling. Higher Hsp90 expression in mouse fibroblasts was elicited by the addition of triiodothyronine (T3). T3 bound to Hsp90 and enhanced adenosine triphosphate (ATP) binding of Hsp90 due to a specific binding site for T3, as identified by molecular docking experiments. The binding of TRb to Hsp90 was prevented by T3 or by the thyroid mimetic sobetirome. Purified recombinant TRb trapped Hsp90 from cell lysate or purified Hsp90 in pull-down experiments. The affinity of Hsp90 for TRb was 124 nM. Furthermore, T3 induced the release of bound TRb from Hsp90, which was shown by streptavidin-conjugated quantum dot (SAv-QD) masking assay. The data indicate that the T3 interaction with TRb and Hsp90 may be an amplifier of the cellular stress response by blocking Hsp90 activity.
Due to expected positive impacts on business, the application of artificial intelligence has increased widely. The decision-making procedures of these models are often complex and not easily understandable to the company's stakeholders, i.e., the people who have to follow up on recommendations or try to understand automated decisions of a system. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining the models to AI experts with the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to the stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products while, in the meantime, customers can cancel their orders if it takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Hence, our research investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model's priorities in decision-making. Besides that, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related works with an AUC of 0.9489.
Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
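Shapley additive explanations, as used above, attribute a prediction to individual features; the exact attribution can be illustrated by brute-force enumeration of feature coalitions on a toy model (practical SHAP implementations approximate this, since the enumeration is exponential in the number of features):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction of `model`.

    model: callable on a feature vector; x: the instance to explain;
    baseline: reference vector used for 'absent' features. Enumerates
    every feature coalition, so only practical for a handful of features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coal in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in coal or j == i else baseline[j]
                          for j in range(n)]
                without = [x[j] if j in coal else baseline[j]
                           for j in range(n)]
                phi[i] += w * (model(with_i) - model(without))
    return phi
```

By construction the attributions sum to the difference between the prediction for x and the prediction for the baseline, which is the "additive" property the explanations rely on.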
A precise characterization of substances is essential for the safe handling of explosives. One parameter regularly characterized is the impact sensitivity, typically determined using a drop hammer. However, the results can vary depending on the test method and even the operator, and it is not possible to distinguish between types of decomposition such as detonation and deflagration. In this study, a drop hammer was constructed to monitor the progress of the decomposition reactions of four different primary explosives (tetrazene, silver azide, lead azide, lead styphnate) in order to determine the reproducibility of this method. Additionally, further possible evaluation methods are explored to improve on the current binary statistical analysis. To determine whether classification is possible based on extracted features, the responses of the equipped sensor arrays, which measure and monitor the reactions, were studied and evaluated. Features were extracted from these data and evaluated using multivariate methods such as principal component analysis (PCA) and linear discriminant analysis (LDA). The results indicate that although the measurements show substance-specific trends, they also show a large scatter for each substance. By reducing the dimensions of the extracted features, different sample clusters can be represented, and the calculated loadings allow significant parameters to be determined for classification. The results also suggest that differentiation of different reaction mechanisms is feasible. Testing of the regressor function shows reliable results considering the comparatively small amount of data.
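The dimensionality reduction step described above can be sketched with a plain principal component analysis via SVD; the feature matrix and its size are hypothetical stand-ins for the extracted sensor features:

```python
import numpy as np

def pca(features, k=2):
    """Project feature vectors onto the first k principal components.

    features: (samples, features) matrix of extracted sensor features.
    Returns the PC scores and the fraction of variance each of the
    first k components explains.
    """
    centered = features - features.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:k].T          # coordinates in PC space
    explained = s**2 / (s**2).sum()       # variance ratio per component
    return scores, explained[:k]
```

The rows of vt are the loadings; inspecting which original features carry large loadings on the leading components is what allows the significant parameters for classification to be identified.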
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. Firstly, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of Machine Perception, Machine Learning and Modeling, Planning and Optimization, as well as Interaction and Intervention, which hold potential for improving training and competition. Secondly, we examine the present status of AI use in elite sports. To this end, in addition to another literature review, we interviewed leading sports scientists who are closely connected to the main national service institutes for elite sports in their countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing. However, projects in the field of modeling & planning have become increasingly popular within the last years. Based on these two perspectives, we extract deficits, issues and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by the practitioners, and explainability of AI results.
The body of prior research on digital entrepreneurship education is immense, which makes it difficult to gain an overview. Bibliometric visualization, mapping, and clustering can help to visualize and structure this complex research literature. The goal of this mapping and visualization study is therefore to identify and cluster the literature on entrepreneurship education (EE) and to derive a taxonomic structure that can serve as a basis for future research. The analyzed data, drawn from Google Scholar via the Publish or Perish tool, comprise 1000 documents published between 2007 and 2022. This taxonomy should, on the one hand, strengthen the ties within digital entrepreneurship education research; on the other, it should connect international research communities to boost both interdisciplinary digital entrepreneurship education and its influence on a global basis. This work strengthens students' understanding of current digital entrepreneurship education research by classifying and distilling the most influential intellectual relationships among its contributions and contributors. The bibliographic analysis covers the citation network, the authors' research areas, and the papers' content regarding the topic of interest. These three aspects are integrated into a bibliographic model of authors, paper titles, keywords, and abstracts: Harzing's Publish or Perish tool is used to extract data from Google Scholar, and VOSviewer is used to visualize network maps of co-authorship and term co-occurrence, providing an intuitive and accessible picture of research on university students' digital entrepreneurial intention.
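The term co-occurrence analysis at the heart of such a study reduces to counting how often keyword pairs appear together across documents. The following is an illustrative sketch with hypothetical keyword lists, not the study's actual dataset; the resulting pair counts are the kind of matrix a tool such as VOSviewer renders as a term map.

```python
# Sketch of pairwise keyword co-occurrence counting over a document set.
from itertools import combinations
from collections import Counter

def cooccurrence(documents):
    """Count how often each unordered keyword pair appears together."""
    counts = Counter()
    for keywords in documents:
        # deduplicate and sort so each pair has one canonical ordering
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical keyword lists for three documents
docs = [
    ["digital", "entrepreneurship", "education"],
    ["entrepreneurship", "education", "intention"],
    ["digital", "education"],
]
pairs = cooccurrence(docs)
```

Edges with high counts (here, e.g., "education"/"entrepreneurship") form the densely connected clusters that a visualization tool groups into thematic clusters.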
Microarray-based experiments revealed that the thyroid hormone triiodothyronine (T3) enhances the binding of Cy5-labeled ATP to heat shock protein 90 (Hsp90). By molecular docking of T3 on Hsp90, we identified a T3 binding site (TBS) near the ATP binding site on Hsp90. In MST experiments, a synthetic peptide with the sequence HHHHHHRIKEIVKKHSQFIGYPITLFVEKE, derived from the TBS on Hsp90, bound T3 with an EC50 of 50 μM. The binding motif can influence the activity of Hsp90 by hindering ATP accessibility or the release of ADP.
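An EC50 of 50 μM implies, under the standard one-site binding model, that half the peptide sites are occupied at 50 μM T3. The sketch below assumes that simple hyperbolic isotherm for illustration; the actual MST fitting model used in the study may differ.

```python
# One-site binding isotherm: fraction bound f = c / (EC50 + c).
# Assumes simple hyperbolic binding; EC50 of 50 uM taken from the abstract.
def fraction_bound(conc_uM, ec50_uM=50.0):
    """Fraction of sites occupied at ligand concentration conc_uM."""
    return conc_uM / (ec50_uM + conc_uM)

half = fraction_bound(50.0)   # at the EC50, half the sites are occupied
```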
Cathepsin K (CatK) is a target for the treatment of osteoporosis, arthritis, and bone metastasis. Peptidomimetics with a cyanohydrazide warhead represent a new class of highly potent CatK inhibitors; however, their binding mechanism is unknown. We investigated two model cyanohydrazide inhibitors with differently positioned warheads: an azadipeptide nitrile Gü1303 and a 3-cyano-3-aza-β-amino acid Gü2602. Crystal structures of their covalent complexes were determined with mature CatK as well as a zymogen-like activation intermediate of CatK. Binding mode analysis, together with quantum chemical calculations, revealed that the extraordinary picomolar potency of Gü2602 is entropically favoured by its conformational flexibility at the boundary between the non-primed and primed subsites. Furthermore, we demonstrated by live cell imaging that cyanohydrazides effectively target mature CatK in osteosarcoma cells. Cyanohydrazides also suppressed the maturation of CatK by inhibiting the autoactivation of the CatK zymogen. Our results provide structural insights for the rational design of cyanohydrazide inhibitors of CatK as potential drugs.
There is an unmet need for the development and validation of biomarkers and surrogate endpoints for clinical trials in propionic acidemia (PA) and methylmalonic acidemia (MMA). This review examines the pathophysiology and clinical consequences of PA and MMA that could form the basis for potential biomarkers and surrogate endpoints. Changes in primary metabolites such as methylcitric acid (MCA), the MCA:citric acid ratio, oxidation of 13C-propionate (exhaled 13CO2), and propionylcarnitine (C3) have demonstrated clinical relevance in patients with PA or MMA. Methylmalonic acid, another primary metabolite, is a potential biomarker, but only in patients with MMA. Other potential biomarkers in patients with either PA or MMA include secondary metabolites, such as ammonium, or the mitochondrial disease marker fibroblast growth factor 21. Additional research is needed to validate these biomarkers as surrogate endpoints, and to determine whether other metabolites or markers of organ damage could also be useful biomarkers for clinical trials of investigational drug treatments in patients with PA or MMA.
Hydrogen‐Bonded Cholesteric Liquid Crystals—A Modular Approach Toward Responsive Photonic Materials
(2022)
A supramolecular approach for photonic materials based on hydrogen-bonded cholesteric liquid crystals is presented. The modular toolbox of low-molecular-weight hydrogen-bond donors and acceptors provides a simple route toward liquid crystalline materials with tailor-made thermal and photonic properties. Initial studies reveal broad application potential of the liquid crystalline thin films for chemo- and thermosensing. The chemosensing performance is based on the interruption of the intermolecular forces between the donor and acceptor moieties by interference with halogen-bond donors. Future studies will expand the scope of analytes and sensing in aqueous media. In addition, the implementation of the reported materials in additive manufacturing and printed photonic devices is planned.
The epithelial sodium channel (ENaC) is a heterotrimeric ion channel that plays a key role in sodium and water homeostasis in tetrapod vertebrates. In the aldosterone-sensitive distal nephron, hormonally controlled ENaC expression matches dietary sodium intake to its excretion. Furthermore, ENaC mediates sodium absorption across the epithelia of the colon, sweat ducts, reproductive tract, and lung. ENaC is a constitutively active ion channel and its expression, membrane abundance, and open probability (PO) are controlled by multiple intracellular and extracellular mediators and mechanisms [9]. Aberrant ENaC regulation is associated with severe human diseases, including hypertension, cystic fibrosis, pulmonary edema, pseudohypoaldosteronism type 1, and nephrotic syndrome [9].
The implementation of the Sustainable Development Goals (SDGs) and the conservation and protection of nature are among the greatest challenges facing urban regions. So far, few approaches link the SDGs to natural diversity and related ecosystem services at the local level and track them as measures of increasing sustainable development. We aim to close this gap by developing a set of indicators that capture ecosystem services in the sense of the SDGs and that are based on data freely available throughout Germany and Europe. Based on 10 SDGs and 35 SDG indicators, we develop an ecosystem-service- and biodiversity-related indicator set for evaluating sustainable development in urban areas. We further show that it is possible to close many of the data gaps between the SDGs and locally collected data mentioned in the literature, and to translate the universal SDGs to the local level. As an example, we develop this set of indicators for the Bonn/Rhein-Sieg metropolitan area in North Rhine-Westphalia, Germany, which comprises both rural and densely populated settlements. This set of indicators can also help improve communication and planning for sustainable development by increasing transparency about local sustainability, implementing a visible sustainability monitoring system, and strengthening collaboration between local stakeholders.
The aim of this paper is to assess the challenges farmers face in enhancing biodiversity. The so-called "trilemma" of land use (WBGU 2021) stems from the multiple demands made on land for mitigating climate change, securing food, and maintaining biodiversity. The agricultural sector is accused of maladministration: it is blamed for soil contamination, animal cruelty, bee mortality, and climate change. That is why farmers are seen as key actors at all levels. They are, however, also key players when it comes to overcoming the problems of the future. Their supportive role is urgently needed, but farmers find themselves caught between a rock and a hard place. Consumers call for sustainable, environmentally friendly production and, at the same time, for inexpensive food products free of pesticide residues, while demanding enough food for all. Farmers are restricted by the wants and needs of consumers, who are influenced by interest groups and exposed to direct and indirect influencing factors and their interdependencies. Farmers are also tasked with balancing the scrutiny of a critical public on the one hand and the control exercised by eager authorities on the other.
As part of the trans- and interdisciplinary research project DINA (Diversity of Insects in Nature protected Areas), we collected and surveyed data from farmers who farm within or close to the 21 nature protected areas selected for the project. Data were collected in a mixed-methods approach using a semi-structured questionnaire with closed and open questions. The methodological and strategic approach, and the interdependencies among the issues, demonstrate the complexity of today's problems. The conflicts and obstacles farmers face were evaluated; the results show farmers' willingness to implement biodiversity measures and the importance of the appreciation shown to them for doing so. The paper proposes follow-up activities (a quantitative study) to verify the objectives. The results will later lead to recommendations for policymakers and farmers in all German nature protected areas.
Education for Sustainable Development (ESD, SDG 4) and human well-being (SDG 3) are among the central subjects of the Sustainable Development Goals (SDGs). In this article, based on the Questionnaire for Eudaimonic Well-Being (QEWB), we investigate to what extent (a) there is a connection between eudaimonic well-being (EWB) and practical commitment to the SDGs and (b) there is a deficit in EWB among young people in general. We also want to use the article to draw attention to the need for further research on the links between human well-being and commitment to sustainable development. A total of 114 students between the ages of 18 and 34, who are either engaged in (extra)curricular activities for sustainable development (28 students) or not (86 students), completed the QEWB. The students were surveyed twice: once regarding their current EWB and once regarding their aspired EWB. Our results show that students who are actively engaged in activities for sustainable development report a higher EWB than non-active students. Furthermore, we show that students generally report deficits in EWB and wish for an improvement in their well-being. This especially applies to aspects of EWB related to self-discovery and the sense of meaning in life. Our study suggests that a practice-oriented ESD in particular can have a positive effect on the quality of life of young students and can support them in working on deficits in EWB.
Regions and their innovation ecosystems have increasingly become of interest to CSCW research as the context in which work, research, and design take place. Our study adds to this growing discourse by providing preliminary data and reflections from an ongoing attempt to intervene in and support a regional innovation ecosystem. We report on the benefits and shortcomings of a practice-oriented approach in such regional projects and highlight the importance of relations and the notion of spillover. Lastly, we discuss methodological and pragmatic hurdles that CSCW research needs to overcome in order to support regional innovation ecosystems successfully.
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this untapped potential, we conducted two co-design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalization features leads to reflection on the individual taste experience. The self-reflection enabled by our design allows consumers to develop their taste competencies and thus strengthens their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a purely sensory phenomenon.
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable for their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite to adjusting the indoor temperature for providing a comfortable environment. In this research, we focus on identifying relevant features for predicting PTC preferences. We propose a machine learning-based predictive framework by employing supervised feature selection techniques. We apply two feature selection techniques to select the optimal sets of features to improve the thermal preference prediction performance. The experimental results on a public PTC dataset demonstrated the efficiency of the feature selection techniques that we have applied. In turn, our PTC prediction framework with feature selection techniques achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
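A simple univariate filter illustrates the kind of supervised feature selection the abstract describes. The sketch below, using hypothetical data rather than the public PTC dataset, ranks candidate features by their absolute correlation with the thermal-preference label and keeps the top k; the study's actual techniques may be more elaborate.

```python
# Univariate filter feature selection on hypothetical comfort data:
# rank features by |correlation| with the target and keep the top k.
import numpy as np

def select_top_k(X, y, k):
    """Return indices of the k features most correlated with y."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]

# Hypothetical dataset: features 0 and 2 carry signal, 1 and 3 are noise
rng = np.random.default_rng(1)
n = 200
y = rng.normal(size=n)                      # stand-in preference score
X = np.column_stack([
    y + 0.1 * rng.normal(size=n),           # strongly informative feature
    rng.normal(size=n),                     # pure noise
    -y + 0.5 * rng.normal(size=n),          # weaker informative feature
    rng.normal(size=n),                     # pure noise
])
top = select_top_k(X, y, k=2)
```

Training the predictor only on the selected columns (`X[:, top]`) is what reduces noise and can improve accuracy, kappa, and AUC, as the abstract reports.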
Process-induced changes in the morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends modified with various multifunctional chain-extending cross-linkers (CECLs) are presented. The morphology of unmodified and modified films produced by blown film extrusion is examined in the extrusion direction (ED) and the transverse direction (TD). While FTIR analysis showed only small peak shifts, indicating that the CECLs modify the molecular weight of the PBAT/PLA blend, SEM investigations of the fracture surfaces of blown extrusion films revealed their significant effect on the morphology formed during processing. Due to the combined shear and elongation deformation during blown film extrusion, rather spherical PLA islands were partly transformed into long fibrils, which tended to decay into chains of elliptical islands when cooled slowly. The introduction of CECLs into the blend changed the thickness of the PLA fibrils, modified the interface adhesion, and altered the deformation behavior of the PBAT matrix from brittle to ductile. The results proved that CECLs react selectively with PBAT, PLA, and their interface. Furthermore, the reactions of CECLs with PBAT/PLA induced by processing depended on the deformation direction (ED or TD), resulting in further non-uniformities of blown extrusion films.
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected; a poster will present an overview of their availability.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018, according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve access to energy and health care.