Refine
Departments, institutes and facilities
- Präsidium (397)
- Fachbereich Angewandte Naturwissenschaften (189)
- Fachbereich Informatik (178)
- Fachbereich Wirtschaftswissenschaften (154)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (133)
- Fachbereich Ingenieurwissenschaften und Kommunikation (124)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (100)
- Institut für funktionale Gen-Analytik (IFGA) (72)
- Fachbereich Sozialpolitik und Soziale Sicherung (43)
- Institute of Visual Computing (IVC) (41)
Document Type
- Article (440)
- Part of Periodical (407)
- Conference Object (160)
- Part of a Book (83)
- Report (54)
- Working Paper (42)
- Preprint (19)
- Bachelor Thesis (18)
- Master's Thesis (14)
- Other (10)
Has Fulltext
- yes (1269)
Keywords
- Entrepreneurship (8)
- Ghana (8)
- Hochschule Bonn-Rhein-Sieg (7)
- Machine Learning (7)
- Robotik (7)
- cytokine-induced killer cells (7)
- lignin (7)
- Digitalisierung (6)
- Kenya (6)
- Lignin (6)
Exposure to microgravity conditions causes cardiovascular deconditioning in astronauts during spaceflight. To date, no specific countermeasure drugs are available, since the underlying mechanism is largely unknown. Endothelial cells (ECs) and smooth muscle cells (SMCs) play key roles in various vascular functions, many of which are regulated by purinergic 2 (P2) receptors. However, their function in ECs and SMCs under microgravity conditions is still unclear. In this study, primary ECs and SMCs were isolated from bovine aorta and verified with specific markers. We show for the first time that the P2 receptor expression pattern is altered in ECs and SMCs after 24 h of exposure to simulated microgravity in a clinostat. Conditioned medium, however, compensates for this change in specific P2 receptors, for example P2X7. Notably, P2 receptors such as P2X7 might be important players in this paracrine interaction. Additionally, ECs and SMCs secreted different cytokines under simulated microgravity, leading to pathogenic proliferation and migration. In conclusion, our data indicate that P2 receptors might be important players in the response of ECs and SMCs to gravity changes. Since some artificial P2 receptor ligands are already applied as drugs, it is reasonable to assume that they might be promising candidates against cardiovascular deconditioning in the future.
Human mesenchymal stem cells (hMSCs) are considered a promising cell source for regenerative medicine because they have the potential to differentiate into a variety of lineages, among which the mesoderm-derived lineages such as adipo- or osteogenesis are investigated best. Human MSCs can be harvested in reasonable to large amounts from several parts of the patient's body, and because of this possible autologous origin, allorecognition can be avoided. In addition, even donor cells of allogeneic origin generate a local immunosuppressive microenvironment, causing only a weak immune reaction. There is an increasing need for bone replacement in patients of all ages, due to a variety of reasons such as new recreational behaviors in young adults or age-related diseases. Adipogenic differentiation is another interesting lineage, because fat tissue is considered a major factor triggering atherosclerosis, which ultimately leads to cardiovascular disease, the main cause of death in industrialized countries. However, understanding the differentiation process in detail is obligatory to achieve tight control of the process in future clinical applications and to avoid undesired side effects. In this review, the current findings on adipo- and osteo-differentiation are summarized, together with a brief statement on first clinical trials.
Background: Human mesenchymal stem cells (hMSCs) have demonstrated their multipotency, including differentiation towards endothelial and smooth muscle cell lineages, which has triggered new interest in using hMSCs as a putative source for cardiovascular regenerative medicine. Our recent publication has shown for the first time that purinergic 2 receptors are key players during hMSC differentiation towards adipocytes and osteoblasts. Purinergic 2 receptors play an important role in cardiovascular function when they bind to extracellular nucleotides. In this study, the possible functional role of purinergic 2 receptors during MSC endothelial and smooth muscle differentiation was investigated. Methods and Results: Human MSCs were isolated from liposuction material. Endothelial and smooth muscle-like cells were then differentiated and characterized by specific markers via reverse transcriptase PCR (RT-PCR), Western blot and immunochemical staining. Interestingly, some purinergic 2 receptor subtypes were found to be differentially regulated during these specific lineage commitments: P2Y4 and P2Y14 were involved in the early stage of commitment, while P2Y1 was the key player in controlling MSC differentiation towards either endothelial or smooth muscle cells. The administration of natural and artificial purinergic 2 receptor agonists and antagonists had a direct influence on these differentiations. Moreover, a feedback loop via exogenous extracellular nucleotides acting on these particular differentiations was demonstrated by apyrase digestion. Conclusions: Purinergic 2 receptors play a crucial role during differentiation towards endothelial and smooth muscle cell lineages. Some highly selective and potent artificial purinergic 2 ligands can control hMSC differentiation, which might improve the use of adult stem cells in cardiovascular tissue engineering in the future.
Cytokine-induced killer (CIK) cells in combination with dendritic cells (DCs) have shown favorable outcomes in renal cell carcinoma (RCC), yet some patients exhibit recurrence or no response to this therapy. From a broader perspective, enhancing the antitumor response of DC-CIK cells may help to address this issue. Considering this, we herein investigated the effect of anti-CD40 and anti-CTLA-4 antibodies on the antitumor response of DC-CIK cells against RCC cell lines. Our analysis showed that a) the anti-CD40 antibody (G28.5) increased the CD3+CD56+ effector cells among CIK cells by promoting the maturation and activation of DCs, b) G28.5 also increased CTLA-4 expression in CIK cells via DCs, but this increase could be hindered by the CTLA-4 inhibitor ipilimumab, c) adding ipilimumab was also able to significantly increase the proportion of CD3+CD56+ cells in DC-CIK cells, d) anti-CD40 antibodies predominated over anti-CTLA-4 antibodies with respect to the cytotoxicity, apoptotic effect and IFN-γ secretion of DC-CIK cells against RCC cells, and e) after ipilimumab treatment, the population of Tregs in CIK cells remained unaffected, but ipilimumab combined with G28.5 significantly reduced the expression of CD28 in CIK cells. Taken together, we suggest that the agonistic anti-CD40 antibody, rather than the CTLA-4 inhibitor, may improve the antitumor response of DC-CIK cells, particularly in RCC. In addition, we point towards the as-yet-unknown contribution of CD28 to the crosstalk between anti-CTLA-4 and CIK cells.
Cancer is a complex disease in which resistance to therapies and relapses often pose a serious clinical challenge. The scenario is even more complicated when the cancer type itself is heterogeneous in nature, e.g., lymphoma, a cancer of the lymphocytes that comprises more than 70 different subtypes. Indeed, the treatment options for lymphomas continue to expand. Herein, we provide insights into lymphoma-specific clinical trials based on cytokine-induced killer (CIK) cell therapy and into pre-clinical lymphoma models in which CIK cells have been used along with other synergistic tumor-targeting immune modules to improve their therapeutic potential. From a broader perspective, we highlight that CIK cell therapy has potential, and that in this rapidly evolving landscape of cancer therapies its optimization (as a personalized therapeutic approach) will be beneficial in lymphomas.
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
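To make the streaming idea concrete, here is a minimal sketch of double-buffered time-step prefetching in Python; the file naming, the loader, and the two GPU-side calls are purely hypothetical stand-ins and not the paper's actual implementation.

```python
import threading, queue
import numpy as np

# Hypothetical per-time-step files, e.g. "step_0000.raw" ... "step_0422.raw",
# each holding one scalar field defined on the fixed AMR hierarchy.
NUM_STEPS = 423

def load_step(i):
    # Placeholder loader; an exa-scale viewer would memory-map or stream
    # compressed bricks instead of reading a dense array from disk.
    return np.fromfile(f"step_{i:04d}.raw", dtype=np.float32)

def upload_to_gpu(field):
    # Hypothetical stand-in for copying the field into a GPU staging buffer.
    pass

def draw_frame(step):
    # Hypothetical stand-in for rendering one frame of the animation.
    pass

def prefetcher(q, steps):
    # Producer thread: keep at most two decoded time steps in flight so the
    # render loop never waits on disk and the animation stays smooth.
    for i in steps:
        q.put((i, load_step(i)))
    q.put(None)  # sentinel: end of the time series

def render_loop():
    q = queue.Queue(maxsize=2)
    threading.Thread(target=prefetcher, args=(q, range(NUM_STEPS)), daemon=True).start()
    while (item := q.get()) is not None:
        step, field = item
        upload_to_gpu(field)   # overlap upload of step i with drawing of step i-1
        draw_frame(step)
```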
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
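As a rough CPU-side illustration of the reformulation described above (a sketch of the idea, not the RT-core code path): each cell is treated as an axis-aligned box, and a point query conceptually becomes a degenerate, epsilon-length ray whose BVH traversal reports the box containing the point.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Box:          # axis-aligned bounding box of one cell
    lo: tuple
    hi: tuple
    cell_id: int

def contains(box: Box, p) -> bool:
    return all(box.lo[k] <= p[k] <= box.hi[k] for k in range(3))

def point_query(boxes: List[Box], p) -> Optional[int]:
    # CPU reference for the query the RT cores accelerate: on the GPU, the
    # point is turned into an epsilon-length ray, the BVH built over the cell
    # boxes is traversed by hardware, and a custom intersection program
    # reports the cell whose box contains the point.
    for box in boxes:   # a real implementation traverses a BVH instead
        if contains(box, p):
            return box.cell_id
    return None

# toy usage
cells = [Box((0, 0, 0), (1, 1, 1), 0), Box((1, 0, 0), (2, 1, 1), 1)]
print(point_query(cells, (1.5, 0.5, 0.5)))  # -> 1
```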
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate between gravity and radiation exposure on single-gene deletion/overexpressing strains' ability to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
Extremophiles are optimal models for experimentally addressing questions about the effects of cosmic radiation on biological systems. The resistance to high charge and energy (HZE) particles, namely helium (He) ions and iron (Fe) ions (LET of 2.2 and 200 keV/µm, respectively, up to 1000 Gy), of spores from two thermophiles, Bacillus horneckiae SBP3 and Bacillus licheniformis T14, and two psychrotolerants, Bacillus sp. A34 and A43, was investigated. Spores survived He irradiation better, whereas they were more sensitive to Fe irradiation (up to 500 Gy), with spores from thermophiles being more resistant to irradiation than those from psychrotolerants. The surviving spores showed different germination kinetics, depending on the type/dose of irradiation and the germinant used. After exposure to 1000 Gy of He, D-glucose increased the lag time of thermophilic spores and induced germination of psychrotolerants, whereas L-alanine and L-valine increased the germination efficiency, except alanine for A43. FTIR spectra showed important modifications to the structural components of spores after Fe irradiation at 250 Gy, which could explain the block in spore germination, whereas minor changes were observed after He irradiation that could be related to increased permeability of the inner membranes and alterations of receptor complex structures. Our results give new insights into the HZE resistance of extremophiles that are useful in different contexts, including astrobiology.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among other things, are collected; this poster presents an overview of their availability.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018 according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
Sustainable and future-proof mobility in cities can only be achieved in the long term through the active participation of their citizens and institutions. Corporate mobility management (Betriebliches Mobilitätsmanagement, BMM) can make a positive contribution with respect to the environment, health and costs. The present work deals with the perception of the health-related and financial value-creation aspects of BMM. Within the research project "Betriebe lösen Verkehrsprobleme" (companies solve traffic problems), mobility behavior and workplace health promotion (Betriebliche Gesundheitsförderung, BGF) measures in Bonn-based companies are examined. Particular attention is paid to the following aspects: the importance of workplace health promotion in Bonn-based companies, the mobility behavior of employees on their way to work, and the perception of a direct link between physical activity and health, or between lack of exercise and illness-related costs and losses in revenue. The analysis is based on a written survey of 178 companies, an online survey of 1,341 employees from 14 companies, and personal interviews with 22 managers or mobility and health officers. The results of the study reveal both a need for action and potential for optimization of BMM on the company side. Cost simulations further show that implementing BGF measures, explicitly the promotion of physical activity, can save considerable health-related costs at the company and national-economy level and generate higher profits within companies.
The dawn of the 21st century has witnessed a tremendous increase in trade pacts among nations, resulting in renewed hopes for sustainable enterprise development in emerging economies worldwide. Ghana and other sub-Saharan African (SSA) countries have signed onto several North-South and South-South free trade agreements in the hope of strengthening their presence in the international trade arena and promoting economic growth in SSA. For over two decades, however, very little has changed, and many of these high hopes have been dashed as enterprises continue to struggle in SSA. Not even the African Continental Free Trade Agreement (AfCFTA) could renew the hopes of sceptics. Several studies have argued that enterprises in SSA could improve their domestic and international competitiveness by establishing mutually beneficial partnerships with their counterparts from the Global North and South. This study delved into the issues that affect North-South and South-South business collaborations and recommends key success factors that could help promote mutually beneficial cross-border business partnerships. The research includes both literature and empirical information on the key success factors of business partnerships between African enterprises as well as between African enterprises and firms from the Global North. We approached the study qualitatively using a phenomenological research design. Research participants included important stakeholders in Africa's and Europe's international trade and sustainable enterprise development ecosystems. The study identified several challenges with the current business collaborations and recommended new ways of making such partnerships more beneficial.
Cytokine-induced killer (CIK) cells are an ex vivo expanded heterogeneous cell population with an enriched NK-T phenotype (CD3+CD56+). Due to the convenient and relatively inexpensive expansion capability, together with low incidence of graft versus host disease (GVHD) in allogeneic cancer patients, CIK cells are a promising candidate for immunotherapy. It is well known that natural killer group 2D (NKG2D) plays an important role in CIK cell-mediated antitumor activity; however, it remains unclear whether its engagement alone is sufficient or if it requires additional co-stimulatory signals to activate the CIK cells. Likewise, the role of 2B4 has not yet been identified in CIK cells. Herein, we investigated the individual and cumulative contribution of NKG2D and 2B4 in the activation of CIK cells. Our analysis suggests that (a) NKG2D (not 2B4) is implicated in CIK cell (especially CD3+CD56+ subset)-mediated cytotoxicity, IFN-γ secretion, E/T conjugate formation, and degranulation; (b) NKG2D alone is adequate enough to induce degranulation, IFN-γ secretion, and LFA-1 activation in CIK cells, while 2B4 only provides limited synergy with NKG2D (e.g., in LFA-1 activation); and (c) NKG2D was unable to costimulate CD3. Collectively, we conclude that NKG2D engagement alone suffices to activate CIK cells, thereby strengthening the idea that targeting the NKG2D axis is a promising approach to improve CIK cell therapy for cancer patients. Furthermore, CIK cells exhibit similarities to classical invariant natural killer (iNKT) cells with deficiencies in 2B4 stimulation and in the costimulation of CD3 with NKG2D. In addition, based on the current data, the divergence in receptor function between CIK cells and NK (or T) cells can be assumed, pointing to the possibility that molecular modifications (e.g., using chimeric antigen receptor technology) on CIK cells may need to be customized and optimized to maximize their functional potential.
Healing of large bone defects requires implants or scaffolds that provide structural guidance for cell growth, differentiation, and vascularization. In the present work, an agarose-hydroxyapatite composite scaffold was developed that acts not only as a 3D matrix but also as a release system. Hydroxyapatite (HA) was incorporated into the agarose gels in situ in various ratios by a simple procedure consisting of precipitation, cooling, washing, and drying. The resulting gels were characterized regarding composition, porosity, mechanical properties, and biocompatibility. A pure phase of carbonated HA was identified in the scaffolds, which had pore sizes of up to several hundred micrometers. Mechanical testing revealed elastic moduli of up to 2.8 MPa for lyophilized composites. MTT testing on human mesenchymal stem cells (hMSCs) and osteosarcoma MG-63 cells proved the biocompatibility of the scaffolds. Furthermore, the scaffolds were loaded with model drug compounds for guided hMSC differentiation. Different release kinetic models were evaluated for adenosine 5′-triphosphate (ATP) and suramin, and the data showed a sustained release behavior over four days.
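The release-kinetics evaluation mentioned above can be illustrated with a small curve-fitting sketch; the cumulative-release values below are invented for demonstration, and the model set (zero order, first order, Higuchi, Korsmeyer-Peppas) is a common choice rather than necessarily the one used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical cumulative release (%) over four days
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])       # days
q = np.array([12.0, 18.0, 27.0, 39.0, 49.0, 56.0])   # % released

models = {
    "zero-order":       lambda t, k:    k * t,
    "first-order":      lambda t, k:    100.0 * (1.0 - np.exp(-k * t)),
    "Higuchi":          lambda t, k:    k * np.sqrt(t),
    "Korsmeyer-Peppas": lambda t, k, n: k * t**n,
}
p0 = {"zero-order": [15.0], "first-order": [0.3],
      "Higuchi": [28.0], "Korsmeyer-Peppas": [25.0, 0.5]}

for name, f in models.items():
    popt, _ = curve_fit(f, t, q, p0=p0[name], maxfev=10000)
    resid = q - f(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((q - q.mean())**2)
    print(f"{name:16s} params={np.round(popt, 3)}  R^2={r2:.3f}")
```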
Bone tissue engineering is an ever-changing, rapidly evolving, and highly interdisciplinary field of study in which scientists try to mimic natural bone structure as closely as possible in order to facilitate bone healing. New insights from cell biology, specifically from mesenchymal stem cell differentiation and signaling, lead to new approaches in bone regeneration. Novel scaffold and drug release materials based on polysaccharides are gaining increasing attention due to their wide availability and good biocompatibility, and they can be used as hydrogels and/or hybrid components for drug release and tissue engineering. This article reviews the current state of the art, recent developments, and future perspectives in polysaccharide-based systems used for bone regeneration.
Renewable resources are gaining increasing interest as a source of environmentally benign biomaterials, such as drug encapsulation/release compounds and scaffolds for tissue engineering in regenerative medicine. As lignin is the second most abundant natural polymer, interest in its valorization for biomedical applications is growing rapidly. Depending on its source and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release as well as on lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Polyether- and polyether/ester-based TPUs (thermoplastic polyurethanes) were investigated with wide-angle XRD (X-ray diffraction) and SAXS (small-angle X-ray scattering). Furthermore, SAXS measurements were performed in the temperature range from 30 °C to 130 °C. Polyether-based polymers exhibit only one broad diffraction signal in the 2θ region of 15° to 25°. In the case of polyurethanes with ether/ester modification, the broad diffraction signal is accompanied by small sharp diffraction signals. The SAXS measurements reveal the size and shape of the crystalline zones of the polymers. Between 30 °C and 130 °C the size of the crystalline zones changes significantly; it decreases in most of the investigated TPUs. In the case of Desmopan 9365D, an increase in particle size was observed.
Approximately 45% of global greenhouse gas emissions are caused by the construction and use of buildings. Thermal insulation of buildings in the current context of climate change is a well-known strategy to improve the energy efficiency of buildings. The development of renewable insulation material can overcome the drawbacks of widely used insulation systems based on polystyrene or mineral wool. This study analyzes the sustainability and thermal conductivity of new insulation materials made of Miscanthus x giganteus fibers, foaming agents, and alkali-activated fly ash binder. Life cycle assessments (LCA) are necessary to perform benchmarking of environmental impacts of new formulations of geopolymer-based insulation materials. The global warming potential (GWP) of the product is primarily determined by the main binder component sodium silicate. Sodium silicate's CO2 emissions depend on local production, transportation, and energy consumption. The results, which have been published during recent years, vary in a wide range from 0.3 kg to 3.3 kg CO2-eq. kg-1. The overall GWP of the insulation system based on Miscanthus fibers, with properties according to current thermal insulation regulations, reaches up to 95% savings of CO2 emissions compared to conventional systems. Carbon neutrality can be achieved through formulations containing raw materials with carbon dioxide emissions and renewable materials with negative GWP, thus balancing CO2 emissions.
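The binder-dominated GWP accounting sketched above reduces to a mass-weighted sum of component emission factors; the fractions and factors below are purely illustrative placeholders, not values from the study.

```python
# Cradle-to-gate GWP of 1 kg of insulation material as a mass-weighted sum.
# All numbers are illustrative placeholders, not values from the study.
components = {                       # mass fraction, kg CO2-eq. per kg component
    "sodium silicate":    (0.30,  1.4),
    "fly ash":            (0.35,  0.1),
    "miscanthus fibres":  (0.30, -1.6),   # biogenic carbon stored in the fibres
    "foaming agent":      (0.05,  2.0),
}

gwp = sum(frac * ef for frac, ef in components.values())
print(f"GWP = {gwp:+.3f} kg CO2-eq. per kg insulation")
# 0.42 + 0.035 - 0.48 + 0.10 = +0.075 kg CO2-eq., i.e. close to carbon neutral
```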
The clear-sky radiative effect of aerosol–radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. Considered are measurements of the short-wave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are the Modified MAC model (MMAC), the Meteorological Radiation Model (MRM) v6.1, the Meteorological–Statistical solar radiation model (METSTAT), the European Solar Radiation Atlas (ESRA), Heliosat-1, the Center for Environment and Man solar radiation model (CEM), and the simplified Solis model. The definition of aerosol and atmospheric characteristics of the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated to −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the REari at the surface for the 25 DWD stations shows a value of −12.8 ± 5 W m−2 as the average over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterization, the annual cycle of the aerosol radiation effect cannot be reproduced. Out of this set of clear-sky models, the largest level of agreement is shown by the ESRA and MRM v6.1 models.
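For orientation, the two central quantities of this abstract can be written generically (a textbook-style formulation, not necessarily the exact notation used in the study): the direct radiative effect of aerosol-radiation interactions (REari) is the difference in net clear-sky irradiance with and without aerosol, and the radiative kernel method propagates uncertainties of the aerosol optical properties into an REari uncertainty.

```latex
% Direct radiative effect of aerosol-radiation interactions
% at the surface or the top of atmosphere:
\mathrm{REari} = F_{\mathrm{aerosol}} - F_{\mathrm{aerosol\text{-}free}}

% Kernel-based uncertainty estimate, with x_i the aerosol optical properties
% (AOD, Angstrom exponent, single scattering albedo, asymmetry parameter)
% and the partial derivatives acting as radiative kernels:
\Delta(\mathrm{REari}) \approx \sum_i \frac{\partial F}{\partial x_i}\,\Delta x_i
```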
When transmitting and storing data, an essential question is to what extent the data can be compressed without losing their information content.
A measure of the information content of data is therefore of fundamental importance. About seventy years ago, C. E. Shannon introduced such a measure and thereby founded the field of teaching and research known as information theory, which has since contributed substantially to the design and realization of information and communication technologies. About twenty years later, A. N. Kolmogorov introduced a different measure of the information content of data. While Shannon's information theory is part of the curriculum of mathematics, computer science and electrical engineering programs, Kolmogorov's algorithmic information theory is far less well known and tends to be the subject of specialized courses.
For some years now, however, interest in this theory has been increasing, especially since the relevant literature reports successful practical applications of it. The present work gives an introduction to the fundamental ideas of this theory and describes its possible applications to selected problems in theoretical computer science.
The text can be used as lecture notes for introductory courses on algorithmic information theory, as reading material for familiarizing oneself with the topic, and as a starting point for research and development work.
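One practical application of these ideas reported in the literature is compression-based similarity measurement; the following minimal sketch approximates the (uncomputable) Kolmogorov complexity K(x) by the length of a zlib-compressed encoding and computes the Normalized Compression Distance.

```python
import zlib

def C(data: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity K(x).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance (Cilibrasi & Vitanyi):
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(ncd(a, b))  # small: the two texts share most of their structure
print(ncd(a, c))  # close to 1: little shared structure
```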
The development of mobile robotic systems is a demanding task regarding its complexity, required resources and skills in multiple fields such as software development, artificial intelligence, mechanical design, electrical engineering, signal processing, sensor technology or control theory. This holds true particularly for soccer playing robots, where additional aspects like high dynamics, cooperation and high physical stress have to be dealt with. In robot competitions such as RoboCup, additional skills in the domains of team, project and knowledge management are of importance.
This contribution examines how an on-site laboratory can be supplemented and partly replaced by a remote laboratory. To this end, the digital-electronics lab course (Digitaltechnik) at Hochschule Bonn-Rhein-Sieg is considered, where a remote laboratory offers flexibility in carrying out the experiments and enables experiments that would not be possible with the on-site laboratory alone. Besides supplementing the on-site experiments, students can also complete the lab course entirely in the remote laboratory. Clear requirements for obtaining the course certificate make this manageable both for them and for the teaching staff. Student feedback and usage figures confirm the acceptance of the remote laboratory. The students turn out to use it in very heterogeneous ways: some use the remote laboratory as additional lab time for experiments that would also be possible in the on-site laboratory; others use it to extend the lab course with experiments that are only possible remotely; and yet others work intensively in the remote laboratory and also submit their lab report electronically. For the teaching staff, the report and the analysis of the usage data provide sufficient certainty to certify active participation in the lab course.
Since 2012, the study entry phase at Hochschule Bonn-Rhein-Sieg has been funded within the Qualitätspakt Lehre (Teaching Quality Pact). A central concern of the project "Pro-MINT-us" is to involve the entire university, so as not to offer isolated measures but to anchor the teaching ideas developed in the project sustainably.
Improving the study entry phase supports students in a decisive phase of their university education. Implementing improvements is a change process and can only be successful if the relevant stakeholders are addressed and convinced. In the Teaching Quality Pact project described here, evaluation data are used as a means to discuss the situation of the study programs within the university. As these discussions were based on empirical data rather than on opinion, it was possible to achieve an open discussion about the measures that are implemented. This open discussion is maintained during the project as the results of the measures taken are analyzed.
Low power dissipation is a current topic in digital design and should therefore be covered in a state-of-the-art electrical engineering curriculum. This paper describes how low-power design can be addressed within a digital design course. Doing so is beneficial for both topics: low-power design is not taught detached from the systems perspective, and the digital design course is enriched by references to current challenges and applications. The presented course should thus serve as an example of how a course can be developed to also teach students about sustainable engineering.
(1) Background: The potency of drugs that interfere with glucose metabolism, i.e., inhibitors of glucose transporters (GLUT) and of nicotinamide phosphoribosyltransferase (NAMPT), was analyzed in neuroendocrine tumor (NET; BON-1 and QPG-1 cells) and small cell lung cancer (SCLC; GLC-2 and GLC-36 cells) cell lines. (2) Methods: The proliferation and survival rate of tumor cells was significantly affected by the GLUT inhibitors fasentin and WZB1127, as well as by the NAMPT inhibitors GMX1778 and STF-31. (3) Results: None of the NET cell lines treated with NAMPT inhibitors could be rescued with nicotinic acid (usage of the Preiss–Handler salvage pathway), although NAPRT expression could be detected in two NET cell lines. We finally analyzed the specificity of GMX1778 and STF-31 in NET cells in glucose uptake experiments. As previously shown for STF-31 in a panel of tumor cell lines excluding NET, both drugs specifically inhibited glucose uptake at higher (50 μM), but not at lower (5 μM), concentrations. (4) Conclusions: Our data suggest that GLUT and especially NAMPT inhibitors are potential candidates for the treatment of NET tumors.
In thyroid carcinoma cells, the soluble β-galactoside-specific lectin galectin-3 is extra- and intracellularly expressed and plays a significant role in thyroid cancer diagnosis. The functional relevance of this molecule, particularly in its extracellular environment, however, warrants further elucidation. To gain insight into this topic, the present study characterized principal functional properties of galectin-3 in 3 commonly used thyroid carcinoma cell lines (BCPAP, Cal62 and FTC133) that express the molecule intra- and extracellularly. Cell-intrinsic galectin-3 harbors a functional carbohydrate recognition domain as determined by affinity purification. Moreover, cell surface-expressed galectin-3 can be partially removed by treatment with lactose or asialofetuin, but not with sucrose. Thyroid carcinoma cells adhere to substrate-bound galectin-3 in a β-galactoside-specific manner, whereby only cell adhesion, but not cell migration, is promoted. Thus, thyroid tumor cells harbor functionally active galectin-3 that, inter alia, specifically interacts with cell surface-expressed molecular ligands in a β-galactoside-dependent manner, whereby the molecule can at least interfere with cell adhesion. The modulation of the galectin-3 expression level or its ligands in such tumor cells could be of therapeutic interest and needs further experimental clarification.
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choice of the collision model and of the stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
This work thoroughly investigates a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows. In contrast to other LBMs for compressible flows, the vertices are organized in cells, and interpolation polynomials of up to fourth order are used to attain the off-vertex distribution function values. Differing from the recently introduced Particles on Demand (PoD) method, the present method operates in a static, non-moving reference frame. Nevertheless, the SLLBM in the present formulation supports supersonic flows and exhibits a high degree of Galilean invariance. The SLLBM solver allows for an independent time step size due to the integration along characteristics and for the use of unusual velocity sets, like the D2Q25, which is constructed from the roots of the fifth-order Hermite polynomial. The properties of the present model are shown in diverse example simulations of a two-dimensional Taylor-Green vortex, a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
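The D2Q25 construction mentioned above can be sketched in a few lines: the abscissae of the degree-five Gauss-Hermite quadrature, i.e. the roots of the fifth-order Hermite polynomial, serve as the one-dimensional discrete velocities, and the two-dimensional set with its weights follows from the tensor product. This is a generic sketch of the standard construction, not the authors' code.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# 1D Gauss-Hermite quadrature of degree 5 (weight function exp(-x^2/2)):
# the nodes are the roots of the fifth-order (probabilists') Hermite polynomial.
xi, w = hermegauss(5)
w = w / np.sqrt(2.0 * np.pi)   # normalize so the 1D weights sum to 1

# D2Q25: tensor product of the 1D nodes and weights.
velocities = np.array([(cx, cy) for cx in xi for cy in xi])
weights = np.array([wx * wy for wx in w for wy in w])

print(velocities.shape)   # (25, 2)
print(weights.sum())      # ~1.0
print(xi)                 # 0 and +/- sqrt(5 +/- sqrt(10))
```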
Turbulent compressible flows are traditionally simulated using explicit Eulerian time integration applied to the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which elegantly circumvents this restriction. Previous lattice Boltzmann methods for compressible flows were mostly restricted to two dimensions due to the enormous number of discrete velocities needed in three dimensions. In contrast, this Rapid Communication demonstrates how cubature rules enhance the SLLBM to yield a three-dimensional velocity set with only 45 discrete velocities. Based on simulations of a compressible Taylor-Green vortex we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques, even when the time step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers for the first time to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time step sizes is only dictated by physics, while being decoupled from the spatial discretization.
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which have not been produced by the Gauß product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauß-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor–Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor–Green vortex flows at the Mach numbers considered, on-lattice simulations with the velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
The identification of energetic materials in containments is an important challenge for analytical methods in the field of safety and security. Opening a package without knowledge of its contents and the resulting hazards involves considerable risk and should be avoided whenever possible. Preferable methods therefore work non-destructively, with minimal interaction, and are capable of identifying target substances in a containment quickly and reliably. Most spectroscopic methods reach their limits if the target substance is shielded by a covering material. To solve this problem, a combined laser drilling method with subsequent identification of the target substance by Raman spectroscopic measurements through microscopic bore holes in the covering material is presented. A pulsed laser beam is used both for the drilling process and as an excitation source for the Raman measurements in the same optical setup. The results show the ability of this new method to acquire high-quality spectra even when measuring through microscopically small bore channels. With suitably chosen laser parameters, the method can even be applied to highly sensitive explosives such as triacetone triperoxide (TATP). A further advantageous effect is an observed reduction in unwanted fluorescence signal in the spectral data, resulting from the confocal-like measurement setup with the bore hole acting as an aperture.
This research project addresses sustainable behavior with respect to the use of coffee containers at the HBRS. The motivation is that paper cups are difficult to recycle because of their plastic coating and thus place a considerable burden on the environment. In this context, 204 students took part in an online survey. According to the results, single-use paper cups are currently the predominant choice. Modifying this environmentally harmful behavior requires suitable intervention strategies. Based on the results, measures should be implemented that counteract the deficit in action-related knowledge and the high effort associated with using one's own cup or the available porcelain cups. Once the ecological benefits and financial feasibility have been ensured, the existing deposit system should be extended with more practical cups and flexible return options. In addition, a reward in the form of free drinks or a small financial discount is advisable in order to break the habitual use of paper cups.
Digital ecosystems are driving the digital transformation of business models. Meanwhile, the associated processing of personal data within these complex systems poses challenges to the protection of individual privacy. In this paper, we explore these challenges from the perspective of digital ecosystems' platform providers. To this end, we present the results of an interview study with seven data protection officers representing a total of 12 digital ecosystems in Germany. We identified current and future challenges for the implementation of data protection requirements, covering issues on legal obligations and data subject rights. Our results support stakeholders involved in the implementation of privacy protection measures in digital ecosystems, and form the foundation for future privacy-related studies tailored to the specifics of digital ecosystems.
Risk-based authentication (RBA) extends authentication mechanisms to make them more robust against account takeover attacks, such as those using stolen passwords. RBA is recommended by NIST and the NCSC to strengthen password-based authentication, and is already used by major online services. Users also consider RBA to be more usable than two-factor authentication and just as secure. However, users currently obtain RBA's high security and usability benefits at the cost of exposing potentially sensitive personal data (e.g., IP address or browser information). This conflicts with user privacy and requires consideration of user rights regarding the processing of personal data. We outline potential privacy challenges regarding different attacker models and propose improvements to balance privacy in RBA systems. To estimate the properties of the privacy-preserving RBA enhancements in practical environments, we evaluated a subset of them with long-term data from 780 users of a real-world online service. Our results show the potential to increase privacy in RBA solutions. However, it is limited to certain parameters that should guide RBA design to protect privacy. We outline research directions that need to be considered to achieve widespread adoption of privacy-preserving RBA with high user acceptance.
Risk-based Authentication (RBA) is an adaptive security measure that improves the security of password-based authentication by protecting against credential stuffing, password guessing, and phishing attacks. RBA monitors extra features during login and requests an additional authentication step if the observed feature values deviate from the usual ones in the login history. In state-of-the-art RBA re-authentication deployments, users receive an email with a numerical code in its body, which must be entered on the online service. Although this procedure has a major impact on RBA's time exposure and usability, these aspects have not been studied so far.
We introduce two RBA re-authentication variants supplementing the de facto standard with a link-based and another code-based approach. Then, we present the results of a between-group study (N=592) to evaluate these three approaches. Our observations show, with significant results, that there is potential to speed up the RBA re-authentication process without reducing either its security properties or its security perception. The link-based re-authentication via "magic links", however, makes users significantly more anxious than the code-based approaches when encountered for the first time. Our evaluations underline the fact that RBA re-authentication is not a uniform procedure. We summarize our findings and provide recommendations.
Risk-based authentication (RBA) aims to protect users against attacks involving stolen passwords. RBA monitors features during login and requests re-authentication when feature values differ widely from those previously observed. It is recommended by various national security organizations, and users perceive it as more usable than, and as secure as, equivalent two-factor authentication. Despite that, RBA is still used by very few online services. Reasons for this include a lack of validated open resources on RBA properties, implementation, and configuration. This effectively hinders RBA research, development, and adoption.
To close this gap, we provide the first long-term RBA analysis of a real-world large-scale online service. We collected feature data of 3.3 million users and 31.3 million login attempts over more than 1 year. Based on the data, we provide (i) studies on RBA's real-world characteristics plus its configurations and enhancements to balance usability, security, and privacy; (ii) a machine learning–based RBA parameter optimization method to support administrators in finding an optimal configuration for their own use case scenario; (iii) an evaluation of the round-trip time feature's potential to replace the IP address for enhanced user privacy; and (iv) a synthesized RBA dataset to reproduce this research and to foster future RBA research. Our results provide insights on selecting an optimized RBA configuration so that users profit from RBA after just a few logins. The open dataset enables researchers to study, test, and improve RBA for widespread deployment in the wild.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
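The feature monitoring and risk classification described in these abstracts can be illustrated with a simple history-based score in the spirit of published RBA schemes; the features, weights, smoothing, and threshold below are illustrative assumptions, not the configuration derived in the paper.

```python
# Minimal sketch of a history-based RBA risk score: the rarer the observed
# feature values are in the user's login history, the higher the risk.
history = [
    {"ip_country": "DE", "browser": "Firefox", "os": "Linux"},
    {"ip_country": "DE", "browser": "Firefox", "os": "Linux"},
    {"ip_country": "DE", "browser": "Chrome",  "os": "Linux"},
]

weights = {"ip_country": 0.6, "browser": 0.3, "os": 0.1}  # illustrative only

def risk_score(login, history, weights, smoothing=1.0):
    score = 0.0
    for feature, weight in weights.items():
        seen = sum(1 for h in history if h[feature] == login[feature])
        # Smoothed relative frequency of this feature value in the history;
        # values never seen before still get a small, non-zero probability.
        p = (seen + smoothing) / (len(history) + 2 * smoothing)
        score += weight * (1.0 - p)
    return score

attempt = {"ip_country": "US", "browser": "Firefox", "os": "Linux"}
score = risk_score(attempt, history, weights)
print(round(score, 3))
if score > 0.5:  # illustrative threshold
    print("medium/high risk: trigger re-authentication (e.g., email code)")
```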
Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA have not been studied well.
We present the results of a between-group lab study (n=65) to evaluate the usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows, with significant results, that RBA is considered to be more usable than the studied 2FA variant, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA across a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of users' perception of RBA and helps to improve RBA implementations for broader user acceptance.
Risk-based authentication (RBA) is an adaptive security measure to strengthen password-based authentication against account takeover attacks. Our study on 65 participants shows that users find RBA more usable than two-factor authentication equivalents and more secure than password-only authentication. We identify pitfalls and provide guidelines for putting RBA into practice.
This study aimed to validate a self-efficacy assessment instrument related to occupational health that was developed in a preliminary study phase. The Skala Efikasi Diri untuk Kesehatan Kerja (SEDKK; Self-Efficacy Scale for Occupational Health) is grounded in the self-efficacy concept of social cognitive theory and measures four factors that influence the health of every working individual: eating and drinking behavior, sleep, occupational safety and health, and recovery activities from work stress. Exploratory factor analysis showed that four factors are reflected in the SEDKK items. The construct validity of the SEDKK was supported by a highly significant positive correlation between the SEDKK and a General Self-Efficacy scale. Criterion validity was examined through the effects of the SEDKK on general health, satisfaction with personal health, work-life balance, healthy behavior, and risk behavior. However, the assumption of test-retest reliability was rejected in this study. Implications and suggestions for future research are discussed in this article.
Many workers experience their jobs as effortful or even stressful, which can result in strain. Although recovery from work would be an adaptive strategy to prevent the adverse effects of work-related strain, many workers face problems finding enough time to rest and to mentally disconnect from work during nonwork time. What goes on in workers’ minds after a stressful workday? What is it about their jobs that makes them think about their work? This special issue aims to bridge the gap between research on recovery processes mainly examined in Occupational Health Psychology, and research on work stress and working hours, often investigated in the field of Human Resource Management. We first summarize conceptual and theoretical streams from both fields of research. In the following, we discuss the contributions of the five special issue papers and conclude with key messages and directions for further research.
Recessive mutations in the MPV17 gene cause mitochondrial DNA depletion syndrome, a fatal infantile genetic liver disease in humans. Loss of function in mice leads to glomerulosclerosis and sensorineural deafness accompanied by mitochondrial DNA depletion. Mutations in the yeast homolog Sym1 and in the zebrafish homolog tra cause interesting, but not obviously related, phenotypes, although the human gene can complement the yeast Sym1 mutation. The MPV17 protein is a hydrophobic membrane protein of 176 amino acids and unknown function. Initially localised in murine peroxisomes, it was later reported to be a mitochondrial inner membrane protein in humans and in yeast. To resolve this contradiction we tested two new mouse monoclonal antibodies directed against the human MPV17 protein in Western blots and immunohistochemistry on human U2OS cells. One of these monoclonal antibodies showed specific reactivity to a protein of 20 kD absent in MPV17 negative mouse cells. Immunofluorescence studies revealed colocalisation with peroxisomal, endosomal and lysosomal markers, but not with mitochondria. These data reveal a novel connection between a possible peroxisomal/endosomal/lysosomal function and mitochondrial DNA depletion.
Work-related thoughts during off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of the research on overcommitment—a component within the effort–reward imbalance model—and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we apply exploratory factor analysis to self-reported survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we apply confirmatory factor analysis to self-reported survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to assess the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study is intended to assist researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on the effort–reward imbalance model and work-related rumination.
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, high frequencies of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of continuous experience of positive events. Our study adds a temporal component and informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that, to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, a high frequency of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of a continuous experience of positive events. Our study adds a temporal component by highlighting that positive events affect work engagement, particularly in light of recent negative events. Our study informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
In the literature on occupational stress and recovery from work, several facets of thinking about work in off-job time have been conceptualized. However, research on the focal concepts is currently rather fragmented. In this study we take a closer look at the five most established concepts, namely (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence that (1) the five facets of work-related rumination are highly related, yet empirically distinct, (2) that each facet contributes uniquely to explaining variance in certain aspects of employee well-being, and (3) that they are distinct from related concepts, although there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study contributes to clarifying the structure of work-related rumination and extends the nomological network around different types of thinking about work in off-job time and employee well-being.
In the literature on occupational stress and recovery from work, several facets of thinking about work during off-job time have been conceptualized. However, research on the focal concepts is currently rather diffuse. In this study we take a closer look at the five most well-established concepts: (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence for (1) five correlated, yet empirically distinct facets of work-related rumination. (2) Each facet yields a unique pattern of association with the eight aspects of employee well-being. For instance, detachment is strongly linked to satisfaction with life and flourishing. Affective rumination is linked particularly to burnout. Problem-solving pondering and positive work reflection yield the strongest links to work engagement. (3) The five facets of work-related rumination are distinct from related concepts, although there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study contributes to clarifying the structure of work-related rumination and extends the nomological network around different types of thinking about work during off-job time and employee well-being.
Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take initiative to do open science in a way compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions of open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.
Generating and visualizing large areas of vegetation that look natural makes terrain surfaces much more realistic. However, this is a challenging field in computer graphics, because ecological systems are complex and visually appealing plant models are geometrically detailed. This work presents Silva (System for the Instantiation of Large Vegetated Areas), a system to generate and visualize large vegetated areas based on the ecological surroundings. Silva generates vegetation on Wang tiles with associated reusable distributions, enabling multi-level instantiation. This paper presents a method to generate Poisson Disc Distributions (PDDs) with variable radii on Wang tile sets (without a global optimization) that is able to generate seamless tilings. Because Silva has a freely configurable generation pipeline and can consider plant neighborhoods, it is able to incorporate arbitrary abiotic and biotic components during generation. Based on multi-level instancing and nested kd-trees, the distributions on the Wang tiles allow their acceleration structures to be reused during visualization. This enables Silva to visualize large vegetated areas of several hundred square kilometers with low render times and a small memory footprint.
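To make the tile-level distribution step concrete, the following minimal Python sketch generates a Poisson disc distribution with variable radii inside a single tile by naive dart throwing. It is an illustration only, not the Silva pipeline, which additionally keeps distributions consistent across Wang tile edges to obtain seamless tilings; tile size, radius range, and the attempt limit are assumed values.

import math
import random

def poisson_disc_variable_radii(tile_size=100.0, r_min=1.0, r_max=4.0, max_attempts=5000):
    """Naive dart throwing: a candidate position with a random radius is accepted
    only if its disc does not overlap any previously accepted disc."""
    samples = []  # accepted samples as (x, y, radius)
    for _ in range(max_attempts):
        x, y = random.uniform(0.0, tile_size), random.uniform(0.0, tile_size)
        r = random.uniform(r_min, r_max)
        if all(math.hypot(x - sx, y - sy) >= r + sr for sx, sy, sr in samples):
            samples.append((x, y, r))
    return samples

if __name__ == "__main__":
    plants = poisson_disc_variable_radii()
    print(f"placed {len(plants)} plants in one tile")

In a tiling context, the same accepted distribution is reused for every instance of a tile, which is what makes multi-level instantiation and the reuse of acceleration structures possible.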
The synthesis and characterization of a new class of 1,2,4-oxadiazolylpyridinium salts as a cationic scaffold for fluorinated ionic liquid crystals is herein described. A series of 12 fluorinated heterocyclic salts based on a 1,2,4-oxadiazole moiety, connected through its C(5) or C(3) to an N-alkylpyridinium unit and a perfluoroheptyl chain, differing in the length of the alkyl chain and the counterions, has been synthesized. Iodide, bromide and bis(trifluoromethane)sulfonimide have been considered as counterions. The synthesis, structure, and liquid crystalline properties of these compounds are discussed on the basis of the tuned structural variables. The thermotropic properties of this series of salts have been investigated by differential scanning calorimetry and polarized optical microscopy. The results showed the existence of an enantiotropic mesomorphic smectic liquid crystalline phase for six bis(trifluoromethane)sulfonimide salts.
In this work, the surface reactions of the homemade explosive triacetone triperoxide on tungsten oxide (WO3) sensor surfaces are studied to obtain detailed information about the chemical reactions taking place. Semiconductor gas sensors based on WO3 nanopowders are produced for this purpose and characterized by scanning electron microscopy, X-ray diffraction, and Raman spectroscopy. To analyze the reaction mechanisms at the sensor surface, the sensor is monitored online under operating conditions using Raman spectroscopy, which makes it possible to identify the temperature-dependent sensor reactions. By combining information from the Raman spectra with data on the changing resistivity of the underlying semiconductor, it is possible to establish a correlation between the adsorbed gas species and the physical properties of the WO3 layer. The results indicate that a Lewis acid–base reaction is the most likely mechanism for the increase in resistance observed at temperatures below 150 °C, while at higher temperatures they support the assumption of a radical mechanism that causes a decrease in resistance.
Kenya, like all other developing countries in the world, is faced with the task of working strategically towards the achievement of the Sustainable Development Goals (SDGs) 2030. These goals, whose target date coincides with that of the national development blueprint, the Kenya Vision 2030, have become a major focus of attention in the country. Conferences, workshops, and seminars are organized throughout the country on a regular basis by a multiplicity of organizations to address modalities for ensuring a timely achievement of the SDGs in the country. Universities, either individually or jointly, are working towards this same target. More specifically, there are priority areas that the country is focusing on strategically towards the achievement of the Kenya Vision 2030 and the SDGs 2030. These strategic areas of focus have been isolated and declared by the President of the Republic of Kenya, His Excellency Uhuru Kenyatta, as the country’s “big four priority areas”, namely affordable housing, affordable health care, food security, and manufacturing, as a concerted effort towards the achievement of the SDGs and the Kenya Vision 2030 as well as job and wealth creation. Similarly, Mount Kenya University’s top management established the Graduate Enterprise Academy (GEA) in 2013 under the direct patronage of the university’s Founder, with the primary aim of assisting graduates to be job and wealth creators rather than job seekers. So far, over twenty start-ups are running throughout the country under the Graduate Enterprise Academy (GEA). Although the Graduate Enterprise Academy’s diverse areas of focus extend beyond the President of Kenya’s “Big Four” to include ICT and creative arts, among others, there are justifiable cases indicating that GEA’s activities are also in support of the national “Big Four” agenda. This paper gives an exposition of different start-ups under MKU’s Graduate Enterprise Academy, which are showcased as evidence of MKU’s support towards the achievement of the national “Big Four” agenda. The paper covers part of an ongoing program through desktop analyses of reports, with the objective of showcasing MKU’s contribution to the national agenda through the Graduate Enterprise Academy for possible scale-up.
Background
Consumers rely heavily on user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, which is far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study we survey the research examining how consumers detect fake reviews online.
Methods
We conducted a systematic literature review of the research on fake review detection from the consumer perspective. We included academic literature presenting new empirical data. We provide a narrative synthesis comparing the theories, methods and outcomes used across studies to identify how consumers detect fake reviews online.
Results
We found only 15 articles that met our inclusion criteria. We classify the most frequently used cues into five categories: (1) review characteristics, (2) textual characteristics, (3) reviewer characteristics, (4) seller characteristics, and (5) characteristics of the platform where the review is displayed.
Discussion
We find that theory is applied inconsistently across studies and that cues to deception are often identified in isolation without any unifying theoretical framework. Consequently, we discuss how such a theoretical framework could be developed.
The development of sustainable, environmentally friendly insulation materials with a reduced carbon footprint is attracting increased interest. Foamed geopolymers are one alternative to conventional insulation materials. Similar to foamed concrete, the mechanical properties of geopolymer foams can also be improved by using fibers for reinforcement. This paper presents an overview of the latest research findings in the field of fiber-reinforced geopolymer foam concrete, with a special focus on natural fiber reinforcement. Furthermore, some basic and background information on natural fibers and geopolymer foams is reported. In most of the research, foams are produced either through chemical foaming with hydrogen peroxide or aluminum powder, or through mechanical foaming, which involves a foaming agent. However, previous reviews have not sufficiently addressed the fabrication of geopolymer foams by means of syntactic foams. Finally, recent efforts to reduce fiber degradation in geopolymer concrete are discussed, along with challenges for natural fiber-reinforced geopolymer foam concrete.
Due to increased emissions of palladium nanoparticles in recent years, it is important to develop analytical techniques to characterize these particles. The synthesis of defined and stable particles plays a key role in this process, as few materials are commercially available yet that could act as reference materials. Polyvinylpyrrolidone- (PVP-) stabilized palladium nanoparticles were synthesized through the reduction of palladium chloride by tetraethylene glycol (TEG) in the presence of KOH. Four different methods were used for particle size analysis of the palladium nanoparticles. Palladium suspensions were analyzed by scanning electron microscopy (SEM), small angle X-ray scattering (SAXS), single-particle ICP-MS (SP-ICP-MS), and X-ray diffraction (XRD). Secondary particles between 30 nm and 130 nm were detected, with good agreement between SAXS and SP-ICP-MS. SEM analysis showed that the small particulates tend to form agglomerates.
New sustainable, environmentally friendly materials for thermal insulation of buildings are necessary to reduce their carbon footprints. In this study, Miscanthus fiber-reinforced geopolymer composites, foamed with sodium dodecyl sulfate (SDS), were developed using fly ash as a geopolymer precursor. The effects of fiber content, fiber size, curing temperature, foaming agent content, fumed silica specific surface area and fumed silica content on thermal conductivity and compressive strength were evaluated using a Plackett-Burman design of experiments. Furthermore, the microstructure of the geopolymer composites was investigated using X-ray diffraction (XRD), X-ray micro-computed tomography (μCT) and scanning electron microscopy (SEM). The measured characteristic values were in the following ranges: thermal conductivity 0.057 W/(m K) to 0.127 W/(m K), compressive strength 0.007 MPa to 0.719 MPa, and porosity 49 vol% to 76 vol%. The results reveal an increase in thermal conductivity with elevated fiber size and foaming agent content. In contrast, the compressive strength is enhanced by a high fiber content. Additionally, SEM images indicate a good interaction between the fibers and the geopolymer matrix, because nearly the whole fiber surface is covered by the geopolymer.
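The screening logic behind a Plackett-Burman design of experiments can be illustrated with a short Python sketch: main effects are estimated by contrasting the mean response at the high and low level of each coded factor. The design matrix and response values below are purely illustrative assumptions, not data from the study.

import numpy as np

# Coded two-level design matrix: rows = runs, columns = factors (-1 = low, +1 = high).
# A real Plackett-Burman screening of the six factors in the study would use more
# runs; this small orthogonal 4-run, 3-factor example only shows the arithmetic.
X = np.array([
    [+1, +1, -1],
    [+1, -1, +1],
    [-1, +1, +1],
    [-1, -1, -1],
])
# Hypothetical measured responses, e.g. thermal conductivity in W/(m K)
y = np.array([0.095, 0.081, 0.060, 0.120])

# Main effect of a factor = mean response at its high level minus mean at its low level
for j in range(X.shape[1]):
    effect = y[X[:, j] == +1].mean() - y[X[:, j] == -1].mean()
    print(f"factor {j + 1}: main effect = {effect:+.3f}")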
Do remittances and social assistance transfers have different impacts on households' expenditure patterns? While two separate strands of literature have looked at how social assistance or remittances have been spent, few studies have compared them directly. Using data from a household survey conducted in Moldova in 2011, this paper assesses the impact both types of transfers have on household expenditure patterns. Contrary to the common assumption that money is fungible, we find that social assistance and remittances have different impacts on expenditure patterns (having controlled for potential endogeneity). In other words, where the income comes from can determine how it is spent. As such, different sources of income may have different poverty impacts. In our sample, the two types of transfers are received by different, but slightly overlapping population groups. The fact that the two transfers are spent in different ways means that, to some extent, social assistance and remittances are complements rather than substitutes.
Friction effects require a supplementary amount of torque to be produced in the actuators for a robot to move, which in turn increases energy consumption. We cannot eliminate friction, but we can optimize motions to make them more energy efficient by considering friction effects in motion computations. Optimizing motions means computing efficient joint torques/accelerations based on the different friction torques imposed in each joint. Existing friction forces can be used to support certain types of arm motions, e.g., standing still.
Reducing the energy consumption of robot arms provides many benefits, such as longer battery life for mobile robots and reduced heat in motor systems.
The aim of this project is to extend an already available constrained hybrid dynamics solver by including static friction effects in the computation of energy-optimal motions. When the algorithm is extended to account for static friction, a convex optimization (maximization) problem must be solved.
The author of this hybrid dynamics solver briefly outlined an approach for including static friction forces in the computation of motions, but without providing a detailed derivation or an elaboration demonstrating its correctness. Additionally, the author outlined an idea for improving the computational efficiency of the approach, again without providing its derivation.
In this project, the proposed approach for extending the originally formulated algorithm has been completely derived and evaluated in order to show its feasibility. The evaluation is conducted in a simulation environment with a one-DOF robot arm, and it shows correct results for the computed motions. Furthermore, this project presents the derivation of the outlined method for improving the computational efficiency of the extended solver.
Human and robot tasks in household environments include actions such as carrying an object, cleaning a surface, etc. These tasks are performed by means of dexterous manipulation, and for humans, they are straightforward to accomplish. Moreover, humans perform these actions with reasonable accuracy and precision but with much less energy and stress on the actuators (muscles) than the robots do. The high agility in controlling their forces and motions is actually due to "laziness", i.e. humans exploit the existing natural forces and constraints to execute the tasks.
The above-mentioned properties of the human lazy strategy motivate us to relax the problem of controlling robot motions and forces, and solve it with the help of the environment. Therefore, in this work, we developed a lazy control strategy, i.e. task specification models and control architectures that relax several aspects of robot control by exploiting prior knowledge about the task and environment. The developed control strategy is realized in four different robotics use cases. In this work, the Popov-Vereshchagin hybrid dynamics solver is used as one of the building blocks in the proposed control architectures. An extension of the solver’s interface with the artificial Cartesian force and feed-forward joint torque task-drivers is proposed in this thesis.
To validate the proposed lazy control approach, an experimental evaluation was performed in a simulation environment and on a real robot platform.
Protokoll 27
(2023)
Interactive Object Detection
(2019)
The success of state-of-the-art object detection methods depends heavily on the availability of a large amount of annotated image data. The raw image data available from various sources are abundant but non-annotated. Annotating image data is often costly, time-consuming or needs expert help. In this work, a learning paradigm called Active Learning is explored, which uses user interaction to obtain annotations for a subset of the dataset. The goal of active learning is to achieve superior object detection performance with images that are annotated on demand. To realize the active learning method, the trade-off between the effort to annotate unlabeled data (annotation cost) and the performance of the object detection model is minimised.
A Random Forest-based method called Hough Forest is chosen as the object detection model, and the annotation cost is calculated as the predicted false positive and false negative rate. The framework is successfully evaluated on two Computer Vision benchmark datasets and two Carl Zeiss custom datasets. Also, an evaluation of RGB, HoG and Deep features for the task is presented.
Experimental results show that using Deep features with Hough Forest achieves the maximum performance. By employing Active Learning, it is demonstrated that performance comparable to the fully supervised setting can be achieved by annotating just 2.5% of the images. To this end, an annotation tool is developed for user interaction during Active Learning.
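The active-learning idea described above can be summarized by the generic selection loop sketched below: the images for which the current detector predicts the highest annotation cost (here, an estimated false positive plus false negative rate) are handed to the user first. The callables train, predict_cost, and annotate are placeholders standing in for the Hough Forest training, cost prediction, and user interaction described in the thesis; budget and batch size are assumed values.

# Schematic active-learning loop for object detection; all callables are
# placeholders and the budget/batch sizes are illustrative assumptions.

def active_learning_loop(labeled, unlabeled, train, predict_cost, annotate,
                         budget=50, batch_size=5):
    """Iteratively annotate the images whose predicted annotation cost
    (e.g. estimated false positive + false negative rate) is highest."""
    detector = train(labeled)
    spent = 0
    while unlabeled and spent < budget:
        # Rank the unlabeled images by the cost the current detector predicts for them
        ranked = sorted(unlabeled, key=lambda img: predict_cost(detector, img), reverse=True)
        for img in ranked[:batch_size]:
            labeled.append((img, annotate(img)))  # the user supplies the annotation
            unlabeled.remove(img)
            spent += 1
        detector = train(labeled)                 # retrain on the enlarged labeled set
    return detector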
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment in those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perception of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, and this likely contributed to an enhanced sense of presence.
The research project is based on two elements: The first study, a behavioral experiment with 35 students of Hochschule Bonn-Rhein-Sieg, investigated the influence of group size (bystander effect) and of presented information on diffusion of responsibility (priming) on sustainable behavior. A second online experiment then examined the influence of perceived personal threat on the willingness to behave sustainably (N = 72). The results of the first experiment show a weak, statistically non-significant influence of group size and a partly statistically significant influence of the presented information on diffusion of responsibility on the measured sustainable behavior. Convenience and monetary effort are by far the greatest obstacles to sustainable behavior, while the influence of others and the goal of environmental protection were named as positive arguments for sustainable behavior. In the follow-up study, a statistically significant causal relationship was demonstrated between the perceived personal threat posed by the current environmental and climate situation and the willingness to behave sustainably. Overall, all results on behavioral intentions showed a high willingness of the participants to behave sustainably.
From September 2016 to February 2017, I did an internship at the University of Cape Coast, Ghana (UCC) as part of my studies in Business Administration at Hochschule Bonn-Rhein-Sieg, University of Applied Sciences, Germany (H-BRS). At H-BRS, an internship of five or six months (or, alternatively, one exchange semester) is an obligatory part of the curriculum so students get hands-on experience even before they enter the job market. My internship was also part of the intercontinental partnership between UCC and H-BRS, which has resulted in many different projects.
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real-time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency with respect to ball motion of 30 ± 8 ms.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is based on off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions, such as ball surface texture, lighting conditions, or ball size, as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
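As an illustration of camera-based optical flow tracking, the following Python sketch estimates the mean in-plane displacement of the ball surface between two consecutive grayscale frames using OpenCV's dense Farneback optical flow. It is not the authors' 500 Hz implementation; the flow parameters are common defaults, and the mapping from per-frame displacements to forward, lateral, and rotational ball motion (via camera-to-ball calibration) is omitted.

import cv2
import numpy as np

def frame_displacement(prev_gray, curr_gray):
    """Return the mean (dx, dy) displacement between two grayscale frames,
    estimated with dense Farneback optical flow."""
    # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.mean(flow[..., 0])), float(np.mean(flow[..., 1]))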
The aim of this study was to investigate whether beneficial vacation effects can be strengthened and prolonged with a smartphone-based intervention. In a four-week longitudinal study among 79 Finnish teachers, we investigated the development of recovery, well-being, and job performance before, during, and after a one-week vacation in three groups: non-users (n = 51), passive (n = 18) and active (n = 10) users. Participants were instructed to actively use a recovery app (called Holidaily) and complete five digital questionnaires. Most recovery experiences and well-being indicators increased during the vacation. Job performance and concentration capacity showed no significant time effects. Among active app users, creativity at work increased from baseline to after the vacation, whereas among non-users it decreased, and among passive users it decreased a few days after the vacation but increased again one and a half weeks after the vacation. The fading of beneficial vacation effects on negative affect seems to have been slower among active app users. Only a few participants used the app actively. Still, the results suggest that a smartphone-based recovery intervention may support beneficial vacation effects.
This thesis deals with the development of an encapsulation system suitable for the controlled release of hydrophilic active substances, with the aim of delaying the release of osteo-specific P2 ligands in order to ensure the formation of new bone tissue in the treatment of critical-size bone defects. For this purpose, alginate and κ-carrageenan capsules loaded with the model substances adenosine triphosphate and suramin are coated with chitosan and lignosulfonate using immersive layer-by-layer coating and examined with regard to their release behavior.
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding the surroundings of the vehicle is the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real-time. Generally, the perception system involves various subsystems such as localization, detection and avoidance of static and dynamic obstacles, mapping systems, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, and LiDARs. These systems are equipped with deep learning techniques that transform the huge amount of data from the sensors into semantic information on which the object detection and localization tasks are performed. For numerous driving tasks, the location and depth information of a particular object is necessary to provide accurate results. 3D object detection methods, by utilizing additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, a comparative study of the effect of using LiDAR data for object detection frameworks and of the performance improvement gained by using sensor fusion techniques is performed. We also discuss various state-of-the-art methods in both cases, conduct experimental analysis, and provide future research directions.
This article is concerned with the accessibility of business process modelling tools (BPMo tools) and business process modelling languages (BPMo languages). First, the reader is introduced to business process management and the authors' motivation behind this inquiry. Afterwards, the paper reflects on problems that arise when applying inaccessible BPMo tools. To illustrate these problems, the authors distinguish between two different categories of issues and provide practical examples. Finally, the article presents three approaches to improve the accessibility of BPMo tools and BPMo languages.
Background: Falls are common in older adults and can result in serious injuries. Due to demographic changes, falls and related healthcare costs are likely to increase over the next years. Participation and motivation of older adults in fall prevention measures remain a challenge. The iStoppFalls project developed an information and communication technology (ICT)-based system for older adults to use at home in order to reduce common fall risk factors such as impaired balance and muscle weakness. The system aims at increasing older adults’ motivation to participate in ICT-based fall prevention measures. This article reports on usability, user-experience and user-acceptance aspects affecting the use of the iStoppFalls system by older adults.
Methods: In the course of a 16-week international multicenter study, 153 community-dwelling older adults aged 65+ participated in the iStoppFalls randomized controlled trial, of whom half used the system in their home to exercise and assess their risk of falling. During the study, 60 participants completed questionnaires regarding the usability, user experience and user acceptance of the iStoppFalls system. Usability was measured with the System Usability Scale (SUS). For user experience the Physical Activity Enjoyment Scale (PACES) was applied. User acceptance was assessed with the Dynamic Acceptance Model for the Re-evaluation of Technologies (DART). To collect more detailed data on usability, user experience and user acceptance, additional qualitative interviews and observations were conducted with participants.
Results: Participants evaluated the usability of the system with an overall score of 62 (Standard Deviation, SD 15.58) out of 100, which suggests good usability. Most users enjoyed the iStoppFalls games and assessments, as shown by the overall PACES score of 31 (SD 8.03). With a score of 0.87 (SD 0.26), user acceptance results showed that participants accepted the iStoppFalls system for use in their own home. Interview data suggested that certain factors such as motivation, complexity or graphical design were different for gender and age.
Conclusions: The results suggest that the iStoppFalls system has good usability, user experience and user acceptance. It will be important to take these along with factors such as motivation, gender and age into consideration when designing and further developing ICT-based fall prevention systems.
Aim: To understand how the transcription factors Pdr1 and Pdr3, belonging to the pleiotropic drug resistance system, are activated and regulated after chemical toxins are introduced to the cell in the model organism Saccharomyces cerevisiae.
Methods: A series of molecular methods was applied using different strains of S. cerevisiae over-expressing proteins of interest as a eukaryotic cell model. The chemical stress introduced to the cell is represented by menadione. Results were obtained by performing protein detection and analysis. Additionally, the regulation of the DNA binding of the transcriptional activators after stimulation was quantified using chromatin immunoprecipitation, employing epitope-tagged factors and real-time qPCR.
Results: Our results indicated higher expression levels of the Pdr1 transcription factor compared to its homolog Pdr3 after treatment with menadione. The yeast-cell defence system was tested against various organic solvents to exclude the possibility that their presence affected the results. The results indicate that Pdr1 is most abundant 30 minutes after the beginning of the treatment, compared with 240 minutes after the treatment, when the function of the transcription factor has faded. It appears that Pdr1 binding to the PDR5 and SNQ2 promoters, which are both activated by Pdr1, peaks around the same time, or more precisely 40 minutes after the start of the treatment.
Conclusion: A tendency toward Pdr1 reduction after its activation by menadione was detected. One possibility is that Pdr1, after recognizing the xenobiotic menadione, is removed by a degradation mechanism. Given the fact that Pdr1 directly binds the xenobiotic molecule, its destruction might help the cells to remove toxic levels of menadione. It is possible that overexpressing the part of Pdr1 which recognizes menadione alone was sufficient to detoxify and hence produce a tolerance towards menadione.
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and produce pre-trained neural networks trained on this dataset, meant to fill the gap of missing pre-trained models for sonar images. We train ResNet-20, MobileNets, DenseNet121, SqueezeNet, MiniXception, and an Autoencoder, over several input image sizes, from 32 x 32 to 96 x 96, on the Marine Debris Turntable dataset. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and another dataset captured using a Gemini 720i sonar. Our results show that in both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset validates that the features transfer to other kinds of sonar sensors. We expect that the community will benefit from the public release of our pre-trained models and the turntable dataset.
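The low-shot evaluation protocol mentioned above (10-30 samples per class) can be sketched as follows: features are assumed to have already been extracted with one of the pre-trained sonar networks, and a simple linear classifier is fitted on a few samples per class. This is an illustrative scikit-learn sketch, not the authors' released code; the function name and the default shot count are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def low_shot_accuracy(train_feats, train_labels, test_feats, test_labels, shots=10, seed=0):
    """Fit a linear classifier on `shots` samples per class of pre-extracted
    features (e.g. embeddings from a pre-trained sonar CNN) and report test accuracy."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(train_labels):
        cls_idx = np.flatnonzero(train_labels == c)
        idx.extend(rng.choice(cls_idx, size=shots, replace=False))
    clf = LogisticRegression(max_iter=1000).fit(train_feats[idx], train_labels[idx])
    return accuracy_score(test_labels, clf.predict(test_feats))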
Advanced driver assistance systems (ADAS) are technology systems and devices designed as an aid to the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of input images must be done, which consists of a traffic sign detection (TSD) algorithm to reduce the possible hypothesis space. The performance of the TSD algorithm is therefore critical.
One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real-time on high-end personal computers, but its computational performance must be improved in order to run in real-time on embedded computer platforms.
To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate computation times as well as detection and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark [29].
We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection and 2.4 times faster at octagon detection. All of these measurements were obtained with better detection and false positive rates than the original RSD.
When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for circle and regular polygon detection, but for circle detection we observed a lower detection rate than the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%.
We conclude that our Improved Radial Symmetry Detector is a significant improvement of the Radial Symmetry Detector, both for Circle and Regular polygon detection. We expect that our improved algorithm will lead the way to obtain real-time traffic sign detection and recognition in embedded computer platforms.
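The core voting idea of radial symmetry detection can be illustrated with the simplified Python sketch below: every strong-gradient pixel casts a vote one radius along its gradient direction, and peaks in the vote image mark candidate circle centers. This omits the negative votes, the radius range, the polygon (triangle, square, octagon) voting, and the multiscale speedup proposed in the thesis; the gradient threshold is an assumed value.

import cv2
import numpy as np

def circle_center_votes(gray, radius, grad_threshold=40.0):
    """Cast one vote per strong-gradient pixel, `radius` pixels along the
    gradient direction; local maxima of the returned vote image indicate
    candidate centers of circles with that radius."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)
    votes = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > grad_threshold)
    for y, x in zip(ys, xs):
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        cy, cx = int(round(y + uy * radius)), int(round(x + ux * radius))
        if 0 <= cy < votes.shape[0] and 0 <= cx < votes.shape[1]:
            votes[cy, cx] += 1.0
    return votes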
Extraction of text information from visual sources is an important component of many modern applications, for example, extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of a histogram of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features, aspect and occupancy ratios, and then a linear SVM is used as the classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall, with an f-score of 69%, were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.
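To illustrate the kind of feature the HSW classifier works on, the sketch below approximates stroke widths inside a binary character mask with a distance transform, histograms them, and appends the aspect and occupancy ratios. The thesis extracts stroke widths via the Stroke Width Transform or run lengths; the distance-transform shortcut, bin count, and width range here are assumptions made for brevity.

import cv2
import numpy as np

def hsw_feature(binary_region, bins=16, max_width=32.0):
    """Histogram of approximate stroke widths plus aspect and occupancy ratios
    for a binary (0/1) character candidate region."""
    mask = (binary_region > 0).astype(np.uint8)
    # Distance to the background; twice this value over the foreground gives a
    # rough local stroke-width distribution (an approximation of the true widths).
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
    widths = 2.0 * dist[dist > 0]
    hist, _ = np.histogram(widths, bins=bins, range=(0.0, max_width))
    hist = hist / max(hist.sum(), 1)
    h, w = mask.shape
    aspect = w / h
    occupancy = mask.sum() / float(h * w)
    return np.concatenate([hist, [aspect, occupancy]])

# Hypothetical usage with scikit-learn:
#   X = np.vstack([hsw_feature(r) for r in candidate_regions])
#   clf = sklearn.svm.LinearSVC().fit(X, labels)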
Through the “Act to Strengthen the Non-financial Reporting by Corporations in their Management and Group Management Reports” (Gesetz zur Stärkung der nichtfinanziellen Berichterstattung der Unternehmen in ihren Lage- und Konzernlageberichten) (CSR Directive Transposition Act, „CSR-RUG“) of 11 April 2017[1], the German Bundestag implemented Directive 2014/95/EU (“CSR Directive”)[2] into German law. Following the European impetus, the CSR-RUG enriches the traditional repertoire of forms of action under environmental law by a further instrument. Already the regulatory context gives an idea of its atypical nature: The centrepiece of the CSR-RUG is the amendment of and addition to the Third Book of the German Commercial Code (Handelsgesetzbuch, “HGB”), which deals with the “trading books” of undertakings, i.e., accounting and reporting requirements. Since the reporting year 2017, large capital market-oriented corporations must report extensively within the framework of their annual management reports on their activities and effects in certain areas of “Corporate Social Responsibility”. This also includes environmental matters. The transparency and publicity this entails is intended to generate positive stimuli for more responsible, sustained and not least of all environmentally friendly entrepreneurial action.
Following a brief presentation of the European legal bases and their implementation in Germany (I.), we will classify the provisions within the underlying concept of Corporate Social Responsibility (II.) and analyse and systemise the governance effects of non-financial reporting (III.). A few remarks on selected aspects of the chosen approach and its implementation (IV.) as well as an outlook summarising our conclusions (V.) will complete this article. By detailing the German approach to transposing the CSR Directive, this paper intends to provide an example of the challenges member state legislators face when complying with modern governance concepts such as Corporate Social Responsibility by way of non-financial reporting obligations.
[1] Federal Law Gazette, Part I 2017, 802 et seq.
[2] Directive 2014/95/EU of the European Parliament and of the Council 22 October 2014 amending Directive 2013/34/EU as regards disclosure of non-financial and diversity information by certain large undertakings and groups, OJ EU No. L 330, p. 1.
Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example
(2023)
Online services have difficulty replacing passwords with more secure user authentication mechanisms, such as Two-Factor Authentication (2FA). This is partly due to the fact that users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing.
Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper contributes to the hypothesis that the low adoption of RBA in practice can be due to the complexity of implementing it. We provide an RBA implementation for the open source cloud management software OpenStack, which is the first fully functional open source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
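To give an impression of the kind of scoring such an RBA implementation performs, the Python sketch below computes a per-user risk score in the spirit of the Freeman et al. model: the observed feature values are scored against the global login history and against the user's own history, and a score above a tuned threshold would trigger an additional authentication factor. It is an illustrative reduction with ad hoc smoothing, not the OpenStack implementation or the full published algorithm.

from collections import Counter

class SimpleRBAScorer:
    """Simplified risk scoring: likelihood of the observed feature values in the
    global population divided by their smoothed likelihood in the user's own
    login history. Higher scores mean more suspicious logins."""

    def __init__(self, smoothing=0.1):
        self.smoothing = smoothing
        self.global_counts = {}   # feature name -> Counter of observed values
        self.user_counts = {}     # (user, feature name) -> Counter of observed values

    def record_login(self, user, features):
        for name, value in features.items():
            self.global_counts.setdefault(name, Counter())[value] += 1
            self.user_counts.setdefault((user, name), Counter())[value] += 1

    def _prob(self, counter, value):
        total = sum(counter.values())
        k = len(counter) + 1  # one extra slot for unseen values
        return (counter.get(value, 0) + self.smoothing) / (total + self.smoothing * k)

    def risk_score(self, user, features):
        score = 1.0
        for name, value in features.items():
            p_global = self._prob(self.global_counts.get(name, Counter()), value)
            p_user = self._prob(self.user_counts.get((user, name), Counter()), value)
            score *= p_global / p_user
        return score

# Usage sketch (THRESHOLD and request_second_factor are hypothetical):
#   scorer.record_login("alice", {"country": "DE", "browser": "Firefox"})
#   if scorer.risk_score("alice", {"country": "US", "browser": "Firefox"}) > THRESHOLD:
#       request_second_factor()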
Pursuant to Sustainable Development Goal (SDG) 15 of the 2030 Agenda for Sustainable Development of the United Nations, one pivotal target is to halt biodiversity loss. This paper’s objective is to analyze why and how German farmers hesitate to implement more than the prescriptive measures with regard to cross compliance and direct payments under the European Common Agricultural Policy (CAP) and what their aspirations are for possible incentives to bring biodiversity into focus. By applying a mixed methods approach, we investigate the experience of individual farmers by means of a qualitative approach followed by a quantitative study. This analysis sheds light on how farmers perceive indirect influencing factors and how these factors play a non-negligible role in farmers’ commitment to biodiversity. Economy, policy and society are intertwined and need to be considered from a multi-faceted perspective. In addition, an in-depth analysis is conducted based on online focus group discussions to determine whether farmers accept financial support, focusing on both action- and success-oriented payments. Our results highlight the importance of paying attention to the heterogeneity of farmers, their locations and, consequently, farmers’ different views on indirect drivers influencing agricultural processes, showing the complexity of the problem. Although farmers’ expectations can be met with financial allocations, other aspects must also be taken into account.
This paper aims to assess farmers’ challenges in enhancing biodiversity. The so-called “trilemma” (WBGU 2021) of land use stems from the multiple demands made on land for the benefit of mitigating climate change, securing food, and maintaining biodiversity. Agriculture is accused of maladministration, causing soil contamination, animal cruelty, bee mortality, and climate change. However, farmers play a key role in overcoming upcoming sustainability challenges. While their supportive role is urgently needed, farmers find themselves caught between a “rock” and a “hard place”. Consumers call for sustainable production and affordable food products without pesticide residues, demanding enough for all. Farmers are restricted by the wants and needs of consumers who are influenced by interest groups and exposed to interdependent direct and indirect influencing factors. They need to balance the scrutiny of the critical public as well as the regulatory control. In this paper, we collected and surveyed the data of farmers within or close to the 21 selected nature protected areas of the DINA (Diversity of Insects in Nature protected Areas) Project, using a mixed methods approach with a semi-structured questionnaire considering issues’ interdependencies and the complexity of today’s problems. The conflicts and obstacles faced by farmers were assessed. The results reflect the farmers’ willingness and the importance of receiving appreciation for implementing biodiversity measures. These results, complemented by a following quantitative study, are the basis for recommendations for policymakers and farmers in all German nature protected areas.
The aim of this paper is to assess farmers’ challenges in enhancing biodiversity. The so-called “trilemma” (WBGU 2021) of land use stems from the multiple demands made on land for the benefit of mitigating climate change, securing food and maintaining biodiversity. The agricultural sector is accused of maladministration: it is blamed for causing soil contamination, animal cruelty, bee mortality and climate change. That is why farmers are seen as key actors at all levels. They are, however, also key players when it comes to overcoming the problems of the future. Their supportive role is urgently needed, but farmers find themselves caught between a “rock” and a “hard place”. Consumers are calling for sustainable, environmentally friendly production and inexpensive food products that do not contain pesticide residues, demanding enough food for all. Farmers are restricted by the wants and needs of consumers who are influenced by interest groups and are exposed to direct and indirect influencing factors and their interdependencies. They are also tasked with balancing the scrutiny of the critical public on the one hand, and the control exercised by eager authorities on the other.
As part of the DINA (Diversity of Insects in Nature protected Areas) project, a trans- and interdisciplinary research study, we collected and surveyed the data of farmers who are farming within or close to the 21 selected nature protected areas included in the DINA project. Data was collected as part of a mixed methods approach using a semi-structured questionnaire. The methodological and strategic approach and the interdependencies of issues demonstrate the complexity of today’s problems. To investigate this, we first collected data using questionnaires with closed and open questions. The conflicts and obstacles farmers face were evaluated, and the results show farmers’ willingness and the importance of appreciation shown to farmers for the implementation of biodiversity measures. The paper proposes some follow-up activities (a quantitative study) to verify the objectives. The results will later lead to recommendations for policymakers and farmers in all German nature protected areas.
Channels of distribution are important factors in the connection between goods and services produced and the final consumer and, therefore, determine the effectiveness with which they are delivered and ultimately availed to the final consumers. Globally, studies show that channels of distribution and sales play an essential role in building bonds between manufacturers, retailers, wholesalers and their consumers. The main purpose of this study is to examine the influence of distribution channels and networks on customer choice of fast-moving consumer goods (FMCG) in the Upper East Region of Ghana. The study adopted a quantitative approach, and questionnaires were used to collect primary data from 110 customers of Unilever Ghana Limited in the Upper East Region of Ghana. The findings reveal that product-related factors, such as the price of products, perishability of products, and size and weight of products, promote the effective distribution of Unilever goods and services, whilst consumer-related factors, such as the number of customers and an increased consumer base, promote effective distribution channels. The study also established a positive influence of factors such as incentives, receiving feedback and sales performance on customer choice of fast-moving consumer goods (FMCG). Managers and producers in the FMCG industry should implement reward and incentive programmes and policies to boost the sale and distribution of fast-moving consumer goods and services in the retail industry in Ghana.
Several recent works have proposed automatic computer-aided diagnosis (CAD) deep learning (DL) models to detect coronavirus disease 2019 (COVID-19) from chest X-ray (CXR) images. In this study, seven different models are proposed, including convolutional neural network (CNN) models such as VGG-16 as well as vision transformer (ViT) models. The models are trained on a balanced three-class dataset of 3,000 CXR images, with 1,000 images for each of the classes COVID-19, Normal and Lung Opacity, drawn from the publicly available Kaggle COVID-19 Radiography Dataset. In the experiments, the VGG-16 model reaches an accuracy of 93.44% and the ViT model 92.33%. In addition, a binary classification between COVID-19 and Normal CXR images is proposed that uses a transfer learning technique with only 100 images per class and achieves a validation accuracy of 97.5%.
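The abstract does not include implementation details. As an illustration only, the following is a minimal sketch of the kind of transfer-learning setup it describes for the binary COVID-19 vs. Normal task; the framework (TensorFlow/Keras), image size, batch size, number of epochs, and the directory name `covid_radiography` are assumptions, not details taken from the paper.

```python
# Minimal sketch of VGG-16 transfer learning for binary COVID-19 vs. Normal CXR
# classification. Framework choice, image size, and directory layout are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)           # VGG-16's standard input resolution
DATA_DIR = "covid_radiography"  # hypothetical path containing the two class folders

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=16)

# Frozen VGG-16 backbone pre-trained on ImageNet; only the new classification head is trained.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary output: COVID-19 vs. Normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```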
Therapeutic Treatments for Osteoporosis: Which Combination of Pills Is the Best among the Bad?
(2022)
Osteoporosis is a chronic, systemic skeletal disorder characterized by an increase in bone resorption, which leads to reduced bone density. The reduction in bone mineral density and the resulting low bone mass lead to an increased risk of fractures. Osteoporosis is caused by an imbalance in the normally strictly regulated bone homeostasis: bone-resorbing osteoclasts are overactive, while bone-synthesizing osteoblasts do not compensate for this. In this review, the underlying mechanism is presented, supported by the in vitro and animal models used to investigate this imbalance, together with the current status of clinical trials. Furthermore, new therapeutic strategies for osteoporosis are presented, such as anabolic and catabolic treatments and treatments using biomaterials and biomolecules. Another focus is on new combination therapies with multiple drugs, which are currently considered more beneficial for the treatment of osteoporosis than monotherapies. Taken together, this review starts with an overview and ends with the newest approaches to osteoporosis therapy and a future perspective not presented so far.
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
The European General Data Protection Regulation requires the implementation of Technical and Organizational Measures (TOMs) to reduce the risk of illegitimate processing of personal data. For these measures to be effective, they must be applied correctly by employees who process personal data under the authority of their organization. However, even data processing employees often have limited knowledge of data protection policies and regulations, which increases the likelihood of misconduct and privacy breaches. To lower the likelihood of unintentional privacy breaches, TOMs must be developed with employees’ needs, capabilities, and usability requirements in mind. Privacy patterns have proven effective for reducing implementation costs and helping organizations and IT engineers with the implementation. In this chapter, we introduce the privacy pattern Data Cart, which specifically helps to develop TOMs for data processing employees. Based on a user-centered design approach with employees from two public organizations in Germany, we present a concept that illustrates how Privacy by Design can be effectively implemented. Organizations, IT engineers, and researchers will gain insight into how to improve the usability of privacy-compliant tools for managing personal data.
Applied privacy research has so far focused mainly on consumer relations in private life. Privacy in the context of employment relationships is less well studied, although it is subject to the same legal privacy framework in Europe. The European General Data Protection Regulation (GDPR) has strengthened employees’ right to privacy by obliging employers to provide transparency and intervention mechanisms. For such mechanisms to be effective, employees must have a sound understanding of their functions and value. We explored possible boundaries by conducting a semi-structured interview study with 27 office workers in Germany and elicited mental models of the right to informational self-determination, which is the European proxy for the right to privacy. We provide insights into (1) perceptions of different categories of data, (2) familiarity with the legal framework regarding expectations for privacy controls, and (3) awareness of data processing, data flow, safeguards, and threat models. We found that the legal terms often used in privacy policies to describe categories of data are misleading. We further identified three groups of mental models that differ in their privacy control requirements and willingness to accept restrictions on their privacy rights. We also found ignorance about actual data flow, processing, and safeguard implementation. Participants’ mindsets were shaped by their faith in organizational and technical measures to protect privacy. Employers and developers may benefit from our contributions by understanding the types of privacy controls desired by office workers and the challenges to be considered when conceptualizing and designing usable privacy protections in the workplace.
Wireless sensor networks are widely used in a variety of fields, including industrial environments. In a clustered network, the location of the cluster head affects the reliability of the network operation. Finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach based on the brute-force algorithm in the context of topology optimisation of a cluster-structured, centralised wireless sensor network. Two examples that demonstrate the implementation of the brute-force algorithm to find an optimum cluster head location are given to verify the approach.
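The abstract does not specify the cost function or the candidate search space used in the two examples. The sketch below only illustrates the general brute-force idea: enumerate every candidate cluster-head position and keep the one that minimises an assumed cost, here the total squared head-to-node distance as a rough proxy for transmission energy; the node coordinates and grid resolution are hypothetical.

```python
# Illustrative brute-force search for a cluster-head location. The cost function and
# the grid of candidate positions are assumptions for this sketch, not the paper's
# actual formulation.
import itertools

sensors = [(1.0, 2.0), (4.0, 1.5), (3.0, 4.0), (0.5, 3.5)]  # hypothetical node coordinates

def cost(head, nodes):
    """Sum of squared Euclidean distances from the cluster head to all nodes."""
    return sum((head[0] - x) ** 2 + (head[1] - y) ** 2 for x, y in nodes)

# Enumerate every candidate position on a regular grid and keep the cheapest one.
step = 0.1
xs = [i * step for i in range(51)]   # x in [0, 5]
ys = [j * step for j in range(51)]   # y in [0, 5]
best = min(itertools.product(xs, ys), key=lambda p: cost(p, sensors))

print("optimum cluster-head location:", best, "cost:", round(cost(best, sensors), 3))
```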
Classical ballet requires dancers to exercise significant muscle control and strength, both while stationary and when moving. Following the Royal Academy of Dance (RAD) syllabus, 8 male and 27 female dancers (aged 20.2 ± 1.9 yr) in a full-time university undergraduate dance training program were asked to stand in first position for 10 seconds and then perform 10 repeats of a demi-plié exercise to a counted rhythm. Accelerometer records from the wrist, sacrum, knee and ankle were compared with the numerical scores from a professional dance instructor. The sacrum-mounted sensor detected lateral tilts of the torso in dancers with lower scores (Spearman’s rank correlation coefficient r = -0.64, p < 0.005). The RMS acceleration amplitude of the wrist-mounted sensor was linearly correlated with the movement scores (Spearman’s rank correlation coefficient r = 0.63, p < 0.005). The application of sacrum- and wrist-mounted sensors for biofeedback during dance training is a realistic, low-cost option.
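For readers unfamiliar with the two quantities reported above, the following sketch shows how an RMS acceleration amplitude and its Spearman rank correlation with instructor scores could be computed; the simulated traces, trace length and score scale are hypothetical and not taken from the study.

```python
# Sketch of the two quantities reported above: the RMS amplitude of a wrist
# accelerometer signal and its Spearman rank correlation with instructor scores.
# The example data are simulated and purely illustrative.
import numpy as np
from scipy.stats import spearmanr

def rms_amplitude(acc):
    """Root-mean-square amplitude of a zero-mean acceleration trace."""
    acc = np.asarray(acc, dtype=float)
    acc = acc - acc.mean()          # remove the gravity/offset component
    return np.sqrt(np.mean(acc ** 2))

# One simulated wrist trace per dancer (rows), plus the instructor's scores.
rng = np.random.default_rng(0)
n_dancers, n_samples = 10, 500
traces = rng.normal(scale=np.linspace(0.5, 2.0, n_dancers)[:, None],
                    size=(n_dancers, n_samples))
scores = np.arange(1, n_dancers + 1)             # hypothetical movement scores

rms = np.array([rms_amplitude(t) for t in traces])
rho, p = spearmanr(rms, scores)
print(f"Spearman r = {rho:.2f}, p = {p:.3f}")
```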
Intimate swabs taken for examination in sexual assault cases typically yield mixtures of sperm and epithelial cell types. While powerful, differential extraction protocols, which overcome such cell type mixtures by separately lysing epithelial cells and spermatozoa, can still prove ineffective, in particular if only a few sperm cells are present or if swabs contain sperm from more than one individual, leading to complex low-level DNA mixtures. One means of avoiding such mixtures is the analysis of single micromanipulated sperm cells. However, the quantity of DNA from a single sperm cell is not sufficient for conventional STR analysis. Here, we describe a simple method for micromanipulating individual sperm cells from intimate swabs and show that whole genome amplification can generate sufficient amounts of DNA from single cells for subsequent DNA profiling. We recovered over 80% of the alleles of haploid autosomal STR profiles from the majority of individual sperm cells. Furthermore, we demonstrate that in mixtures of sperm from two contributors, Y-STR and X-STR profiles of individual sperm cells can be used to sort the haploid autosomal profiles and develop the diploid consensus STR profiles of the individual donors. Finally, by analysing single sperm cells from mock sexual assault swabs with one or two sperm donors, we showed that our protocols enabled the identification of the unknown male contributors.
Given the rapid developments and the particular characteristics of software systems that make use of artificial intelligence (AI), an adapted requirements engineering (RE) is required. The specific requirements of AI projects must be identified and addressed. To this end, a systematic review of existing RE challenges in AI projects is carried out. Building on this, new RE approaches and recommendations are presented that target the data perspective of AI projects. Through an analysis of existing solution approaches, methods, frameworks and tools, the paper shows to what extent the challenges in RE can be overcome. Remaining gaps in the state of research are identified and pointed out.
Sustainable development needs sustainable production and sustainable consumption. During the last decades, encouraging sustainable production has been the focus of research and policy makers, under the implicit assumption that consumers’ observably increasing ‘green’ values would also entail growing sustainable consumption. However, actual purchasing behaviour has been found to deviate frequently from ‘green’ attitudes. This phenomenon is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information, as well as the low-involvement character of food products in conjunction with well-established consumption routines, lack of transparency and lack of trust in labels and certifications.
The phenomenon of the deviation between purchase attitudes and the actual buying behaviour of responsible consumers is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information, as well as the low-involvement character of food products in conjunction with well-established consumption routines, lack of transparency and lack of trust in labels and certifications. The last three barriers are mainly of a psychological nature. Especially the low-involvement character of food products, due to daily purchase routines and relatively low prices, tends to result in fast, automatic and subconscious decisions based on the so-called human mental system 1, derived from the behavioural-psychology model of Daniel Kahneman (Nobel laureate in behavioural economics). In contrast, the human mental system 2 is especially important for transforming individual behaviour towards more sustainable consumption. Decisions based on the human mental system 2 are slow, logical, rational, conscious and arduous. This so-called dual action model also influences the reliability of responses in consumer surveys. It seems that consumer behaviour is the most unstable and unpredictable part of the entire supply chain and requires special attention. Concrete measures to influence consumer behaviour towards sustainable consumption are highly complex. The paper presents reviews of the interdisciplinary research literature on behavioural psychology, behavioural economics and consumer behaviour, as well as an empirical analysis of selected countries worldwide with a view to sustainable food. The example of Denmark serves as a ‘best practice’ case study to illustrate how sustainable food consumption can be encouraged. It demonstrates that common efforts and a shared responsibility of consumers, business, interdisciplinary researchers, mass media and policy are needed. It takes pioneers of change who succeed in assembling a ‘critical mass’ willing to increase its ‘sustainable’ behaviour. Considering the strong psychological barriers of consumers and the continuing low market share of organic food, proactive policy measures would be conducive to fostering the personal responsibility of consumers and offering incentives for sustainable production. Also, further self-obligations of companies (Corporate Social Responsibility – CSR) as well as more transparency and simplification of reliable labels and certifications are needed to encourage the process towards sustainable development.
Sustainability is a key issue in current research activities and programs. In this context, three major functions of research have been identified: basic research, knowledge reservoirs, and knowledge transfer. With regard to transfer into the private sector, knowledge transfer is the most important factor. In this process, universities of applied sciences can play an important part, as they typically have long-standing experience in linking science and business in their teaching and research. Other important agents in the process of knowledge transfer are networks and clusters. Their strength lies in integrating the different competencies of their partners and using them for mutual benefit.
The International Centre for Sustainable Development (IZNE) – with a major focus on responsible business and sustainable food – takes advantage of being part of a University of Applied Sciences (Bonn-Rhein-Sieg, BRSU) and of being a member of several regional and international clusters and networks. These co-operations aim to establish and strengthen linkages between science and business, in particular by investigating the research needs of business and business-relevant research activities. Moreover, IZNE has established and expanded regional and international co-operations of its own to gain more transparency about regional and international value-added chains in the food sector and the issue of responsible business.
Social cash transfers (SCTs) are considered a priority in least-developed countries, where the gap between the need for basic social protection and existing provisions is greatest. This study represents one of the first comprehensive treatments of the impact of social cash transfers in low-income sub-Saharan Africa, and the first for Zambia's oldest SCT scheme. The results, based on propensity score matching and fully efficient odds-weighted regression applied to data from the Kalomo SCT pilot scheme, confirm positive SCT effects on per capita consumption expenditure. We also find threshold effects, with SCTs mostly impacting food expenditure among poorer beneficiary households and non-food expenditure among wealthier beneficiaries.
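As a purely illustrative aside, the sketch below shows the nearest-neighbour variant of propensity score matching, the first of the two estimation strategies named above, on simulated data; the covariates, treatment and outcome variables are hypothetical, and the fully efficient odds-weighted regression step is not reproduced.

```python
# Minimal sketch of nearest-neighbour propensity score matching on simulated data.
# All variables are hypothetical and only illustrate the general technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                              # household covariates (hypothetical)
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # SCT receipt (hypothetical)
outcome = 2.0 * treated + X @ [1.0, 0.5, -0.3] + rng.normal(size=n)  # per-capita expenditure

# 1) Estimate propensity scores with a logistic model of treatment on covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated household to the untreated household with the closest score.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3) Average treatment effect on the treated: mean outcome gap across matched pairs.
att = (outcome[t_idx] - outcome[matches]).mean()
print(f"estimated ATT on expenditure: {att:.2f}")
```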
In March 2020, the world was hit by the coronavirus disease (COVID‐19) pandemic which led to all‐embracing measures to contain its spread. Most employees were forced to work from home and take care of their children because schools and daycares were closed. We present data from a research project in a large multinational organisation in the Netherlands with monthly quantitative measurements from January to May 2020 (N = 253–516), enriched with qualitative data from participants' comments before and after telework had started. Growth curve modelling showed major changes in employees' work‐related well‐being reflected in decreasing work engagement and increasing job satisfaction. For work‐non‐work balance, workload and autonomy, cubic trends over time were found, reflecting initial declines during crisis onset (March/April) and recovery in May. Participants' additional remarks exemplify that employees struggled with fulfilling different roles simultaneously, developing new routines and managing boundaries between life domains. Moderation analyses demonstrated that demographic variables shaped time trends. The diverging trends in well‐being indicators raise intriguing questions and show that close monitoring and fine‐grained analyses are needed to arrive at a better understanding of the impact of the crisis across time and among different groups of employees.
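The growth curve models mentioned above are not specified further in the abstract. A minimal sketch of such a model with a cubic time trend, assuming a random-intercept mixed linear model in statsmodels and simulated monthly data, might look as follows; the variable names and simulated trajectory are illustrative only.

```python
# Sketch of a growth curve model with a cubic time trend, of the kind described above.
# The data frame, variable names, and random-intercept-only specification are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_emp, n_waves = 100, 5                      # e.g. monthly measurements, January to May
df = pd.DataFrame({
    "employee": np.repeat(np.arange(n_emp), n_waves),
    "time": np.tile(np.arange(n_waves), n_emp).astype(float),
})
# Simulated workload with an initial decline and later recovery plus person-level noise.
df["workload"] = (5 - 0.9 * df["time"] + 0.5 * df["time"] ** 2 - 0.06 * df["time"] ** 3
                  + rng.normal(scale=0.5, size=len(df))
                  + np.repeat(rng.normal(scale=0.3, size=n_emp), n_waves))

# Random-intercept growth curve model with linear, quadratic and cubic time terms.
model = smf.mixedlm("workload ~ time + I(time**2) + I(time**3)",
                    df, groups=df["employee"]).fit()
print(model.summary())
```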
The aim of this study was to investigate employees’ self-reported creativity before and after vacation and to examine the impact of recovery experiences (detachment, relaxation, mastery, meaning, autonomy, affiliation) on changes in creativity. The DRAMMA model of Newman et al. provides the theoretical background of our approach. Longitudinal data were collected in four repeated measurements. The study encompassed data from 274 white-collar workers. Analyses showed that employees subjectively perceive a benefit to their creativity not immediately after their vacation but two weeks later. Detachment was significantly related to lower creativity within persons, while mastery experiences explained differences in creativity between persons. This study provides a detailed picture of changes in creativity around vacations.