Cancer is a complex disease in which resistance to therapies and relapses often pose a serious clinical challenge. The scenario is even more complicated when the cancer type itself is heterogeneous in nature, e.g., lymphoma, a cancer of the lymphocytes that comprises more than 70 different subtypes. Indeed, the treatment options for lymphomas continue to expand. Herein, we provide insights into lymphoma-specific clinical trials based on cytokine-induced killer (CIK) cell therapy and into pre-clinical lymphoma models in which CIK cells have been used along with other synergistic tumor-targeting immune modules to improve their therapeutic potential. From a broader perspective, we highlight that CIK cell therapy has potential and that, in this rapidly evolving landscape of cancer therapies, its optimization (as a personalized therapeutic approach) will be beneficial in lymphomas.
Orešković and Porsdam Mann draw a distinction between ‘fast’ and ‘slow’ science. Whereas the latter involves rigorous and laborious adherence to the scientific method, the former represents the reality that much scientific work faces time pressures which at times force shortcuts. The distinction can be seen to operate in contemporary research into the coronavirus pandemic: whereas the development of vaccines and treatments usually requires years of meticulous laboratory work and several more years of clinical testing, the many millions suffering from the disease need a treatment now. However, by taking too many safeguards off the treatment discovery and testing pipelines, or by refusing to act in accordance with scientific advice, governments risk sacrificing the public’s trust not only in the government’s scientific bona fides but in the scientific process itself. This is a heavy price to pay, argue Orešković and Porsdam Mann, and point to evidence indicating that the success of Germany and Japan in combating COVID-19 can be traced to public trust in science and government, as well as scientifically-informed and respectful national leadership.
Cytokine-induced killer (CIK) cells are an ex vivo expanded heterogeneous cell population with an enriched NK-T phenotype (CD3+CD56+). Due to their convenient and relatively inexpensive expansion capability, together with the low incidence of graft-versus-host disease (GVHD) in allogeneic cancer patients, CIK cells are a promising candidate for immunotherapy. It is well known that natural killer group 2D (NKG2D) plays an important role in CIK cell-mediated antitumor activity; however, it remains unclear whether its engagement alone is sufficient or whether additional co-stimulatory signals are required to activate the CIK cells. Likewise, the role of 2B4 in CIK cells has not yet been identified. Herein, we investigated the individual and cumulative contributions of NKG2D and 2B4 to the activation of CIK cells. Our analysis suggests that (a) NKG2D (not 2B4) is implicated in CIK cell (especially the CD3+CD56+ subset)-mediated cytotoxicity, IFN-γ secretion, E/T conjugate formation, and degranulation; (b) NKG2D alone is sufficient to induce degranulation, IFN-γ secretion, and LFA-1 activation in CIK cells, while 2B4 provides only limited synergy with NKG2D (e.g., in LFA-1 activation); and (c) NKG2D was unable to costimulate CD3. Collectively, we conclude that NKG2D engagement alone suffices to activate CIK cells, thereby strengthening the idea that targeting the NKG2D axis is a promising approach to improving CIK cell therapy for cancer patients. Furthermore, CIK cells exhibit similarities to classical invariant natural killer (iNKT) cells, with deficiencies in 2B4 stimulation and in the costimulation of CD3 with NKG2D. In addition, the current data point to a divergence in receptor function between CIK cells and NK (or T) cells, raising the possibility that molecular modifications (e.g., using chimeric antigen receptor technology) of CIK cells may need to be customized and optimized to maximize their functional potential.
The clear-sky radiative effect of aerosol–radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. Considered are measurements of the short-wave broadband global and diffuse horizontal irradiance with shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are the Modified MAC model (MMAC), the Meteorological Radiation Model (MRM) v6.1, the Meteorological–Statistical solar radiation model (METSTAT), the European Solar Radiation Atlas (ESRA), Heliosat-1, the Center for Environment and Man solar radiation model (CEM), and the simplified Solis model. The definition of aerosol and atmospheric characteristics of the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated to −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the REari at the surface for the 25 DWD stations shows a value of −12.8 ± 5 W m−2 as the average over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterization, the annual cycle of the aerosol radiation effect cannot be reproduced. Out of this set of clear-sky models, the largest level of agreement is shown by the ESRA and MRM v6.1 models.
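The quantity retrieved above, the surface direct radiative effect of aerosols (REari), is at its core a difference between the observed clear-sky irradiance and a modeled aerosol-free (pristine) irradiance. A minimal sketch of that bookkeeping, with hypothetical irradiance values that are not taken from the study:

```python
def mean_reari(observed_clear_sky, pristine_clear_sky):
    """Mean surface REari (W m^-2) over paired clear-sky samples:
    observed irradiance minus the modeled aerosol-free irradiance.
    Negative values mean aerosol reduces the surface irradiance."""
    diffs = [obs - pri for obs, pri in zip(observed_clear_sky, pristine_clear_sky)]
    return sum(diffs) / len(diffs)

# Hypothetical clear-sky global irradiances (W m^-2), with and without aerosol.
observed = [812.0, 640.0, 455.0]
pristine = [825.0, 655.0, 464.0]
reari = mean_reari(observed, pristine)  # negative: aerosol dims the surface
```

In the study, the pristine reference comes from clear-sky models or radiative transfer simulations; the sketch only shows why station-wise annual means such as −12.8 W m−2 arise as averages of many such differences.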
In thyroid carcinoma cells, the soluble β-galactoside-specific lectin galectin-3 is extra- and intracellularly expressed and plays a significant role in thyroid cancer diagnosis. The functional relevance of this molecule, particularly in its extracellular environment, however, warrants further elucidation. To gain insight into this topic, the present study characterized principal functional properties of galectin-3 in 3 commonly used thyroid carcinoma cell lines (BCPAP, Cal62 and FTC133) that express the molecule intra- and extracellularly. Cell-intrinsic galectin-3 harbors a functional carbohydrate recognition domain, as determined by affinity purification. Moreover, cell surface-expressed galectin-3 can be partially removed by treatment with lactose or asialofetuin, but not with sucrose. Thyroid carcinoma cells adhere to substrate-bound galectin-3 in a β-galactoside-specific manner, whereby only cell adhesion, but not cell migration, is promoted. Thus, thyroid tumor cells harbor functionally active galectin-3 that, inter alia, specifically interacts with cell surface-expressed molecular ligands in a β-galactoside-dependent manner, whereby the molecule can at least interfere with cell adhesion. The modulation of the galectin-3 expression level or its ligands in such tumor cells could be of therapeutic interest and needs further experimental clarification.
Turbulent compressible flows are traditionally simulated using explicit time integrators applied to discretized versions of the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time-step size. Exploiting the Lagrangian nature of the Boltzmann equation’s material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which circumvents this restriction. While many lattice Boltzmann methods for compressible flows were restricted to two dimensions due to the enormous number of discrete velocities in three dimensions, the SLLBM uses only 45 discrete velocities. Based on compressible Taylor-Green vortex simulations we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques other than the filtering introduced by the interpolation, even when the time-step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time-step sizes is dictated by physics rather than spatial discretization.
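The Courant-Friedrichs-Lewy restriction mentioned above can be made concrete with a one-line estimate: for an explicit compressible scheme, the admissible time step scales with the grid spacing divided by the fastest signal speed, Δt ≤ C·Δx/(|u| + c). A small illustrative sketch; the numeric values are hypothetical and not from the paper:

```python
def cfl_time_step(dx, max_velocity, sound_speed, cfl=0.5):
    """Largest stable time step for an explicit compressible solver.

    dx: grid spacing, max_velocity: peak flow speed |u|,
    sound_speed: speed of sound c, cfl: Courant number (<= 1).
    """
    return cfl * dx / (max_velocity + sound_speed)

# Example: a Ma = 2 flow (|u| = 2c, with c = 340 m/s) on a 1 mm grid.
dt = cfl_time_step(dx=1e-3, max_velocity=680.0, sound_speed=340.0, cfl=0.5)
```

The semi-Lagrangian scheme described above sidesteps exactly this bound, so its time step is limited by the physics to be resolved rather than by dx.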
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e., multivariate quadratures, which have not been produced by the Gauss-product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set can replace the Gauss-product-rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows to challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma={0.5;1.0;1.5;2.0}, on-lattice simulations with velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
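The Gauss-product construction that the cubature-derived sets are compared against can be sketched directly: a degree-five, three-point one-dimensional Gauss-Hermite rule, tensored over d dimensions, yields 3^d velocities (D2Q9, D3Q27). This point count is exactly what dedicated cubature rules such as D3Q13 undercut at the same algebraic degree. A minimal sketch, illustrative only and not the authors' code:

```python
import itertools
import math

# 1D three-point Gauss-Hermite rule for the weight exp(-x^2/2)/sqrt(2*pi);
# exact for polynomials up to degree 5 -- the basis of the D2Q9/D3Q27 sets.
nodes_1d = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights_1d = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def gauss_product_set(dim):
    """Tensor-product (Gauss-product) velocity set in `dim` dimensions."""
    velocities, weights = [], []
    for idx in itertools.product(range(3), repeat=dim):
        velocities.append(tuple(nodes_1d[i] for i in idx))
        weights.append(math.prod(weights_1d[i] for i in idx))
    return velocities, weights

v2, w2 = gauss_product_set(2)   # 9 velocities: the D2Q9 pattern
v3, w3 = gauss_product_set(3)   # 27 velocities: the D3Q27 pattern
```

The weights sum to one in any dimension, as required of a discrete velocity set; a degree-five cubature rule achieves the same moment conditions in 3D with as few as 13 points, which is the efficiency gain the abstract exploits.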
The identification of energetic materials in containments is an important challenge for analytical methods in the field of safety and security. Opening a package without knowledge of its contents and the resulting hazards involves considerable risk and should be avoided whenever possible. Preferable methods therefore work non-destructively with minimal interaction and are capable of identifying target substances in a containment quickly and reliably. Most spectroscopic methods reach their limits if the target substance is shielded by a covering material. To solve this problem, a combined method is presented: laser drilling followed by identification of the target substance by means of Raman spectroscopic measurements through microscopic bore holes in the covering material. A pulsed laser beam is used both for the drilling process and as an excitation source for Raman measurements in the same optical setup. Results show the ability of this new method to obtain high-quality spectra even when performed through microscopically small bore channels. With suitably chosen laser parameters, the method can even be performed on highly sensitive explosives like triacetone triperoxide (TATP). A further advantage is an observed reduction in unwanted fluorescence signal in the spectral data, resulting from the confocal-like measurement setup with the bore hole acting as an aperture.
Risk-based authentication (RBA) extends authentication mechanisms to make them more robust against account takeover attacks, such as those using stolen passwords. RBA is recommended by NIST and the NCSC to strengthen password-based authentication, and is already used by major online services. Also, users consider RBA to be more usable than two-factor authentication and just as secure. However, users currently obtain RBA's high security and usability benefits at the cost of exposing potentially sensitive personal data (e.g., IP address or browser information). This conflicts with user privacy and requires consideration of user rights regarding the processing of personal data. We outline potential privacy challenges regarding different attacker models and propose improvements to balance privacy in RBA systems. To estimate the properties of the privacy-preserving RBA enhancements in practical environments, we evaluated a subset of them with long-term data from 780 users of a real-world online service. Our results show the potential to increase privacy in RBA solutions. However, it is limited to certain parameters that should guide RBA design to protect privacy. We outline research directions that need to be considered to achieve widespread adoption of privacy-preserving RBA with high user acceptance.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
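The feature-based risk estimation described above can be illustrated with a toy score in which rare or unseen feature values at login time multiply the risk. This is an illustrative sketch only: the class name, the add-one smoothing, and the feature set are invented here, and it is not the paper's implementation or risk classification:

```python
from collections import Counter

class SimpleRBA:
    """Toy risk-based authentication score. Risk rises for feature
    values rarely or never seen in the user's login history."""

    def __init__(self):
        self.history = {}  # feature name -> Counter of observed values

    def record_login(self, features):
        for name, value in features.items():
            self.history.setdefault(name, Counter())[value] += 1

    def risk_score(self, features, smoothing=1.0):
        # Higher score = riskier. Each feature contributes the inverse
        # of the smoothed probability of its observed value.
        score = 1.0
        for name, value in features.items():
            counts = self.history.get(name, Counter())
            total = sum(counts.values())
            p = (counts[value] + smoothing) / (total + smoothing * (len(counts) + 1))
            score *= 1.0 / p
        return score

rba = SimpleRBA()
rba.record_login({"country": "DE", "browser": "Firefox"})
rba.record_login({"country": "DE", "browser": "Firefox"})
familiar = rba.risk_score({"country": "DE", "browser": "Firefox"})
unfamiliar = rba.risk_score({"country": "KP", "browser": "curl"})
```

A real deployment would, as the paper stresses, weight features individually and tune the risk thresholds per service; the sketch only shows why history-based scores separate familiar from unfamiliar logins at all.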
Risk-based authentication (RBA) is an adaptive security measure to strengthen password-based authentication against account takeover attacks. Our study on 65 participants shows that users find RBA more usable than two-factor authentication equivalents and more secure than password-only authentication. We identify pitfalls and provide guidelines for putting RBA into practice.
Many workers experience their jobs as effortful or even stressful, which can result in strain. Although recovery from work would be an adaptive strategy to prevent the adverse effects of work-related strain, many workers face problems finding enough time to rest and to mentally disconnect from work during nonwork time. What goes on in workers' minds after a stressful workday? What is it about their jobs that makes them think about their work? This special issue aims to bridge the gap between research on recovery processes, mainly examined in Occupational Health Psychology, and research on work stress and working hours, often investigated in the field of Human Resource Management. We first summarize conceptual and theoretical streams from both fields of research. We then discuss the contributions of the five special issue papers and conclude with key messages and directions for further research.
Isolation of DNA and RNA
(2021)
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that, to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, a high frequency of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of a continuous experience of positive events. Our study adds a temporal component by highlighting that positive events affect work engagement, particularly in light of recent negative events. Our study informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
The synthesis and characterization of a new class of 1,2,4-oxadiazolylpyridinium salts as a cationic scaffold for fluorinated ionic liquid crystals is herein described. A series of 12 fluorinated heterocyclic salts based on a 1,2,4-oxadiazole moiety, connected through its C(5) or C(3) to an N-alkylpyridinium unit and a perfluoroheptyl chain, and differing in the length of the alkyl chain and the counterion, has been synthesized. Iodide, bromide and bis(trifluoromethane)sulfonimide have been considered as counterions. The synthesis, structure, and liquid crystalline properties of these compounds are discussed on the basis of the tuned structural variables. The thermotropic properties of this series of salts have been investigated by differential scanning calorimetry and polarized optical microscopy. The results showed the existence of an enantiotropic mesomorphic smectic liquid crystalline phase for six bis(trifluoromethane)sulfonimide salts.
New sustainable, environmentally friendly materials for the thermal insulation of buildings are necessary to reduce their carbon footprints. In this study, Miscanthus fiber-reinforced geopolymer composites, foamed with sodium dodecyl sulfate (SDS), were developed using fly ash as a geopolymer precursor. The effects of fiber content, fiber size, curing temperature, foaming agent content, fumed silica specific surface area and fumed silica content on thermal conductivity and compressive strength were evaluated using a Plackett-Burman design of experiments. Furthermore, the microstructure of the geopolymer composites was investigated using X-ray diffraction (XRD), X-ray micro-computed tomography (μCT) and scanning electron microscopy (SEM). The measured characteristic values were in the following ranges: thermal conductivity 0.057 W (m K)−1 to 0.127 W (m K)−1, compressive strength 0.007 MPa to 0.719 MPa and porosity 49 vol% to 76 vol%. The results reveal an increase in thermal conductivity with elevated fiber size and foaming agent content. In contrast, the compressive strength is enhanced by a high fiber content. Additionally, SEM images indicate a good interaction between the fibers and the geopolymer matrix, because nearly the whole fiber surface is covered by the geopolymer.
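The Plackett-Burman screening approach used above assigns two-level factor settings from the columns of an orthogonal ±1 matrix, so that many factors can be screened in few runs. For run counts that are powers of two, such a matrix can be obtained from Sylvester's Hadamard construction; the sketch below is illustrative only, since the study's actual design matrix is not given in the abstract:

```python
def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of two) via Sylvester's
    construction; its columns are pairwise orthogonal +/-1 vectors."""
    assert n >= 1 and (n & (n - 1)) == 0, "n must be a power of two"
    h = [[1]]
    while len(h) < n:
        # Block construction: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

H = sylvester_hadamard(8)
# An 8-run, two-level screening design for up to 7 factors:
# drop the constant all-ones column, keep the remaining columns.
design = [row[1:] for row in H]
```

Each remaining column encodes the high/low setting of one factor (e.g. fiber content, foaming agent content) per run; the orthogonality of the columns is what lets the main effects be estimated independently.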
Aim: To understand how the transcription factors Pdr1 and Pdr3, belonging to the pleiotropic drug resistance system, are activated and regulated after chemical toxins are introduced to the cell in the model organism Saccharomyces cerevisiae.
Methods: A series of molecular methods was applied using different strains of S. cerevisiae over-expressing the proteins of interest as a eukaryotic cell model. The chemical stress introduced to the cell is represented by menadione. Results were obtained by protein detection and analysis. Additionally, the regulation of DNA binding by the transcriptional activators after stimulation was quantified using chromatin immunoprecipitation, employing epitope-tagged factors and real-time qPCR.
Results: Our results indicated higher expression levels of the Pdr1 transcription factor compared to its homolog Pdr3 after treatment with menadione. The yeast-cell defence system was tested against various organic solvents to exclude the possibility that their presence affected the results. The results indicate that Pdr1 is most abundant 30 minutes after the beginning of the treatment, whereas 240 minutes after the treatment the function of the transcription factor has faded. Pdr1 binding to the PDR5 and SNQ2 promoters, both of which are activated by Pdr1, peaks around the same time, or more precisely 40 minutes after the start of the treatment.
Conclusion: A tendency toward Pdr1 reduction after its activation by menadione was detected. One possibility is that Pdr1, after recognizing the xenobiotic menadione, is removed by a degradation mechanism. Given that Pdr1 directly binds the xenobiotic molecule, its destruction might help the cells to remove toxic levels of menadione. It is possible that overexpressing the part of Pdr1 that recognizes menadione was alone sufficient to detoxify and hence produce tolerance towards menadione.
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and of pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and produce pre-trained neural networks trained on this dataset, meant to fill the gap of missing pre-trained models for sonar images. We train ResNet 20, MobileNets, DenseNet121, SqueezeNet, MiniXception, and an autoencoder, over several input image sizes, from 32 x 32 to 96 x 96, on the Marine Debris Turntable dataset. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and on another dataset captured using a Gemini 720i sonar. Our results show that in both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset validates that the features transfer to other kinds of sonar sensors. We expect the community to benefit from the public release of our pre-trained models and the turntable dataset.
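Low-shot classification on top of fixed pre-trained features, as evaluated above, can be as simple as a nearest-centroid classifier over the embedding space: average the few labeled embeddings per class, then assign new samples to the nearest class mean. A minimal sketch with invented 2D stand-ins for sonar embeddings (the real models output much higher-dimensional features, and the paper does not prescribe this particular classifier):

```python
import math

def nearest_centroid_fit(features, labels):
    """Fit a nearest-centroid classifier on fixed, pre-trained features.
    features: list of equal-length vectors; labels: parallel class labels."""
    sums, counts = {}, {}
    for vec, lab in zip(features, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [x / counts[lab] for x in acc] for lab, acc in sums.items()}

def nearest_centroid_predict(centroids, vec):
    def dist(lab):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(centroids[lab], vec)))
    return min(centroids, key=dist)

# Toy 2D "embeddings" standing in for sonar features (hypothetical values).
feats = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
labs = ["bottle", "bottle", "tire", "tire"]
centroids = nearest_centroid_fit(feats, labs)
pred = nearest_centroid_predict(centroids, [0.05, 0.05])  # -> "bottle"
```

With only 10-30 samples per class, such simple classifiers over good pre-trained features are often competitive with fine-tuning, which is the effect the evaluation above measures.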
The Offshore Wind Energy Act (Windenergie-auf-See-Gesetz) and Constitutional Law: A Completion in Three Stages
(2021)
The Opening of the Commercial Partnership (Personenhandelsgesellschaft) to the Liberal Professions from the Perspective of Professional Law
(2021)
Applied privacy research has so far focused mainly on consumer relations in private life. Privacy in the context of employment relationships is less well studied, although it is subject to the same legal privacy framework in Europe. The European General Data Protection Regulation (GDPR) has strengthened employees' right to privacy by obliging that employers provide transparency and intervention mechanisms. For such mechanisms to be effective, employees must have a sound understanding of their functions and value. We explored possible boundaries by conducting a semi-structured interview study with 27 office workers in Germany and elicited mental models of the right to informational self-determination, which is the European proxy for the right to privacy. We provide insights into (1) perceptions of different categories of data, (2) familiarity with the legal framework regarding expectations for privacy controls, and (3) awareness of data processing, data flow, safeguards, and threat models. We found that legal terms often used in privacy policies to describe categories of data are misleading. We further identified three groups of mental models that differ in their privacy control requirements and willingness to accept restrictions on their privacy rights. We also found ignorance about actual data flow, processing, and safeguard implementation. Participants' mindsets were shaped by their faith in organizational and technical measures to protect privacy. Employers and developers may benefit from our contributions by understanding the types of privacy controls desired by office workers and the challenges to be considered when conceptualizing and designing usable privacy protections in the workplace.
The project addresses a problem in the field of medical technology (an NRW funding priority): the development of a patient-tailored tissue replacement material, a bone surrogate. Critical-size bone defects represent a significant health problem that cannot be treated, or cannot be treated efficiently, with the bone replacement materials in common use today. Critical bone defects are treated with artificial biomaterials that have so far shown insufficient regenerative capacity.
The dataset contains the following data from successful and failed executions of the Toyota HSR robot placing a book on a shelf:
- RGB images from the robot's head camera
- Depth images from the robot's head camera
- Rendered images of the robot's 3D model from the point of view of the robot's head camera
- Force-torque readings from a wrist-mounted force-torque sensor
- Joint efforts, velocities and positions
- Extrinsic and intrinsic camera calibration parameters
- Frame-level anomaly annotations
The anomalies that occur during execution include:
- The manipulated book falling down
- Books on the shelf being disturbed significantly
- Camera occlusions
- The robot being disturbed by an external collision
The dataset is split into train, validation and test sets with the following numbers of trials:
- Train: 48 successful trials
- Validation: 6 successful trials
- Test: 60 anomalous trials and 7 successful trials
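The split above can be encoded and iterated over as follows. This is a hypothetical sketch: the directory layout (`<root>/<split>/<label>/<trial>`) is an assumption and the released dataset's actual structure may differ.

```python
from pathlib import Path

# Trial counts per split, as documented above.
SPLITS = {
    "train": {"successful": 48, "anomalous": 0},
    "validation": {"successful": 6, "anomalous": 0},
    "test": {"successful": 7, "anomalous": 60},
}


def list_trials(root, split):
    """Yield (trial_dir, is_anomalous) pairs for one split.

    Assumes (hypothetically) that each trial is a subdirectory
    under <root>/<split>/<label>/, where label is one of
    'successful' or 'anomalous'.
    """
    for label in ("successful", "anomalous"):
        for trial in sorted(Path(root, split, label).glob("*")):
            yield trial, label == "anomalous"
```

Note that only the test set contains anomalous trials, so any anomaly detector must be trained on successful executions alone.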
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
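The core of the scoring step described above, computing an anomaly score from the discrepancy between predicted and observed motion, can be sketched in simplified form. This stand-in uses the mean endpoint error between two optical-flow fields; the probabilistic U-Net predictor and the kinematic camera/body motion models from the paper are omitted.

```python
import numpy as np


def anomaly_score(flow_pred, flow_obs):
    """Per-frame anomaly score from optical-flow prediction error.

    Both inputs are H x W x 2 arrays of per-pixel (dx, dy) motion.
    Returns the mean endpoint error: large values indicate that the
    observed motion deviates from the nominal (predicted) motion,
    e.g. a falling book or an external disturbance.
    """
    epe = np.linalg.norm(flow_obs - flow_pred, axis=-1)  # per-pixel endpoint error
    return float(epe.mean())
```

Thresholding this score per frame yields a binary anomaly decision, and sweeping the threshold produces the ROC and precision-recall curves used in the evaluation.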
Intimate swabs taken for examination in sexual assault cases typically yield mixtures of sperm and epithelial cell types. While powerful, differential extraction protocols that resolve such cell type mixtures by separately lysing epithelial cells and spermatozoa can still prove ineffective, in particular if only a few sperm cells are present or if swabs contain sperm from more than one individual, leading to complex low-level DNA mixtures. One way to avoid such mixtures is to analyse single micromanipulated sperm cells. However, the quantity of DNA from a single sperm cell is not sufficient for conventional STR analysis. Here, we describe a simple method for micromanipulating individual sperm cells from intimate swabs and show that whole genome amplification can generate sufficient amounts of DNA from single cells for subsequent DNA profiling. We recovered over 80% of alleles of haploid autosomal STR profiles from the majority of individual sperm cells. Furthermore, we demonstrate that in mixtures of sperm from two contributors, Y-STR and X-STR profiles of individual sperm cells can be used to sort the haploid autosomal profiles and develop the diploid consensus STR profiles of the individual donors. Finally, by analysing single sperm cells from mock sexual assault swabs with one or two sperm donors, we showed that our protocols enabled the identification of the unknown male contributors.