The molecular weight properties of lignins are one of the key elements that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of ¹H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53, Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross-validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9% and 14–16% were achieved for all parameters with the models from the ¹H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the ¹H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
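To illustrate the kind of chemometric workflow the abstract describes, the following sketch fits a partial least squares (PLS) regression to binned ¹H NMR spectra and reports a relative cross-validation error. The data are synthetic placeholders, and the number of components and bins are assumptions, not the authors' settings.

```python
# Illustrative sketch (not the authors' exact pipeline): predicting lignin
# molecular weight (Mw) from binned 1H NMR spectra with PLS regression and
# cross-validation, as described qualitatively in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(53, 200))   # 53 samples x 200 spectral bins (placeholder data)
y = 5000 + X[:, :5].sum(axis=1) * 300 + rng.normal(scale=100, size=53)  # synthetic Mw

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

# Relative cross-validation error, analogous to the ~7-9 % reported for the 1H NMR models
rel_cv_error = np.sqrt(np.mean((y - y_cv) ** 2)) / y.mean() * 100
print(f"CV error: {rel_cv_error:.1f} %")
```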
The synthesis and characterization of a new class of 1,2,4-oxadiazolylpyridinium salts as a cationic scaffold for fluorinated ionic liquid crystals is herein described. A series of 12 fluorinated heterocyclic salts based on a 1,2,4-oxadiazole moiety, connected through its C(5) or C(3) to an N-alkylpyridinium unit and a perfluoroheptyl chain, differing in the length of the alkyl chain and counterions, has been synthesized. As counterions, iodide, bromide and bis(trifluoromethane)sulfonimide have been considered. The synthesis, structure, and liquid crystalline properties of these compounds are discussed on the basis of the tuned structural variables. The thermotropic properties of this series of salts have been investigated by differential scanning calorimetry and polarized optical microscopy. The results showed the existence of an enantiotropic mesomorphic smectic liquid crystalline phase for six bis(trifluoromethane)sulfonimide salts.
The changing world poses many challenges to public policies, including social policies – among them social protection policies, which are the main focus of this handbook. Here, in this part of the handbook, we take on a number of these challenges: demographic changes and their interaction with social protection policies; the roles of social protection in coping with the consequences of the COVID-19 pandemic (both topics discussed in Chapters 39 and 43 by Woodall); the challenges of globalisation (discussed in Chapter 40 by Betz) and the limitations it imposes on state sovereignty and its ability to decide on the size of publicly funded programmes, in particular social protection; challenges to labour markets and effective social protection coverage posed by the automation and digitalisation of businesses (discussed in Chapter 41 by Gassmann); and, last but not least, potential roles of social protection in facilitating the population's adjustments to climate change (discussed in Chapter 42 by Malerba).
What does the right to social security mean if the majority of the world's population still lives in overwhelming insecurity? What is the significance and role of international social security standards, developed by the International Labour Organization (ILO) over decades? What are the economic, labour market and political factors determining differences between countries with respect to population coverage by social security schemes and systems? How can past and recent experiences of countries in the Global North and in the Global South be used to expand social security coverage, and what role can be played by the new standard in this area – the ILO Social Protection Floors Recommendation 202, adopted in 2012?
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
The year 2020 was dominated by the Corona pandemic, which abruptly changed everyday life at the university. Bonn-Rhein-Sieg University of Applied Sciences (H-BRS) succeeded not only in maintaining operations under these difficult circumstances, but also in enriching them with new ideas and insights. This difficult year 2020 has opened up new perspectives, especially for teaching, but also for research and administration. This annual report, to which we have given the motto "Walking - knowing the point of view, taking direction, showing attitude", tells of this. It shows which paths the H-BRS has taken in the past year. Some routes were rocky, and some slopes could only be overcome with combined efforts. Often, however, new terrain was opened up. The aim was always to find answers to the multi-layered, complex questions of our time - whether with regard to digitalisation, climate change or the social responsibility of science.
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time as well as plant component (leaf versus stem), correlations between structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented between lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with respect to their similarities and differences in monolignol ratio and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest time and/or season. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70% and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum being M. sinensis Sin2 with over 30%). In the leaf-derived lignins, the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus. Miscanthus crops are shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
Design of a Medium Voltage Generator with DC-Cascade for High Power Wind Energy Conversion Systems
(2021)
This paper presents a new concept to generate medium voltage (MV) in wind power applications and thereby avoid an additional transformer. To this end, the generator must be redesigned under additional constraints, together with a new topology for the power rectifier system that connects multiple low voltage (LV) power rectifiers in series and parallel to increase the DC output voltage. This combination of parallel and series connection of rectifiers is further referred to as a DC-cascade. With the resulting DC-cascade, a medium output voltage is achieved with low voltage rectifiers and without a bulky transformer. This approach reduces the effort required to achieve medium DC voltage with a simple rectifier system. In this context, a suitable DC-cascade control is presented and verified with a laboratory test setup. A gearless synchronous generator, which is highly segmented so that each segment can be connected to its own power rectifier, is investigated. Due to the mixed AC and DC voltage given by the DC-cascade structure, the design of the generator insulation becomes more demanding, which influences the copper fill factor and the design of the cooling system. A design strategy for the overall generator design is developed considering the new boundary conditions.
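The voltage scaling behind the DC-cascade can be illustrated with simple arithmetic: series-connected rectifier outputs add their DC voltages, while parallel strings add current. All numbers below are hypothetical and chosen only to show the principle.

```python
# Hypothetical numbers for illustration: n_series low-voltage rectifier outputs
# connected in series add up their DC voltages; parallel strings share current.
v_rect = 1000.0   # DC output per low-voltage rectifier in volts (assumed)
n_series = 10     # rectifiers per series string
n_parallel = 4    # parallel strings
i_string = 200.0  # DC current per string in amperes (assumed)

v_dc = n_series * v_rect        # medium-voltage DC level: 10 kV
i_dc = n_parallel * i_string    # total DC current: 800 A
p_dc = v_dc * i_dc              # transferred power: 8 MW
print(f"{v_dc/1e3:.0f} kV, {i_dc:.0f} A, {p_dc/1e6:.1f} MW")
```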
Intercultural Management
(2021)
Ghana suffers from frequent power outages, which can be compensated by off-grid energy solutions. Photovoltaic-hybrid systems are becoming more and more important for rural electrification due to their potential to offer a clean and cost-effective energy supply. However, uncertainties related to the prediction of electrical loads and solar irradiance result in inefficient system control and can lead to an unstable electricity supply, whereas high reliability is vital for applications within the health sector. Model predictive control (MPC) algorithms present a viable option to tackle those uncertainties compared to rule-based methods, but strongly rely on the quality of the forecasts. This study tests and evaluates (a) a seasonal autoregressive integrated moving average (SARIMA) algorithm, (b) an incremental linear regression (ILR) algorithm, (c) a long short-term memory (LSTM) model, and (d) a customized statistical approach for electrical load forecasting on real load data of a Ghanaian health facility, considering initially limited knowledge of load and pattern changes through the implementation of incremental learning. The correlation of the electrical load with exogenous variables was determined to map out possible enhancements within the algorithms. Results show that all algorithms achieve high accuracy, with a median normalized root mean square error (nRMSE) <0.1, and differ in robustness towards load-shifting events, gradients, and noise. While the SARIMA algorithm and the linear regression model show extreme error outliers of nRMSE >1, the LSTM model and the customized statistical approach perform better, with a median nRMSE of 0.061 and a stable error distribution with a maximum nRMSE of <0.255. This study concludes in favor of the LSTM model and the statistical approach with regard to MPC applications within photovoltaic-hybrid system solutions in the Ghanaian health sector.
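A minimal sketch of one of the evaluated approaches: fitting a SARIMA model to an hourly load series and scoring the one-day-ahead forecast with a range-normalized RMSE. The data, model orders, and normalization choice are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch (synthetic data, assumed orders): fit a SARIMA model to an
# hourly load series and score the forecast with the nRMSE metric used in the study.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
t = np.arange(24 * 28)                              # four weeks, hourly resolution
load = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=3, size=t.size)
train, test = load[:-24], load[-24:]                # hold out the last day

model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
forecast = model.fit(disp=False).forecast(steps=24)

# Range-based normalization is one common choice for nRMSE
nrmse = np.sqrt(np.mean((test - forecast) ** 2)) / (test.max() - test.min())
print(f"nRMSE: {nrmse:.3f}")   # the paper reports median nRMSE < 0.1
```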
In contrast to the German power supply, the energy supply in many West African countries is very unstable. Frequent power outages are not uncommon. Especially for critical infrastructures, such as hospitals, a stable power supply is vital. To compensate for the power outages, diesel generators are often used. In the future, these systems will increasingly be supplemented by PV systems and storage, so that the generator will have to be used less, or not at all, when needed. For the design and operation of such systems, it is necessary to better understand the atmospheric variability of PV power generation. For example, there are large variations between rainy and dry seasons, and between days with high and low dust levels, caused by sandstorms (harmattan) or urban air pollution.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated if HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants indicated that the HeadJoystick could be more physically fatiguing during prolonged use. Overall, participants felt more engaged with HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
Risk-based authentication (RBA) aims to strengthen password-based authentication rather than replacing it. RBA does this by monitoring and recording additional features during the login process. If feature values at login time differ significantly from those observed before, RBA requests an additional proof of identification. Although RBA is recommended in the NIST digital identity guidelines, it has so far been used almost exclusively by major online services. This is partly due to a lack of open knowledge and implementations that would allow any service provider to roll out RBA protection to its users. To close this gap, we provide a first in-depth analysis of RBA characteristics in a practical deployment. We observed N=780 users with 247 unique features on a real-world online service for over 1.8 years. Based on our collected data set, we provide (i) a behavior analysis of two RBA implementations that were apparently used by major online services in the wild, (ii) a benchmark of the features to extract a subset that is most suitable for RBA use, (iii) a new feature that has not been used in RBA before, and (iv) factors which have a significant effect on RBA performance. Our results show that RBA needs to be carefully tailored to each online service, as even small configuration adjustments can greatly impact RBA's security and usability properties. We provide insights on the selection of features, their weightings, and the risk classification in order to benefit from RBA after a minimum number of login attempts.
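As a conceptual illustration of the mechanism described above (not the implementation analyzed in the paper), the sketch below scores a login by how unusual its feature values are relative to the user's history; a high score would trigger a request for additional proof of identity. Feature names and the smoothing scheme are illustrative.

```python
# Conceptual sketch of the RBA idea described above (not a production design):
# compare feature values seen at login with the user's history and require an
# additional proof of identity when the login looks sufficiently unusual.
from collections import Counter

def risk_score(history: list[dict], login: dict) -> float:
    """Higher score = login features deviate more from past observations."""
    score = 1.0
    for feature, value in login.items():
        counts = Counter(h.get(feature) for h in history)
        # Smoothed probability of seeing this value for this user
        p = (counts[value] + 1) / (len(history) + len(counts) + 1)
        score *= 1 / p                      # rare values increase the risk
    return score

history = [{"country": "DE", "browser": "Firefox"}] * 20
print(risk_score(history, {"country": "DE", "browser": "Firefox"}))  # low risk
print(risk_score(history, {"country": "US", "browser": "Chrome"}))   # high risk -> extra proof
```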
Suitability of Current Sensors for the Measurement of Switching Currents in Power Semiconductors
(2021)
This paper investigates the impact of current sensors on the measurement of transient currents in fast-switching power semiconductors in a double pulse test (DPT) environment. We review previous research that assesses the influence of current sensors on a DPT circuit through mathematical modeling. The developed selection aids can be used to identify suitable current sensors for transient current measurements of fast-switching power semiconductors and to estimate the error introduced by their insertion into the DPT circuit. Afterwards, this analysis is extended by including further elements from real DPT applications to increase the consistency of the error estimation with practical situations and setups. Both methods are compared and their individual advantages and drawbacks are discussed. Finally, a recommendation on when to use which method is derived.
In Robot-Assisted Therapy for children with Autism Spectrum Disorder, the therapists' workload is increased due to the necessity of controlling the robot manually. The solution to this problem is to increase the level of autonomy of the system: the robot should interpret and adapt to the behaviour of the child under therapy. The problem that we are addressing is to develop a behaviour model for the robot's decision-making process, which will learn how to react adequately to certain child reactions. We propose the use of the reinforcement learning technique for this task, where feedback for learning is obtained from the therapist's evaluation of a robot's behaviour.
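A toy sketch of the proposed idea under stated assumptions: tabular Q-learning in which the state is the child's observed reaction, the action is the robot's next behaviour, and the reward comes from the therapist's evaluation. The state and action labels are invented for illustration.

```python
# Toy sketch: tabular Q-learning where the "state" is the child's observed
# reaction, the "action" is the robot's next behaviour, and the reward is the
# therapist's evaluation. All labels here are illustrative placeholders.
import random
from collections import defaultdict

states = ["engaged", "distracted", "distressed"]
actions = ["encourage", "pause", "change_game"]
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state):
    if random.random() < epsilon:                       # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])    # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One interaction step: the therapist scores the robot's behaviour in [-1, 1]
s = "distracted"
a = choose_action(s)
therapist_reward = 1.0          # e.g. behaviour judged appropriate
update(s, a, therapist_reward, "engaged")
```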
The promotion of sustainable packaging is part of the European Green Deal and plays a key role in the EU's social and political strategy. One option is the use of renewable resources and biomass waste as raw materials for polymer production. Lignocellulose biomass from annual and perennial industrial crops and agricultural residues is a major source of polysaccharides, proteins, and lignin, and can also be used to obtain plant-based extracts and essential oils. Therefore, these biomasses are considered as potential substitutes for fossil-based resources. Here, the status quo of bio-based polymers is discussed and evaluated in terms of properties related to packaging applications such as gas and water vapor permeability as well as mechanical properties. So far, their practical use is still restricted due to lower performance in fundamental packaging functions that directly influence food quality and safety, the length of shelf life and thus the amount of food waste. Besides bio-based polymers, this review focuses on plant extracts as active packaging agents. Incorporating extracts of herbs, flowers, trees, and their fruits is essential to achieve the desired material properties capable of prolonging food shelf life. Finally, the adoption potential of packaging based on polymers from renewable resources is discussed from a bioeconomy perspective.
Since its advent, the sustainability effects of the modern sharing economy have been the subject of controversial debate. While its potential was initially discussed in terms of post-ownership development with a view to decentralizing value creation and increasing social capital and environmental relief through better utilization of material goods, critics have become increasingly loud in recent years. Many people hoped that carsharing could lead to a development away from ownership towards flexible use and thus more resource-efficient mobility. However, carsharing remains a niche, and while many people like the idea in general, they appear to consider carsharing to not be advantageous as a means of transport in terms of cost, flexibility, and comfort. A key innovation that could elevate carsharing from its niche existence in the future is autonomous driving. This technology could help shared mobility gain a new boost by allowing it to overcome the weaknesses of the present carsharing business model. Flexibility and comfort could be greatly enhanced with shared autonomous vehicles (SAVs), which could simultaneously offer benefits in terms of low cost and better use of time without the burden of vehicle ownership. However, it is not the technology itself that is sustainable; rather, sustainability depends on the way in which this technology is used. Hence, it is necessary to make a prospective assessment of the direct and indirect (un)sustainable effects before or during the development of a technology in order to incorporate these findings into the design and decision-making process. Transport research has been intensively analyzing the possible economic, social, and ecological consequences of autonomous driving for several years. However, research lacks knowledge about the consequences to be expected from shared autonomous vehicles. Moreover, previous findings are mostly based on the knowledge of experts, while potential users are rarely included in the research. To address this gap, this thesis contributes to answering the questions of what the ecological and social impacts of the expected concept of SAVs will be. In my thesis, I study in particular the ecological consequences of SAVs in terms of the potential modal shifts they can induce as well as their social consequences in terms of potential job losses in the taxi industry. To this end, I apply a user-oriented, mixed-method technology assessment approach that complements existing, expert-oriented technology assessment studies on autonomous driving, which have so far been dominated by scenario analyses and simulations. To answer the two questions, I triangulated the method of scenario analysis with qualitative and quantitative user studies. The empirical studies provide evidence that the automation of mobility services such as carsharing may, to a small extent, foster a shift from the private vehicle towards mobility on demand. However, the findings also indicate that rebound effects are to be expected: significantly more users are expected to move away from the more sustainable public transportation, leading to an overcompensation of the positive modal shift effects by the negative ones. The results show that a large proportion of the taxi trips carried out can be replaced by SAVs, making the profession of taxi driver somewhat obsolete.
However, interviews with taxi drivers themselves revealed that the services provided by the drivers go beyond mere transport, so that even in the age of SAVs the need for human assistance will continue, though to a smaller extent. Given these findings, I see potential for action at different levels: users, mobility service providers, and policymakers. Regarding environmental and social impacts resulting from the use of SAVs, there is a strong conflict of objectives among users, potential SAV operators, and sustainable environmental and social policies. In order to strengthen the positive effects and counteract the negative effects, such as unintended modal shifts, policies may soon have to regulate the design of SAVs and their introduction. A key starting point for transport policy is to promote the use of more environmentally friendly means of transport, in particular by making public transportation attractive and, if necessary, by making the use of individual motorized mobility less attractive. The taxi industry must face the challenges of automation by opening up to these developments and focusing on service orientation, thereby strengthening the drivers' main unique selling point compared to automated technology. Assessing the impacts of technologies that do not yet exist generally involves great uncertainty. With the results of my work, however, I would like to argue that a user-oriented technology assessment can usefully complement the findings of classic methods of technology assessment and can iteratively inform the development process regarding technology and regulation.
This exciting and innovative Handbook provides readers with a comprehensive and globally relevant overview of the instruments, actors and design features of social protection systems, as well as their application and impacts in practice. It is the first book that centres around system building globally, a theme that has gained political importance yet has received relatively little attention in academia.
Managing the Work-Nonwork Interface: Personal Social Media Use as a Tool to Craft Boundaries?
(2021)
Data has emerged as a central success factor for companies seeking to benefit from digitization. However, the skills needed to successfully create value from data, especially at the management level, are not always profound. To address this problem, several canvas models have already been designed. Canvas models are usually created to write down an idea in a structured way to promote transparency and traceability. However, some existing data science canvas models mainly address developers and are thus unsuitable for decision-makers and for communication within interdisciplinary teams. Based on a literature review, we identified influencing factors that are essential for the success of data science projects. With the information gained, the Data Science Canvas was developed in an expert workshop and finally evaluated by practitioners to find out whether such an instrument could support data-driven value creation.
Many people do not consume as much healthy food as recommended. Nudging has been identified as a promising intervention strategy to increase the consumption of healthy food. The present study analyzed the effects of three body shape nudges (thin, thick, or Giacometti artwork) on food ordering and assessed the mediating role of being aware of the nudge. A total of 686 students and 218 employees of a German university participated in an online experimental study. After randomization, participants visited a realistic online cafeteria and composed a meal for themselves. Under experimental conditions, participants were exposed to one out of three nudges while choosing dishes: (1) a thin body shape, (2) a thick body shape, and (3) the Giacometti artwork nudge. The Giacometti nudge resulted in more orders for salad among employees. The thin and thick body shape nudges did not change dish orders. Awareness of the nudge mediated the number of calories ordered when using the Giacometti or thin body shape nudges. These findings provide useful insights for health interventions in the occupational and public health sectors using nudges. Our study contributes to the research on the Giacometti nudge by showing its effectiveness when participants are aware of it (it is effective under conditions where it is consciously perceived).
Software developers build complex systems using plenty of third-party libraries. Documentation is key to understand and use the functionality provided via the libraries' APIs. Therefore, functionality is the main focus of contemporary API documentation, while cross-cutting concerns such as security are almost never considered at all, especially when the API itself does not provide security features. The documentation of JavaScript libraries for use in web applications, for example, does not specify how to add or adapt a Content Security Policy (CSP) to mitigate content injection attacks like Cross-Site Scripting (XSS). This is unfortunate, as security-relevant API documentation might have an influence on secure coding practices and prevailing major vulnerabilities such as XSS. For the first time, we study the effects of integrating security-relevant information in non-security API documentation. For this purpose, we took CSP as an exemplary study object and extended the official Google Maps JavaScript API documentation with security-relevant CSP information in three distinct manners. Then, we evaluated the usage of these variations in a between-group eye-tracking lab study involving N=49 participants. Our observations suggest: (1) Developers are focused on elements with code examples. They mostly skim the documentation while searching for a quick solution to their programming task. This finding gives further evidence to results of related studies. (2) The location where CSP-related code examples are placed in non-security API documentation significantly impacts the time it takes to find this security-relevant information. In particular, the study results showed that the proximity to functionality-related code examples in documentation is a decisive factor. (3) Examples significantly help to produce secure CSP solutions. (4) Developers have additional information needs that our approach cannot meet.
Overall, our study contributes to a first understanding of the impact of security-relevant information in non-security API documentation on CSP implementation. Although further research is required, our findings emphasize that API producers should take responsibility for adequately documenting security aspects and thus supporting the sensibility and training of developers to implement secure systems. This responsibility also holds in seemingly non-security relevant contexts.
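For readers unfamiliar with CSP, the sketch below shows the kind of security-relevant information the study added to the documentation: a policy that a web application embedding the Google Maps JavaScript API might send. The host list is an assumption for illustration and must be checked against the official documentation.

```python
# Illustrative only: a Content-Security-Policy that a web application loading
# the Google Maps JavaScript API might send. The exact host list must be taken
# from the official documentation; the hosts below are assumptions.
CSP = (
    "default-src 'self'; "
    "script-src 'self' https://maps.googleapis.com; "
    "img-src 'self' data: https://maps.googleapis.com https://maps.gstatic.com; "
    "style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; "
    "font-src https://fonts.gstatic.com"
)

def add_security_headers(response_headers: dict) -> dict:
    """Attach the CSP so browsers block scripts from unlisted origins (mitigating XSS)."""
    response_headers["Content-Security-Policy"] = CSP
    return response_headers
```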
Threats to passwords are still very relevant due to attacks like phishing or credential stuffing. One way to solve this problem is to remove passwords completely. User studies on passwordless FIDO2 authentication using security tokens demonstrated the potential to replace passwords. However, widespread acceptance of FIDO2 depends, among other things, on how user accounts can be recovered when the security token becomes permanently unavailable. For this reason, we provide a heuristic evaluation of 12 account recovery mechanisms regarding their properties for FIDO2 passwordless authentication. Our results show that the currently used methods have many drawbacks. Some even rely on passwords, defeating the very purpose of passwordless authentication. Still, our evaluation identifies promising account recovery solutions and provides recommendations for further studies.
Less is Often More: Header Whitelisting as Semantic Gap Mitigation in HTTP-Based Software Systems
(2021)
The web is the most widespread digital system in the world and is used for many crucial applications. This makes web application security extremely important and, although there are already many security measures, new vulnerabilities are constantly being discovered. One reason for some of the recent discoveries lies in the presence of intermediate systems (e.g. caches, message routers, and load balancers) on the way between a client and a web application server. The implementations of such intermediaries may interpret HTTP messages differently, which leads to a semantically different understanding of the same message. This so-called semantic gap can cause weaknesses in the entire HTTP message processing chain.
In this paper we introduce the header whitelisting (HWL) approach to address the semantic gap in HTTP message processing pipelines. The basic idea is to normalize and reduce an HTTP request header to the minimum required fields using a whitelist before processing it in an intermediary or on the server, and then restore the original request for the next hop. Our results show that HWL can avoid misinterpretations of HTTP messages in the different components and thus prevent many attacks rooted in a semantic gap including request smuggling, cache poisoning, and authentication bypass.
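A minimal sketch of the HWL idea under the assumptions stated in the text: incoming request headers are normalized and reduced to a whitelist before an intermediary processes them, while the original request is kept aside and restored for the next hop. The whitelist entries are illustrative.

```python
# Minimal sketch of the HWL idea described above: reduce an incoming request's
# headers to a whitelist before an intermediary processes them, keep the
# original aside, and restore it for the next hop. Field names are illustrative.
WHITELIST = {"host", "content-length", "content-type", "transfer-encoding"}

def whitelist_headers(headers: dict) -> tuple[dict, dict]:
    """Return (reduced, original): reduced holds normalized, whitelisted fields only."""
    reduced = {
        name.strip().lower(): value.strip()
        for name, value in headers.items()
        if name.strip().lower() in WHITELIST
    }
    return reduced, headers

def restore_headers(reduced: dict, original: dict) -> dict:
    """Hand the untouched original request on to the next hop."""
    return original

raw = {"Host": "example.org", "Content-Length ": "13", "X-Evil": "smuggle"}
reduced, original = whitelist_headers(raw)   # the intermediary sees only safe fields
```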
XML Signature Wrapping (XSW) has remained a relevant threat to web services for the past 15 years. Using the Personal Health Record (PHR), which is currently under development in Germany, we investigate a current SOAP-based web services system as a case study. In doing so, we highlight several deficiencies in defending against XSW. Using this real-world contemporary example as motivation, we introduce a guideline for more secure XML signature processing that provides practitioners with easier access to the effective countermeasures identified in the current state of research.
Risk-based authentication (RBA) is an adaptive security measure to strengthen password-based authentication against account takeover attacks. Our study on 65 participants shows that users find RBA more usable than two-factor authentication equivalents and more secure than password-only authentication. We identify pitfalls and provide guidelines for putting RBA into practice.
DNA Sequencing
(2021)
At the end of 2019, about 4.1 billion people on earth were using the internet. Because people entrust their most intimate and private data to their devices, European legislation has declared the protection of natural persons in relation to the processing of personal data a fundamental right. In 2018, 23 million people worldwide were developing software and thus bore the responsibility of implementing data security and privacy. However, the implementation of data and application security is a challenge, as evidenced by over 41,000 documented security incidents in 2019. Probably the most basic, powerful, and frequently used tools software developers work with are Application Programming Interfaces (APIs). Security APIs are essential tools to bring data and application security into software products. However, research results have revealed that usability problems of security APIs lead to insecure API use during development. Basic security requirements such as securely stored passwords, encrypted files or secure network connections can become an error-prone challenge and in consequence lead to unreliable or missing security and privacy. Because software developers hold a key position in the development processes of software, security tools that do not operate properly pose a risk to all people using software. However, little is known about the requirements of developers to address the problem and improve the usability of security APIs. This thesis is one of the first to examine the usability of security APIs. To this end, the author examines to what extent information flows can support software developers in using security APIs to implement secure software by conducting empirical studies with software developers. This thesis has contributed fundamental results that can be used in future work to identify and improve important information flows in software development. The studies have clearly shown that developer-tailored information flows with adapted security-relevant content have a positive influence on the correct implementation of security. However, the results have also led to the conclusion that API producers need to pay special attention to the channels through which they direct information flows to API users and how the information is designed to be useful for them. In many cases, it is not enough to provide security-relevant information via the documentation only. Here, proactive methods like the API security advice proposed by this thesis achieve significantly better results in terms of findability and actionable support. To further increase the effectiveness of the API security advice, this thesis developed a cryptographic API warning design for the terminal by adopting a participatory design approach with experienced software developers. However, it also became clear that a single information flow can only provide support up to a certain extent. As observed in two studies conducted in complex API environments in web development, multiple complementary information flows have to meet the extensive information needs of developers to enable them to develop secure software. Some evaluated new approaches provided promising insights towards more API consumer-focused documentation designs as a complement to API warnings.
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on some specific faults which arise from the interaction of a robot with its real-world environment due to insufficient knowledge for action execution. In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model. In this work we investigate such a symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation and the most suitable learning paradigm are identified.
The deficiency of adenosine deaminase 2 (DADA2) is an autosomal recessively inherited disease that has undergone extensive phenotypic expansion since being first described in patients with fevers, recurrent strokes, livedo racemosa, and polyarteritis nodosa in 2014. It is now recognized that patients may develop multisystem disease that spans multiple medical subspecialties. Here, we describe the findings from a large single-center longitudinal cohort of 60 patients and their broad phenotypic presentation, and highlight the cohort's experience with hematopoietic cell transplantation and COVID-19. Disease manifestations could be separated into three major phenotypes: inflammatory/vascular, immune dysregulatory, and hematologic; however, most patients presented with significant overlap between these three phenotype groups. The cardinal features of the inflammatory/vascular group included cutaneous manifestations and stroke. Evidence of immune dysregulation was commonly observed, including hypogammaglobulinemia, absent to low class-switched memory B cells, and inadequate response to vaccination. Despite these findings, infectious complications were exceedingly rare in this cohort. Hematologic findings including pure red cell aplasia (PRCA), immune-mediated neutropenia, and pancytopenia were observed in half of the patients. We significantly extended our experience using anti-TNF agents, with no strokes observed in 2,026 patient-months on TNF inhibitors. Meanwhile, hematologic and immune features had a more varied response to anti-TNF therapy. Six patients received a total of 10 allogeneic hematopoietic cell transplant (HCT) procedures, with secondary graft failure necessitating repeat HCTs in three patients, as well as unplanned donor cell infusions to avoid graft rejection. All transplanted patients had been on anti-TNF agents prior to HCT and received varying degrees of reduced-intensity or non-myeloablative conditioning. All transplanted patients are still alive and have discontinued anti-TNF therapy. The long-term follow-up afforded by this large single-center study underscores the clinical heterogeneity of DADA2 and the potential for phenotypes to evolve in any individual patient.
BACKGROUND
Biallelic loss-of-function variants in NCF1 lead to reactive oxygen species deficiency and chronic granulomatous disease (CGD). Heterozygosity for the p.Arg90His variant in NCF1 has been associated with susceptibility to systemic lupus erythematosus, rheumatoid arthritis, and Sjögren's syndrome in adult patients. This study demonstrates the association of the homozygous p.Arg90His variant with interferonopathy with features of autoinflammation and autoimmunity in a pediatric patient.
CASE PRESENTATION
A 5-year-old female of Indian ancestry with early-onset recurrent fever and headache, and persistently elevated antinuclear, anti-Ro, and anti-La antibodies, was found to carry the homozygous p.Arg90His variant in NCF1 through exome sequencing. Her unaffected parents and three other siblings were carriers of the mutant allele. Because of the presence of two NCF1 pseudogenes, this variant was confirmed by independent genotyping methods. Her intracellular neutrophil oxidative burst and NCF1 expression levels were normal, and no clinical features of CGD were apparent. Gene expression analysis in peripheral blood detected an interferon gene expression signature, which was further supported by cytokine analyses of supernatants of cultured patient cells. These findings suggested that her inflammatory disease is at least in part mediated by type I interferons. While her fever episodes responded well to systemic steroids, treatment with the JAK inhibitor tofacitinib resulted in decreased serum ferritin levels and a reduced frequency of fevers.
CONCLUSION
Homozygosity for p.Arg90His in NCF1 should be considered contributory in young patients with an atypical systemic inflammatory antecedent phenotype that may evolve into autoimmunity later in life. The complex genomic organization of NCF1 poses a difficulty for high-throughput genotyping techniques, and variants in this gene should be carefully evaluated when using next-generation and Sanger sequencing technologies. The p.Arg90His variant is found at a variable allele frequency in different populations and is higher in people of South East Asian ancestry. In complex genetic diseases such as SLE, other rare and common susceptibility alleles might be necessary for the full disease expressivity.
Somatic Mutations in UBA1 Define a Distinct Subset of Relapsing Polychondritis Patients With VEXAS
(2021)
Neurodevelopmental disorder with dysmorphic facies and distal limb anomalies (NEDDFL), defined primarily by developmental delay/intellectual disability, speech delay, postnatal microcephaly, and dysmorphic features, is a syndrome resulting from heterozygous variants in the dosage-sensitive bromodomain PHD finger chromatin remodeler transcription factor BPTF gene. To date, only 11 individuals with NEDDFL due to de novo BPTF variants have been described. To expand the NEDDFL phenotypic spectrum, we describe the clinical features in 25 novel individuals with 20 distinct, clinically relevant variants in BPTF, including four individuals with inherited changes in BPTF. In addition to the previously described features, individuals in this cohort exhibited mild brain abnormalities, seizures, scoliosis, and a variety of ophthalmologic complications. These results further support the broad and multi-faceted complications due to haploinsufficiency of BPTF.
Due to ongoing digitalization, more and more cloud services are finding their way into companies. In this context, data integration from the various software solutions, which are provided both on-premise (local use or licensing for local use of software) and as a service, is of great importance. In this regard, Integration Platform as a Service (IPaaS) models aim to support companies as well as software providers in the context of data integration by providing connectors to enable data flow between different applications and systems, along with other integration services. Since previous research has mostly focused on technical or legal aspects of IPaaS, this article concentrates on deriving integration practices and design-related barriers and drivers regarding the adoption of IPaaS. To this end, we conducted 10 interviews with experts from different software-as-a-service vendors. Our results show that the main factors regarding the adoption of IPaaS are the standardization of data models, the usability and variety of the connectors provided, and issues regarding data privacy, security, and transparency.
New sustainable, environmentally friendly materials for thermal insulation of buildings are necessary to reduce their carbon footprints. In this study, Miscanthus fiber-reinforced geopolymer composites, foamed with sodium dodecyl sulfate (SDS), were developed using fly ash as a geopolymer precursor. The effects of fiber content, fiber size, curing temperature, foaming agent content, fumed silica specific surface area and fumed silica content on thermal conductivity and compressive strength were evaluated using a Plackett-Burman design of experiment. Furthermore, the microstructure of the geopolymer composites was investigated using X-ray diffraction (XRD), X-ray micro-computed tomography (μCT) and scanning electron microscopy (SEM). The measured characteristic values were in the following ranges: thermal conductivity 0.057 W (m K)⁻¹ to 0.127 W (m K)⁻¹, compressive strength 0.007 MPa to 0.719 MPa, and porosity 49 vol% to 76 vol%. The results reveal an enhancement of thermal conductivity by elevated fiber size and foaming agent content. In contrast, the compressive strength is enhanced by high fiber content. Additionally, SEM images indicate a good interaction between the fibers and the geopolymer matrix, because nearly the whole fiber surface is covered by the geopolymer.
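The screening plan can be illustrated with a two-level design of the Plackett-Burman type, built here from a Sylvester-Hadamard matrix: eight runs cover the six factors named above, with one spare column left unused. This is a generic construction for illustration, not the study's actual run table.

```python
# Sketch of a two-level screening design of the Plackett-Burman type: 8 runs
# for the six factors named in the abstract. Built from a Sylvester-Hadamard
# matrix; factor names are taken from the text, levels are generic high/low.
import numpy as np
from scipy.linalg import hadamard

factors = ["fiber_content", "fiber_size", "curing_temp",
           "sds_content", "silica_surface_area", "silica_content"]

H = hadamard(8)                      # 8x8 matrix of +1/-1 entries
design = H[:, 1:1 + len(factors)]    # drop the all-ones column, take 6 columns

for run, levels in enumerate(design, start=1):
    settings = {f: ("high" if lv > 0 else "low") for f, lv in zip(factors, levels)}
    print(run, settings)
```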
Isolation of DNA and RNA
(2021)
Polymerase Chain Reaction
(2021)
Intimate swabs taken for examination in sexual assault cases typically yield mixtures of sperm and epithelial cell types. While powerful, differential extraction protocols that overcome such cell type mixtures by separate lysis of epithelial cells and spermatozoa can still prove ineffective, in particular if only a few sperm cells are present or if swabs contain sperm from more than one individual, leading to complex low-level DNA mixtures. One means of avoiding such mixtures is the analysis of single micromanipulated sperm cells. However, the quantity of DNA from a single sperm cell is not sufficient for conventional STR analysis. Here, we describe a simple method for micromanipulating individual sperm cells from intimate swabs and show that whole genome amplification can generate sufficient amounts of DNA from single cells for subsequent DNA profiling. We recovered over 80% of the alleles of haploid autosomal STR profiles from the majority of individual sperm cells. Furthermore, we demonstrate that in mixtures of sperm from two contributors, the Y-STR and X-STR profiles of individual sperm cells can be used to sort the haploid autosomal profiles and develop the diploid consensus STR profiles of the individual donors. Finally, by analysing single sperm cells from mock sexual assault swabs with one or two sperm donors, we showed that our protocols enabled the identification of the unknown male contributors.
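A conceptual sketch of the sorting step described above: haploid single-cell profiles are grouped by their Y-STR haplotype, and the alleles observed per autosomal locus within a group form that donor's diploid consensus profile. Locus and haplotype names are placeholders.

```python
# Conceptual sketch of the sorting step: haploid single-cell profiles are
# grouped by Y-STR haplotype; the alleles seen per autosomal locus across a
# group form the donor's diploid consensus profile. Names are placeholders.
from collections import defaultdict

def consensus_profiles(cells):
    """cells: list of dicts like {"Y": "hap_A", "D3S1358": "15", ...}"""
    groups = defaultdict(lambda: defaultdict(set))
    for cell in cells:
        y_hap = cell["Y"]
        for locus, allele in cell.items():
            if locus != "Y" and allele is not None:   # allele drop-outs stay empty
                groups[y_hap][locus].add(allele)
    # A diploid donor contributes at most two alleles per autosomal locus
    return {y: {loc: sorted(alleles) for loc, alleles in loci.items()}
            for y, loci in groups.items()}

cells = [{"Y": "hap_A", "D3S1358": "15"}, {"Y": "hap_A", "D3S1358": "17"},
         {"Y": "hap_B", "D3S1358": "14"}]
print(consensus_profiles(cells))   # hap_A -> {"D3S1358": ["15", "17"]}, ...
```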
HR Management & Leadership
(2021)
The analysis of used engine oils from industrial engines enables the study of engine wear and oil degradation in order to evaluate the necessity of oil changes. As the matrix composition of an engine oil strongly depends on its intended application, meaningful diagnostic oil analyses bear considerable challenges. Owing to the broad spectrum of available oil matrices, we have evaluated the applicability of using an internal standard and/or preceding sample digestion for elemental analysis of used engine oils via inductively coupled plasma optical emission spectroscopy (ICP OES). Elements originating from both wear particles and additives as well as particle size influence could be clearly recognized by their distinct digestion behaviour. While a precise determination of most wear elements can be achieved in oily matrix, the measurement of additives is performed preferably after sample digestion. Considering a dataset of physicochemical parameters and elemental composition for several hundred used engine oils, we have further investigated the feasibility of predicting the identity and overall condition of an unknown combustion engine using the machine learning system XGBoost. A maximum accuracy of 89.6% in predicting the engine type was achieved, a mean error of less than 10% of the observed timeframe in predicting the oil running time and even less than 4% for the total engine running time, based purely on common oil check data. Furthermore, obstacles and possibilities to improve the performance of the machine learning models were analysed and the factors that enabled the prediction were explored with SHapley Additive exPlanation (SHAP). Our results demonstrate that both the identification of an unknown engine as well as a lifetime assessment can be performed for a first estimation of the actual sample without requiring meticulous documentation.
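The following sketch mirrors the described workflow on synthetic placeholder data: an XGBoost classifier predicts an engine label from oil-check features, and SHAP values expose which parameters drive the prediction. Feature meanings and model settings are assumptions, not the study's configuration.

```python
# Illustrative sketch (synthetic data): classify the engine type from oil-check
# features with XGBoost and inspect the drivers with SHAP, mirroring the
# workflow described in the abstract.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))       # e.g. Fe, Cu, Zn, viscosity, TBN, oxidation (assumed)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # placeholder "engine type" label

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-feature contribution to each prediction
print(np.abs(shap_values).mean(axis=0))  # global importance of each oil parameter
```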
Social budgeting
(2021)
At the beginning of 2020, with the globally spreading pandemic of COVID-19 and all its social and economic consequences, the importance of having comprehensive, universal and effective social protection systems became once again, as during all the major economic and social crises before, very clear (Gentilini et al. 2020; Chapter 43 of this volume). Countries with strong social protection systems, although needing to enhance many benefit provisions and extend coverage to reach those in non-standard forms of employment, were still coping better with the pandemic and had better chances of cushioning the resulting economic downturn. However, we know from past experience that after the crisis is over, austerity measures may focus again on limiting social expenditure under all kinds of pretexts.
Components and Architecture for the Implementation of Technology-Driven Employee Data Protection
(2021)
In the course of the growth of online retailing, recommendation systems have become established that derive recommendations from customers' purchase histories. Recommending suitable food products can represent a lucrative added value for food retailers, but at the same time challenges them to make good predictions for repeated food purchases. Repeat purchase recommendations, which predict when a product will be purchased again by a customer, have been little explored in the literature. They are especially important for food recommendations, since it is not the frequency of the same item in the shopping basket that is relevant for determining repeat purchase intervals, but rather the time differences between successive purchases. In this paper, in addition to critically reflecting on classical recommendation systems in the underlying repeat purchase context, two models for online product recommendations are derived from the literature, validated, and discussed for the food context using real transaction data of a German stationary food retailer.
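A minimal sketch of the repeat-purchase idea: the prediction rests on the time intervals between successive purchases of a product rather than its frequency in the basket. Here, hypothetically, a customer's next purchase date is estimated from the median of past intervals; this is not one of the two models derived in the paper.

```python
# Minimal sketch: it is the time between successive purchases of a product,
# not its basket frequency, that drives the prediction. A customer's next
# purchase date is estimated from the median past interval (illustrative rule).
from datetime import date, timedelta
from statistics import median

def predict_next_purchase(purchase_dates: list[date]) -> date | None:
    """Return an estimated next purchase date, or None without repeat purchases."""
    if len(purchase_dates) < 2:
        return None
    ordered = sorted(purchase_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return ordered[-1] + timedelta(days=median(gaps))

milk = [date(2021, 1, 4), date(2021, 1, 11), date(2021, 1, 19)]
print(predict_next_purchase(milk))   # roughly one week after the last purchase
```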
Applied privacy research has so far focused mainly on consumer relations in private life. Privacy in the context of employment relationships is less well studied, although it is subject to the same legal privacy framework in Europe. The European General Data Protection Regulation (GDPR) has strengthened employees’ right to privacy by obliging that employers provide transparency and intervention mechanisms. For such mechanisms to be effective, employees must have a sound understanding of their functions and value. We explored possible boundaries by conducting a semistructured interview study with 27 office workers in Germany and elicited mental models of the right to informational self-determination, which is the European proxy for the right to privacy. We provide insights into (1) perceptions of different categories of data, (2) familiarity with the legal framework regarding expectations for privacy controls, and (3) awareness of data processing, data flow, safeguards, and threat models. We found that legal terms often used in privacy policies used to describe categories of data are misleading. We further identified three groups of mental models that differ in their privacy control requirements and willingness to accept restrictions on their privacy rights. We also found ignorance about actual data flow, processing, and safeguard implementation. Participants’ mindsets were shaped by their faith in organizational and technical measures to protect privacy. Employers and developers may benefit from our contributions by understanding the types of privacy controls desired by office workers and the challenges to be considered when conceptualizing and designing usable privacy protections in the workplace.
What is Design Theory?
(2021)
Dental stem cells have been isolated from the medical waste of various dental tissues. They have been characterized by numerous markers, which are evaluated herein, and can be differentiated into multiple cell types. They can also be used to generate cell lines and iPSCs for long-term in vitro research. Methods for utilizing these stem cells, including cellular systems such as organoids or cell sheets, cell-free systems such as exosomes, and scaffold-based approaches with and without drug release concepts, are reported in this review and presented with new pictures for clarification. These in vitro applications can be deployed in disease modeling and subsequent pharmaceutical research and also pave the way for tissue regeneration. The main focus herein is on the potential of dental stem cells for hard tissue regeneration, especially bone, by evaluating their potential for osteogenesis and angiogenesis, and the regulation of these two processes by growth factors and environmental stimulators. Current in vitro and in vivo publications show numerous benefits of using dental stem cells for research purposes and hard tissue regeneration. However, only a few clinical trials currently exist. The goal of this review is to pinpoint this imbalance and encourage scientists to pick up this research and proceed one step further to translation.
The Global Compact for Safe, Orderly and Regular Migration defines Global Skill Partnerships (GSP) as an innovative means of strengthening skills development among countries of origin and countries of destination in a mutually beneficial manner. However, GSPs are very limited in number and scope, and empirical analyses of them are, to date, relatively rare. This study helps fill this gap in data by presenting and examining existing GSPs or GSP-like approaches (e.g., transnational training partnerships). The aim of the study is to take stock of the various conceptual discourses on and practical experience with transnational skill partnerships. Using Kosovo as a case study, the study details the structure of such partnerships and the processes they entail. It documents the experience of those involved and catalogues the factors contributing to success. On this basis, the authors propose a means of categorizing the various practices that will help structure the empirical diversity of such approaches and render them conceptually feasible: Transnational Skills and Mobility Partnerships (TSMP).
With the debates on climate change and sustainability, a reduction of the share of cars in the modal split has become increasingly prevalent in both public and academic discourse. Besides some motivational approaches, there is a lack of ICT artifacts that successfully raise the ability of consumers to adopt sustainable mobility patterns. To further understand the requirements and the design of these artifacts within everyday mobility, we adopted a practice lens. This lens is helpful for gaining a broader perspective on the use of ICT artifacts along consumers' transformational journey towards sustainable mobility practices. Based on 12 retrospective interviews with car-free mobility consumers, we argue that artifacts should not be viewed as 'magic-bullet' solutions but should accompany the complex transformation of practices in multifaceted ways. Moreover, we highlight in particular the difficulties of appropriating shared infrastructures and aligning one's own practices with them. This opens up a design space to provide more support for these kinds of material interactions, to provide access to consumption infrastructures and make them usable, rather than leaving consumers alone with increased motivation.
Fabry disease (FD) is an X‐linked lysosomal storage disorder. Deficiency of the lysosomal enzyme alpha‐galactosidase (GLA) leads to accumulation of potentially toxic globotriaosylceramide (Gb3) on a multisystem level. Cardiac and cerebrovascular abnormalities as well as progressive renal failure are severe, life‐threatening long‐term complications. The complete pathophysiology of chronic kidney disease (CKD) in FD and the role of tubular involvement for its progression are unclear.
We established human renal tubular epithelial cell lines from the urine of male FD patients and male controls. The renal tubular system is rich in mitochondria and involved in transport processes at high energy costs. Our studies revealed fragmented mitochondria with disrupted cristae structure in FD patient cells. Oxidative stress levels were elevated and oxidative phosphorylation was up‐regulated in FD, pointing to enhanced energetic needs. Mitochondrial homeostasis and energy metabolism revealed major changes as evidenced by differences in mitochondrial number, energy production and fuel consumption. The changes were accompanied by activation of the autophagy machinery in FD. Sirtuin1, an important sensor of (renal) metabolic stress and modifier of different defense pathways, was highly expressed in FD.
Our data show that lysosomal FD impairs mitochondrial function and results in severe disturbance of mitochondrial energy metabolism in renal cells. This insight on a tissue‐specific level points to new therapeutic targets which might enhance treatment efficacy.
The clear-sky radiative effect of aerosol–radiation interactions is of relevance for our understanding of the climate system. The influence of aerosol on the surface energy budget is of high interest for the renewable energy sector. In this study, the radiative effect is investigated in particular with respect to seasonal and regional variations for the region of Germany and the year 2015 at the surface and top of atmosphere using two complementary approaches.
First, an ensemble of clear-sky models which explicitly consider aerosols is utilized to retrieve the aerosol optical depth and the surface direct radiative effect of aerosols by means of a clear-sky fitting technique. For this, short-wave broadband irradiance measurements in the absence of clouds are used as a basis. A clear-sky detection algorithm is used to identify cloud-free observations. The measurements considered are the short-wave broadband global and diffuse horizontal irradiances from shaded and unshaded pyranometers at 25 stations across Germany within the observational network of the German Weather Service (DWD). The clear-sky models used are the Modified MAC model (MMAC), the Meteorological Radiation Model (MRM) v6.1, the Meteorological–Statistical solar radiation model (METSTAT), the European Solar Radiation Atlas (ESRA), Heliosat-1, the Center for Environment and Man solar radiation model (CEM), and the simplified Solis model. The definitions of the aerosol and atmospheric characteristics of the models are examined in detail for their suitability for this approach.
Second, the radiative effect is estimated using explicit radiative transfer simulations with inputs on the meteorological state of the atmosphere, trace gases and aerosol from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis. The aerosol optical properties (aerosol optical depth, Ångström exponent, single scattering albedo and asymmetry parameter) are first evaluated with AERONET direct sun and inversion products. The largest inconsistency is found for the aerosol absorption, which is overestimated by about 0.03 or about 30 % by the CAMS reanalysis. Compared to the DWD observational network, the simulated global, direct and diffuse irradiances show reasonable agreement within the measurement uncertainty. The radiative kernel method is used to estimate the resulting uncertainty and bias of the simulated direct radiative effect. The uncertainty is estimated to be −1.5 ± 7.7 and 0.6 ± 3.5 W m−2 at the surface and top of atmosphere, respectively, while the annual-mean biases at the surface, top of atmosphere and total atmosphere are −10.6, −6.5 and 4.1 W m−2, respectively.
The retrieval of the aerosol radiative effect with the clear-sky models shows a high level of agreement with the radiative transfer simulations, with an RMSE of 5.8 W m−2 and a correlation of 0.75. The annual mean of the radiative effect of aerosol–radiation interactions (REari) at the surface for the 25 DWD stations is −12.8 ± 5 W m−2 averaged over the clear-sky models, compared to −11 W m−2 from the radiative transfer simulations. Since all models assume a fixed aerosol characterization, the annual cycle of the aerosol radiative effect cannot be reproduced. Of this set of clear-sky models, the ESRA and MRM v6.1 models show the largest level of agreement.
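As a minimal, hedged illustration of the quantity being retrieved, the surface direct radiative effect can be thought of as the difference between cloud-free irradiance with aerosol and an aerosol-free reference; the sketch below uses synthetic numbers and does not reproduce the clear-sky models or the fitting technique.

# Sketch: surface direct radiative effect of aerosol as the difference between
# measured clear-sky irradiance and a modelled aerosol-free reference.
# The numbers are synthetic placeholders, not station data.
import numpy as np

ghi_measured = np.array([612.0, 655.0, 630.0])   # W m-2, cloud-free observations
ghi_pristine = np.array([625.0, 668.0, 641.0])   # W m-2, model without aerosol

reari_surface = ghi_measured - ghi_pristine      # negative: aerosols reduce irradiance
print(reari_surface.mean())                      # roughly -12 W m-2 in this toy example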
Self-supervised learning has proved to be a powerful approach to learning image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms that improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create large public labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
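As a hedged illustration of one of the pretext tasks named above, the sketch below sets up a RotNet-style rotation-prediction objective in PyTorch; the encoder, tensor sizes, and single-channel input are placeholders rather than the authors' training code.

# Minimal sketch of a RotNet-style pretext task for sonar-like images
# (hypothetical encoder and sizes; no human labels are used).
import torch
import torch.nn as nn

def rotate_batch(images):
    """Create rotated copies (0/90/180/270 deg) and their pseudo-labels."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return torch.cat(rotations, dim=0), labels

encoder = nn.Sequential(          # stand-in for any CNN backbone
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rotation_head = nn.Linear(16, 4)  # predicts which rotation was applied

images = torch.randn(8, 1, 64, 64)            # fake single-channel sonar patches
rotated, labels = rotate_batch(images)
logits = rotation_head(encoder(rotated))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                               # representation learning without labels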
We introduce canonical weight normalization for convolutional neural networks. Inspired by the canonical tensor decomposition, we express the weight tensors in so-called canonical networks as scaled sums of outer vector products. In particular, we train network weights in the decomposed form, where scale weights are optimized separately for each mode. Additionally, similarly to weight normalization, we include a global scaling parameter. We study the initialization of the canonical form by running the power method and by drawing randomly from Gaussian or uniform distributions. Our results indicate that we can replace the power method with cheaper initializations drawn from standard distributions. The canonical re-parametrization leads to competitive normalization performance on the MNIST, CIFAR10, and SVHN data sets. Moreover, the formulation simplifies network compression. Once training has converged, the canonical form allows convenient model compression by truncating the parameter sums.
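The sketch below gives one plausible reading of such a canonical re-parametrization in PyTorch: a convolution weight held as a scaled sum of rank-1 outer products over its four modes, with a global scale. The exact placement of the scale parameters and all layer sizes are assumptions, not the paper's implementation.

# Illustrative sketch: a conv weight kept in CP (canonical) form as a scaled
# sum of R outer products over the four tensor modes (out, in, kh, kw).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CanonicalConv2d(nn.Module):
    def __init__(self, c_in, c_out, k, rank):
        super().__init__()
        self.a = nn.Parameter(torch.randn(rank, c_out))  # mode: output channels
        self.b = nn.Parameter(torch.randn(rank, c_in))   # mode: input channels
        self.c = nn.Parameter(torch.randn(rank, k))      # mode: kernel height
        self.d = nn.Parameter(torch.randn(rank, k))      # mode: kernel width
        self.scale = nn.Parameter(torch.ones(rank))      # per-term scales (assumed placement)
        self.g = nn.Parameter(torch.ones(1))             # global scaling parameter

    def weight(self):
        # sum_r scale_r * a_r (x) b_r (x) c_r (x) d_r  ->  (c_out, c_in, k, k)
        w = torch.einsum('r,ro,ri,rh,rw->oihw',
                         self.scale, self.a, self.b, self.c, self.d)
        return self.g * w

    def forward(self, x):
        return F.conv2d(x, self.weight(), padding=self.c.shape[1] // 2)

layer = CanonicalConv2d(c_in=3, c_out=8, k=3, rank=4)
out = layer(torch.randn(2, 3, 32, 32))   # truncating the sum over r compresses the layer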
Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques
(2022)
Precisely estimating and understanding the surroundings of the vehicle is a basic and crucial step for an autonomous vehicle. The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real time. Generally, the perception system involves various subsystems such as localization, obstacle (static and dynamic) detection and avoidance, mapping, and others. For perceiving the environment, these vehicles are equipped with various exteroceptive (both passive and active) sensors, in particular cameras, radars, LiDARs, and others. These systems use deep learning techniques that transform the huge amount of data from the sensors into semantic information on which the object detection and localization tasks are performed. To provide accurate results for numerous driving tasks, the location and depth information of a particular object is necessary. 3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object. Based on recent research, 3D object detection frameworks performing object detection and localization on LiDAR data and sensor fusion techniques show significant improvement in their performance. In this work, a comparative study of the effect of using LiDAR data in object detection frameworks and of the performance improvement obtained with sensor fusion techniques is performed. We also discuss various state-of-the-art methods in both cases, carry out experimental analyses, and outline future research directions.
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have demonstrated reliable portable devices outfitted with high-grade air sensors. They are pricey, yet homeowners use them to obtain local air data without evaluating causal effects. Our air quality inspection system is scalable, reasonably priced, and flexible. The system's minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller. A 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storage holding air pollution data and surrounding-area imagery so that the system's local storage does not need to be physically expanded. In addition, by including an air channel that replenishes air and distributes temperature, the design improves ventilation and safeguards the electrical components. The device allows for analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
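As a hedged sketch of the data path just described, the code below reads one particulate frame from a PMS7003 over a serial link and archives the reading in Azure Blob storage; the serial port, container and blob names are placeholders, and the real system's firmware and microcontroller handling are not reproduced.

# Sketch: read one 32-byte PMS7003 frame over UART and archive it in Azure
# Blob storage. Device path, container and blob names are placeholders, and
# the read is assumed to start at a frame boundary.
import json, time
import serial                                   # pyserial
from azure.storage.blob import BlobServiceClient

def read_pms7003(port="/dev/ttyS0"):
    """Return PM1.0/PM2.5/PM10 (atmospheric values, ug/m3) from one frame."""
    with serial.Serial(port, baudrate=9600, timeout=2) as ser:
        frame = ser.read(32)
    if len(frame) < 32 or frame[0:2] != b"\x42\x4d":
        raise IOError("no valid PMS7003 frame received")
    pm1, pm25, pm10 = (int.from_bytes(frame[i:i + 2], "big") for i in (10, 12, 14))
    return {"pm1": pm1, "pm2_5": pm25, "pm10": pm10, "ts": time.time()}

def upload_reading(reading, connection_string):
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container="air-quality",
                                   blob=f"reading-{int(reading['ts'])}.json")
    blob.upload_blob(json.dumps(reading))

# reading = read_pms7003()
# upload_reading(reading, "<azure-connection-string>")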
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
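As a hedged, toy-scale illustration of the core idea, namely attention over a concatenation of a text sequence and a KG-derived sequence in a single cross encoder, the sketch below uses PyTorch; the vocabularies, embedding sizes, segment handling, and pooling are assumptions and do not reproduce the STonKGs architecture or its pre-training.

# Sketch: build a single cross-encoder input from a text sequence and a KG
# triple sequence, then attend over the concatenation (toy dimensions only).
import torch
import torch.nn as nn

VOCAB_TEXT, VOCAB_KG, DIM = 1000, 500, 64

text_emb = nn.Embedding(VOCAB_TEXT, DIM)       # token embeddings (text modality)
kg_emb = nn.Embedding(VOCAB_KG, DIM)           # entity/relation embeddings (KG modality)
segment_emb = nn.Embedding(2, DIM)             # 0 = text part, 1 = KG part
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)

text_ids = torch.randint(0, VOCAB_TEXT, (1, 32))   # e.g. tokens of a PubMed sentence
triple_ids = torch.randint(0, VOCAB_KG, (1, 3))    # e.g. (subject, relation, object)

x = torch.cat([text_emb(text_ids), kg_emb(triple_ids)], dim=1)
segments = torch.cat([torch.zeros(1, 32, dtype=torch.long),
                      torch.ones(1, 3, dtype=torch.long)], dim=1)
hidden = encoder(x + segment_emb(segments))    # attention spans both modalities
pooled = hidden[:, 0]                          # first position as a simple pooled
                                               # representation for a fine-tuning head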
Contextual information is widely considered for NLP and knowledge discovery in life sciences since it highly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multiple step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in context of biologically relevant information using SCAIView.
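Purely as an illustration of querying a labeled property graph of documents, entities and context nodes, the sketch below issues a Cypher statement through the official neo4j Python driver; the node labels, relationship types, connection details and the example entity are placeholders, not the schema of the system described here.

# Sketch: run a context query against a labeled property graph
# (placeholder labels/relations; connection details are examples only).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CONTEXT_QUERY = """
MATCH (d:Document)-[:MENTIONS]->(e:Entity {name: $entity})
MATCH (d)-[:HAS_CONTEXT]->(c:Context)
RETURN d.pmid AS pmid, c.type AS context_type, count(*) AS n
ORDER BY n DESC LIMIT 10
"""

with driver.session() as session:
    for record in session.run(CONTEXT_QUERY, entity="TNF"):
        print(record["pmid"], record["context_type"], record["n"])

driver.close()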
For research in audiovisual interview archives, it is often of interest not only what is said but also how. Sentiment analysis and emotion recognition can help capture, categorize, and make these different facets searchable. In particular, for oral history archives, such indexing technologies can be of great interest. These technologies can help understand the role of emotions in historical remembering. However, humans often perceive sentiments and emotions ambiguously and subjectively. Moreover, oral history interviews have multi-layered levels of complex, sometimes contradictory, sometimes very subtle facets of emotions. Therefore, the question arises as to what chance machines and humans have of capturing these facets and assigning them to predefined categories. This paper investigates the ambiguity in the human perception of emotions and sentiment in German oral history interviews and its impact on machine learning systems. Our experiments reveal substantial differences in human perception for different emotions. Furthermore, we report on ongoing machine learning experiments with different modalities. We show that human perceptual ambiguity and other challenges, such as class imbalance and lack of training data, currently limit the opportunities of these technologies for oral history archives. Nonetheless, our work uncovers promising observations and possibilities for further research.
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, in a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is more faithfully explained than other detectors regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
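As a hedged sketch of the gradient-based ingredient mentioned above, the code below implements plain SmoothGrad for a single scalar detector decision in PyTorch; the toy detector, the scalar-selection function, and the hyperparameters are placeholders, and DExT's actual pipeline combines SmoothGrad with Guided Backpropagation rather than the raw gradient used here.

# Sketch of SmoothGrad for one detector decision: average input gradients over
# noisy copies of the image. The "decision" is any scalar output, e.g. one box
# coordinate or one class logit, chosen by select_score.
import torch

def smoothgrad(detector, image, select_score, n_samples=25, sigma=0.1):
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = select_score(detector(noisy))   # scalar decision to explain
        score.backward()
        grads += noisy.grad
    return grads / n_samples                    # saliency map for that decision

# Example with a stand-in "detector" that returns class logits:
toy_detector = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                                   torch.nn.AdaptiveAvgPool2d(1),
                                   torch.nn.Flatten(),
                                   torch.nn.Linear(8, 4))
saliency = smoothgrad(toy_detector, torch.rand(1, 3, 64, 64),
                      select_score=lambda out: out[0, 2])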
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) on tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, paving the foundation for future approaches that use multiple modalities for biomedical applications.
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
(2022)
Effective Neighborhood Feature Exploitation in Graph CNNs for Point Cloud Object-Part Segmentation
(2022)
Part segmentation is the task of semantic segmentation applied to objects and has a wide range of applications, from robotic manipulation to medical imaging. This work deals with the problem of part segmentation on raw, unordered point clouds of 3D objects. While pioneering works on deep learning for point clouds typically do not take advantage of the local geometric structure around individual points, subsequent methods proposed to extract features by exploiting local geometry have not yielded significant improvements either. To investigate this further, a graph convolutional network (GCN) is used in this work in an attempt to increase the effectiveness of such neighborhood feature exploitation approaches. Most previous works also focus only on segmenting complete point cloud data. Considering the impracticality of such approaches in real-world scenarios, where complete point clouds are scarcely available, this work also proposes approaches to deal with partial point cloud segmentation.
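As a hedged sketch of what such neighborhood feature exploitation looks like in a graph CNN on raw point clouds, the code below builds a k-nearest-neighbour graph and an EdgeConv-style per-point feature in PyTorch; the layer sizes, choice of k, and the max aggregation are illustrative, not the proposed architecture.

# Sketch: kNN graph over a raw point cloud and an EdgeConv-style per-point
# feature h_i = max_j MLP([x_i, x_j - x_i]) (illustrative sizes only).
import torch
import torch.nn as nn

def knn(points, k):
    """points: (N, 3) -> indices of the k nearest neighbours, shape (N, k)."""
    dists = torch.cdist(points, points)                      # (N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[:, 1:]   # drop the point itself

class EdgeConvBlock(nn.Module):
    def __init__(self, in_dim=3, out_dim=64, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, points):
        idx = knn(points, self.k)                             # (N, k)
        neighbours = points[idx]                              # (N, k, 3)
        center = points.unsqueeze(1).expand_as(neighbours)
        edge_feat = torch.cat([center, neighbours - center], dim=-1)   # (N, k, 6)
        return self.mlp(edge_feat).max(dim=1).values          # (N, out_dim)

cloud = torch.rand(1024, 3)                  # one raw, unordered point cloud
features = EdgeConvBlock()(cloud)            # per-point neighbourhood features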
In the attempt to better capture neighborhood features, this work proposes a novel method to learn regional part descriptors which guide and refine the segmentation predictions. The proposed approach helps the network achieve state-of-the-art performance of 86.4% mIoU on the ShapeNetPart dataset among methods that do not use any preprocessing techniques or voting strategies. To better deal with partial point clouds, this work also proposes new strategies for training and testing on partial data. While these strategies achieve significant improvements over the baseline performance, the problem of partial point cloud segmentation is also viewed through the alternative lens of semantic shape completion.
Semantic shape completion networks not only help deal with partial point cloud segmentation but also enrich the information captured by the system by predicting complete point clouds with corresponding semantic labels for each point. To this end, a new network architecture for semantic shape completion is proposed based on the point completion network (PCN), which takes advantage of a graph-convolution-based hierarchical decoder for completion as well as segmentation. In addition to predicting complete point clouds, results indicate that the network is capable of reaching within a margin of 5% of the mIoU performance of dedicated segmentation networks for partial point cloud segmentation.