Risk-based authentication (RBA) is an adaptive approach to strengthening password authentication. It monitors a set of features relating to login behavior during password entry. If the observed feature values differ significantly from those of previous logins, RBA requests additional proof of identity. Government agencies and an executive order of the US President recommend RBA to protect online accounts against attacks with stolen passwords. Despite these facts, RBA has suffered from a lack of open knowledge: there was little to no research on its usability, security, and privacy. Understanding these aspects, however, is important for wide adoption.
This thesis aims to provide a comprehensive understanding of RBA through a series of studies. The results make it possible to create privacy-preserving RBA solutions that strengthen authentication while maintaining high acceptance among users.
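The core RBA loop described above can be sketched in a few lines; the feature names, weights, and scoring rule below are illustrative assumptions, not the mechanism studied in the thesis.

```python
# Minimal RBA sketch: compare a login's features against the login
# history and accumulate weighted risk for unseen values. Feature
# names, weights, and the threshold are invented for illustration.

def risk_score(login, history, weights):
    """Higher score = login looks less like previous logins."""
    score = 0.0
    for feature, weight in weights.items():
        seen = {h[feature] for h in history}
        if login[feature] not in seen:   # unseen value -> add weighted risk
            score += weight
    return score

history = [
    {"country": "DE", "browser": "Firefox", "device": "laptop"},
    {"country": "DE", "browser": "Firefox", "device": "phone"},
]
weights = {"country": 0.6, "browser": 0.3, "device": 0.1}

familiar = {"country": "DE", "browser": "Firefox", "device": "laptop"}
suspicious = {"country": "US", "browser": "curl", "device": "laptop"}

assert risk_score(familiar, history, weights) == 0.0
# Unknown country and browser: 0.6 + 0.3 = 0.9 -> ask for a second factor
assert abs(risk_score(suspicious, history, weights) - 0.9) < 1e-9
```

In a deployed system the score would be compared against a threshold to decide whether to request additional identity proof.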
This dissertation presents a probabilistic state estimation framework for integrating data-driven machine learning models and a deformable facial shape model in order to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AUs), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied to model the dynamics of facial muscle movements. Both facial shape and appearance information are used for estimating AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework deal with the multiple challenges involved in automatic AU detection. The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively using three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produced temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performed reasonably well, even though it simultaneously estimates the intensities of 22 AUs, some of which are subtle in expression or resemble each other closely. The estimated AU intensities tended toward the lower range of values and were often accompanied by a small delay in onset, which shows that the proposed method is conservative. In order to further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
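As an illustration of the estimation machinery described above, the following sketch tracks a single AU intensity with a Kalman filter whose state transition is a discretised driven mass-spring-damper; all constants (m, c, k, dt, noise levels, and the drive u) are assumed demonstration values, not those of the dissertation.

```python
import numpy as np

# One AU intensity tracked by a Kalman filter. The dynamics follow a
# driven mass-spring-damper: x'' = (k*(u - x) - c*x') / m, discretised
# with an explicit Euler step. All constants are illustrative.

m, c, k, dt = 1.0, 2.0, 4.0, 0.05
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])  # state: [intensity, velocity]
B = np.array([[0.0], [k / m * dt]])              # drive enters the velocity
H = np.array([[1.0, 0.0]])                       # only intensity is observed
Q = 1e-3 * np.eye(2)                             # process noise covariance
R = np.array([[1e-2]])                           # measurement noise covariance

x = np.zeros((2, 1))                             # AU initially at rest
P = np.eye(2)
u = 1.0                                          # constant muscle drive

rng = np.random.default_rng(0)
for _ in range(200):
    z = 1.0 + 0.1 * rng.standard_normal()        # noisy intensity observation
    # predict with the mass-spring-damper dynamics
    x = A @ x + B * u
    P = A @ P @ A.T + Q
    # correct with the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

# The filtered intensity settles near the observed level of 1.0
assert abs(x[0, 0] - 1.0) < 0.15
```

The damped dynamics also explain the smoothing behaviour noted in the abstract: the model resists jumps, so estimates change gradually over time.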
This thesis is concerned with the efficiency of side-channel cryptanalysis. In Part II of this work, we demonstrate how the most important analysis tools can be accelerated considerably with the help of the CUDA platform. Second, we investigate new approaches to profiled side-channel cryptanalysis. Methods from machine learning can be adapted to achieve significant improvements but have received little attention in this regard so far. In Part III of this work, we present two new methods that share some commonalities but also differ in several respects, so that evaluation results yield a more complete picture. Furthermore, in Part IV we propose a side-channel application for the protection of intellectual property (IP). In Part V, we engage more deeply with practical side-channel cryptanalysis by carrying out attacks on a security microcontroller that is used in a debit (EC) card widely deployed in Germany.
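For context, the following toy example shows correlation power analysis (CPA), a standard side-channel analysis tool of the kind such work accelerates; the 4-bit S-box, the Hamming-weight leakage model, and the simulated traces are illustrative assumptions.

```python
import numpy as np

# Toy CPA: correlate a hypothetical leakage (Hamming weight of the
# S-box output under each key guess) with simulated power traces and
# pick the best-correlating guess. S-box and noise level are assumed.

SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # PRESENT S-box

def hamming_weight(values):
    bits = np.unpackbits(np.asarray(values, dtype=np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

rng = np.random.default_rng(1)
secret_key = 0xA
plaintexts = rng.integers(0, 16, size=500)
# Simulated leakage: Hamming weight of the S-box output plus noise
traces = (hamming_weight(SBOX[plaintexts ^ secret_key])
          + 0.3 * rng.standard_normal(500))

def cpa_best_key(plaintexts, traces):
    """Return the key guess whose predicted leakage correlates best."""
    scores = [abs(np.corrcoef(hamming_weight(SBOX[plaintexts ^ guess]),
                              traces)[0, 1])
              for guess in range(16)]
    return int(np.argmax(scores))

assert cpa_best_key(plaintexts, traces) == secret_key  # key recovered
```

The inner loop, one correlation per key guess per trace sample, is embarrassingly parallel, which is why GPU platforms such as CUDA offer large speedups here.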
The art of nudging
(2023)
Do simple and subtle changes in the living and study environment improve the eating behaviour of students in an educational setting? This dissertation provides a not-so-simple answer to this simple question based on the outcomes of four studies that explore the effects and design of artwork nudges (specifically the artwork of Alberto Giacometti) on the eating behaviour of students by applying different research designs. Study 1 explores the effects of a Giacometti-like nudge (a more contemporary version of the original nudge) regarding the dietary behaviour of high school students in a controlled setting. Study 2 applies different artwork nudges within a virtual vignette setting to measure their effects on virtual meal choices made. Also, the degree to which individuals were aware of the nudge’s presence is included as an influential factor in nudge effectiveness. Study 3 assesses the susceptibility to nudges as measured with a questionnaire. Susceptibility to nudges is defined as nudgeability. Study 4 assesses the effects of the original Giacometti nudge in a real-world university cafeteria setting. Specifically, the immediate and sustained effects of the original Giacometti nudge on students’ meal purchases in the university cafeteria are considered. In addition, the role of awareness of the nudge’s presence as well as the acceptance of this specific nudge are discussed. The conclusion is drawn that the original Giacometti nudge should only be applied in an educational setting to improve healthy eating behaviour if the intended target groups and environment meet certain conditions. Artwork nudges in general should be applied only after rigorous testing of various types of different nudges and more research reflecting healthy eating in its entirety.
Traditional and newly developed testing methods were used for extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure-sensitive adhesives (PSAs). Large amplitude oscillatory shear tests of PSAs were correlated with the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude results in stretching and yielding of the PSA's microstructure, causing a consolidation of the network followed by its release at higher strain amplitudes. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed clearly resin-content- and rod-geometry-dependent behavior and display the PSA's viscoelasticity, resulting in either high tack with long stretched fibrils or non-adhesion with brittle behavior. Moreover, diffusion of water/sweat during the TTS's application might influence its performance. Therefore, a dielectric-analysis-based evaluation method was developed that displays the water diffusion into the PSA, from which the diffusion coefficient can be determined, and showed clear material- and resin-content-dependent behavior. All methods allow for advanced product-oriented material testing that can be utilized in further TTS development.
Since its advent, the sustainability effects of the modern sharing economy have been the subject of controversial debate. While its potential was initially discussed in terms of post-ownership development with a view to decentralizing value creation and increasing social capital and environmental relief through better utilization of material goods, critics have become increasingly loud in recent years. Many people hoped that carsharing could lead to development away from ownership towards flexible use and thus more resource-efficient mobility. However, carsharing remains niche, and while many people like the idea in general, they appear to consider carsharing to not be advantageous as a means of transport in terms of cost, flexibility, and comfort. A key innovation that could elevate carsharing from its niche existence in the future is autonomous driving. This technology could help shared mobility gain a new boost by allowing it to overcome the weaknesses of the present carsharing business model. Flexibility and comfort could be greatly enhanced with shared autonomous vehicles (SAVs), which could simultaneously offer benefits in terms of low cost, and better use of time without the burden of vehicle ownership. However, it is not the technology itself that is sustainable; rather, sustainability depends on the way in which this technology is used. Hence, it is necessary to make a prospective assessment of the direct and indirect (un)sustainable effects before or during the development of a technology in order to incorporate these findings into the design and decision-making process. Transport research has been intensively analyzing the possible economic, social, and ecological consequences of autonomous driving for several years. However, research lacks knowledge about the consequences to be expected from shared autonomous vehicles. Moreover, previous findings are mostly based on the knowledge of experts, while potential users are rarely included in the research. 
To address this gap, this thesis contributes to answering the questions of what the ecological and social impacts of the expected concept of SAVs will be. In my thesis, I study in particular the ecological consequences of SAVs in terms of the potential modal shifts they can induce, as well as their social consequences in terms of potential job losses in the taxi industry. To this end, I apply a user-oriented, mixed-method technology assessment approach that complements existing, expert-oriented technology assessment studies on autonomous driving, which have so far been dominated by scenario analyses and simulations. To answer the two questions, I triangulated the method of scenario analysis with qualitative and quantitative user studies. The empirical studies provide evidence that the automation of mobility services such as carsharing may, to a small extent, foster a shift from the private vehicle towards mobility on demand. However, the findings also indicate that rebound effects are to be expected: significantly more users are expected to move away from the more sustainable public transportation, so that the negative modal shift effects overcompensate the positive ones. The results show that a large proportion of the taxi trips carried out can be replaced by SAVs, making the profession of taxi driver somewhat obsolete. However, interviews with taxi drivers themselves revealed that the services provided by the drivers go beyond mere transport, so that even in the age of SAVs, the need for human assistance will continue, though to a smaller extent. Given these findings, I see potential for action at different levels: users, mobility service providers, and policymakers. Regarding the environmental and social impacts resulting from the use of SAVs, there is a strong conflict of objectives among users, potential SAV operators, and sustainable environmental and social policies.
In order to strengthen the positive effects and counteract the negative effects, such as unintended modal shifts, policies may soon have to regulate the design of SAVs and their introduction. A key starting point for transport policy is to promote the use of more environmentally friendly means of transport, in particular by making public transportation attractive and, if necessary, by making the use of individual motorized mobility less attractive. The taxi industry must face the challenges of automation by opening up to these developments and focusing on service orientation – to strengthen the drivers’ main unique selling point compared to automated technology. Assessing the impacts of the not-yet-existing generally involves great uncertainty. With the results of my work, however, I would like to argue that a user-oriented technology assessment can usefully complement the findings of classic methods of technology assessment and can iteratively inform the development process regarding technology and regulation.
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. 
Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases - object and handle grasping, object stowing, pulling, and hand-over - and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
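The interplay of qualitative preconditions, a parameter-to-success mapping, and failure diagnosis can be illustrated with a small sketch; the grasping features, thresholds, and predictor below are invented for illustration and are not the learned execution models of the thesis.

```python
# Hypothetical sketch of the "execution model" idea: qualitative success
# preconditions paired with a function that maps skill parameters to
# predicted execution success. All features and thresholds are invented;
# the thesis learns such models from execution experience.

preconditions = {
    "gripper_open": lambda p: p["gripper_width"] > p["object_width"],
    "in_reach": lambda p: p["distance"] < 0.8,   # metres, assumed limit
}

def predict_success(params):
    """Stand-in for the learned parameter-to-success mapping."""
    if params["distance"] >= 0.8:
        return 0.0
    margin = params["gripper_width"] - params["object_width"]
    return max(0.0, min(1.0, 2.0 * margin))

def diagnose(params):
    """Failure hypothesis: the set of violated qualitative preconditions."""
    return [name for name, holds in preconditions.items() if not holds(params)]

failed = {"gripper_width": 0.04, "object_width": 0.06, "distance": 0.5}
assert diagnose(failed) == ["gripper_open"]     # hypothesis: gripper too narrow

# Experience correction: adjust the violated parameter and re-predict
corrected = dict(failed, gripper_width=0.09)
assert diagnose(corrected) == []
assert predict_success(corrected) > predict_success(failed)
```

The diagnosis step searches for violated qualitative conditions, and the correction step proposes parameters that restore them, mirroring the division of labour described in the abstract.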
The topic of this PhD project is in the context of cross-reality, a term that defines mixed reality environments that tunnel dense real-world data acquired through the use of sensor/actuator device networks into virtual worlds. It is part of the ongoing academia and industry efforts to achieve interoperability between virtual and real devices and services.
The design of digital circuits that are efficient in terms of low power has become a very challenging issue. For this reason, low-power digital circuit design is a topic addressed in electrical and computer engineering curricula, but it also requires practical experiments in a laboratory. This PhD research investigates a novel approach, the low-power design laboratory system, by developing a new technical and pedagogical system. The low-power design laboratory system is composed of two types of laboratories: the on-site (hands-on) laboratory and the remote laboratory. It has been developed at the Bonn-Rhein-Sieg University of Applied Sciences to teach low-power techniques in the laboratory. Additionally, this thesis contributes a suggestion on how the learning objectives can be complemented by developing a remote system in order to improve the teaching of low-power digital circuit design. This laboratory system enables online experiments that can be performed using physical instruments, with real data obtained via the internet. The laboratory experiments use a Field Programmable Gate Array (FPGA) as a design platform for circuit implementation by students and use image processing as an application for teaching low-power techniques.
This thesis presents the instructions for the low-power design experiments, which use a top-down hierarchical design methodology. The engineering student designs his/her algorithm at a high level of abstraction, and the experimental results are obtained and measured at a low level (hardware), so that more information is available to correctly estimate the power dissipation, such as the specification, latency, thermal effects, and technology used. Power dissipation of a digital system is influenced by its specification, design, and technology, as well as the operating temperature. Digital circuit designers can observe the most influential factors in power dissipation during the laboratory exercises in the on-site system and then use the remote system to supplement the investigation of the other factors. Furthermore, the remote system has obvious benefits, such as improving learning outcomes, facilitating new teaching methods, reducing costs and maintenance, saving costs by reducing the number of instructors, saving instructor time and simplifying their tasks, facilitating equipment sharing, improving reliability, and finally providing flexible use of the laboratories.
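One relationship students can observe in such experiments is the dominant dynamic power term of CMOS circuits, P_dyn = α · C · V² · f (switching activity, switched capacitance, supply voltage, clock frequency); the sketch below uses assumed component values purely for illustration.

```python
# Dynamic power of a CMOS circuit: P_dyn = alpha * C * V^2 * f.
# The component values below are assumed for illustration only.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hz

baseline = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.2, f_hz=100e6)
# Halving the supply voltage cuts dynamic power to a quarter, which is
# why voltage scaling is the most effective low-power knob.
scaled = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.6, f_hz=100e6)
assert abs(scaled / baseline - 0.25) < 1e-9
```

The quadratic dependence on V, versus the linear dependence on α, C, and f, is exactly the kind of trade-off the on-site and remote experiments let students measure on real hardware.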
Remineralizing soils? The agricultural usage of silicate rock powders in the context of One Health
(2022)
The concept of soil health describes the capacity of soil to fulfill essential functions and ecosystem services. Healthy soils are inextricably linked to sustainable agriculture and are crucial for the interconnected health of plants, animals, humans, and their environment ("One Health"). However, soil health is threatened by unprecedented rates of soil degradation. A major form of soil degradation is nutrient depletion, which has been seriously underestimated for potassium (K) and several micronutrients. One way to replenish K and micronutrients is multi-nutrient silicate rock powders (SRPs). Their agronomic suitability has long been questioned due to slow weathering rates, although recent studies found significant soil health improvements and challenge past objections, which insufficiently addressed the factorial complexity of the weathering process. Furthermore, environmental co-benefits might arise from their mixture with livestock slurry, which could reduce the slurry's ammonia (NH3) emissions and improve its biophysicochemical properties. However, neither the effects of SRPs on soil health nor the biophysicochemical effects of mixing SRPs with livestock slurry have hitherto been comprehensively analyzed. The overall aim of this dissertation is thus to review the agricultural usage of SRPs in the context of One Health. The first part of this thesis starts with an elaboration of the health concept in general and then explores the interlinkages between soil health and One Health. Subsequently, the potentials and oftentimes bypassed problems of operationalizing soil health are outlined, and feasible ways for its future usage are proposed. In the second part of the thesis, it is reviewed how and under which circumstances SRPs can ameliorate soil health. This is done by presenting a new framework with the most relevant factors for the usage of SRPs, through which several contradictory outcomes of prior studies can be explained.
A subsequent analysis of 48 crop trials reveals the potential of SRPs as a K and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are inconclusive. The review revealed various co-benefits that could substantially increase SRPs' overall agronomic efficiency. The last part of the thesis reports on the effects of mixing two rock powders with cattle slurry. SRPs significantly increased the slurry's CH4 emission rates, whereas the effects on NH3, CO2, and N2O emission rates were mostly insignificant. The rock powders increased the nutrient content of the slurry and altered its microbiology. In conclusion, the concept of soil health must be operationalized in more specific, practical, and context-dependent ways. Particularly in humid tropical environments, SRPs could advance low-cost soil health ameliorations, and their usage could have additional co-benefits regarding One Health. Mixing SRPs with organic materials like livestock slurry could overcome the major obstacle of their low solubility, although the effects on NH3 and greenhouse gas emissions must be evaluated further.
Western consumption patterns are strongly associated with environmental pollution and climate change, which challenges us to transform our society and consumption towards a sustainable future. This thesis takes up this challenge and aims to contribute to this debate at the intersection of ICT artifacts and social practices through the examples of food and mobility consumption. The social practice lens is employed as an alternative to the predominant persuasive or motivational lens of design in the respective consumption domains. Against this background, this thesis first presents three research papers that contribute to a broader understanding of dynamic practices and their transformation towards a sustainable stable state. The subsequent research takes up the empirical results of these sections, focusing more intensely on the appropriation of materials and infrastructures by utilizing Recommender Systems. Given this approach, this thesis contributes to three fields: practice-based Computing, Recommender Systems, and Consumer Informatics.
In this doctoral thesis, the curing processes of visible-light-curing (VLC) dental composites and 3D printing rapid prototyping (RP) materials are investigated with a focus on dielectric analysis (DEA). This method monitors the curing of resins in an alternating electric fringe field with adjustable frequencies and is often used for cure control in composites manufacturing in the aviation and automotive industries, but it is hardly established in dental science or RP method development. It is capable of investigating very fast initiation and primary curing processes using high frequencies in the kHz range. The aim of the thesis is a better understanding of the curing processes with respect to curing parameters such as resin composition, viscosity, temperature, and, for light-curing composites, also light intensity and irradiation depth. Due to the nature of both dental and RP systems, application-specific experimental set-ups had to be designed to allow for the generation of reproducible and valid results. Subsequently, different evaluation methods were developed to characterize the curing behavior of both material types. A special focus was placed on the determination of kinetic parameters from DEA measurements. Reaction rates of the curing of the corresponding thermosets were calculated and applied to the ion viscosity curves measured by DEA to evaluate reaction kinetic parameters. For the dental composites, it could be clearly shown that the initial curing rate is directly proportional to the light intensity and not to its square root, as proposed by many other authors. A good description of the curing behavior of 3DP RP materials was also achieved by assuming a reaction order smaller than one. These data provide the basis for the kinetic modeling of polymerization and curing processes proposed within the thesis.
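The kinetic statements above can be illustrated with a small sketch of nth-order cure kinetics, dα/dt = k(1−α)^n, with the rate constant taken as proportional to light intensity; k0, n, and the intensities are assumed demonstration values, not fitted parameters from the thesis.

```python
# Sketch of nth-order cure kinetics: dalpha/dt = k * (1 - alpha)^n,
# integrated with explicit Euler. k0, n, and the relative intensities
# are assumed illustration values.

def cure_curve(k, n, dt=1e-3, t_end=5.0):
    alpha, curve = 0.0, []
    for _ in range(int(t_end / dt)):
        alpha = min(1.0, alpha + k * (1.0 - alpha) ** n * dt)  # Euler step
        curve.append(alpha)
    return curve

k0, n = 0.5, 0.8                     # n < 1, as reported for the RP resins
low = cure_curve(k=k0 * 1.0, n=n)    # relative light intensity 1
high = cure_curve(k=k0 * 2.0, n=n)   # doubled intensity

# Initial curing rate is directly proportional to intensity ...
assert abs(high[0] / low[0] - 2.0) < 1e-6
# ... and the more intense cure reaches higher conversion at any time.
assert high[1000] > low[1000]
```

Fitting k and n to measured ion viscosity curves is the step that would connect such a model to the DEA data discussed above.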
Microorganisms not only contribute to the spoilage of food but can also cause illnesses through consumption. Consumer concerns and doubts about the shelf life of products and the resulting enormous amounts of food waste have led to a demand for a rapid, robust, and non-destructive method for the detection of microorganisms, especially in the food sector. Therefore, a rapid and simple sampling method for the Raman and infrared (IR) microspectroscopic study of microorganisms associated with spoilage processes was developed. For the subsequent evaluation, pre-processing routines as well as chemometric models for the classification of spoilage microorganisms were developed. The microbiological samples are taken using a disinfectable sampling stamp and measured by microspectroscopy without the usual pre-treatments such as purification, separation, washing, and centrifugation. The resulting complex multivariate data sets were pre-processed, reduced by principal component analysis, and classified by discriminant analysis. Classification of independent unlabeled test data showed that microorganisms could be classified at the genus, species, and strain levels with an accuracy of 96.5 % (Raman) and 94.5 % (IR), respectively, despite large biological differences and the novel sampling strategy. As bacteria are exposed to constantly changing conditions and their adaptation mechanisms may make them inaccessible to conventional measurement methods, the methods and models developed were investigated for their suitability for microorganisms exposed to stress. Compared to normal growth conditions, spectral changes in lipids, polysaccharides, nucleic acids, and proteins were observed in microorganisms exposed to stress. Models were developed to discriminate microorganisms independently of the involvement of various stress factors and storage times.
Classification of the investigated bacteria yielded accuracies of 97.6 % (Raman) and 96.6 % (IR), respectively, and a robust and meaningful model was developed to discriminate different microorganisms at the genus, species, and strain levels. The obtained results are very promising and show that the methods and models developed for the discrimination of microorganisms as well as the investigation of stress factors on microorganisms by means of Raman- and IR-microspectroscopy have the potential to be used, for example, in the food sector for the rapid determination of surface contamination.
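The pipeline described above (pre-processing, reduction by principal component analysis, classification in the reduced space) can be sketched on synthetic data; the nearest-centroid rule below stands in for discriminant analysis, and all spectra and parameters are invented for illustration.

```python
import numpy as np

# Chemometric pipeline sketch: mean-centre synthetic "spectra", reduce
# them by PCA (via SVD), and classify in PC space with a nearest-centroid
# rule as a stand-in for discriminant analysis.

rng = np.random.default_rng(0)

def make_spectra(peak, n=30, length=100):
    """Toy genus: a Gaussian 'band' at a characteristic position + noise."""
    x = np.arange(length)
    band = np.exp(-0.5 * ((x - peak) / 5.0) ** 2)
    return band + 0.2 * rng.standard_normal((n, length))

X = np.vstack([make_spectra(25), make_spectra(60)])   # two "genera"
y = np.array([0] * 30 + [1] * 30)

# PCA: centre, then project onto the top right-singular vectors
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt[:2].T                        # first two PCs

centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])

def classify(spectrum):
    s = (spectrum - mean) @ Vt[:2].T
    return int(np.argmin(((centroids - s) ** 2).sum(axis=1)))

test = make_spectra(60, n=1)[0]                       # fresh class-1 spectrum
assert classify(test) == 1
```

A real system would replace the synthetic bands with measured Raman/IR spectra and the centroid rule with a trained discriminant-analysis model.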
Discrimination and classification of eight strains related to meat spoilage microorganisms commonly found in poultry meat were successfully carried out using two dispersive Raman spectrometers (a microscope and a portable fiber-optic system) in combination with chemometric methods. Principal Component Analysis (PCA) and Multi-Class Support Vector Machines (MC-SVM) were applied to develop discrimination and classification models. These models were verified using validation data sets, which were successfully assigned to the correct bacterial genera and even to the right strain. The discrimination of bacteria down to the strain level was performed on the pre-processed spectral data using a three-stage model based on PCA. The spectral features and differences among the species on which the discrimination was based were clarified through the PCA loadings. In MC-SVM, the pre-processed spectral data were subjected to PCA and utilized to build a classification model. When using the first two components, the accuracy of the MC-SVM model was 97.64 % and 93.23 % for the validation data collected by the Raman microscope and the portable fiber-optic Raman system, respectively. The accuracy reached 100 % for the validation data when using the first eight and ten PCs from the data collected by the Raman microscope and the portable fiber-optic Raman system, respectively. The results reflect the strong discriminative power and high performance of the developed models, the suitability of the pre-processing method used in this study, and that the lower accuracy of the portable fiber-optic Raman system does not adversely affect the discriminative power of the developed models.
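A minimal sketch of the one-vs-rest idea behind an MC-SVM follows, using linear SVMs trained by sub-gradient descent on synthetic two-dimensional "PC scores"; a real pipeline would use a library implementation and the measured spectra.

```python
import numpy as np

# One-vs-rest multi-class scheme: one binary linear SVM per class,
# trained by sub-gradient descent on the hinge loss; the class whose
# decision function scores highest wins. Data are synthetic.

rng = np.random.default_rng(2)

def train_linear_svm(X, y, lam=1e-3, lr=0.01, epochs=100):
    """y in {-1, +1}; returns (w, b) for the decision function x.w + b."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w               # only the L2 penalty acts
    return w, b

# Three well-separated strain clusters in a 2-D PC-score plane
X = np.vstack([rng.normal(center, 0.3, size=(20, 2))
               for center in ([0, 0], [4, 0], [0, 4])])
y = np.repeat([0, 1, 2], 20)

models = [train_linear_svm(X, np.where(y == c, 1, -1)) for c in range(3)]

def classify(point):
    return int(np.argmax([point @ w + b for w, b in models]))

assert classify(np.array([4.1, -0.2])) == 1
assert classify(np.array([0.1, 3.8])) == 2
```

Feeding PCA scores rather than raw spectra into the SVMs, as in the study, keeps the classifiers small and mirrors the accuracy-vs-number-of-PCs trade-off reported above.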
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature influence the viscoelastic behavior significantly in certain temperature ranges. Flow induced orientations due to higher draw ratio and higher mold temperature lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected by local crystallinities and local complex moduli. Additionally, a model to calculate three-dimensional anisotropic coefficients of thermal expansion as a function of the process dependent crystallinity was developed based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
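The stacked-layer unit-cell idea can be illustrated with the classical series/parallel mixing rules; the crystallinity, moduli, and phase CTEs below are assumed, order-of-magnitude values, not the calibrated model of the thesis.

```python
# Stacked-layer unit cell: amorphous layers and crystalline lamellae in
# series (through-thickness) and in parallel (in-plane) give different
# effective coefficients of thermal expansion (CTE). All numbers are
# assumed, order-of-magnitude HDPE-like values.

def cte_series(phi_c, a_c, a_a):
    """Through-thickness: thickness-weighted average of the layer CTEs."""
    return phi_c * a_c + (1 - phi_c) * a_a

def cte_parallel(phi_c, a_c, a_a, e_c, e_a):
    """In-plane (iso-strain): stiffness-weighted average of the CTEs."""
    return (phi_c * e_c * a_c + (1 - phi_c) * e_a * a_a) / (
        phi_c * e_c + (1 - phi_c) * e_a)

phi = 0.65                    # crystallinity (assumed, process-dependent)
a_c, a_a = 1.3e-5, 2.0e-4     # CTE of crystalline / amorphous phase [1/K]
e_c, e_a = 3.0e9, 0.3e9       # phase moduli [Pa]

through = cte_series(phi, a_c, a_a)
in_plane = cte_parallel(phi, a_c, a_a, e_c, e_a)
# The stiff lamellae constrain in-plane expansion, so the in-plane CTE
# is much smaller than the through-thickness one -> anisotropy.
assert in_plane < through
```

Because φ depends on the processing conditions, such a model links the measured process-dependent crystallinity directly to anisotropic thermal expansion, as described above.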
We present a type inference algorithm and its verification for an object-oriented programming language called O'SMALL. O'SMALL is a class-based language with imperative features. Classes are not first-class citizens. No type declarations are required. Type inference operates on an extended lambda calculus into which O'SMALL is translated. The system features extensible record types, mu-types, and imperative types. This work belongs to both theoretical and practical computer science. In the theoretical part, the type inference algorithm for our lambda calculus with records is formalized in order-sorted logic. In the practical part, the algorithm for let-polymorphism and imperative features is based on well-known approaches. These approaches are presented in a new fashion, but they are not proven correct.
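At the core of such a type inference algorithm sits first-order unification; the toy sketch below unifies arrow and variable types only (no records, mu-types, or imperative types, and the occurs check is omitted for brevity).

```python
# Toy first-order unification over type terms:
#   ('var', name) | ('const', name) | ('arrow', domain, codomain)
# The occurs check is omitted for brevity.

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(t1, t2, subst):
    """Return a substitution making t1 and t2 equal, or raise TypeError."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == 'var':
        return {**subst, t1[1]: t2}
    if t2[0] == 'var':
        return {**subst, t2[1]: t1}
    if t1[0] == t2[0] == 'arrow':
        subst = unify(t1[1], t2[1], subst)   # unify the domains ...
        return unify(t1[2], t2[2], subst)    # ... then the codomains
    raise TypeError(f"cannot unify {t1} with {t2}")

int_t = ('const', 'int')
a = ('var', 'a')
# Unifying (a -> int) with (int -> int) binds a := int
s = unify(('arrow', a, int_t), ('arrow', int_t, int_t), {})
assert walk(a, s) == int_t
```

Extending the term language with extensible row-typed records and mu-types, as the thesis does, changes the cases of `unify` but not this overall structure.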
Computer graphics research strives to synthesize images of such high visual realism that they are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems enable a larger visual field to be addressed by covering a wider field of view, due to either their size or their form as head-mounted devices. Currently, research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload, a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VE and RE). All feedback modalities have been designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen. It has also been shown that perceptual aspects in virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied; these methods are used to create and evaluate the interfaces, and to this end, seven experiments were performed. Each of the five prototypes uses a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VE and RE target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface; a psychophysical study was conducted to highlight the potential of the interface. The back of a smartphone is used for haptic feedback in the third prototype; alongside a psychophysical study, it was found that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. The sides of the feet are stimulated in the first prototype to provide proximity information about remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated.
A purpose-built foot platform providing several feedback modalities shows that self-motion perception can be increased.
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
Multipoint data communication is among the hot topics of communication research and development. Many studies and ideas have been presented, the vast majority focusing on a homogeneous environment in terms of physical networks, communication protocol stacks, coding schemes, and/or service qualities. The first straightforward implementations (Steve Deering's IP multicast on the MBone being the most popular one) already give an idea of the capabilities of a multipoint environment.
This thesis is dedicated to models and algorithms for use in physical cryptanalysis, a new and evolving discipline in the implementation security of information systems.
Physical observables such as the power consumption or electromagnetic emanation of a cryptographic module are so-called "side channels". They contain exploitable information about internal states of an implementation at runtime. Physical effects can also be used for the injection of faults. Fault injection is successful if it recovers internal states by examining the effects of an erroneous state propagating through the computation.
The best currently known approach in physical cryptanalysis is a thorough experimental verification at a profiling stage, which the most powerful methods include. The final multivariate algorithms of this thesis can be seen as among the most efficient in side-channel cryptanalysis.
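As a generic illustration of how a side channel leaks key material, the sketch below performs a textbook correlation power analysis on simulated traces under a Hamming-weight leakage model. This is not the thesis's multivariate algorithm: the stand-in S-box, the noise model, and all names are illustrative assumptions.

```python
# Correlation power analysis (CPA) sketch: recover one key byte from
# simulated power traces that leak the Hamming weight of an S-box output.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)  # stand-in S-box: a fixed random bijection

def hw(x):
    """Hamming-weight leakage model."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

SECRET = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# Simulated traces: one leakage sample per encryption, plus Gaussian noise.
traces = [hw(SBOX[p ^ SECRET]) + random.gauss(0, 1) for p in plaintexts]

def recover_key(plaintexts, traces):
    """Rank all 256 key hypotheses by |correlation| with the traces."""
    return max(
        range(256),
        key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                  traces)),
    )

print(hex(recover_key(plaintexts, traces)))  # recovers SECRET
```

The correct hypothesis predicts the noiseless leakage exactly and thus correlates strongly with the traces, while wrong hypotheses are nearly uncorrelated; profiling-based attacks as discussed in the thesis refine exactly this kind of statistical model.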
As robots become ubiquitous and more capable, there is a pressing need for solid robot software development methods to broaden the task spectrum of robots. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, covering research on the specification, deployment, and adaptation of robot perception systems.
The generation and maintenance of intricate spatiotemporal patterns of gene expression in multicellular organisms requires the establishment of complex mechanisms of transcriptional regulation. Estimates that up to one million enhancers exist in the human genome accentuate the utmost importance of this type of cis-regulatory element for gene regulation. However, surprisingly little is known about the mechanisms used to temporarily or permanently activate or inactivate enhancers during cellular differentiation. The current work addresses the question of how enhancer regulation can be achieved.
Using the chemokine (C-C motif) ligand gene Ccl22 as a model, the first example is based on the question of how the activation of an enhancer can be prevented in a physiological context. Ccl22 is expressed by myeloid cells, such as dendritic cells, upon exposure to inflammatory stimuli. The expression in other cell types, such as fibroblasts, is prevented by the strong accumulation of H3K9me3 at the enhancer's proximal region. This accumulation is attenuated in myeloid cells through the activity of the stimulus-induced demethylase Jmjd2d. To tease out which genomic fragment or fragments in the Ccl22 locus could be responsible for the maintenance of enhancer inactivity, potentially through the recruitment of H3K9 methyltransferases, the enhancer-repressing capacity of 1 kb fragments of the gene locus was analysed in retroviral reporter assays. It was found that a fragment adjacent to the Ccl22 enhancer that overlaps with a member of a subfamily of long interspersed nuclear elements (LINEs) showed strong repressive potential on a model enhancer. Subsequent retroviral reporter assays with LINEs from loci of other stimulus-dependent genes identified additional LINE fragments that exhibit strong enhancer-repressive capacity. These findings suggest a mechanism for enhancer silencing involving LINEs.
The second example concentrates on the inactivation of an enhancer during colorectal cancer (CRC) progression. The adenoma-to-carcinoma transition during CRC progression is often accompanied by a downregulation of the tumour suppressor gene EPHB2. The EMT-inducing factor SNAIL1 strongly downregulated EPHB2 expression in a CRC cell model. To gain insights into the transcriptional regulation of EPHB2, potential cis-regulatory elements in the EPHB2 upstream region were analysed using reporter assays. A cell-type-specific enhancer was identified, and subsequent chromatin analyses revealed a correlation between enhancer chromatin conformation and EPHB2 expression in different CRC cell lines. Additionally, the overexpression of murine Snail1 induced chromatin changes at the EPHB2 enhancer towards a poised, transcriptionally silent chromatin conformation. Mutational analyses of the minimal enhancer region pinpointed three transcription factor binding motifs as essential for full enhancer activity. Different binding patterns between CRC cell lines at the TCF/LEF motif were subsequently identified. Furthermore, a switch from TCF7L2 to LEF1 occupancy was found upon overexpression of Snail1 in vitro and in vivo. The generation of LS174T CRC cells overexpressing LEF1 confirmed the involvement of LEF1 in the downregulation of EPHB2 and the competitive displacement of TCF7L2. This part of the work demonstrated that the SNAIL1-induced downregulation of EPHB2 is dependent on the decommissioning of a transcriptional enhancer and led to a hypothetical model involving LEF1 and ZEB1.
In summary, this work highlighted two distinct mechanisms of enhancer regulation. One mechanism is based on enhancer-repressive LINE fragments that might prevent stimulus-dependent enhancer activation. In the second, enhancer silencing was shown to be based on a competitive transcription factor binding mechanism.