RNA is one of the most important molecules in living organisms. One of its main functions is to regulate gene expression, which involves binding to and forming a joint structure with a messenger RNA. An RNA's function is determined by its sequence and the structure it folds into. Accordingly, the prediction of individual as well as joint structures is an important area of research. In this thesis a method for the prediction of RNA-RNA joint structures from their minimum free energy (mfe) structures was developed. It is able to extensively explore the joint structural landscape of two interacting RNAs by taking advantage of the locality of changes in the RNAs' structures as well as natural and energetic constraints. The method predicts the mfe joint structure as well as alternative stable joint structures, while also computing non-optimal folding pathways from the unbound individual mfe structures to the predicted joint structures. It is shown how an enumeration approach is used that is able to deal with the enormous search space and to avoid any cyclic behaviour. The method is evaluated on two standard datasets of known interacting RNAs and shows good results.
Research on robotics in the care and companionship of persons with dementia, a controversially discussed topic, is still in its infancy, even though the first systems are already on the market. Using exemplary, case-based excerpts, this contribution gives insights into the ongoing multidisciplinary project EmoRobot, which explores, in an explorative and interpretative manner, the use of robotics in the emotion-oriented care and support of persons with dementia. The focus lies on the individual relevances of the persons with dementia themselves.
Error analysis in a high accuracy sampled-data velocity stabilising system using Volterra series
(2015)
We present GEM-NI -- a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, which both allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback on the system's features, showing that GEM-NI supports creative design work well.
Polyether- and polyether/ester-based TPUs (thermoplastic polyurethanes) were investigated with wide-angle XRD (X-ray diffraction) and SAXS (small-angle X-ray scattering). Furthermore, SAXS measurements were performed in the temperature range of 30 °C to 130 °C. Polyether-based polymers exhibit only one broad diffraction signal in the region of 2θ = 15° to 25°. In the case of polyurethanes with ether/ester modification, the broad diffraction signal is accompanied by small sharp diffraction signals. SAXS measurements of the polymers reveal the size and shape of the crystalline zones of the polymer. Between 30 °C and 130 °C the size of the crystalline zones changes significantly. The size decreases in most of the investigated TPUs. In the case of Desmopan 9365D, an increase of the particle size was observed.
This work presents a novel method for the real-time monitoring of laser drilling processes. The investigations are carried out on different materials using a passively Q-switched Nd:YAG laser. During the process, the acoustic emissions are recorded and subsequently analysed by fast Fourier transform. This makes it possible to detect the breakthrough when drilling through a material as well as the material transition in multilayer systems. The acoustic measurements are supported by evaluating the laser's pulse train with a photodiode. The dominant frequency in the acoustic spectrum shows good agreement with the pulse frequency occurring in each laser burst. The presented method enables real-time monitoring of laser drilling with inexpensive and simple hardware. In contrast to existing methods, it is also highly robust against external interference, since the evaluation is frequency-based.
Family businesses contribute substantially to the gross value added of the Federal Republic of Germany: at the end of 2010, family businesses accounted for about 78% of all companies in the German economy and for 56% of total employment. Sooner or later, every family business faces a change of management and ownership. Business succession is an unavoidable part of the life cycle of a family business. For the period from 2014 to 2018, about 27,000 successions per year are forecast in German family businesses: purely mathematically, this averages out to about one succession every twenty minutes.
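The "one succession every twenty minutes" figure can be verified with a quick back-of-the-envelope calculation (a sketch based only on the ~27,000-per-year forecast cited above):

```python
# Rough check of the succession frequency cited above:
# ~27,000 successions per year in German family businesses.
successions_per_year = 27_000
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

minutes_per_succession = minutes_per_year / successions_per_year
print(round(minutes_per_succession, 1))  # ~19.5 minutes, i.e. about one every twenty minutes
```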
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
Grundlagen des Marketings
(2015)
Detection of triacetone triperoxide using temperature cycled metal-oxide semiconductor gas sensors
(2015)
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment in those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of a real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no-exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, which likely contributed to an enhanced sense of presence.
Semantic Image Segmentation Combining Visible and Near-Infrared Channels with Depth Information
(2015)
Image understanding is a vital task in computer vision that has many applications in areas such as robotics, surveillance and the automobile industry. An important precondition for image understanding is semantic image segmentation, i.e. the correct labeling of every image pixel with its corresponding object name or class. This thesis proposes a machine learning approach for semantic image segmentation that uses images from a multi-modal camera rig. It demonstrates that semantic segmentation can be improved by combining different image types as inputs to a convolutional neural network (CNN), when compared to a single-image approach. In this work a multi-channel near-infrared (NIR) image, an RGB image and a depth map are used. The detection of people is further improved by using a skin image that indicates the presence of human skin in the scene and is computed based on NIR information. It is also shown that segmentation accuracy can be enhanced by using a class voting method based on a superpixel pre-segmentation. Models are trained for 10-class, 3-class and binary classification tasks using an original dataset. Compared to the NIR-only approach, average class accuracy is increased by 7% for 10-class, and by 22% for 3-class classification, reaching a total of 48% and 70% accuracy, respectively. The binary classification task, which focuses on the detection of people, achieves a classification accuracy of 95% and true positive rate of 66%. The report at hand describes the proposed approach and the encountered challenges and shows that a CNN can successfully learn and combine features from multi-modal image sets and use them to predict scene labeling.
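The multi-modal input described above can be illustrated by stacking the different image types along the channel axis before feeding them to a CNN (a minimal sketch with synthetic arrays; the exact channel counts and resolution are assumptions, not taken from the thesis):

```python
import numpy as np

# Synthetic stand-ins for the camera rig's outputs (H x W x C arrays).
h, w = 480, 640
nir = np.zeros((h, w, 3))    # multi-channel near-infrared image (3 channels assumed)
rgb = np.zeros((h, w, 3))    # colour image
depth = np.zeros((h, w, 1))  # depth map
skin = np.zeros((h, w, 1))   # NIR-derived skin-presence map

# Concatenate along the channel axis to form one multi-modal CNN input,
# so the network can learn features across all modalities jointly.
x = np.concatenate([nir, rgb, depth, skin], axis=-1)
print(x.shape)  # (480, 640, 8)
```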
This paper proposes a new artificial neural network-based maximum power point tracker for photovoltaic applications. This tracker significantly improves the efficiency of photovoltaic systems with series-connected photovoltaic modules under non-uniform irradiance on the photovoltaic array surfaces. The artificial neural network uses irradiance and temperature sensors to generate the maximum power point reference voltage and employs a classical perturb and observe search algorithm. The structure of the artificial neural network was obtained by numerical modelling using Matlab/Simulink. The artificial neural network was trained using Bayesian regularisation back-propagation algorithms and demonstrated a good prediction of the maximum power point. The relative number of Vmpp prediction errors in the range of ±0.2 V is 0.05%, based on validation data.
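The classical perturb and observe search the tracker builds on can be sketched as follows (a generic P&O loop on a toy power curve; the curve and step size are illustrative, not the paper's PV model):

```python
def p_and_o(power, v0=10.0, step=0.5, iters=100):
    """Classical perturb & observe: step the operating voltage,
    keep the direction if power increased, reverse it otherwise."""
    v, direction = v0, +1.0
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:          # power dropped -> we stepped past the MPP
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with its maximum power point at 17 V.
mpp = p_and_o(lambda v: -(v - 17.0) ** 2 + 100.0)
print(round(mpp, 1))  # settles oscillating around 17 V
```

The fixed step size is the classical trade-off: larger steps converge faster but oscillate more widely around the maximum power point, which is why the paper pairs the search with a neural-network-generated reference voltage.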
IT-accessibility is often treated as an orphan in companies, even though the proportion of disabled people is substantial and people are becoming older and more susceptible to disabilities. Besides cost factors, companies often do not have a plan for how to implement and control IT-accessibility successfully. However, most companies are familiar with IT-maturity frameworks for evaluating and improving their own IT-infrastructure. Dealing with IT-accessibility would be easier if IT-maturity frameworks considered IT-accessibility and provided recommendations and solutions for a successful implementation. Therefore, this article conducts a review of an acknowledged IT-maturity framework with regard to its capability to enable the implementation of IT-accessibility in an organization. The first part of this article illustrates the motivation and background for the authors' concern with this topic. Afterwards, the authors introduce the reader to the reviewed IT-maturity framework and provide basic knowledge on IT-accessibility. The main part of the article deals with the review of the applied IT-maturity framework and outlines examples of critical capabilities for successfully implementing IT-accessibility in an organization. The final section derives implications and closes with planned future research activities in this field.
Advanced driver assistance systems (ADAS) are technology systems and devices designed as an aid to the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of input images must be done, which consists of a traffic sign detection (TSD) algorithm to reduce the possible hypothesis space. Performance of TSD algorithm is critical.
One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both Circular [7] and Polygonal traffic signs [5]. This algorithm runs in real-time on high-end personal computers, but its computational performance must be improved in order for it to run in real-time on embedded computer platforms.
To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times as well as detection and false-positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark [29].
We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original on detecting Circles, up to 3.8 times faster on Triangle detection, 2.9 times faster on Square detection and 2.4 times faster on Octagon detection. All of these measurements were observed with better detection and false-positive rates than the original RSD.
When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for Circle and Regular Polygon detection; for Circle detection we observed a lower detection rate than the original algorithm, while for Regular Polygon detection we always observed better detection rates. False-positive rates were high, in the range of 80% to 90%.
We conclude that our Improved Radial Symmetry Detector is a significant improvement of the Radial Symmetry Detector, both for Circle and Regular polygon detection. We expect that our improved algorithm will lead the way to obtain real-time traffic sign detection and recognition in embedded computer platforms.
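The multiscale idea behind these speedups can be illustrated with a simple image pyramid: each coarser level has a quarter of the pixels of the one below it, so a detector pass there is correspondingly cheaper (a generic sketch using 2×2 average pooling, not the authors' implementation):

```python
def downsample(img):
    """Halve an image (list of rows) with 2x2 average pooling."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def pyramid(img, levels):
    """Build a multiscale pyramid; each level touches only 1/4 as
    many pixels as the level below, so coarse-to-fine detection
    can reject most of the image cheaply."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

img = [[1.0] * 64 for _ in range(64)]
levels = pyramid(img, 3)
print([(len(l), len(l[0])) for l in levels])  # [(64, 64), (32, 32), (16, 16)]
```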
Extraction of text information from visual sources is an important component of many modern applications, for example extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of a histogram of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features -- aspect and occupancy ratios -- and a linear SVM is used as the classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall with an F-score of 69% were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.
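The run-length variant of the histogram of stroke widths can be sketched as follows (a simplified illustration on horizontal scans of a binary image, not the thesis code; real stroke widths would also use vertical and diagonal runs):

```python
def stroke_run_lengths(binary_img):
    """Collect horizontal run lengths of foreground (1) pixels;
    in a character region these approximate local stroke widths."""
    runs = []
    for row in binary_img:
        length = 0
        for px in row:
            if px:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:
            runs.append(length)
    return runs

def hsw(runs, bins=8):
    """Normalized histogram of stroke widths: characters yield a
    narrow, peaked histogram; clutter spreads across many bins."""
    hist = [0] * bins
    for r in runs:
        hist[min(r - 1, bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# A crude 'I' glyph: every stroke run is 2 pixels wide.
glyph = [[0, 1, 1, 0]] * 5
print(hsw(stroke_run_lengths(glyph)))  # all mass lands in the width-2 bin
```

The resulting fixed-length vector, together with the aspect and occupancy ratios, is the kind of feature a linear SVM can separate, which is what makes the approach script-independent.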
Permanenz in Inkohärenz
(2015)
Secure vehicular communication has been discussed over a long period of time. Now this technology is being implemented in different Intelligent Transportation System (ITS) projects in Europe. Most of these projects need a suitable Public Key Infrastructure (PKI) for secure communication between the entities involved in a Vehicular Ad hoc Network (VANET). A first proposal for a PKI architecture for Intelligent Vehicular Systems (IVS PKI) is given by the Car2Car Communication Consortium. This architecture, however, mainly deals with inter-vehicle communication and is less focused on the needs of roadside units. Here, we propose a multi-domain PKI architecture for Intelligent Transportation Systems which considers the present-day necessities of road infrastructure authorities and vehicle manufacturers. The PKI domains are cryptographically linked based on local trust lists. In addition, a crypto-agility concept is suggested, which takes the adaptation of key lengths and cryptographic algorithms during PKI operation into account.
The latest advances in the field of smart card technologies allow modern cards to be more than just simple security tokens. Recent developments facilitate the use of interactive components like buttons, displays or even touch-sensors within the card's body, thus conquering whole new areas of application. With interactive functionalities, usability becomes the most important aspect for designing secure and widely accepted products. Unfortunately, usability can only be tested fully with completely integrated, hence expensive, smart card prototypes. This severely restricts application-specific research, case studies of new smart card user interfaces and the optimization of design aspects as well as hardware requirements, by making usability and acceptance tests in smart card development very costly and time-consuming. Rapid development and simulation of smart card interfaces and applications can help to avoid this restriction. This paper presents a rapid development process for new smart card interfaces and applications based on common smartphone technology, using a tool called SCUIDSim. We demonstrate the variety of usability aspects that can be analyzed with such a simulator by discussing selected example projects.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction waves in their bodies to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to four types of mazes and confirm that the APA can solve them.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction waves in their bodies to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to maze solving and to route planning on a road map.
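The wave-propagation principle both APA abstracts build on can be illustrated by a breadth-first wavefront expanding through a maze (a simplified stand-in for the plasmodium contraction wave, not the APA itself):

```python
from collections import deque

def wavefront(maze, start, goal):
    """Expand a wave from start; each free cell records the step at
    which the wave first reached it, so the value at the goal is the
    shortest path length (0 = free cell, 1 = wall)."""
    h, w = len(maze), len(maze[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if (y, x) == goal:
            return dist[(y, x)]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and maze[ny][nx] == 0 \
                    and (ny, nx) not in dist:
                dist[(ny, nx)] = dist[(y, x)] + 1
                queue.append((ny, nx))
    return None  # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(wavefront(maze, (0, 0), (2, 0)))  # 6
```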
Simultaneous multifrequency radio observations of the Galactic Centre magnetar SGR J1745-2900
(2015)
At the latest with the 2007 Act to Strengthen Competition in Statutory Health Insurance, internal quality management (QM) became an essential component of inpatient medical rehabilitation (Petri, Stähler, 2008). Since the transition period expired on 1 October 2012, all inpatient facilities occupied by a statutory rehabilitation provider have had a certified QM system that meets the requirements of the Bundesarbeitsgemeinschaft für Rehabilitation.
For IT service management, i.e. the measures for planning, monitoring and controlling the effectiveness and efficiency of IT services, standard process models such as ITIL or MOF exist. Small and medium-sized enterprises (SMEs) often cannot use the IT service management processes from ITIL or MOF, because the additional administrative overhead of adopting them is usually not worthwhile for these companies. This is a decisive competitive disadvantage, since the tasks and topics in IT service management for SMEs are very similar to those in large companies.
In this contribution, the authors identify typical requirements for IT service management in SMEs, then develop a process model for IT service management suitable for SMEs that is derived from ITIL, and finally describe its exemplary introduction in one company.
Background and aims: The number of scientific articles examining gambling addiction has grown dynamically in recent years. Research on the topic is being conducted, for example, in Great Britain,(1) Finland(2) and Sweden(3). However, the empirical results of these studies have not yet been published, so their evaluation will only be possible at a later stage.(4)
Sustainable development needs sustainable production and sustainable consumption. During the last decades the encouragement of sustainable production has been the focus of research and policy makers under the implicit assumption that the observable increasing ‘green’ values of consumers would also entail a growing sustainable consumption. However, it has been found that the actual purchasing behaviour often deviates from ‘green’ attitudes. This phenomenon is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications.
The phenomenon of the deviation between purchase attitudes and actual buying behaviour of responsible consumers is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications. The last three barriers are mainly of a psychological nature. Especially the low-involvement feature of food products due to daily purchase routines and relatively low prices tends to result in fast, automatic and subconscious decisions based on a so-called human mental system 1, derived from Daniel Kahneman’s (Nobel-Prize laureate in Behavioural Economics) model in behavioural psychology. In contrast, the human mental system 2 is especially important for the transformations of individual behaviour towards a more sustainable consumption. Decisions based on the human mental system 2 are slow, logical, rational, conscious and arduous. This so-called dual action model also influences the reliability of responses in consumer surveys. It seems that the consumer behaviour is the most unstable and unpredictable part of the entire supply chain and requires special attention. Concrete measures to influence consumer behaviour towards sustainable consumption are highly complex. Reviews of interdisciplinary research literature on behavioural psychology, behavioural economics and consumer behaviour and an empirical analysis of selected countries worldwide with a view to sustainable food are presented. The example of Denmark serves as a ‘best practice’ case study to illustrate how sustainable food consumption can be encouraged. 
It demonstrates that common efforts and a shared responsibility of consumers, business, interdisciplinary researchers, mass media and policy are needed. It takes pioneers of change who succeed in assembling a ‘critical mass’ willing to increase its ‘sustainable’ behaviour. Considering the strong psychological barriers of consumers and the continuing low market share of organic food, proactive policy measures would be conducive to foster the personal responsibility of the consumers and offer incentives towards a sustainable production. Also, further self-obligations of companies (Corporate Social Responsibility – CSR) as well as more transparency and simplification of reliable labels and certifications are needed to encourage the process towards a sustainable development.
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. Eventually, we show that our optimizations increase the performance of RHF by two orders of magnitude when compared to the original CPU implementation and one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times relates to unfiltered path tracing and how much time is needed to achieve identical quality when compared to an unfiltered path traced result. Finally, we summarize our work and describe possible future applications and research based on this.
The advantages of integrating users actively, early and over the long term into development processes, in order to avoid undesirable developments and to address user needs, are known beyond academic research. Processes and structures in companies of the ICT sector are already frequently implemented in an agile way. Nevertheless, small and medium-sized enterprises (SMEs) often fail to consistently exploit the potential of user integration. In case studies, three different SMEs were analysed with regard to how they take the voice of the user into account in the development process. Different strategies of user integration are examined, reflected in roles and tools, in requirements for and problems with the user sample, in methods and in data preparation. Our contribution is intended to help understand the challenges and problems SMEs face in the search for appropriate and well-fitting ways of user integration, and to help design solutions.
In this doctoral thesis the curing processes of visible light-curing (VLC) dental composites and 3D-printing rapid prototyping (RP) materials are investigated, with a focus on dielectric analysis (DEA). This method is able to monitor the curing of resins in an alternating electric fringe field with adjustable frequencies and is often used for cure control in composites manufacturing in the aviation and automotive industries, but is hardly established in dental science or in RP method development. It is capable of investigating very fast initiation and primary curing processes using high frequencies in the kHz range. The aim of the thesis is a better understanding of the curing processes with respect to curing parameters such as resin composition, viscosity and temperature, and, for light-curing composites, also light intensity and irradiation depth. Due to the nature of both dental and RP systems, application-specific experimental set-ups had to be designed to allow for the generation of reproducible and valid results. Subsequently, different evaluation methods were developed to characterize the curing behaviour of both material types. A special focus was placed on the determination of kinetic parameters from DEA measurements. Reaction rates of the curing of the corresponding thermosets were calculated and applied to the ion viscosity curves measured by DEA to evaluate reaction kinetic parameters. For the dental composites it could be clearly shown that the initial curing rate is directly proportional to the light intensity and not to its square root, as proposed by many other authors. A good description of the curing behaviour of 3DP RP materials was also achieved assuming a reaction order smaller than one. These data provide the basis for the kinetic modelling of polymerization and curing processes proposed within the thesis.
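The intensity dependence contrasted above can be stated compactly. Classical steady-state photopolymerization kinetics with bimolecular termination predicts a square-root dependence of the polymerization rate on light intensity, whereas the DEA results reported here are consistent with a linear dependence (standard textbook relations, given here for orientation only):

```latex
% Classical steady-state result (bimolecular radical termination):
R_p \propto \sqrt{I}
% Dependence observed here for the initial curing rate:
R_p \propto I
```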
Persons entering the working range of industrial robots are exposed to a high risk of collision with moving parts of the system, potentially causing severe injuries. Conventional systems that restrict access to this area range from walls and fences to light barriers and other vision-based protective devices (VBPD). None of these systems can distinguish between humans and workpieces in a safe and reliable manner. In this work, a new approach is investigated which uses an active near-infrared (NIR) camera system with advanced skin detection capabilities to distinguish humans from workpieces based on characteristic spectral signatures. This approach allows the implementation of more intelligent muting processes and at the same time increases the safety of persons working close to the robots. The conceptual integration of such a camera system into a VBPD and the enhancement of person detection methods through skin detection are described and evaluated in this paper. Building upon this work, next steps could be the development of multimodal sensor systems to safeguard the working ranges of collaborating robots using the described camera system.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, like large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that allow for getting started easily. Existing frameworks and libraries that provide support for multi-display rendering are often complex to understand, configure and extend. This is critical especially in an educational context, where the time students have for their projects is limited and quite short. These tools are also known and used mainly in research communities, thus providing less benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows -- with a small overhead -- for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in the context of distributed and multi-display rendering, like frame, camera and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it will provide students with a qualification in high demand.
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices (including such fundamental entities as “reference frames”!), let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of all possible motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.
Hox genes are an evolutionarily highly conserved gene family. They determine the anterior-posterior body axis in bilateral organisms and influence the developmental fate of cells. Embryonic stem cells are usually devoid of any Hox gene expression, but these transcription factors are activated in varying spatial and temporal patterns that define the development of various body regions. In the adult body, Hox genes are, among other things, responsible for driving the differentiation of tissue stem cells towards their respective lineages in order to repair tissues and organs and maintain their correct function. Because of this involvement in the embryonic and adult body, Hox genes have been suggested as candidates for improving stem cell differentiation in vitro and in vivo. In many studies, Hox genes have been identified as driving factors in stem cell differentiation towards adipogenesis, in lineages involved in bone and joint formation, mainly chondrogenesis and osteogenesis, in cardiovascular lineages including endothelial and smooth muscle cell differentiation, and in neurogenesis. As life expectancy rises, the demand for tissue reconstruction continues to increase. Stem cells have become an increasingly popular choice for therapies in regenerative medicine due to their self-renewal and differentiation potential. Mesenchymal stem cells in particular are used more and more frequently because of their easy handling and accessibility, combined with low tumorigenicity and few ethical concerns. This review therefore summarizes the correlations known to date between natural Hox gene expression patterns in body tissues and the differentiation of various stem cells towards their respective lineages, with a major focus on mesenchymal stem cell differentiation.
This overview is intended to help readers understand the complex interactions between Hox genes and differentiation processes throughout the body as well as in vitro, and thereby to support the further improvement of stem cell treatments in future regenerative medicine approaches.
Allgemeines Steuerrecht
(2015)
The generation and maintenance of intricate spatiotemporal patterns of gene expression in multicellular organisms requires complex mechanisms of transcriptional regulation. Estimates that up to one million enhancers exist in the human genome accentuate the utmost importance of this type of cis-regulatory element for gene regulation. However, surprisingly little is known about the mechanisms used to temporarily or permanently activate or inactivate enhancers during cellular differentiation. The current work addresses the question of how enhancer regulation can be achieved.
Using the chemokine (C-C motif) ligand gene Ccl22 as a model, the first example addresses the question of how the activation of an enhancer can be prevented in a physiological context. Ccl22 is expressed by myeloid cells, such as dendritic cells, upon exposure to inflammatory stimuli. Expression in other cell types, such as fibroblasts, is prevented by the strong accumulation of H3K9me3 in the enhancer's proximal region. This accumulation is attenuated in myeloid cells through the activity of the stimulus-induced demethylase Jmjd2d. To tease out which genomic fragments in the Ccl22 locus could be responsible for maintaining enhancer inactivity, potentially through the recruitment of H3K9 methyltransferases, the enhancer-repressing capacity of 1 kb fragments of the gene locus was analysed in retroviral reporter assays. A fragment adjacent to the Ccl22 enhancer, overlapping with a member of a subfamily of long interspersed nuclear elements (LINEs), showed strong repressive potential on a model enhancer. Subsequent retroviral reporter assays with LINEs from the loci of other stimulus-dependent genes identified additional LINE fragments with strong enhancer-repressing capacity. These findings suggest a mechanism for enhancer silencing involving LINEs.
The second example concentrates on the inactivation of an enhancer during colorectal cancer (CRC) progression. The adenoma-to-carcinoma transition during CRC progression is often accompanied by downregulation of the tumour suppressor gene EPHB2. The EMT-inducing factor SNAIL1 strongly downregulated EPHB2 expression in a CRC cell model. To gain insights into the transcriptional regulation of EPHB2, potential cis-regulatory elements in the EPHB2 upstream region were analysed using reporter assays. A cell-type-specific enhancer was identified, and subsequent chromatin analyses revealed a correlation between enhancer chromatin conformation and EPHB2 expression in different CRC cell lines. Additionally, overexpression of murine Snail1 induced chromatin changes at the EPHB2 enhancer towards a poised, transcriptionally silent conformation. Mutational analyses of the minimal enhancer region pinpointed three transcription factor binding motifs essential for full enhancer activity. Different binding patterns at the TCF/LEF motif were subsequently identified between CRC cell lines. Furthermore, a switch from TCF7L2 to LEF1 occupancy was found upon overexpression of Snail1 in vitro and in vivo. The generation of LS174T CRC cells overexpressing LEF1 confirmed the involvement of LEF1 in the downregulation of EPHB2 and the competitive displacement of TCF7L2. This part of the work demonstrated that the SNAIL1-induced downregulation of EPHB2 depends on the decommissioning of a transcriptional enhancer and led to a hypothetical model involving LEF1 and ZEB1.
In summary, this work highlighted two distinct mechanisms for enhancer regulation. One mechanism is based on enhancer repressive LINE fragments that might prevent stimulus-dependent enhancer activation. In the second, enhancer silencing was shown to be based on a competitive transcription factor binding mechanism.
TinyECC 2.0 is an open-source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side-channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication for 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, though to different degrees. The degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. It is extremely critical that in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private key operation suffices to simply read out the secret scalar. MESD requires additional adversary capabilities and affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private key operation. These findings give evidence that, in security applications, a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement, and that appropriate randomizing countermeasures should be added on top.
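Why an unprotected scalar multiplication leaks under SEMA can be sketched in a few lines: in the classic double-and-add algorithm, the sequence of "double" and "add" operations visible in an EM trace directly encodes the bits of the secret scalar. The toy below uses an additive group of integers modulo a prime as a stand-in for an elliptic-curve group, and the operation trace as a stand-in for the EM measurement; it is an illustration of the leakage principle, not TinyECC 2.0 code.

```python
# Toy demonstration of SEMA leakage from left-to-right double-and-add:
# each key bit of 1 triggers an extra "add" between two "double"s, so an
# adversary who can distinguish the two operations in a single trace can
# read the secret scalar directly off the D/A pattern.

P = 2**13 - 1  # toy modulus standing in for the curve group order

def double_and_add(scalar, point, trace):
    """Left-to-right double-and-add; appends one symbol per group op."""
    acc = 0
    for bit in bin(scalar)[2:]:
        acc = (acc + acc) % P
        trace.append("D")            # unconditional doubling
        if bit == "1":
            acc = (acc + point) % P
            trace.append("A")        # key-dependent add: the leak
    return acc

def recover_scalar(trace):
    """What a SEMA adversary does: read key bits off the D/A pattern."""
    bits = ""
    i = 0
    while i < len(trace):
        assert trace[i] == "D"
        if i + 1 < len(trace) and trace[i + 1] == "A":
            bits += "1"
            i += 2
        else:
            bits += "0"
            i += 1
    return int(bits, 2)
```

This is why the paper's conclusion calls for randomizing countermeasures: techniques such as scalar blinding or unified operation sequences break the direct correspondence between the operation pattern and the key bits.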
Work in progress: Starter-project for first semester students to survey their engineering studies
(2015)
Only since the turn of the 21st century have humanitarian organisations developed specific strategies that address climate change impacts as a humanitarian challenge. Taking the International Red Cross / Red Crescent Movement, the largest humanitarian network, as an empirical case study, the article discusses the Movement's changes in the areas of 1) agenda setting, 2) organisational restructuring, 3) networking, 4) programming, and 5) advocacy. Based on the case study and a theoretical framework from organisational sociology, the article draws conclusions on the internal and external factors that can explain why the Movement has been one of the first actors within the organisational field of humanitarian organisations to focus systematically on the humanitarian implications of climatic changes.
Communicating Climate Risks. A case study of the International Red Cross/Red Crescent Movement
(2015)
Gabriel sollte "nein" sagen
(2015)