Refine
Departments, institutes and facilities
- Fachbereich Informatik (73)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (32)
- Fachbereich Angewandte Naturwissenschaften (29)
- Fachbereich Ingenieurwissenschaften und Kommunikation (29)
- Institute of Visual Computing (IVC) (24)
- Institut für Cyber Security & Privacy (ICSP) (22)
- Institut für funktionale Gen-Analytik (IFGA) (16)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Verbraucherinformatik (IVI) (9)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (9)
Document Type
- Conference Object (100)
- Article (59)
- Part of a Book (9)
- Report (5)
- Doctoral Thesis (4)
- Master's Thesis (2)
- Part of Periodical (2)
- Preprint (2)
- Working Paper (2)
- Book (monograph, edited volume) (1)
- Lecture (1)
Year of publication
- 2015 (187)
Language
- English (187)
Keywords
- Eco-Feedback (4)
- E-Learning (3)
- Education (3)
- FPGA (3)
- Sustainable Interaction Design (3)
- Workplace (3)
- 802.11 (2)
- Crisis Communication (2)
- Culture (2)
- Development Policy (2)
RNA is one of the most important molecules in living organisms. One of its main functions is to regulate gene expression, which involves binding to and forming a joint structure with a messenger RNA. An RNA's function is determined by its sequence and the structure it folds into. Accordingly, the prediction of individual as well as joint structures is an important area of research. In this thesis a method for the prediction of RNA-RNA joint structures from their minimum free energy (mfe) structures was developed. It is able to extensively explore the joint structural landscape of two interacting RNAs by taking advantage of the locality of changes in the RNAs' structures as well as natural and energetic constraints. The method predicts the mfe joint structure as well as alternative stable joint structures, while also computing non-optimal folding pathways from the unbound individual mfe structures to the predicted joint structures. An enumeration approach is presented that is able to deal with the enormous search space and to avoid any cyclic behaviour. The method is evaluated on two standard datasets of known interacting RNAs and shows good results.
Error analysis in a high accuracy sampled-data velocity stabilising system using Volterra series
(2015)
We present GEM-NI, a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, both of which allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback for the system features, showing that GEM-NI supports creative design work well.
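The "Cartesian products of alternatives" idea can be illustrated as follows (a minimal sketch only; the function name and dictionary layout are illustrative and not GEM-NI's actual API):

```python
from itertools import product

def design_space(dimensions):
    """Enumerate the Cartesian product of design alternatives:
    each dimension maps to the list of alternatives a designer
    created for it, and every combination is one candidate design
    that could be compared side by side."""
    keys = list(dimensions)
    return [dict(zip(keys, combo))
            for combo in product(*(dimensions[k] for k in keys))]
```

Two alternatives along each of two dimensions, for example, yield four candidate designs to explore in parallel.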
Polyether and polyether/ester based TPUs (thermoplastic polyurethanes) were investigated with wide-angle XRD (X-ray diffraction) and SAXS (small-angle X-ray scattering). Furthermore, SAXS measurements were performed in the temperature range of 30 °C to 130 °C. Polyether based polymers exhibit only one broad diffraction signal in the 2θ region of 15° to 25°. In the case of polyurethanes with ether/ester modification, the broad diffraction signal is accompanied by small sharp diffraction signals. SAXS measurements of the polymers reveal the size and shape of the crystalline zones of the polymer. Between 30 °C and 130 °C the size of the crystalline zones changes significantly. The size decreases in most of the investigated TPUs; in the case of Desmopan 9365D an increase of the particle size was observed.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to create good levels of detail for high-frequency models, where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
Detection of triacetone triperoxide using temperature cycled metal-oxide semiconductor gas sensors
(2015)
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment in those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perception of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no-exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, which likely contributed to an enhanced sense of presence.
Semantic Image Segmentation Combining Visible and Near-Infrared Channels with Depth Information
(2015)
Image understanding is a vital task in computer vision that has many applications in areas such as robotics, surveillance and the automobile industry. An important precondition for image understanding is semantic image segmentation, i.e. the correct labeling of every image pixel with its corresponding object name or class. This thesis proposes a machine learning approach for semantic image segmentation that uses images from a multi-modal camera rig. It demonstrates that semantic segmentation can be improved by combining different image types as inputs to a convolutional neural network (CNN), when compared to a single-image approach. In this work a multi-channel near-infrared (NIR) image, an RGB image and a depth map are used. The detection of people is further improved by using a skin image that indicates the presence of human skin in the scene and is computed based on NIR information. It is also shown that segmentation accuracy can be enhanced by using a class voting method based on a superpixel pre-segmentation. Models are trained for 10-class, 3-class and binary classification tasks using an original dataset. Compared to the NIR-only approach, average class accuracy is increased by 7% for 10-class, and by 22% for 3-class classification, reaching a total of 48% and 70% accuracy, respectively. The binary classification task, which focuses on the detection of people, achieves a classification accuracy of 95% and true positive rate of 66%. The report at hand describes the proposed approach and the encountered challenges and shows that a CNN can successfully learn and combine features from multi-modal image sets and use them to predict scene labeling.
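The superpixel-based class voting mentioned above can be sketched as follows (a minimal illustration under assumed data layouts; the function name and inputs are not the thesis' actual code):

```python
from collections import Counter

def superpixel_vote(pixel_classes, superpixels):
    """Replace each pixel's predicted class by the majority class of
    the superpixel it belongs to - a simple post-processing vote over
    a pre-segmentation.

    pixel_classes: 2-D list of per-pixel class predictions (e.g. CNN output).
    superpixels:   2-D list of the same shape holding superpixel ids.
    """
    votes = {}  # superpixel id -> Counter of predicted classes
    for row_c, row_s in zip(pixel_classes, superpixels):
        for c, s in zip(row_c, row_s):
            votes.setdefault(s, Counter())[c] += 1
    # most frequent predicted class inside each superpixel wins
    majority = {s: cnt.most_common(1)[0][0] for s, cnt in votes.items()}
    return [[majority[s] for s in row] for row in superpixels]
```

Voting enforces label consistency inside each superpixel, which is one way such a pre-segmentation can raise per-class accuracy.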
This paper proposes a new artificial neural network-based maximum power point tracker for photovoltaic applications. This tracker significantly improves the efficiency of photovoltaic systems with series-connected photovoltaic modules under non-uniform irradiance on the photovoltaic array surfaces. The artificial neural network uses irradiance and temperature sensors to generate the maximum power point reference voltage and employs a classical perturb-and-observe search algorithm. The structure of the artificial neural network was obtained by numerical modelling using Matlab/Simulink. The artificial neural network was trained using Bayesian regularisation back-propagation algorithms and demonstrated a good prediction of the maximum power point. The relative number of Vmpp prediction errors in the range of ±0.2 V is 0.05%, based on validation data.
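The classical perturb-and-observe step named in the abstract can be sketched as a one-line hill climb (a generic textbook formulation, not the authors' Matlab/Simulink model; the step size is an assumption):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.2):
    """One iteration of classical perturb and observe (P&O):
    keep perturbing the reference voltage in the same direction
    while power increases, reverse when power drops.
    Returns the next reference voltage."""
    dp = p - p_prev
    dv = v - v_prev
    if dp == 0:
        return v
    # move uphill on the P-V curve
    if (dp > 0) == (dv > 0):
        return v + step
    return v - step
```

Iterating this rule on a concave P-V curve converges to a small oscillation band around the maximum power point, which is why the paper's neural network is used to place the starting reference voltage close to Vmpp first.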
IT-accessibility is often treated as an orphan in companies, even though the proportion of disabled people is substantial and people are becoming older and more susceptible to disabilities. Besides cost factors, companies often do not have a plan for how to implement and control IT-accessibility successfully. However, most companies are familiar with IT-maturity frameworks for evaluating and improving their own IT-infrastructure. Dealing with IT-accessibility would be easier if IT-maturity frameworks considered IT-accessibility and provided recommendations and solutions for a successful implementation. Therefore, this article conducts a review of an acknowledged IT-maturity framework with regard to its capability to enable the implementation of IT-accessibility in an organization. The first part of this article illustrates the motivation and background for the authors' concern with this topic. Afterwards the authors introduce the reader to the reviewed IT-maturity framework and provide basic knowledge on IT-accessibility. The main part of the article deals with the review of the applied IT-maturity framework and outlines examples of critical capabilities for successfully implementing IT-accessibility in an organization. The final section derives implications and closes with planned future research activities in this field.
Advanced driver assistance systems (ADAS) are technology systems and devices designed as an aid to the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of input images must be done, which consists of a traffic sign detection (TSD) algorithm to reduce the possible hypothesis space. The performance of the TSD algorithm is therefore critical.
One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real-time on high-end personal computers, but its computational performance must be improved in order to run in real-time on embedded computer platforms.
To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times, detection rates and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark (GTSDB) [29].
We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection and 2.4 times faster at octagon detection. All of these measurements were observed with better detection and false positive rates than those of the original RSD.
When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for circle and regular polygon detection; for circle detection we observed a lower detection rate than with the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%.
We conclude that our Improved Radial Symmetry Detector is a significant improvement over the Radial Symmetry Detector, both for circle and regular polygon detection. We expect that our improved algorithm will lead the way to real-time traffic sign detection and recognition on embedded computer platforms.
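The core voting idea behind radial symmetry detection can be sketched as follows (a simplified single-radius illustration, not the authors' implementation; the multiscale scheme and the smoothing removal discussed above are omitted):

```python
import math

def radial_symmetry_votes(edges, radius, width, height):
    """Voting step of a radial symmetry detector for circles:
    each edge point votes for a candidate circle centre one radius
    away along its gradient direction; vote peaks indicate centres.

    edges: list of (x, y, gx, gy) tuples - position plus gradient.
    Returns a dict mapping integer centre positions to vote counts."""
    votes = {}
    for x, y, gx, gy in edges:
        norm = math.hypot(gx, gy)
        if norm == 0:
            continue
        # candidate centre: step `radius` pixels along the gradient
        cx = int(round(x + radius * gx / norm))
        cy = int(round(y + radius * gy / norm))
        if 0 <= cx < width and 0 <= cy < height:
            votes[(cx, cy)] = votes.get((cx, cy), 0) + 1
    return votes
```

Points sampled on a circle with inward-pointing gradients all vote for the same cell, so the true centre stands out as the maximum of the vote map.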
Extraction of text information from visual sources is an important component of many modern applications, for example extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histograms of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one is based on the Stroke Width Transform, the other on run lengths. The HSW is combined with two simple region features, aspect and occupancy ratios, and a linear SVM is then used as classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall, with an f-score of 69%, were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent classifier with a low parameter count for character and non-character regions.
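The run-length variant of the stroke-width histogram can be sketched as follows (an assumed minimal formulation with illustrative bin and width parameters, not the thesis' actual feature extractor):

```python
def stroke_width_histogram(mask, bins=8, max_width=16):
    """Run-length sketch of a histogram of stroke widths (HSW):
    horizontal runs of foreground pixels approximate local stroke
    widths, and their normalised histogram is the region feature.

    mask: 2-D list of 0/1 values for one candidate region."""
    widths = []
    for row in mask:
        run = 0
        for px in row + [0]:          # sentinel closes a trailing run
            if px:
                run += 1
            elif run:
                widths.append(min(run, max_width))
                run = 0
    hist = [0.0] * bins
    for w in widths:
        hist[(w - 1) * bins // max_width] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

Characters produce narrow, strongly peaked width distributions regardless of script, which is what makes such a feature script-independent.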
Secure vehicular communication has been discussed over a long period of time. Now this technology is implemented in different Intelligent Transportation System (ITS) projects in Europe. In most of these projects a suitable Public Key Infrastructure (PKI) for secure communication between the entities involved in a Vehicular Ad hoc Network (VANET) is needed. A first proposal for a PKI architecture for Intelligent Vehicular Systems (IVS PKI) is given by the Car2Car Communication Consortium. This architecture, however, mainly deals with inter-vehicular communication and is less focused on the needs of Road Side Units. Here, we propose a multi-domain PKI architecture for Intelligent Transportation Systems which considers the necessities of road infrastructure authorities and vehicle manufacturers today. The PKI domains are cryptographically linked based on local trust lists. In addition, a crypto-agility concept is suggested which takes the adaptation of key lengths and cryptographic algorithms during PKI operation into account.
The latest advances in the field of smart card technologies allow modern cards to be more than just simple security tokens. Recent developments facilitate the use of interactive components like buttons, displays or even touch sensors within the card's body, thus conquering whole new areas of application. With interactive functionalities, the usability aspect becomes the most important one for designing secure and widely accepted products. Unfortunately, usability can only be tested fully with completely integrated, and hence expensive, smart card prototypes. This severely restricts application-specific research, case studies of new smart card user interfaces and the optimization of design aspects as well as hardware requirements, by making usability and acceptance tests in smart card development very costly and time-consuming. Rapid development and simulation of smart card interfaces and applications can help to avoid this restriction. This paper presents a rapid development process for new smart card interfaces and applications based on common smartphone technology, using a tool called SCUIDSim. We demonstrate the variety of usability aspects that can be analyzed with such a simulator by discussing some selected example projects.
This paper proposes an Artificial Plasmodium Algorithm (APA) mimicking the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction wave in their body to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to four types of mazes and confirm that the APA can solve them.
This paper proposes an Artificial Plasmodium Algorithm (APA) mimicking the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction wave in their body to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to maze solving and route planning on a road map.
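The wave propagation that the APA borrows from plasmodia is reminiscent of a wavefront expanding through the maze; as a rough point of comparison (not the APA itself), such a wavefront can be propagated with plain breadth-first search:

```python
from collections import deque

def wavefront(maze, start, goal):
    """Expand a wavefront from `start` via breadth-first search and
    return the shortest path length to `goal`, or -1 if unreachable.

    maze: list of equal-length strings, '#' marks a wall."""
    rows, cols = len(maze), len(maze[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1
```

Unlike this centralised search, the APA distributes the direction and food-index information across individual plasmodia, but both approaches find the goal by letting a front sweep the free cells.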
Simultaneous multifrequency radio observations of the Galactic Centre magnetar SGR J1745-2900
(2015)
Sustainable development needs sustainable production and sustainable consumption. During the last decades the encouragement of sustainable production has been the focus of research and policy makers under the implicit assumption that the observable increasing ‘green’ values of consumers would also entail a growing sustainable consumption. However, it has been found that the actual purchasing behaviour often deviates from ‘green’ attitudes. This phenomenon is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications.
The phenomenon of the deviation between purchase attitudes and actual buying behaviour of responsible consumers is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications. The last three barriers are mainly of a psychological nature. Especially the low-involvement feature of food products due to daily purchase routines and relatively low prices tends to result in fast, automatic and subconscious decisions based on a so-called human mental system 1, derived from Daniel Kahneman’s (Nobel-Prize laureate in Behavioural Economics) model in behavioural psychology. In contrast, the human mental system 2 is especially important for the transformations of individual behaviour towards a more sustainable consumption. Decisions based on the human mental system 2 are slow, logical, rational, conscious and arduous. This so-called dual action model also influences the reliability of responses in consumer surveys. It seems that the consumer behaviour is the most unstable and unpredictable part of the entire supply chain and requires special attention. Concrete measures to influence consumer behaviour towards sustainable consumption are highly complex. Reviews of interdisciplinary research literature on behavioural psychology, behavioural economics and consumer behaviour and an empirical analysis of selected countries worldwide with a view to sustainable food are presented. The example of Denmark serves as a ‘best practice’ case study to illustrate how sustainable food consumption can be encouraged. 
It demonstrates that common efforts and a shared responsibility of consumers, business, interdisciplinary researchers, mass media and policy are needed. It takes pioneers of change who succeed in assembling a ‘critical mass’ willing to increase its ‘sustainable’ behaviour. Considering the strong psychological barriers of consumers and the continuing low market share of organic food, proactive policy measures would be conducive to foster the personal responsibility of the consumers and offer incentives towards a sustainable production. Also, further self-obligations of companies (Corporate Social Responsibility – CSR) as well as more transparency and simplification of reliable labels and certifications are needed to encourage the process towards a sustainable development.
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. We then show that our optimizations increase the performance of RHF by two orders of magnitude compared to the original CPU implementation and one order of magnitude compared to the naive GPU implementation. We show how the quality at identical rendering times relates to unfiltered path tracing and how much time is needed to achieve identical quality when compared to an unfiltered path-traced result. Finally, we summarize our work and describe possible future applications and research based on it.
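RHF decides which pixels to average by comparing per-pixel histograms of the incoming radiance samples. As a rough illustration, a chi-square style histogram distance, a common choice for such comparisons (the exact distance used by RHF may differ), looks like:

```python
def chi2_distance(h1, h2):
    """Chi-square style distance between two sample histograms:
    small values mean the pixels saw similar radiance distributions
    and are good candidates for being averaged together."""
    d = 0.0
    for a, b in zip(h1, h2):
        if a + b:
            d += (a - b) ** 2 / (a + b)
    return d
```

Evaluating this distance over a neighbourhood for every pixel is what makes the method expensive on the CPU and a natural target for the GPU parallelisation described above.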
In this doctoral thesis the curing processes of visible light-curing (VLC) dental composites and 3D printing rapid prototyping (RP) materials are investigated, with the focus on dielectric analysis (DEA). This method is able to monitor the curing of resins in an alternating electric fringe field with adjustable frequencies; it is often used for cure control in composites manufacturing in the aviation and automotive industry, but hardly established in dental science or RP method development. It is capable of investigating very fast initiation and primary curing processes using high frequencies in the kHz range. The aim of the thesis is a better understanding of the curing processes with respect to curing parameters such as resin composition, viscosity, temperature, and, for light-curing composites, also light intensity and irradiation depth. Due to the nature of both dental and RP systems, application-specific experimental set-ups had to be designed to allow the generation of reproducible and valid results. Subsequently, different evaluation methods were developed to characterize the curing behaviour of both material types. A special focus was paid to the determination of kinetic parameters from DEA measurements. Reaction rates of the curing of the corresponding thermosets were calculated and applied to the ion viscosity curves measured by DEA to evaluate reaction kinetic parameters. For the dental composites it could be clearly shown that the initial curing rate is directly proportional to the light intensity and not to its square root, as proposed by many other authors. A good description of the curing behaviour of 3DP RP materials was also achieved assuming a reaction order smaller than one. These data provide the basis for the kinetic modelling of polymerization and curing processes proposed within the thesis.
Persons entering the working range of industrial robots are exposed to a high risk of collision with moving parts of the system, potentially causing severe injuries. Conventional systems, which restrict access to this area, range from walls and fences to light barriers and other vision-based protective devices (VBPD). None of these systems allows distinguishing between humans and workpieces in a safe and reliable manner. In this work, a new approach is investigated which uses an active near-infrared (NIR) camera system with advanced capabilities of skin detection to distinguish humans from workpieces based on characteristic spectral signatures. This approach allows the implementation of more intelligent muting processes and at the same time increases the safety of persons working close to the robots. The conceptual integration of such a camera system into a VBPD and the enhancement of person detection methods through skin detection are described and evaluated in this paper. Based upon this work, next steps could be the development of multimodal sensor systems to safeguard the working ranges of collaborating robots using the described camera system.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, like large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that allow for getting started easily. Existing frameworks and libraries that provide support for multi-display rendering are often complex to understand, configure and extend. This is critical especially in an educational context, where the time that students have for their projects is limited and quite short. These tools are also known and used almost exclusively in research communities, thus providing less benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows, with a small overhead, for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in the context of distributed and multi-display rendering, like frame, camera and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it will provide students with a qualification in high demand.
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices (including such fundamental entities as “reference frames”!), let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of all possible motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.
Hox genes are an evolutionarily highly conserved gene family. They determine the anterior-posterior body axis in bilateral organisms and influence the developmental fate of cells. Embryonic stem cells are usually devoid of any Hox gene expression, but these transcription factors are activated in varying spatial and temporal patterns defining the development of various body regions. In the adult body, Hox genes are among others responsible for driving the differentiation of tissue stem cells towards their respective lineages in order to repair and maintain the correct function of tissues and organs. Due to their involvement in the embryonic and adult body, they have been suggested to be usable for improving stem cell differentiation in vitro and in vivo. In many studies Hox genes have been found to be driving factors in stem cell differentiation towards adipogenesis; in lineages involved in bone and joint formation, mainly chondrogenesis and osteogenesis; in cardiovascular lineages, including endothelial and smooth muscle cell differentiation; and in neurogenesis. As life expectancy is rising, the demand for tissue reconstruction continues to increase. Stem cells have become an increasingly popular choice for creating therapies in regenerative medicine due to their self-renewal and differentiation potential. Especially mesenchymal stem cells are used more and more frequently due to their easy handling and accessibility, combined with a low tumorigenicity and few ethical concerns. This review therefore intends to summarize the correlations known to date between natural Hox gene expression patterns in body tissues and during the differentiation of various stem cells towards their respective lineages, with a major focus on mesenchymal stem cell differentiation.
This overview shall help to understand the complex interactions of Hox genes and differentiation processes all over the body as well as in vitro for further improvement of stem cell treatments in future regenerative medicine approaches.
The generation and maintenance of intricate spatiotemporal patterns of gene expression in multicellular organisms requires the establishment of complex mechanisms of transcriptional regulation. The estimation that up to one million enhancers exist in the human genome accentuates the utmost importance of this type of cis-regulatory element for gene regulation. However, surprisingly little is known about the mechanisms used to temporarily or permanently activate or inactivate enhancers during cellular differentiation. The current work addresses the question of how enhancer regulation can be achieved.
Using the chemokine (C-C motif) ligand gene Ccl22 as a model, the first example is based on the question of how the activation of an enhancer can be prevented in a physiological context. Ccl22 is expressed by myeloid cells, such as dendritic cells, upon exposure to inflammatory stimuli. The expression in other cell types, such as fibroblasts, is prevented by the strong accumulation of H3K9me3 at the enhancer's proximal region. This accumulation is attenuated in myeloid cells through activity of the stimulus-induced demethylase Jmjd2d. To tease out which genomic fragment or fragments in the Ccl22 locus could be responsible for the maintenance of enhancer inactivity, potentially through the recruitment of H3K9 methyltransferases, the enhancer-repressing capacity of 1 kb fragments of the gene locus was analysed in retroviral reporter assays. It was found that a fragment adjacent to the Ccl22 enhancer that overlaps with a member of a subfamily of long interspersed nuclear elements (LINEs) showed strong repressive potential on a model enhancer. Subsequent retroviral reporter assays with LINEs from loci of other stimulus-dependent genes identified additional LINE fragments that exhibit strong enhancer-repressing capacity. These findings suggest a mechanism for enhancer silencing involving LINEs.
The second example concentrates on the inactivation of an enhancer during colorectal cancer (CRC) progression. The adenoma to carcinoma transition during CRC progression is often accompanied by a downregulation of the tumour suppressor gene EPHB2. The EMT-inducing factor SNAIL1 strongly downregulated EPHB2 expression in a CRC cell model. To gain insights into the transcriptional regulation of EPHB2, potential cis-regulatory elements in the EPHB2 upstream region were analysed using reporter assays. A cell-type-specific enhancer was identified and subsequent chromatin analyses revealed a correlation between enhancer chromatin conformation and EPHB2 expression in different CRC cell lines. Additionally, the overexpression of the murine Snail1 induced chromatin changes at the EPHB2 enhancer towards a poised, transcriptionally silent chromatin conformation. Mutational analyses of the minimal enhancer region pinpointed three transcription factor binding motifs as essential for full enhancer activity. Different binding patterns between CRC cell lines at the TCF/LEF motif were subsequently identified. Furthermore, a switch from TCF7L2 to LEF1 occupancy was found upon overexpression of Snail1 in vitro and in vivo. The generation of LS174T CRC cells overexpressing LEF1 confirmed the involvement of LEF1 in the downregulation of EPHB2 and the competitive displacement of TCF7L2. This part of the work demonstrated that the SNAIL1-induced downregulation of EPHB2 is dependent on the decommissioning of a transcriptional enhancer and led to a hypothetical model involving LEF1 and ZEB1.
In summary, this work highlighted two distinct mechanisms for enhancer regulation. One mechanism is based on enhancer repressive LINE fragments that might prevent stimulus-dependent enhancer activation. In the second, enhancer silencing was shown to be based on a competitive transcription factor binding mechanism.
TinyECC 2.0 is an open source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side-channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication using 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, but to different degrees. The degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. Most critically, in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private-key operation is sufficient to simply read out the secret scalar. MESD requires additional adversary capabilities and affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private-key operation. These findings give evidence that in security applications a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement and, beyond that, the addition of appropriate randomizing countermeasures is necessary.
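Why a single trace can suffice to read out the scalar can be illustrated with a toy sketch (this is not TinyECC code; the function names and the trace abstraction are invented here): in an unprotected left-to-right double-and-add, the point addition is executed only for 1-bits of the scalar, so any side channel that distinguishes the two operations reveals the key bit by bit.

```python
# Illustrative only: an unprotected double-and-add leaks the secret scalar
# through its key-dependent operation sequence. The "trace" below stands in
# for an EM measurement in which SEMA can tell DOUBLE and ADD apart.

def double_and_add_ops(scalar: int) -> list:
    """Operation sequence of a left-to-right double-and-add."""
    ops = []
    for bit in bin(scalar)[3:]:  # skip '0b' and the leading 1-bit
        ops.append("DOUBLE")
        if bit == "1":
            ops.append("ADD")    # executed only for 1-bits -> leakage
    return ops

def recover_scalar(ops: list) -> int:
    """Read the scalar back out of the trace, as a SEMA adversary would."""
    bits, i = "1", 0
    while i < len(ops):
        assert ops[i] == "DOUBLE"
        if i + 1 < len(ops) and ops[i + 1] == "ADD":
            bits, i = bits + "1", i + 2
        else:
            bits, i = bits + "0", i + 1
    return int(bits, 2)

secret = 0b101101
assert recover_scalar(double_and_add_ops(secret)) == secret
```

Countermeasures such as scalar randomization or constant operation sequences (e.g. Montgomery ladders) break exactly this correspondence between trace and key.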
Work in progress: Starter-project for first semester students to survey their engineering studies
(2015)
Only since the turn of the 21st century have humanitarian organisations developed specific strategies that address climate change impacts as a humanitarian challenge. Taking the International Red Cross / Red Crescent Movement, being the largest humanitarian network, as an empirical case study, the article discusses the Movement’s changes in the areas 1) agenda setting, 2) organisational restructuring, 3) networking, 4) programming, and 5) advocacy. Based on the case study and a theoretical framework of organisational sociology, the article provides conclusions on internal and external factors that can explain why the Movement has been successful in being one of the first actors within the organisational field of humanitarian organisations to focus systematically on the humanitarian implications of climatic changes.
Communicating Climate Risks. A case study of the International Red Cross/Red Crescent Movement
(2015)
Managing the needs of learners is crucial in order to support their motivation and keep dropout rates at a low level. With the constantly growing level of internationalization in classrooms, the variety of context-specific requirements from learners increases; without a profound understanding of the learners' contexts, successfully maintaining a culture-sensitive and learner-focussed education is impossible. A solution to reach this understanding is the open exchange of experiences and knowledge amongst educators of the different contexts. In this paper, we briefly introduce the two European projects "Open Discovery Space" (ODS) and "Inspiring Science Education" (ISE), which aim to foster the establishment and improvement of Open Educational Practices in the context of school education. The purpose of this paper is to attract and invite potential partners to affiliate with, contribute to, and profit from the projects.
The Whole Is More than the Sum of Its Parts - On Culture in Education and Educational Culture
(2015)
The Learning Culture Survey investigates learners' expectations towards and perceptions of education on an international level, with the aim of making culture in the context of education better understandable and supporting educators in preventing and solving intercultural conflicts in education. So far, we found that culture-related expectations differ between educational settings, depend on the age of the learners, and that a nationally homogeneous educational culture is the exception rather than the rule. The results of our recently completed longitudinal study provided evidence that educational culture on the institutional level actually is persistent, at least over a term of four years. After a brief introduction of the general background, we subsume the steps taken during the past seven years and the general insights achieved regarding educational culture. Last, we introduce a method for the determination of conflict potential, which is based on the understanding of culture as the level to which people within a society accept deviations from the usual. We close by demonstrating the method's functionality on examples from the Learning Culture Survey.
With a focus on Technology Enhanced Learning, this paper investigates whether and to what extent a culture shift can be expected alongside the adoption of currently emerging Web 3.0 technologies. Instead of just offering new opportunities for the field to improve education, such a culture shift could lead to unexpected general consequences not just for Technology Enhanced Learning but for the whole educational sector. Understanding the dimension of expectable changes enables us to prevent conflicts and pointedly support culture-related change processes. After an introduction of the Revised Onion Model of Culture, which later on serves as the theoretical foundation, expectable changes in the design of learning scenarios are analysed, distinguishing the stakeholder groups "learners" and "educators". Eventually, the identified changes are analysed to determine to what extent a general culture shift is to be expected, in order to understand the transferability and limitations of future research results in the field.
The aim of our research is to find measures that preserve learners' initial motivation in educational settings. To that end, we need to avoid conflict situations that could jeopardize their joy of learning.
In our thematically comprehensive Learning Culture Survey, we investigate the cultural biasing of students' attitudes, behaviours, and expectations towards education. Particularly in times of massive international migration and growing numbers of refugees, the relevance of deeply understanding cultural aspects in education increases. Only with this understanding can we raise awareness towards more cultural tolerance across all involved stakeholder groups and thus foster the development of more culture-sensitive educational approaches. In this paper we focus on the most relevant aspect of motivation and comparatively discuss our study conducted in Germany and South Korea.
With our research, we want to find measures that help preserve learners' initial motivation in educational settings. To this end, conflict situations must be avoided wherever possible if they have the potential to spoil the joy of learning. In our thematically broad Learning Culture Survey, we investigate the presence and influence of culture-specific imprints on learners' behaviours, habits, and expectations regarding education. Particularly in times of massive international migration and rising numbers of refugees, the need for such research is growing steadily. Only if we sufficiently understand the interrelations between learning and culture will we be able to foster, at all levels, the development of the necessary awareness of culture-sensitive educational approaches. In this contribution, we concentrate on the very important aspect of motivation and discuss the results of our comparative study conducted in Germany and South Korea.
Quality Management in Education: Business Process Modelling in Interdisciplinary Environments
(2015)
Ultra-fast photopolymerization of experimental composites: DEA and FT-NIRS measurement comparison
(2015)
An Empirical Evaluation of the Received Signal Strength Indicator for fixed outdoor 802.11 links
(2015)
This paper introduces a methodology for evaluating the received signal strength indicator (RSSI) that differs from previous publications by exploiting a spectral scan feature of recent Qualcomm Atheros WiFi NICs. This method is compared to driver reports and to an industrial-grade spectrum analyzer. During the conducted outdoor experiments, a decreased scattering of the RSSI compared to previous publications is observed. By applying well-known mathematical tests for normality, it is possible to show that the RSSI does not follow a normal distribution in a line-of-sight outdoor environment. The evaluated spectral scan feature offers additional possibilities to develop interference classifiers, an important step for frequency allocation in long-distance 802.11 networks.
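As an illustration of such a normality test (the paper's concrete test statistics are not named in this abstract), a Jarque-Bera check can be run on a batch of RSSI samples; the data below is invented.

```python
# Minimal sketch: Jarque-Bera normality test on hypothetical RSSI samples.
# JB = n/6 * (S^2 + (K-3)^2 / 4), where S is the sample skewness and K the
# sample kurtosis. Under normality, JB is approximately chi-square with
# 2 degrees of freedom, so JB > 5.99 rejects normality at the 5% level.

def jarque_bera(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# Invented RSSI readings (dBm): a symmetric sample versus a heavily skewed
# one, e.g. as caused by occasional interference bursts.
symmetric = [-70 + d for d in (-2, -1, -1, 0, 0, 0, 0, 1, 1, 2)]
skewed = [-70] * 9 + [-40]

assert jarque_bera(symmetric) < 5.99   # consistent with normality
assert jarque_bera(skewed) > 5.99      # normality rejected
```

In practice one would apply such a test per link and measurement campaign; small samples make the chi-square approximation rough, so larger batches are preferable.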
Rural areas often lack affordable broadband Internet connectivity, mainly due to the CAPEX and especially OPEX of traditional operator equipment [HEKN11]. This digital divide limits the access to knowledge, health care and other services for billions of people. Different approaches to close this gap were discussed in the last decade [SPNB08]. In most rural areas satellite bandwidth is expensive and cellular networks (3G,4G) as well as WiMAX suffer from the usually low population density making it hard to amortize the costs of a base station [SPNB08].
Despite the opportunities and benefits of OER, research and practice have shown that OER repositories have a hard time reaching an active user base. The potential of social software's experience exchange and simple feedback mechanisms to improve this situation has been recognized, and many providers are basing their OER offerings on, or transforming them into, socially powered environments. Research on social software has shown that knowledge-sharing barriers in online environments are highly culture- and context-specific and require proper investigation. It is crucial to study what challenges might arise in such environments and how to overcome them, ensuring a successful uptake. A large-scale (N = 855) cross-European investigation was initiated in the school context to determine which barriers teachers and learners perceive as critical. The study highlights barriers related to cultural distance, showing how those are predicted by the nationality and age of the respondents. The paper concludes with recommendations for overcoming those barriers.
Solar energy is one option to serve the rising global energy demand with low environmental impact [1]. Building an energy system with a considerable share of solar power requires long-term investment and a careful investigation of potential sites. Therefore, understanding the impacts from varying regionally and locally determined meteorological conditions on solar energy production will influence energy yield projections. Clouds move on short timescales and have a high influence on the available solar radiation, as they absorb, reflect and scatter parts of the incoming light [2]. Modeling photovoltaic (PV) power yields with spectral resolution and local cloud information therefore gives new insights into the atmospheric impact on solar energy.
Formal concept analysis (FCA) as introduced in [4] deals with contexts and concepts. Roughly speaking, a context is an environment that is equipped with some kind of "knowledge". Such contexts are also known as information or knowledge representation systems, where the knowledge consists of (intensional) descriptions relating sets of objects to sets of properties. Given extensional and intensional descriptions (the latter in terms of binary attributes), they can be arranged in a taxonomy or concept lattice.
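The two derivation operators and the resulting concepts can be sketched on a toy context (the objects, attributes, and relation below are invented for illustration):

```python
# Minimal FCA sketch: a formal context as object -> attribute-set, the two
# derivation operators, and a brute-force enumeration of formal concepts.
from itertools import combinations

context = {
    "sparrow": {"flies", "has_feathers"},
    "penguin": {"swims", "has_feathers"},
    "trout":   {"swims"},
}

def intent(objects):
    """Attributes shared by all given objects (A -> A')."""
    objs = list(objects)
    if not objs:  # convention: the empty extent has every attribute
        return set.union(*context.values())
    return set.intersection(*(context[o] for o in objs))

def extent(attributes):
    """Objects possessing all given attributes (B -> B')."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# A formal concept is a pair (A, B) with A' = B and B' = A. Closing every
# attribute subset yields all of them (fine for tiny contexts).
all_attrs = set.union(*context.values())
concepts = set()
for r in range(len(all_attrs) + 1):
    for subset in combinations(sorted(all_attrs), r):
        A = extent(set(subset))
        B = intent(A)
        concepts.add((frozenset(A), frozenset(B)))

# ({sparrow, penguin}, {has_feathers}) is a concept: the feathered animals.
assert (frozenset({"sparrow", "penguin"}),
        frozenset({"has_feathers"})) in concepts
```

Ordering these concepts by inclusion of their extents yields the concept lattice; real FCA algorithms (e.g. NextClosure) avoid the exponential subset enumeration used here.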
Roughness by Residuals
(2015)
Rough set theory (RST) focuses on forming posets of equivalence relations to describe sets with increasing accuracy. The connection between modal logics and RST is well known and has been extensively studied in their relation algebraic (RA) formalisation. RST has also been interpreted as a variant of intuitionistic or multi-valued logics and has even been studied in the context of logic programming.
Sweet sorghum (Sorghum bicolor (L.) Moench), a crop that is grown by subsistence farmers in Zimbabwe, was used to extract silica gel in order to assess its possible use as a raw material for the production of silica-based products. The gel was prepared from sodium silicate extracted from sweet sorghum bagasse ash by sodium hydroxide leaching. Results show that maximum yield can be obtained at pH 5 and with a 3 M sodium concentration. The silica gel prepared at the optimum pH 5 had a bulk density of 0.5626 g/cm3 and an estimated porosity of 71.87%. Silica gel aged over 10 h had improved moisture adsorption properties. X-ray fluorescence (XRF) determinations show that the silica content in the ash is 40.1%. Characterization of sweet sorghum ash and silica gels produced at pH 5, 7 and 8.5 by Fourier transform infrared spectroscopy gave absorption bands similar to those reported by other researchers. Transmission electron micrographs show that silica prepared under optimum conditions is amorphous and consists of irregular particles. Sweet sorghum proved to be a potential low-cost raw material for the production of silica gel.
Although much effort is made to prevent risks arising from food, food-borne diseases are an ever-present threat to the consumers' health. The consumption of fresh food that is contaminated with pathogens like fungi, viruses or bacteria can cause food poisoning that leads to severe health damage or even death. The outbreak of Shiga toxin-producing enterohemorrhagic E. coli (EHEC) in Germany and neighbouring countries in 2011 has shown this dramatically. Nearly 4,000 people were reported to have been affected and more than 50 people died during the so-called EHEC crisis. As a result, the consumers' trust in the safety of fruits and vegetables decreased sharply.
[Context and motivation] Communication in distributed software development is usually supported by issue tracking systems. Within these systems, most of the communication is stored as unstructured natural language text. The natural language text, however, contains much information with respect to requirements management, e.g. discussion, clarification and prioritization of features, bugs, and refactorings. [Question] This paper investigates the information stored in the issue tracking systems of four different open-source projects. It categorizes the text and reports on the distribution of issue types and information types. [Principal ideas/results] A manual analysis of 80 issues, using a grounded approach, is conducted to derive a taxonomy of issue types and information types. Subsequently, the taxonomy is used as a codebook, to manually categorize and structure the text in another 120 issues. [Contribution] The first contribution of this paper is the taxonomy of issue and information types and the second contribution is an in-depth analysis of the natural language data and the communication. This analysis showed, for example, that information with respect to prioritization and scheduling can be found in natural language data, whether the ITS supports such tasks in a structured way or not.
Solar energy is one option to serve the rising global energy demand with low environmental impact [1]. Building an energy system with a considerable share of solar power requires long-term investment and a careful investigation of potential sites. Therefore, understanding the impacts from varying regionally and locally determined meteorological conditions on solar energy production will influence energy yield projections. Clouds move on short timescales and have a high influence on the available solar radiation, as they absorb, reflect and scatter parts of the incoming light [2]. However, the impact of cloudiness on photovoltaic (PV) power yields and cloud-induced deviations from average yields might vary depending on the technology, location and time scale under consideration.
Fundamentals of Energy Meteorology - Influence of atmospheric parameters on solar energy production
(2015)
The paper presents a new control strategy for the management of transport companies operating in a competitive transport environment. It aims to optimise the headways of transport companies so as to balance the costs and benefits of operation under competition. The transport system model, built using AnyLogic, combines agent-based and discrete-event techniques. A model of two transport companies was investigated under conditions of competition between them. It was demonstrated that the control strategy can ensure a balance of interests between transport companies seeking a compromise between cost of operation and quality of service.
Appropriating Digital Fabrication Technologies — A comparative study of two 3D Printing Communities
(2015)
Digital fabrication technologies have a great potential for empowering consumers to produce their own creations. However, despite the growing availability of digital fabrication technologies in shared machine shops such as FabLabs or university labs, they are often perceived as difficult to use, especially by users with limited technological aptitude. Hence, it is not yet clear if the potentials of the technology can be made accessible to a broader public, or if they will remain limited to some form of "maker elite". In this paper, we study the appropriation of digital fabrication using the example of 3D printer use in two different communities. In doing so, we analyze how users conceptualize their use of the 3D printers, what kind of contextual understanding is necessary to work with the machines, and how users document and share their knowledge. Based on our empirical findings, we identify the potentials that the machines offer to the communities, and the kinds of challenges that have to be overcome in their appropriation of the technology.
With the increasing average age of the population in many developed countries, afflictions like cardiovascular diseases have also increased. Exercising has a proven therapeutic effect on the cardiovascular system and can counteract this development. To avoid overstrain, determining an optimal training dose is crucial. In previous research, heart rate has been shown to be a good measure for cardiovascular behavior. Hence, prediction of the heart rate from work load information is an essential part in models used for training control. Most heart-rate-based models are described in the context of specific scenarios, and have been evaluated on unique datasets only. In this paper, we conduct a joint evaluation of existing approaches to model the cardiovascular system under a certain strain, and compare their predictive performance. For this purpose, we investigated some analytical models as well as some machine learning approaches in two scenarios: prediction over a certain time horizon into the future, and estimation of the relation between work load and heart rate over a whole training session.
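One simple analytical model in this class (not necessarily one of the models the paper evaluates) treats heart rate as a first-order response to work load; here is a minimal sketch with invented parameter values, not fitted to real data:

```python
# Minimal sketch of a first-order heart-rate response model:
#   dHR/dt = (hr_rest + gain * load - HR) / tau
# i.e. HR relaxes toward a load-dependent steady state with time constant tau.

def predict_hr(loads, hr0=70.0, hr_rest=70.0, gain=0.5, tau=30.0, dt=1.0):
    """Simulate HR (bpm) over a work-load profile, one sample per dt seconds.
    All parameter values are illustrative placeholders."""
    hr, out = hr0, []
    for load in loads:
        target = hr_rest + gain * load   # steady-state HR for this load
        hr += dt * (target - hr) / tau   # forward-Euler step
        out.append(hr)
    return out

# 5 minutes at 100 W, then 5 minutes of recovery at 0 W.
profile = [100.0] * 300 + [0.0] * 300
hr = predict_hr(profile)

assert hr[299] > hr[0]             # HR rises under load...
assert hr[299] < 70 + 0.5 * 100    # ...approaching but not exceeding steady state
assert hr[-1] < hr[299]            # and decays during recovery
```

Fitting gain and tau per subject from a training session gives exactly the load-to-HR relation the evaluation scenarios require; the machine learning approaches mentioned in the abstract replace this fixed structure with learned predictors.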
Despite the lack of standardisation for building RESTful HTTP applications, the deployment of REST-based Web Services has attracted increased interest. This gap causes, however, an ambiguous interpretation of REST and induces the design and implementation of REST-based systems following proprietary approaches instead of clear and agreed-upon definitions. Issues arising from these shortcomings influence service properties such as the loose coupling of REST-based services via a unitary service contract and the automatic generation of code. To overcome such limitations, at least two prerequisites are required: the availability of specifications for implementing REST-based services, and auxiliaries for auditing the compliance of those services with such specifications. This paper introduces an approach for conformance testing of REST-based Web Services. This appears conflicting at first glance, since there are no specifications available for implementing REST by, e.g., the prevalent technology set HTTP/URI to test against. Still, by providing a conformance test tool and aligning it with current practice, the exploration of service properties is enabled. Moreover, the real demand for standardisation becomes explorable by such an approach. First investigations conducted with the developed conformance test system targeting major Cloud-based storage services expose inconsistencies in many respects, which emphasizes the necessity for further research and standardisation.
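A minimal sketch of the kind of rule-based conformance checks such a test system could apply to recorded HTTP exchanges (the rules below paraphrase common HTTP practice and are illustrative, not the paper's actual test catalogue):

```python
# Illustrative conformance checks over a recorded request/response pair,
# represented as plain dicts. Rules loosely follow common HTTP semantics
# (e.g. RFC 7231); they are examples, not an exhaustive specification.

def check_exchange(request, response):
    """Return a list of conformance violations for one HTTP exchange."""
    violations = []
    method, status = request["method"], response["status"]
    headers = {k.lower() for k in response.get("headers", {})}

    if status == 201 and "location" not in headers:
        violations.append("201 Created without a Location header")
    if method == "GET" and request.get("body"):
        violations.append("GET request carries a message body")
    if status == 405 and "allow" not in headers:
        violations.append("405 Method Not Allowed without an Allow header")
    return violations

# Hypothetical recorded exchange against a cloud storage service.
req = {"method": "POST", "body": "{...}"}
resp = {"status": 201, "headers": {"Content-Type": "application/json"}}

assert check_exchange(req, resp) == ["201 Created without a Location header"]
```

A full test system would drive such checks against live services and aggregate the violations per service, which is how the inconsistencies across cloud storage providers mentioned above could be made visible.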