Departments, institutes and facilities
- Fachbereich Informatik (73)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (32)
- Fachbereich Angewandte Naturwissenschaften (29)
- Fachbereich Ingenieurwissenschaften und Kommunikation (29)
- Institute of Visual Computing (IVC) (24)
- Institut für Cyber Security & Privacy (ICSP) (22)
- Institut für funktionale Gen-Analytik (IFGA) (16)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Verbraucherinformatik (IVI) (9)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (9)
Document Type
- Conference Object (100)
- Article (59)
- Part of a Book (9)
- Report (5)
- Doctoral Thesis (4)
- Master's Thesis (2)
- Part of Periodical (2)
- Preprint (2)
- Working Paper (2)
- Book (monograph, edited volume) (1)
Year of publication
- 2015 (187)
Language
- English (187)
Keywords
- Eco-Feedback (4)
- E-Learning (3)
- Education (3)
- FPGA (3)
- Sustainable Interaction Design (3)
- Workplace (3)
- 802.11 (2)
- Crisis Communication (2)
- Culture (2)
- Development Policy (2)
We present GEM-NI -- a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, both of which allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback on the system's features, showing that GEM-NI supports creative design work well.
Binary relations with certain properties, such as biorders, equivalences or difunctional relations, can be represented as particular matrices. Identifying these properties usually requires rearranging rows and columns to reshape the matrix into a recognisable normal form. Most algorithms performing these transformations operate on binary matrix representations of the underlying relations. This paper presents an approach that uses the RLE-compressed matrix representation as a data structure for storing relations in order to test, hopefully more efficiently, whether they are biorders.
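The idea can be sketched as follows; this is our own minimal illustration, not the paper's algorithm (the function names and the chain-based test are assumptions). Each matrix row is stored as runs of 1s, and the biorder (Ferrers) property is checked via the fact that a relation is a biorder iff its row sets are linearly ordered by inclusion.

```python
def rle_encode(row):
    """Encode a 0/1 row as a list of (start, length) runs of 1s."""
    runs, i = [], 0
    while i < len(row):
        if row[i] == 1:
            j = i
            while j < len(row) and row[j] == 1:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            i += 1
    return runs

def row_set(runs):
    """Column indices covered by the runs."""
    return {c for start, length in runs for c in range(start, start + length)}

def is_biorder(matrix):
    """A relation is a biorder (Ferrers relation) iff its row sets
    form a chain under set inclusion."""
    sets_ = sorted((row_set(rle_encode(r)) for r in matrix), key=len)
    return all(a <= b for a, b in zip(sets_, sets_[1:]))
```

A staircase-shaped matrix passes the test, while a permutation matrix (rows with incomparable column sets) fails it.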
Formal concept analysis (FCA) as introduced in [4] deals with contexts and concepts. Roughly speaking, a context is an environment equipped with some kind of "knowledge". Such contexts are also known as information or knowledge representation systems, where the knowledge consists of (intensional) descriptions relating sets of objects to sets of properties. Given extensional and intensional descriptions (the latter in terms of binary attributes), they can be arranged in a taxonomy or concept lattice.
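The two derivation operators underlying concept lattices can be sketched in a few lines. The context format and function names below are illustrative assumptions, not taken from [4]:

```python
def intent(objs, ctx):
    """Attributes common to all objects in objs.
    ctx maps each object to its set of binary attributes."""
    sets = [ctx[g] for g in objs]
    return set.intersection(*sets) if sets else set().union(*ctx.values())

def extent(attrs, ctx):
    """Objects possessing all attributes in attrs."""
    return {g for g, ms in ctx.items() if attrs <= ms}

def concept_of(objs, ctx):
    """Close a set of objects to a formal concept (extent, intent):
    applying the two operators in turn yields a closed pair."""
    b = intent(objs, ctx)
    return extent(b, ctx), b
```

Starting from a single object, the closure adds every other object sharing the same intent, which is exactly how the nodes of the concept lattice arise.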
Roughness by Residuals
(2015)
Rough set theory (RST) focuses on forming posets of equivalence relations to describe sets with increasing accuracy. The connection between modal logics and RST is well known and has been extensively studied in their relation algebraic (RA) formalisation. RST has also been interpreted as a variant of intuitionistic or multi-valued logics and has even been studied in the context of logic programming.
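The core RST construction the abstract refers to, lower and upper approximations of a set with respect to an equivalence relation, can be sketched as follows (our own minimal example; the partition-based representation is an assumption). A finer partition yields a more accurate description, which is the poset structure mentioned above.

```python
def approximations(partition, X):
    """Lower/upper approximation of set X w.r.t. an equivalence
    relation given as a partition into equivalence classes.
    Lower: union of classes fully inside X (certainly in X).
    Upper: union of classes meeting X (possibly in X)."""
    lower = {x for cls in partition if cls <= X for x in cls}
    upper = {x for cls in partition if cls & X for x in cls}
    return lower, upper
```

The set X is "rough" whenever the two approximations differ, i.e. the boundary region upper minus lower is non-empty.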
The aim of our research is to find measures that preserve learners' initial motivation in educational settings. To this end, we need to avoid conflict situations that could jeopardize their joy of learning.
In our thematically comprehensive Learning Culture Survey, we investigate the cultural bias in students' attitudes, behaviours, and expectations towards education. Particularly in times of massive international migration and growing numbers of refugees, the need to deeply understand cultural aspects of education increases. Only with this understanding can we raise awareness towards more cultural tolerance across all involved stakeholder groups and thus foster the development of more culture-sensitive educational approaches. In this paper we focus on the most relevant aspect, motivation, and comparatively discuss our study conducted in Germany and South Korea.
Despite the opportunities and benefits of OER, research and practice have shown that OER repositories have a hard time reaching an active user base. The experience-exchange opportunities and simple feedback mechanisms of social software have been recognized as a way to improve the situation, and many providers are basing or transforming their OER offerings on socially powered environments. Research on social software has shown that knowledge-sharing barriers in online environments are highly culture- and context-specific and require proper investigation. It is crucial to study what challenges might arise in such environments and how to overcome them to ensure a successful uptake. A large-scale (N = 855) cross-European investigation was initiated in the school context to determine which barriers teachers and learners perceive as critical. The study highlights barriers related to cultural distance, showing how these are predicted by the nationality and age of the respondents. The paper concludes with recommendations for overcoming those barriers.
The Whole Is More than the Sum of Its Parts - On Culture in Education and Educational Culture
(2015)
The Learning Culture Survey investigates learners' expectations towards and perceptions of education on an international level, with the aim of making culture in the context of education better understandable and supporting educators in preventing and solving intercultural conflicts in education. So far, we have found that culture-related expectations differ between educational settings, depend on the age of the learners, and that a nationally homogeneous educational culture is the exception rather than the rule. The results of our recently completed longitudinal study provided evidence that educational culture on the institutional level is in fact persistent, at least over a term of four years. After a brief introduction of the general background, we subsume the steps taken during the past seven years and the general insights achieved regarding educational culture. Last, we introduce a method for determining conflict potential, which is based on the understanding of culture as the degree to which people within a society accept deviations from the usual. We close by demonstrating the method's functionality on examples from the Learning Culture Survey.
Quality Management in Education: Business Process Modelling in Interdisciplinary Environments
(2015)
The generation and maintenance of intricate spatiotemporal patterns of gene expression in multicellular organisms requires the establishment of complex mechanisms of transcriptional regulation. Estimates that up to one million enhancers exist in the human genome accentuate the utmost importance of this type of cis-regulatory element for gene regulation. However, surprisingly little is known about the mechanisms used to temporarily or permanently activate or inactivate enhancers during cellular differentiation. The current work addresses the question of how enhancer regulation can be achieved.
Using the chemokine (C-C motif) ligand gene Ccl22 as a model, the first example is based on the question of how the activation of an enhancer can be prevented in a physiological context. Ccl22 is expressed by myeloid cells, such as dendritic cells, upon exposure to inflammatory stimuli. Its expression in other cell types, such as fibroblasts, is prevented by the strong accumulation of H3K9me3 at the enhancer's proximal region. This accumulation is attenuated in myeloid cells through the activity of the stimulus-induced demethylase Jmjd2d. To tease out which genomic fragment or fragments in the Ccl22 locus could be responsible for the maintenance of enhancer inactivity, potentially through the recruitment of H3K9 methyltransferases, the enhancer-repressing capacity of 1 kb fragments of the gene locus was analysed in retroviral reporter assays. A fragment adjacent to the Ccl22 enhancer that overlaps with a member of a subfamily of long interspersed nuclear elements (LINEs) showed strong repressive potential on a model enhancer. Subsequent retroviral reporter assays with LINEs from the loci of other stimulus-dependent genes identified additional LINE fragments that exhibit strong enhancer-repressive capacity. These findings suggest a mechanism for enhancer silencing involving LINEs.
The second example concentrates on the inactivation of an enhancer during colorectal cancer (CRC) progression. The adenoma-to-carcinoma transition during CRC progression is often accompanied by a downregulation of the tumour suppressor gene EPHB2. The EMT-inducing factor SNAIL1 strongly downregulated EPHB2 expression in a CRC cell model. To gain insights into the transcriptional regulation of EPHB2, potential cis-regulatory elements in the EPHB2 upstream region were analysed using reporter assays. A cell-type-specific enhancer was identified, and subsequent chromatin analyses revealed a correlation between enhancer chromatin conformation and EPHB2 expression in different CRC cell lines. Additionally, the overexpression of murine Snail1 induced chromatin changes at the EPHB2 enhancer towards a poised, transcriptionally silent chromatin conformation. Mutational analyses of the minimal enhancer region pinpointed three transcription factor binding motifs as essential for full enhancer activity. Different binding patterns between CRC cell lines at the TCF/LEF motif were subsequently identified. Furthermore, a switch from TCF7L2 to LEF1 occupancy was found upon overexpression of Snail1 in vitro and in vivo. The generation of LS174T CRC cells overexpressing LEF1 confirmed the involvement of LEF1 in the downregulation of EPHB2 and the competitive displacement of TCF7L2. This part of the work demonstrated that the SNAIL1-induced downregulation of EPHB2 depends on the decommissioning of a transcriptional enhancer and led to a hypothetical model involving LEF1 and ZEB1.
In summary, this work highlighted two distinct mechanisms for enhancer regulation. One mechanism is based on enhancer repressive LINE fragments that might prevent stimulus-dependent enhancer activation. In the second, enhancer silencing was shown to be based on a competitive transcription factor binding mechanism.
Despite the lack of standardisation for building RESTful HTTP applications, the deployment of REST-based Web Services has attracted increased interest. This gap, however, causes an ambiguous interpretation of REST and induces the design and implementation of REST-based systems following proprietary approaches instead of clear and agreed-upon definitions. Issues arising from these shortcomings influence service properties such as the loose coupling of REST-based services via a unitary service contract and the automatic generation of code. To overcome such limitations, at least two prerequisites are required: the availability of specifications for implementing REST-based services, and auxiliaries for auditing the compliance of those services with such specifications. This paper introduces an approach for conformance testing of REST-based Web Services. This appears conflicting at first glance, since there are no specifications available for implementing REST with, e.g., the prevalent technology set HTTP/URI to test against. Still, by providing a conformance test tool and leaning it on current practice, the exploration of service properties is enabled. Moreover, the real demand for standardisation becomes explorable by such an approach. First investigations conducted with the developed conformance test system, targeting major Cloud-based storage services, expose inconsistencies in many respects, which emphasises the necessity for further research and standardisation.
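The flavour of such conformance checks can be illustrated with a tiny rule set over recorded HTTP exchanges. The rules below are our own simplification of widely agreed HTTP/REST constraints, not the paper's actual test suite, and the exchange format is an assumption:

```python
def check_exchange(ex):
    """Return the list of rules violated by one recorded exchange,
    given as a dict like {"method": "GET", "status": 200, "headers": {...}}.
    Illustrative rules only, not an exhaustive conformance suite."""
    violations = []
    headers = ex.get("headers", {})
    # A safe GET must not report that it created a resource.
    if ex["method"] == "GET" and ex["status"] == 201:
        violations.append("safe GET reported resource creation (201)")
    # 201 Created should identify the new resource.
    if ex["status"] == 201 and "Location" not in headers:
        violations.append("201 Created without a Location header")
    # 405 Method Not Allowed must list the permitted methods.
    if ex["status"] == 405 and "Allow" not in headers:
        violations.append("405 Method Not Allowed without an Allow header")
    return violations
```

Running such predicates over traffic captured from a live service yields exactly the kind of per-service inconsistency report the abstract describes.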
This paper gives the necessary foundations for understanding the mechanism of warning processing and summarizes the state of the art in warning development. This includes a description of the tools researchers use in this scientific field, namely models that describe the human way of processing warnings, and mental models. Both are presented in detail with relevant examples. The paper explains how these tools are connected and how they are used to improve the effectiveness of warnings.
Managing the needs of learners is crucial in order to support their motivation and keep dropout rates at a low level. With the constantly growing level of internationalization in classrooms, the variety of context-specific requirements from learners increases; without a profound understanding of the learners' contexts, successfully maintaining a culture-sensitive and learner-focussed education is impossible. One solution for reaching this understanding is the open exchange of experiences and knowledge amongst educators of the different contexts. In this paper, we briefly introduce the two European projects "Open Discovery Space" (ODS) and "Inspiring Science Education" (ISE), which aim to foster the establishment and improvement of Open Educational Practices in the context of school education. The purpose of this paper is to attract and invite potential partners to affiliate with, contribute to, and profit from the projects.
With a focus on Technology Enhanced Learning, this paper investigates whether and to what extent a culture shift can be expected alongside the adoption of currently emerging Web 3.0 technologies. Instead of just offering new opportunities for the field to improve education, such a culture shift could lead to unexpected general consequences not just for Technology Enhanced Learning but for the whole educational sector. Understanding the dimension of expectable changes enables us to prevent conflicts and pointedly support culture-related change processes. After an introduction of the Revised Onion Model of Culture, which later serves as the theoretical foundation, expectable changes in the design of learning scenarios are analysed, distinguishing the stakeholder groups "learners" and "educators". Finally, the identified changes are analysed to determine to what extent a general culture shift is to be expected, in order to understand the transferability and limitations of future research results in the field.
IT-accessibility is often treated as an orphan in companies, even though the proportion of disabled people is substantial and people are becoming older and more susceptible to disabilities. Besides cost factors, companies often lack a plan for how to implement and control IT-accessibility successfully. However, most companies are familiar with IT-maturity frameworks for evaluating and improving their own IT infrastructure. Dealing with IT-accessibility would be easier if IT-maturity frameworks considered IT-accessibility and provided recommendations and solutions for a successful implementation. Therefore, this article reviews an acknowledged IT-maturity framework with regard to its capability to enable the implementation of IT-accessibility in an organization. The first part of the article illustrates the motivation and background for the authors' concern with this topic. Afterwards, the authors introduce the reader to the reviewed IT-maturity framework and provide basic knowledge on IT-accessibility. The main part of the article deals with the review of the applied IT-maturity framework and outlines examples of critical capabilities for successfully implementing IT-accessibility in an organization. The final section derives implications and closes with planned future research activities in this field.
Understanding the Internet of Things: A Conceptualisation of Business-to-Thing (B2T) Interactions
(2015)
In education, finding an appropriate learning pace that fits the members of a large group is a challenging task. This becomes especially evident when teaching multidisciplinary subjects such as epidemiology in medicine or computer science in most study programs, since lecturers face a very heterogeneous state of previous knowledge. Approaching this issue requires individual supervision of each and every student, which is obviously bounded by the available resources. Moreover, referring back to the second example, writing computer programs requires a complex installation and configuration of development tools; many beginning programmers already become stuck at this entry stage. This paper introduces WHELP, a Web-based Holistic E-Learning Platform, which provides an integrated environment enabling the learning and teaching of computer science topics without the need to install any software. Moreover, WHELP includes an interactive feedback system for each programming exercise, where lecturers or tutors can supply comments, improvements, code assistance or tips helping the students to accomplish their tasks. Furthermore, WHELP offers a statistical analysis module as well as a real-time classroom polling system, both promoting an overview of the state of knowledge of a course. In addition, WHELP enables collaborative work including code sharing and peer-to-peer learning; this feature enables students to work on exercises simultaneously at distinct places. WHELP was successfully deployed in the winter term 2013 at the Cologne University of Applied Sciences, supporting 120 students and 3 lecturers in learning and teaching basic topics of computer science in an engineering study program.
Only since the turn of the 21st century have humanitarian organisations developed specific strategies that address climate change impacts as a humanitarian challenge. Taking the International Red Cross / Red Crescent Movement, being the largest humanitarian network, as an empirical case study, the article discusses the Movement’s changes in the areas 1) agenda setting, 2) organisational restructuring, 3) networking, 4) programming, and 5) advocacy. Based on the case study and a theoretical framework of organisational sociology, the article provides conclusions on internal and external factors that can explain why the Movement has been successful in being one of the first actors within the organisational field of humanitarian organisations to focus systematically on the humanitarian implications of climatic changes.
Communicating Climate Risks. A case study of the International Red Cross/Red Crescent Movement
(2015)
Work in progress: Starter-project for first semester students to survey their engineering studies
(2015)
The central theme of the 2014 Annual Report is human thinking.
In an interview, University President Hartmut Ihne and 3Sat moderator Gert Scobel discuss the concept of thought: "Should we be allowed to give up our autonomy voluntarily?"
Our university’s Language Centre Director James Chamberlain examines to what extent thinking varies in different languages.
Professor Paul Plöger from the Department of Computer Science explains why robots have tremendous problems understanding complex relationships in open environments.
Rather than focusing solely on our university’s future, the Annual Report links the fascinating theme to the enormous variety of life, research and tuition offered by H-BRS.
Background: Falls and fall-related injuries are a serious public health issue. Exercise programs can effectively reduce fall risk in older people. The iStoppFalls project developed an Information and Communication Technology-based system to deliver an unsupervised exercise program in older people’s homes. The primary aims of the iStoppFalls randomized controlled trial were to assess the feasibility (exercise adherence, acceptability and safety) of the intervention program and its effectiveness on common fall risk factors.
Methods: A total of 153 community-dwelling people aged 65+ years took part in this international, multicentre, randomized controlled trial. Intervention group participants conducted the exercise program for 16 weeks, with a recommended duration of 120 min/week for balance exergames and 60 min/week for strength exercises. All intervention and control participants received educational material including advice on a healthy lifestyle and fall prevention. Assessments included physical and cognitive tests, and questionnaires for health, fear of falling, number of falls, quality of life and psychosocial outcomes.
Results: The median total exercise duration was 11.7 h (IQR = 22.0) over the 16-week intervention period. There were no adverse events. Physiological fall risk (Physiological Profile Assessment, PPA) reduced significantly more in the intervention group compared to the control group (F1,127 = 4.54, p = 0.035). There was a significant three-way interaction for fall risk assessed by the PPA between the high-adherence (>90 min/week; n = 18, 25.4 %), low-adherence (<90 min/week; n = 53, 74.6 %) and control group (F2,125 = 3.12, n = 75, p = 0.044). Post hoc analysis revealed a significantly larger effect in favour of the high-adherence group compared to the control group for fall risk (p = 0.031), postural sway (p = 0.046), stepping reaction time (p = 0.041), executive functioning (p = 0.044), and quality of life (p for trend = 0.052).
Conclusions: The iStoppFalls exercise program reduced physiological fall risk in the study sample. Additional subgroup analyses revealed that intervention participants with better adherence also improved in postural sway, stepping reaction, and executive function.
Advanced driver assistance systems (ADAS) are technology systems and devices designed to aid the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of input images must be done, consisting of a traffic sign detection (TSD) algorithm that reduces the possible hypothesis space. The performance of the TSD algorithm is critical.
One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real time on high-end personal computers, but its computational performance must be improved in order to run in real time on embedded computer platforms.
To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times, detection rates and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark [29].
We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection and 2.4 times faster at octagon detection. All of these measurements were observed with better detection and false positive rates than the original RSD.
When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster, for circle and regular polygon detection; for circle detection we observed a lower detection rate than the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%.
We conclude that our Improved Radial Symmetry Detector is a significant improvement over the Radial Symmetry Detector, both for circle and regular polygon detection. We expect that our improved algorithm will lead the way to real-time traffic sign detection and recognition on embedded computer platforms.
Hand speed is particularly important in boxing, both for protection against incoming blows and for delivering blows. Sixteen amateur boxers (10 male, 6 female) with varying levels of experience from a boxing gym performed 20 jabs and 20 cross punches in air. The movement was recorded with a small wrist-mounted accelerometer under the glove. The maximum velocity of each punch was determined from the RMS acceleration profile. The mean jab maximal velocity was higher than the mean cross maximal velocity for 9 participants. The cross showed some dependence on reach (Spearman's correlation coefficient r = 0.57) and the jab on experience (Spearman's correlation coefficient r = 0.56). The accelerometer technique shows some promise for routine assessment of fist speed.
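One plausible way to obtain a peak speed from a wrist-worn accelerometer trace is straightforward numerical integration; the sketch below is our own illustration and the study's exact processing of the RMS profile may differ.

```python
def max_velocity(acc, dt):
    """Peak speed (m/s) from an acceleration trace along the punch
    direction (m/s^2, sampled every dt seconds), integrated with the
    trapezoidal rule starting from rest."""
    v, vmax = 0.0, 0.0
    for a0, a1 in zip(acc, acc[1:]):
        v += 0.5 * (a0 + a1) * dt      # trapezoidal integration step
        vmax = max(vmax, abs(v))
    return vmax
```

In practice the trace would first be gravity-compensated and windowed to a single punch; drift makes integration over long windows unreliable, which is why per-punch segmentation matters.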
Over the past two decades many governments of low and middle income countries have started to introduce social protection measures or to extend the coverage and improve the functioning of public social protection systems. These reforms are a "global phenomenon" and can be observed in many African, Asian and Latin American countries. This paper focuses on international determinants for policy change within social protection by assessing the state of the art of both policy diffusion and policy transfer studies. Empirical studies of policy transfer and diffusion in the field of social protection are furthermore assessed in light of the theoretical background.
TinyECC 2.0 is an open source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication using 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, but to a different degree. The different degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. It is extremely critical that in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private key operation is sufficient to simply read out the secret scalar. MESD requires additional adversary capabilities and it affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private key operation. These findings give evidence that in security applications a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement and, beyond that, an addition of appropriate randomizing countermeasures is necessary.
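Why a single EM trace can suffice is easy to illustrate on an unprotected left-to-right double-and-add scalar multiplication: the sequence of point doublings and additions mirrors the key bits, so reading the operation sequence off the trace recovers the scalar. The sketch below is a didactic model of that leakage, not TinyECC 2.0 code.

```python
def double_and_add_trace(k):
    """Operation sequence ('D' = point double, 'A' = point add) that an
    unprotected left-to-right double-and-add executes for scalar k.
    This is the pattern SEMA reads from a single EM measurement."""
    ops = []
    for bit in bin(k)[2:][1:]:  # process bits after the leading 1
        ops.append("D")
        if bit == "1":
            ops.append("A")     # add only when the key bit is 1 -> leak
    return "".join(ops)

def recover_scalar(trace):
    """Adversary's view: invert the D/A pattern back to the scalar."""
    bits, i = "1", 0
    while i < len(trace):
        assert trace[i] == "D"
        if i + 1 < len(trace) and trace[i + 1] == "A":
            bits += "1"
            i += 2
        else:
            bits += "0"
            i += 1
    return int(bits, 2)
```

Randomizing countermeasures (e.g. making every iteration perform the same operation pattern, or blinding the scalar) break exactly this bit-to-operation correspondence.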
The paper examines the effectiveness of transgovernmental policy networks as a governance structure for policy diffusion. The analysis is based on a survey of 50 social protection policy makers and technical practitioners who are country delegates to transgovernmental policy networks within the policy area of social protection. The paper provides anecdotal empirical evidence that policy networks contribute to policy diffusion by inducing mutual learning processes.
Communicating Sequential Processes (CSP) [7] is a calculus for concurrent systems that has been the basis of subject-oriented business process management (S-BPM) [4]. We use CSPm -- a machine readable dialect of CSP -- to create a sequence of models for a case study on an "Automated Teller Machine" [1]. We use the refinement checker FDR2 to prove that certain models are correct implementations of specifications.
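The refinement check FDR2 performs can be mimicked in miniature: in the traces model, an implementation refines a specification iff every finite event trace of the implementation is also a trace of the specification. The sketch below is our own toy analogue in Python, with processes given as labelled transition systems; the ATM-flavoured event names in the example are made up, not from [1].

```python
def traces(lts, start, depth):
    """All event traces of length <= depth of a labelled transition
    system {state: {event: next_state}} started in `start`."""
    result = {()}
    frontier = {((), start)}
    for _ in range(depth):
        nxt = set()
        for trace, s in frontier:
            for event, t in lts.get(s, {}).items():
                nt = trace + (event,)
                result.add(nt)
                nxt.add((nt, t))
        frontier = nxt
    return result

def trace_refines(spec, impl, start, depth=6):
    """Traces-model refinement check up to a bounded trace length."""
    return traces(impl, start, depth) <= traces(spec, start, depth)
```

An implementation that merely restricts the specification's choices still refines it, while one introducing a new event does not, which is what the bounded check reports.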
Comparison of the subject-oriented and the Petri net based approach for business process automation
(2015)
The subject-oriented modelling approach [5] differs significantly from the classic Petri net based approach of many business process modelling languages like EPC [9], Business Process Model and Notation (BPMN) [11], and Yet Another Workflow Language (YAWL) [10]. In this work, we compare the two approaches by modelling a case study called "Procure to Pay" [3], a typical business process in which some equipment for a construction site is rented and finally paid for. The case study is not only modelled but also automated, using the Metasonic Suite for the subject-oriented approach and YAWL for the Petri net based approach.
Extraction of text information from visual sources is an important component of many modern applications, for example, extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histograms of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features -- aspect and occupancy ratios -- and a linear SVM is used as the classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall with an f-score of 69% were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.
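The feature itself is simple to sketch. Below is a minimal, hypothetical version of an HSW computation: widths are normalised by their median so the feature is scale-invariant, then binned into a fixed-length histogram (the binning scheme and normalisation are our assumptions; the thesis's exact construction may differ).

```python
def hsw_feature(stroke_widths, bins=10):
    """Normalised histogram of stroke widths for one region.
    Widths (positive numbers) are scaled by their median so that the
    feature is invariant to character size; values beyond twice the
    median fall into the last bin."""
    med = sorted(stroke_widths)[len(stroke_widths) // 2]
    hist = [0] * bins
    for w in stroke_widths:
        b = min(int(w / (2.0 * med) * bins), bins - 1)
        hist[b] += 1
    n = float(len(stroke_widths))
    return [c / n for c in hist]
```

The intuition matches the thesis's use case: character strokes have a nearly constant width, so their histogram mass concentrates in one or two bins, while textured non-character regions spread across many bins. The feature vector (optionally concatenated with aspect and occupancy ratios) is then fed to a linear SVM.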
This book chapter describes application examples of gas chromatography/mass spectrometry and pyrolysis-gas chromatography/mass spectrometry in failure analysis for the identification of chemical materials such as mineral oils and nitrile rubber gaskets. Furthermore, failure cases requiring the identification of polymers/copolymers in fouling on the compressor wall of a car air conditioner and the identification of fouling on the surface of a bearing race from the automotive industry are demonstrated. The analytical results obtained were then used for troubleshooting and remedial action in the technological process.
In the fermentation process, sugars are transformed into lactic acid. pH meters have traditionally been used to monitor the fermentation process based on acidity. More recently, near-infrared (NIR) spectroscopy has proven to provide an accurate and non-invasive method to detect when the transformation of sugars into lactic acid is finished. This research proposes the use of simplified NIR spectroscopy with multispectral optical sensors as a simpler and less expensive means of determining when to end the fermentation process. The NIR spectra of milk and yogurt are compared to find and extract features that can be used to design a simple sensor for monitoring the yogurt fermentation process. Multispectral images in four selected wavebands within the NIR spectrum are captured and show different spectral remission characteristics for milk, yogurt and water, which supports the selection of these wavebands for milk and yogurt classification.
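A sensor built on four wavebands could classify a sample with something as simple as a nearest-centroid rule over the remission vector. The sketch below is purely illustrative; the waveband values and class centroids are made-up numbers, not measurements from the paper.

```python
def classify(sample, centroids):
    """Nearest-centroid label for a 4-waveband remission vector.
    `centroids` maps a class label to its reference 4-tuple."""
    def d2(a, b):
        # squared Euclidean distance between remission vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: d2(sample, centroids[lbl]))
```

In a deployed sensor the centroids would be calibrated from labelled milk/yogurt measurements, and the milk-to-yogurt transition of successive samples would signal the end of fermentation.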
The latest advances in the field of smart card technologies allow modern cards to be more than just simple security tokens. Recent developments facilitate the use of interactive components like buttons, displays or even touch sensors within the card's body, thus conquering whole new areas of application. With interactive functionalities, usability becomes the most important aspect in designing secure and widely accepted products. Unfortunately, usability can only be fully tested with completely integrated, and hence expensive, smart card prototypes. This severely restricts application-specific research, case studies of new smart card user interfaces and the optimization of design aspects as well as hardware requirements, by making usability and acceptance tests in smart card development very costly and time-consuming. Rapid development and simulation of smart card interfaces and applications can help to avoid this restriction. This paper presents a rapid development process for new smart card interfaces and applications based on common smartphone technology, using a tool called SCUID^Sim. We demonstrate the variety of usability aspects that can be analyzed with such a simulator by discussing selected example projects.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplification usually fails due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
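The core idea of such a hybrid structure, leaves that hold either exact triangles or a cheap voxel approximation, can be sketched as follows; the thresholds and the trivial round-robin partitioning are illustrative placeholders for a real spatial split:

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    """Node of a hybrid acceleration structure: a leaf stores either a
    triangle list (high fidelity) or is marked as a voxel (cheap LOD)."""
    triangles: list = field(default_factory=list)
    children: list = None
    voxel: bool = False

def build(triangles, depth, max_tris=4, max_depth=6):
    node = OctreeNode(triangles=triangles)
    if len(triangles) <= max_tris:
        return node                      # few triangles: keep exact geometry
    if depth >= max_depth:
        node.voxel = True                # dense region: collapse to a voxel
        node.triangles = []
        return node
    # A real build would partition triangles into the 8 spatial octants;
    # the round-robin split here only keeps the sketch short.
    node.children = [build(triangles[i::8], depth + 1, max_tris, max_depth)
                     for i in range(8)]
    node.triangles = []
    return node

root = build(list(range(40)), depth=0)   # 40 dummy triangle ids
```

A ray traversal would intersect triangles exactly in triangle leaves and fall back to the voxel approximation in voxel leaves.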
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Starting from the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. We then show that our optimizations increase the performance of RHF by two orders of magnitude compared to the original CPU implementation and by one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times compares to unfiltered path tracing and how much time is needed to achieve identical quality compared to an unfiltered path-traced result. Finally, we summarize our work and describe possible future applications and research based on it.
The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. Frequently, there are strong arguments for supporting body-centered self-motion cues as they may improve orientation and spatial judgments, and reduce motion sickness. Yet, how these cues can be introduced while the user is not moving around physically is not well understood. Actuated solutions such as motion platforms can be an option, but they are expensive and difficult to maintain. Alternatively, within this article we focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices (including such fundamental entities as “reference frames”!), let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of all possible motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment in those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perception of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, which likely contributed to an enhanced sense of presence.
Annual Report 2013 - 2014
(2015)
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, like large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that allow for an easy start. Existing frameworks and libraries that provide support for multi-display rendering are often difficult to understand, configure and extend. This is especially critical in educational contexts, where the time students have for their projects is limited. These tools are also known and used mainly in research communities, thus providing less benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows, with a small overhead, for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in the context of distributed and multi-display rendering, like frame, camera and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it will provide students with a qualification in high demand.
Sustainable development needs sustainable production and sustainable consumption. During the last decades the encouragement of sustainable production has been the focus of research and policy makers under the implicit assumption that the observable increasing ‘green’ values of consumers would also entail a growing sustainable consumption. However, it has been found that the actual purchasing behaviour often deviates from ‘green’ attitudes. This phenomenon is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications.
The phenomenon of the deviation between purchase attitudes and actual buying behaviour of responsible consumers is called the attitude-behaviour gap. It is influenced by individual, social and situational factors. The main purchasing barriers for sustainable (organic) food are price, lack of immediate availability, sensory criteria, lack or overload of information, as well as the low-involvement feature of food products in conjunction with well-established consumption routines, lack of transparency and trust towards labels and certifications. The last three barriers are mainly of a psychological nature. Especially the low-involvement feature of food products, due to daily purchase routines and relatively low prices, tends to result in fast, automatic and subconscious decisions based on the so-called human mental system 1, derived from the behavioural-psychology model of Daniel Kahneman (Nobel laureate in behavioural economics). In contrast, the human mental system 2 is especially important for the transformation of individual behaviour towards more sustainable consumption. Decisions based on the human mental system 2 are slow, logical, rational, conscious and arduous. This so-called dual action model also influences the reliability of responses in consumer surveys. It seems that consumer behaviour is the most unstable and unpredictable part of the entire supply chain and requires special attention. Concrete measures to influence consumer behaviour towards sustainable consumption are highly complex. Reviews of interdisciplinary research literature on behavioural psychology, behavioural economics and consumer behaviour, and an empirical analysis of selected countries worldwide with regard to sustainable food, are presented. The example of Denmark serves as a 'best practice' case study to illustrate how sustainable food consumption can be encouraged.
It demonstrates that common efforts and a shared responsibility of consumers, business, interdisciplinary researchers, mass media and policy are needed. It takes pioneers of change who succeed in assembling a ‘critical mass’ willing to increase its ‘sustainable’ behaviour. Considering the strong psychological barriers of consumers and the continuing low market share of organic food, proactive policy measures would be conducive to foster the personal responsibility of the consumers and offer incentives towards a sustainable production. Also, further self-obligations of companies (Corporate Social Responsibility – CSR) as well as more transparency and simplification of reliable labels and certifications are needed to encourage the process towards a sustainable development.
Secure vehicular communication has been discussed over a long period of time. Now this technology is being implemented in different Intelligent Transportation System (ITS) projects in Europe. Most of these projects need a suitable Public Key Infrastructure (PKI) for secure communication between the entities involved in a Vehicular Ad hoc Network (VANET). A first proposal for a PKI architecture for Intelligent Vehicular Systems (IVS PKI) is given by the Car2Car Communication Consortium. This architecture, however, mainly deals with inter-vehicle communication and is less focused on the needs of Road Side Units. Here, we propose a multi-domain PKI architecture for Intelligent Transportation Systems which considers the present-day necessities of road infrastructure authorities and vehicle manufacturers. The PKI domains are cryptographically linked based on local trust lists. In addition, a crypto-agility concept is suggested which takes the adaptation of key lengths and cryptographic algorithms during PKI operation into account.
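The local-trust-list linking can be illustrated with a deliberately simplified sketch; all CA and entity names below are hypothetical, and a real implementation would of course verify signatures and validity periods rather than just the root name:

```python
# Hypothetical sketch: a certificate chain from a foreign PKI domain is
# accepted only if that domain's root CA appears on the verifying
# domain's local trust list.
LOCAL_TRUST_LIST = {"RootCA-OEM-A", "RootCA-RoadAuthority-DE"}

def accept_chain(chain):
    """chain: issuer names ordered from leaf certificate to root CA."""
    return bool(chain) and chain[-1] in LOCAL_TRUST_LIST

ok = accept_chain(["RSU-4711", "SubCA-Roads", "RootCA-RoadAuthority-DE"])
rejected = accept_chain(["Vehicle-42", "SubCA-OEM-B", "RootCA-OEM-B"])
```

Each domain maintains its own list, so trust relationships can be added or revoked per domain without a single global root.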
Simultaneous multifrequency radio observations of the Galactic Centre magnetar SGR J1745-2900
(2015)
RNA is one of the most important molecules in living organisms. One of its main functions is to regulate gene expression, which involves binding to and forming a joint structure with a messenger RNA. An RNA's function is determined by its sequence and the structure it folds into. Accordingly, the prediction of individual as well as joint structures is an important area of research. In this thesis a method for the prediction of RNA-RNA joint structures using their minimum free energy (mfe) structures was developed. It is able to extensively explore the joint structural landscape of two interacting RNAs by taking advantage of the locality of changes in the RNAs' structures as well as natural and energetic constraints. The method predicts the mfe joint structure as well as alternative stable joint structures, while also computing non-optimal folding pathways from the unbound individual mfe structures to the predicted joint structures. An enumeration approach is used that is able to deal with the enormous search space and to avoid cyclic behaviour. The method is evaluated on two standard datasets of known interacting RNAs and shows good results.
Reducing energy consumption is one of the most pursued economic and ecological challenges, concerning societies as a whole, individuals and organizations alike. While policymakers are starting to take measures for an energy turnaround and smart home energy monitors are becoming popular, few studies have touched on sustainability in office environments so far, even though they account for almost every second workplace in modern economies. In this paper, we present findings of two parallel studies in an organizational context that used strategies oriented toward behavioral change to raise energy awareness. Next to demonstrating potentials, our work shows that energy feedback must fit the local organizational context to succeed and should consider typical work patterns to foster accountability of consumption.
Although much effort is made to prevent risks arising from food, food-borne diseases are an ever-present threat to consumers' health. The consumption of fresh food that is contaminated with pathogens like fungi, viruses or bacteria can cause food poisoning that leads to severe health damage or even death. The outbreak of Shiga toxin-producing enterohemorrhagic E. coli (EHEC) in Germany and neighbouring countries in 2011 has shown this dramatically. Nearly 4,000 people were reported as being affected and more than 50 people died during the so-called EHEC crisis. As a result, consumers' trust in the safety of fruits and vegetables decreased sharply.
Fundamentals of Energy Meteorology - Influence of atmospheric parameters on solar energy production
(2015)
Solar energy is one option to serve the rising global energy demand with low environmental impact. Building an energy system with a considerable share of solar power requires long-term investment and a careful investigation of potential sites. Therefore, understanding the impacts of regionally and locally determined meteorological conditions on solar energy production will influence energy yield projections. Clouds move on short timescales and have a strong influence on the available solar radiation, as they absorb, reflect and scatter parts of the incoming light. However, the impact of cloudiness on photovoltaic (PV) power yields and cloud-induced deviations from average yields may vary depending on the technology, location and time scale under consideration.