Departments, institutes and facilities
- Fachbereich Informatik (73)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (32)
- Fachbereich Angewandte Naturwissenschaften (29)
- Fachbereich Ingenieurwissenschaften und Kommunikation (29)
- Institute of Visual Computing (IVC) (24)
- Institut für Cyber Security & Privacy (ICSP) (22)
- Institut für funktionale Gen-Analytik (IFGA) (16)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Verbraucherinformatik (IVI) (9)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (9)
Document Type
- Conference Object (100)
- Article (59)
- Part of a Book (9)
- Report (5)
- Doctoral Thesis (4)
- Master's Thesis (2)
- Part of Periodical (2)
- Preprint (2)
- Working Paper (2)
- Book (monograph, edited volume) (1)
Year of publication
- 2015 (187)
Language
- English (187)
Keywords
- Eco-Feedback (4)
- E-Learning (3)
- Education (3)
- FPGA (3)
- Sustainable Interaction Design (3)
- Workplace (3)
- 802.11 (2)
- Crisis Communication (2)
- Culture (2)
- Development Policy (2)
Appropriating Digital Fabrication Technologies — A comparative study of two 3D Printing Communities
(2015)
Digital fabrication technologies have great potential for empowering consumers to produce their own creations. However, despite the growing availability of digital fabrication technologies in shared machine shops such as FabLabs or University Labs, they are often perceived as difficult to use, especially by users with limited technological aptitude. Hence, it is not yet clear whether the potential of the technology can be made accessible to a broader public, or whether it will remain limited to some form of “maker elite”. In this paper, we study the appropriation of digital fabrication through the example of 3D printer use in two different communities. In doing so, we analyze how users conceptualize their use of the 3D printers, what kind of contextual understanding is necessary to work with the machines, and how users document and share their knowledge. Based on our empirical findings, we identify the potential that the machines offer to the communities and the kinds of challenges that have to be overcome in their appropriation of the technology.
Mesenchymal stem cells (MSCs) are an attractive cell source for Regenerative Dentistry, particularly due to their ability to differentiate towards osteoblasts, among other lineages. Tooth and jaw bone loss are frequent sequelae of traumatic and pathological conditions in both the young and the elderly and must be met by appropriate prosthetic replacements. For successful osseointegration of a dental implant, a sufficient bone level is necessary. Besides the utilization of bone autografts or synthetic biomaterials, medical research is increasingly focused on the utilization of MSCs. Compared to cells obtained from liposuction material, ectomesenchymal stem cells derived from the head area, e.g. from dental follicles or particulate, non-vascularized bone chips, show a higher differentiation potential towards osteoblasts.
Despite the opportunities and benefits of OER, research and practice have shown that OER repositories have a hard time reaching an active user base. The experience exchange and simple feedback mechanisms of social software have been recognized as a way to improve this situation, and many providers are basing or transforming their OER offerings on socially powered environments. Research on social software has shown that knowledge-sharing barriers in online environments are highly culture- and context-specific and require proper investigation. It is crucial to study what challenges might arise in such environments and how to overcome them, to ensure a successful uptake. A large-scale (N = 855) cross-European investigation was initiated in the school context to determine which barriers teachers and learners perceive as critical. The study highlights barriers related to cultural distance, showing how these are predicted by the nationality and age of the respondents. The paper concludes with recommendations for overcoming those barriers.
The aim of our research is to find measures that preserve learners’ initial motivation in educational settings. For that, we need to avoid conflict situations that could jeopardize their joy of learning.
In our thematically comprehensive Learning Culture Survey, we investigate the cultural biasing of students’ attitudes, behaviours, and expectations towards education. Particularly in times of massive international migration and growing numbers of refugees, the need to deeply understand cultural aspects of education increases. Only with this understanding can we raise awareness of cultural tolerance across all involved stakeholder groups and thus foster the development of more culture-sensitive educational approaches. In this paper we focus on the most relevant aspect, motivation, and comparatively discuss our study conducted in Germany and South Korea.
With our research, we want to find measures that help preserve learners’ initial motivation in educational programmes. For this purpose, conflict situations must be avoided as far as possible whenever they have the potential to spoil the joy of learning. In our thematically broad Learning Culture Survey, we investigate the presence and influence of culture-specific imprints on learners’ behaviours, habits, and expectations regarding education. Particularly in times of massive international migration and rising numbers of refugees, the need for such research grows steadily. Only if we sufficiently understand the interrelations between learning and culture are we able to foster, at all levels, the development of the necessary awareness of culture-sensitive educational approaches. In this paper, we focus on the highly important aspect of motivation and discuss the results of our comparative study conducted in Germany and South Korea.
With a focus on Technology Enhanced Learning, this paper investigates whether, and to what extent, a culture shift can be expected alongside the adoption of currently emerging Web 3.0 technologies. Instead of just offering new opportunities for the field to improve education, such a culture shift could lead to unexpected general consequences, not just for Technology Enhanced Learning but for the whole educational sector. Understanding the dimension of expectable changes enables us to prevent conflicts and to support culture-related change processes in a targeted way. After an introduction of the Revised Onion Model of Culture, which later serves as the theoretical foundation, expectable changes in the design of learning scenarios are analysed, distinguishing the stakeholder groups “learners” and “educators”. Finally, the identified changes are analysed to determine to what extent a general culture shift is to be expected, in order to understand the transferability and limitations of future research results in the field.
Quality Management in Education: Business Process Modelling in Interdisciplinary Environments
(2015)
The Whole Is More than the Sum of Its Parts - On Culture in Education and Educational Culture
(2015)
The Learning Culture Survey investigates learners’ expectations towards and perceptions of education at an international level, with the aim of making culture in the context of education better understandable and supporting educators in preventing and solving intercultural conflicts in education. So far, we have found that culture-related expectations differ between educational settings, depend on the age of the learners, and that a nationally homogeneous educational culture is the exception rather than the rule. The results of our recently completed longitudinal study provided evidence that educational culture at the institutional level is in fact persistent, at least over a term of four years. After a brief introduction of the general background, we summarize the steps taken during the past seven years and the general insights achieved regarding educational culture. Last, we introduce a method for determining conflict potential, which is based on the understanding of culture as the level to which people within a society accept deviations from the usual. We close by demonstrating the method’s functionality on examples from the Learning Culture Survey.
Managing the needs of learners is crucial in order to support their motivation and keep dropout rates at a low level. With the constantly growing internationalization of classrooms, the variety of context-specific requirements from learners increases; without a profound understanding of the learners’ contexts, successfully maintaining a culture-sensitive and learner-focussed education is impossible. A way to reach this understanding is the open exchange of experiences and knowledge among educators from the different contexts. In this paper, we briefly introduce the two European projects “Open Discovery Space” (ODS) and “Inspiring Science Education” (ISE), which aim to foster the establishment and improvement of Open Educational Practices in the context of school education. The purpose of this paper is to attract and invite potential partners to affiliate with, contribute to, and profit from the projects.
The human MPV17-related mitochondrial DNA depletion syndrome is an inherited autosomal recessive disease caused by mutations in the inner mitochondrial membrane protein MPV17. Although more than 30 MPV17 gene mutations were shown to be associated with mitochondrial DNA depletion syndrome, the function of MPV17 is still unknown. Mice deficient in Mpv17 show signs of premature aging. In the present study, we used electrophysiological measurements with recombinant MPV17 to reveal that this protein forms a non-selective channel with a pore diameter of 1.8 nm and located the channel's selectivity filter. The channel was weakly cation-selective and showed several subconductance states. Voltage-dependent gating of the channel was regulated by redox conditions and pH and was affected also in mutants mimicking a phosphorylated state. Likewise, the mitochondrial membrane potential (Δψm) and the cellular production of reactive oxygen species were higher in embryonic fibroblasts from Mpv17−/− mice. However, despite the elevated Δψm, the Mpv17-deficient mitochondria showed signs of accelerated fission. Together, these observations uncover the role of MPV17 as a Δψm-modulating channel that apparently contributes to mitochondrial homeostasis under different conditions.
It is known that mesenchymal stem cells (MSCs) actively secrete multiple biologically active factors during their process of differentiation, which gives rise to a variety of cytotypes including bone and fat cells. It is also acknowledged that the chemokines secreted throughout MSC differentiation may play an important role in the development and growth of tumor cells, although literature data appear somewhat indeterminate due to the contradictory evidence often found.
Background: Falls and fall-related injuries are a serious public health issue. Exercise programs can effectively reduce fall risk in older people. The iStoppFalls project developed an Information and Communication Technology-based system to deliver an unsupervised exercise program in older people’s homes. The primary aims of the iStoppFalls randomized controlled trial were to assess the feasibility (exercise adherence, acceptability and safety) of the intervention program and its effectiveness on common fall risk factors.
Methods: A total of 153 community-dwelling people aged 65+ years took part in this international, multicentre, randomized controlled trial. Intervention group participants conducted the exercise program for 16 weeks, with a recommended duration of 120 min/week for balance exergames and 60 min/week for strength exercises. All intervention and control participants received educational material including advice on a healthy lifestyle and fall prevention. Assessments included physical and cognitive tests, and questionnaires for health, fear of falling, number of falls, quality of life and psychosocial outcomes.
Results: The median total exercise duration was 11.7 h (IQR = 22.0) over the 16-week intervention period. There were no adverse events. Physiological fall risk (Physiological Profile Assessment, PPA) reduced significantly more in the intervention group compared to the control group (F(1,127) = 4.54, p = 0.035). There was a significant three-way interaction for fall risk assessed by the PPA between the high-adherence (>90 min/week; n = 18, 25.4 %), low-adherence (<90 min/week; n = 53, 74.6 %) and control group (F(2,125) = 3.12, n = 75, p = 0.044). Post hoc analysis revealed a significantly larger effect in favour of the high-adherence group compared to the control group for fall risk (p = 0.031), postural sway (p = 0.046), stepping reaction time (p = 0.041), executive functioning (p = 0.044), and quality of life (p for trend = 0.052).
Conclusions: The iStoppFalls exercise program reduced physiological fall risk in the study sample. Additional subgroup analyses revealed that intervention participants with better adherence also improved in postural sway, stepping reaction, and executive function.
RNA is one of the most important molecules in living organisms. One of its main functions is to regulate gene expression, which involves binding to and forming a joint structure with a messenger RNA. An RNA's function is determined by its sequence and the structure it folds into. Accordingly, the prediction of individual as well as joint structures is an important area of research. In this thesis, a method for the prediction of RNA-RNA joint structure using their minimum free energy (mfe) structures was developed. It is able to extensively explore the joint structural landscape of two interacting RNAs by taking advantage of the locality of changes in the RNA structures as well as natural and energetic constraints. The method predicts the mfe joint structure as well as alternative stable joint structures, while also computing non-optimal folding pathways from the unbound individual mfe structures to the predicted joint structures. It is shown how an enumeration approach is used that is able to deal with the enormous search space and to avoid any cyclic behaviour. The method is evaluated using two standard datasets of known interacting RNAs and shows good results.
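The thesis's actual energy model is not reproduced here; as a hedged, much simpler stand-in, the classic Nussinov base-pair maximization below illustrates the dynamic-programming style that such folding methods build on. The sequence and minimum loop size are illustrative choices, not data from the work above.

```python
# Nussinov-style base-pair maximization for a single RNA sequence:
# dp[i][j] holds the maximum number of nested base pairs in seq[i..j].
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs, enforcing a minimal hairpin loop."""
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):         # try pairing k with j
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

For "GGGAAACCC" the recursion stacks the three G-C pairs around the minimal hairpin loop of three A's.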
The latest advances in the field of smart card technologies allow modern cards to be more than just simple security tokens. Recent developments facilitate the use of interactive components like buttons, displays or even touch sensors within the card's body, thus conquering whole new areas of application. With interactive functionalities, usability becomes the most important aspect in designing secure and widely accepted products. Unfortunately, usability can only be tested fully with completely integrated, hence expensive, smart card prototypes. This severely restricts application-specific research, case studies of new smart card user interfaces, and the optimization of design aspects as well as hardware requirements, by making usability and acceptance tests in smart card development very costly and time-consuming. Rapid development and simulation of smart card interfaces and applications can help to overcome this restriction. This paper presents a rapid development process for new smart card interfaces and applications based on common smartphone technology, using a tool called SCUIDSim. We demonstrate the variety of usability aspects that can be analyzed with such a simulator by discussing selected example projects.
Secure vehicular communication has been discussed for a long time. Now this technology is being implemented in different Intelligent Transportation System (ITS) projects in Europe. In most of these projects, a suitable Public Key Infrastructure (PKI) for secure communication between the entities involved in a Vehicular Ad hoc Network (VANET) is needed. A first proposal for a PKI architecture for Intelligent Vehicular Systems (IVS PKI) is given by the car2car communication consortium. This architecture, however, mainly deals with inter-vehicular communication and is less focused on the needs of Road Side Units. Here, we propose a multi-domain PKI architecture for Intelligent Transportation Systems which considers the present-day necessities of road infrastructure authorities and vehicle manufacturers. The PKI domains are cryptographically linked based on local trust lists. In addition, a crypto-agility concept is suggested, which takes the adaptation of key lengths and cryptographic algorithms during PKI operation into account.
Formal concept analysis (FCA) as introduced in [4] deals with contexts and concepts. Roughly speaking, a context is an environment that is equipped with some kind of "knowledge". Such contexts are also known as information or knowledge representation systems, where the knowledge consists of (intensional) descriptions relating sets of objects to sets of properties. Given extensional and intensional descriptions (the latter in terms of binary attributes), they can be arranged in a taxonomy or concept lattice.
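The derivation operators that pair sets of objects with their shared attributes can be sketched directly. The toy context below is invented for illustration, and the brute-force closure enumeration is only practical for tiny contexts:

```python
from itertools import combinations

def common_attributes(objs, context):
    """The derivation operator ': attributes shared by all objects in objs."""
    if not objs:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*(set(context[o]) for o in objs))

def common_objects(attrs, context):
    """The dual operator: objects possessing every attribute in attrs."""
    return {o for o, s in context.items() if attrs <= set(s)}

def concepts(context):
    """All formal concepts (extent, intent), found by closing every subset
    of objects; exponential, so only suitable for tiny contexts."""
    found = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = common_attributes(set(subset), context)
            extent = common_objects(intent, context)
            found.add((frozenset(extent), frozenset(intent)))
    return found

# Invented toy context: objects mapped to their binary attributes.
CONTEXT = {
    "duck":  {"swims", "flies"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
```

For this context the enumeration yields eight concepts, e.g. ({duck, eagle}, {flies}); ordered by inclusion of extents they form the concept lattice.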
Roughness by Residuals
(2015)
Rough set theory (RST) focuses on forming posets of equivalence relations to describe sets with increasing accuracy. The connection between modal logics and RST is well known and has been extensively studied in their relation algebraic (RA) formalisation. RST has also been interpreted as a variant of intuitionistic or multi-valued logics and has even been studied in the context of logic programming.
Introduction: After cellulose, lignin represents the most abundant biopolymer on earth, accounting for up to 18-35 % by weight of lignocellulosic biomass. Today, it is a by-product of the paper and pulping industry. Although lignin is available in huge amounts, mainly in the form of so-called black liquor produced via Kraft pulping, processes for the valorization of lignin are still limited [1]. Due to its hyperbranched, polyphenol-like structure, lignin has gained increasing interest as a biobased building block for polymer synthesis [2]. The present work focuses on the extraction and purification of lignin from industrial black liquor and the synthesis of lignin-based polyurethanes.
Semantic Image Segmentation Combining Visible and Near-Infrared Channels with Depth Information
(2015)
Image understanding is a vital task in computer vision that has many applications in areas such as robotics, surveillance and the automobile industry. An important precondition for image understanding is semantic image segmentation, i.e. the correct labeling of every image pixel with its corresponding object name or class. This thesis proposes a machine learning approach for semantic image segmentation that uses images from a multi-modal camera rig. It demonstrates that semantic segmentation can be improved by combining different image types as inputs to a convolutional neural network (CNN), when compared to a single-image approach. In this work a multi-channel near-infrared (NIR) image, an RGB image and a depth map are used. The detection of people is further improved by using a skin image that indicates the presence of human skin in the scene and is computed based on NIR information. It is also shown that segmentation accuracy can be enhanced by using a class voting method based on a superpixel pre-segmentation. Models are trained for 10-class, 3-class and binary classification tasks using an original dataset. Compared to the NIR-only approach, average class accuracy is increased by 7% for 10-class, and by 22% for 3-class classification, reaching a total of 48% and 70% accuracy, respectively. The binary classification task, which focuses on the detection of people, achieves a classification accuracy of 95% and true positive rate of 66%. The report at hand describes the proposed approach and the encountered challenges and shows that a CNN can successfully learn and combine features from multi-modal image sets and use them to predict scene labeling.
TinyECC 2.0 is an open source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication using 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, but to a different degree. The different degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. It is extremely critical that in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private key operation is sufficient to simply read out the secret scalar. MESD requires additional adversary capabilities and it affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private key operation. These findings give evidence that in security applications a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement and, beyond that, an addition of appropriate randomizing countermeasures is necessary.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction waves in their bodies to communicate with one another and transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to maze solving and to route planning on a road map.
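The paper's exact update rules are not given here; a hedged, simplified analogue of wave-based maze solving is plain breadth-first wave propagation from the food source, with each reachable cell storing the direction back toward the food — this is an illustration of the idea, not the authors' algorithm.

```python
from collections import deque

def propagate_wave(maze, food):
    """Flood the maze from the food cell; for each reachable free cell,
    store the step that moves one cell closer to the food."""
    rows, cols = len(maze), len(maze[0])
    toward_food = {food: (0, 0)}
    queue = deque([food])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == 0
                    and (nr, nc) not in toward_food):
                toward_food[(nr, nc)] = (-dr, -dc)  # step back toward food
                queue.append((nr, nc))
    return toward_food

def path(maze, start, food):
    """Follow the stored directions from start to food, or None if unreachable."""
    field = propagate_wave(maze, food)
    if start not in field:
        return None
    cells, cur = [start], start
    while cur != food:
        dr, dc = field[cur]
        cur = (cur[0] + dr, cur[1] + dc)
        cells.append(cur)
    return cells

# Tiny maze: 0 = free, 1 = wall; the wave must route around the wall column.
MAZE = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
route = path(MAZE, (0, 0), (0, 2))
```

The wave detours through the bottom row, giving a seven-cell route from start to food.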
In this paper, a set of micro-benchmarks is proposed to determine basic performance parameters of single-node mainstream hardware architectures for High Performance Computing. Performance parameters of recent processors, including those of accelerators, are determined. The investigated systems are Intel server processor architectures as well as the two accelerator lines Intel Xeon Phi and Nvidia graphic processors. Results show similarities for some parameters between all architectures, but significant differences for others.
Persons entering the working range of industrial robots are exposed to a high risk of collision with moving parts of the system, potentially causing severe injuries. Conventional systems that restrict access to this area range from walls and fences to light barriers and other vision-based protective devices (VBPD). None of these systems can distinguish between humans and workpieces in a safe and reliable manner. In this work, a new approach is investigated which uses an active near-infrared (NIR) camera system with advanced skin-detection capabilities to distinguish humans from workpieces based on characteristic spectral signatures. This approach allows for more intelligent muting processes and at the same time increases the safety of persons working close to the robots. The conceptual integration of such a camera system into a VBPD and the enhancement of person detection methods through skin detection are described and evaluated in this paper. Building upon this work, next steps could be the development of multimodal sensor systems to safeguard the working ranges of collaborating robots using the described camera system.
Manufacturers of machinery are increasingly using application programming of safety controls in order to implement safety functions. The EN ISO 13849-1 and EN 62061 standards define requirements concerning the development of software employed for safety functions. The IFA began addressing the subject of safety-related application software many years ago. Between 2011 and 2013, Project FF-FP0319, concerning the standards-compliant development and documentation of safety-related user software in machine construction, was successfully completed at the Bonn-Rhein-Sieg University of Applied Sciences in conjunction with numerous partner bodies from the machine construction sector and with funding from the DGUV. For this purpose, a procedure – the IFA matrix method – was developed, evaluated, and documented with reference to examples from industry, for implementing the requirements concerning the development of software for machine safety functions. This paper provides insights into both the IFA matrix method and the new IFA report on the subject, along with information on what further tools are planned.
The proper use of protective hoods on panel saws should reliably prevent severe injuries from (hand) contact with the blade or from material kickbacks. It should also minimize long-term lung damage from fine-particle pollution. To achieve both purposes, the hood must be adjusted properly by the operator to fit the height of each workpiece. After a work process is finished, the hood must be lowered completely down to the bench. Unfortunately, in practice the protective hood is fixed at a high position for most of the working time and thereby loses its safety function. A system for automatic height adjustment of the hood would increase comfort and safety. If the system can reliably distinguish between workpieces and skin, it will furthermore reduce occupational hazards for panel saw users. A functional demonstrator of such a system has been designed and implemented to show the feasibility of this approach. A specific optical sensor system observes a point on the extended cut axis in front of the blade. The sensor reliably determines the surface material and simultaneously measures the distance to the workpiece surface. If the distance changes because a workpiece is fed to the machine, the control unit sets the motor-adjusted hood to the correct height. If the sensor detects skin, the hood is not moved. In addition, a camera observes the area under the hood. If no workpieces or offcuts are left under the hood, it is lowered back to the default position.
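The decision logic described above can be sketched as a small rule set. The function name, sensor labels, and default height below are illustrative assumptions, not the demonstrator's actual interface:

```python
def hood_target_height(surface, distance_mm, area_clear, default_mm=0):
    """Return the height (mm) the motorised hood should move to,
    or None to hold the current position.
    surface:     'workpiece', 'skin' or 'none', from the optical sensor
    distance_mm: measured distance to the workpiece surface
    area_clear:  camera reports no workpieces or offcuts left under the hood
    """
    if surface == "skin":
        return None            # never move the hood toward detected skin
    if surface == "workpiece":
        return distance_mm     # follow the workpiece height
    if area_clear:
        return default_mm      # lower fully back down to the bench
    return None                # material still under the hood: hold position
```

A real controller would add rate limiting and a safe fallback state; here the rule set only mirrors the abstract's four cases.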
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices (including such fundamental entities as “reference frames”!), let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of all possible motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.
Polyether- and polyether/ester-based TPUs (thermoplastic polyurethanes) were investigated with wide-angle XRD (X-ray diffraction) and SAXS (small-angle X-ray scattering). Furthermore, SAXS measurements were performed in the temperature range of 30 °C to 130 °C. Polyether-based polymers exhibit only one broad diffraction signal in the region of 2θ = 15° to 25°. In the case of polyurethanes with ether/ester modification, the broad diffraction signal appears together with small, sharp diffraction signals. SAXS measurements of the polymers reveal the size and shape of the crystalline zones of the polymer. Between 30 °C and 130 °C the size of the crystalline zones changes significantly. The size decreases in most of the investigated TPUs. In the case of Desmopan 9365D, an increase in particle size was observed.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, like large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that allow for getting started easily. Existing frameworks and libraries that provide support for multi-display rendering are often complex to understand, configure, and extend. This is critical especially in an educational context, where the time that students have for their projects is limited and quite short. These tools are also known and used mainly in research communities, thus providing less benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows – with a small overhead – for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in distributed and multi-display rendering, such as frame, camera, and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors, and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it will provide students with a qualification in high demand.
This paper proposes a new artificial neural network-based maximum power point tracker for photovoltaic applications. The tracker significantly improves the efficiency of photovoltaic systems with series-connected photovoltaic modules under non-uniform irradiance on the photovoltaic array surfaces. The artificial neural network uses irradiance and temperature sensors to generate the maximum power point reference voltage and employs a classical perturb-and-observe search algorithm. The structure of the artificial neural network was obtained by numerical modelling using Matlab/Simulink. The artificial neural network was trained using Bayesian regularisation back-propagation algorithms and demonstrated a good prediction of the maximum power point. The relative number of Vmpp prediction errors in the range of ±0.2 V is 0.05 %, based on validation data.
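The ANN stage is omitted here; as a hedged sketch, the classical perturb-and-observe search that the tracker employs can be shown on a made-up power-voltage curve whose maximum sits at 17 V (an illustrative assumption, not the paper's data):

```python
def p_and_o(power_of, v_start, step=0.1, iters=200):
    """Classical perturb and observe: nudge the operating voltage and keep
    moving in the direction that increased the measured power."""
    v, p = v_start, power_of(v_start)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_of(v_new)
        if p_new < p:
            direction = -direction  # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v

# Toy PV power-voltage curve with its maximum power point at 17.0 V.
curve = lambda v: -(v - 17.0) ** 2 + 100.0
v_mpp = p_and_o(curve, v_start=12.0)
```

The search climbs to the maximum and then oscillates within one step of it, which is the characteristic steady-state behaviour of P&O; in the paper, the ANN-generated reference voltage would supply a good starting point for this search.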
The paper presents a new control strategy for the management of transport companies operating in a competitive transport environment. It aims to optimise the headway of transport companies to provide a balance between the costs and benefits of operation under competition. The model of the transport system, built using AnyLogic, combines agent-based and discrete-event techniques. A model combining two transport companies was investigated under conditions of competition between them. It was demonstrated that the control strategy can ensure a balance of interests between transport companies trying to find a compromise between cost of operation and quality of service.
This book chapter describes application examples of gas chromatography/mass spectrometry and pyrolysis – gas chromatography/mass spectrometry in failure analysis for the identification of chemical materials like mineral oils and nitrile rubber gaskets. Furthermore, failure cases demanding identification of polymers/copolymers in fouling on the compressor wall of a car air conditioner and identification of fouling on the surface of a bearing race from the automotive industry are demonstrated. The obtained analytical results were then used for troubleshooting and remedial action of the technological process.
In the fermentation process, sugars are transformed into lactic acid. pH meters have traditionally been used for fermentation process monitoring based on acidity. More recently, near-infrared (NIR) spectroscopy has proven to provide an accurate and non-invasive method to detect when the transformation of sugars into lactic acid is finished. This research proposes the use of simplified NIR spectroscopy with multispectral optical sensors as a simpler and less expensive means to determine the end of the fermentation process. The NIR spectra of milk and yogurt are compared to find and extract features that can be used to design a simple sensor for monitoring the yogurt fermentation process. Multispectral images in four selected wavebands within the NIR spectrum are captured and show different spectral remission characteristics for milk, yogurt and water, which supports the selection of these wavebands for milk and yogurt classification.
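A hedged sketch of how a few waveband remission values could separate the substances with a nearest-reference-spectrum rule. The four-band remission numbers below are invented placeholders, not measured data, and the rule is an illustration rather than the paper's classifier:

```python
import math

# Hypothetical mean remission per NIR waveband for each substance.
REFERENCES = {
    "milk":   (0.82, 0.78, 0.70, 0.65),
    "yogurt": (0.74, 0.69, 0.60, 0.52),
    "water":  (0.30, 0.22, 0.15, 0.10),
}

def classify(remission):
    """Assign the label whose reference spectrum is closest in Euclidean
    distance to the measured four-band remission vector."""
    return min(REFERENCES, key=lambda label: math.dist(remission, REFERENCES[label]))
```

During fermentation the measured vector would drift from the milk reference toward the yogurt reference, so the classifier flipping to "yogurt" could serve as a crude end-of-fermentation signal.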
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction wave in their body to communicate with each other and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and the food index. We apply the APA to four types of mazes and confirm that the APA can solve them.
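The wave idea — information about a food source propagating cell by cell, each cell remembering the direction it arrived from — can be caricatured with a breadth-first "wave" over a grid maze. This is a simplified sketch of wave-based maze solving in general, not the authors' APA:

```python
from collections import deque

def solve_maze(grid, start, food):
    """Propagate a wave outward from the food source (0 = free cell,
    1 = wall); each reached cell stores its predecessor, i.e. the
    direction back toward the food.  Following those stored directions
    from the start yields a shortest path."""
    rows, cols = len(grid), len(grid[0])
    parent = {food: None}
    queue = deque([food])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = (r, c)
                queue.append(nxt)
    path, cell = [], start
    while cell is not None:         # follow stored directions to the food
        path.append(cell)
        cell = parent.get(cell)
    return path if path[-1] == food else None
```

The APA additionally carries a food index per wave, which would let multiple food sources compete; that refinement is omitted here.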
Binary relations with certain properties, such as biorders, equivalences or difunctional relations, can be represented as particular matrices. To identify these properties, a rearrangement of rows and columns is usually required to reshape the matrix into a recognisable normal form. Most algorithms performing these transformations work on binary matrix representations of the underlying relations. This paper presents an approach that uses an RLE-compressed matrix representation as the data structure for storing relations, in order to test whether they are biorders in a hopefully more efficient way.
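For intuition: a relation is a biorder (Ferrers relation) exactly when its row sets are totally ordered by inclusion, so the matrix can be permuted into a staircase form. A minimal uncompressed test looks as follows — the paper's contribution is to perform this kind of check directly on the RLE-compressed representation instead:

```python
def is_biorder(rows):
    """Test the Ferrers property: every pair of rows must be comparable
    by inclusion.  Sorting rows by size reduces this to checking
    adjacent pairs of the sorted chain.  Plain-set sketch, not the
    RLE-based algorithm of the paper."""
    ordered = sorted((set(r) for r in rows), key=len)
    return all(a <= b for a, b in zip(ordered, ordered[1:]))
```

Rows are given as iterables of related column indices; a failed adjacent check (two same-size but different rows, or non-nested rows) rules out the staircase form.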
Annual Report 2013 - 2014
(2015)
Ultra-fast photopolymerization of experimental composites: DEA and FT-NIRS measurement comparison
(2015)
Simultaneous multifrequency radio observations of the Galactic Centre magnetar SGR J1745-2900
(2015)
Reducing energy consumption is one of the most pursued economic and ecological challenges, concerning societies as a whole, individuals and organizations alike. While policy-makers are starting to take measures for the energy turnaround and smart home energy monitors are becoming popular, few studies have so far touched on sustainability in office environments, even though offices account for almost every second workplace in modern economies. In this paper, we present findings of two parallel studies in an organizational context that used behavioral-change-oriented strategies to raise energy awareness. Next to demonstrating potentials, our work shows that energy feedback must fit the local organizational context to succeed and should consider typical work patterns to foster accountability of consumption.
So far, sustainable HCI has mainly focused on the domestic context, but there is a growing body of work looking at the organizational context. These works, however, still rest on the psychological theories of behaviour change developed for the domestic context. We supplement this view with an organizational-theory-informed approach that adopts organizational roles as a key element. We show how a role-based analysis can be applied to uncover information needs and to give employees eco-feedback that is linked to their tasks at hand. We illustrate the approach with a qualitative case study that was part of a broader, ongoing action research effort conducted at a German production company.
Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment
(2015)
The central theme of the 2014 Annual Report is human thinking.
In an interview, University President Hartmut Ihne and 3Sat moderator Gert Scobel discuss the concept of thought: "Should we be allowed to give up our autonomy voluntarily?"
Our university’s Language Centre Director James Chamberlain examines to what extent thinking varies in different languages.
Professor Paul Plöger from the Department of Computer Science explains why robots have tremendous problems understanding complex relationships in open environments.
Rather than focusing solely on our university’s future, the Annual Report links the fascinating theme to the enormous variety of life, research and tuition offered by H-BRS.
The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. Frequently, there are strong arguments for supporting body-centered self-motion cues as they may improve orientation and spatial judgments, and reduce motion sickness. Yet, how these cues can be introduced while the user is not moving around physically is not well understood. Actuated solutions such as motion platforms can be an option, but they are expensive and difficult to maintain. Alternatively, within this article we focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.
Since being introduced in the sixties and seventies, semi-implicit Rosenbrock–Wanner (ROW) methods have become an important tool for the time integration of ODE and DAE problems. Over the years, these methods have been further developed in order to save computational effort by allowing approximations of the given Jacobian [5], to reduce the effects of order reduction by introducing additional conditions [2, 4], or to exploit the advantages of partially explicit integration by considering underlying Runge-Kutta formulations [1]. As a consequence, a large number of different ROW-type schemes with characteristic properties for solving various problem formulations can be found in the literature today.
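The defining feature of ROW schemes — the Jacobian appearing only in a linear solve, never in a nonlinear iteration — is already visible in the simplest one-stage member of the family. A scalar sketch (linearly implicit Euler, the gamma = 1 case), not any of the specific schemes cited above:

```python
def rosenbrock_step(f, jac, y, h, gamma=1.0):
    """One step of the simplest one-stage ROW scheme: solve the LINEAR
    equation (1 - gamma*h*J) k = f(y), then set y_new = y + h*k.
    Scalar sketch; production codes factorise the matrix (I - gamma*h*J)."""
    k = f(y) / (1.0 - gamma * h * jac(y))
    return y + h * k

# Stiff test problem y' = -50*y: explicit Euler diverges at h = 0.1,
# while the semi-implicit ROW step decays like the exact solution.
y, h = 1.0, 0.1
for _ in range(10):
    y = rosenbrock_step(lambda v: -50.0 * v, lambda v: -50.0, y, h)
```

Higher-order ROW schemes add further stages with coefficient tables, and the developments cited in the abstract relax how exactly `jac` must match the true Jacobian.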
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. We then show that our optimizations increase the performance of RHF by two orders of magnitude compared to the original CPU implementation and by one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times relates to unfiltered path tracing, and how much time is needed to achieve identical quality compared to an unfiltered path-traced result. Finally, we summarize our work and describe possible future applications and research building on it.
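The core idea RHF builds on — averaging a pixel only with neighbours whose per-pixel histograms of sample radiances are statistically similar — can be sketched in a few lines. This toy version uses a chi-square distance and single pixels; the actual algorithm operates on multi-scale patches and is not reproduced here:

```python
def chi2_distance(h1, h2):
    """Chi-square distance between two per-pixel sample histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def fuse(pixel_hist, neighbour_hists, threshold):
    """Average a pixel's histogram with those neighbours whose
    histograms lie within the distance threshold -- dissimilar
    neighbours (e.g. across an edge) are left out, which is what
    preserves detail while removing noise."""
    similar = [h for h in neighbour_hists
               if chi2_distance(pixel_hist, h) < threshold]
    similar.append(pixel_hist)
    n = len(similar)
    return [sum(vals) / n for vals in zip(*similar)]
```

On a GPU, the per-neighbour distance computations are independent, which is what makes the order-of-magnitude speedups reported above attainable.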
This presentation gives an overview of current research in the area of high-quality rendering and visualization at the Institute of Visual Computing (IVC). Our research facility has some unique software and hardware installations, of which we will describe a large, ultra-high-resolution (72 megapixel) video wall in this presentation.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
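Structurally, such a hybrid tree just needs each node to carry either a voxel approximation or triangle references, with the traversal deciding which to use. A hypothetical structural sketch of this idea, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    """Hybrid node: a coarse voxel approximation for distant geometry,
    triangle references for full-fidelity near geometry."""
    voxel: bool = False                              # filled-voxel LOD
    triangles: list = field(default_factory=list)    # triangle indices
    children: list = field(default_factory=lambda: [None] * 8)

def intersect(node, depth, lod_cutoff):
    """Pick the representation by traversal depth: past the LOD cutoff
    a filled voxel counts as a hit, otherwise descend to the triangles."""
    if node is None:
        return None
    if node.voxel and depth >= lod_cutoff:
        return "voxel-hit"
    if node.triangles:
        return "triangle-test"
    for child in node.children:
        hit = intersect(child, depth + 1, lod_cutoff)
        if hit:
            return hit
    return None
```

A real ray tracer would order children along the ray and return actual intersection records; the point here is only that one tree serves both representations.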
A recent trend in interactive environments is large, ultra-high-resolution displays (LUHRDs). Compared to other large interactive installations, such as the CAVE™, LUHRDs are usually flat or slightly curved and have a significantly higher resolution, offering new research and application opportunities.
This tutorial provides information for researchers and engineers who plan to install and use a large ultra-high resolution display. We will give detailed information on the hardware and software of recently created and established installations and will show the variety of possible approaches. Also, we will talk about rendering software, rendering techniques and interaction for LUHRDs, as well as applications.