Cost-efficient energy monitoring in existing large buildings demands autonomous indoor sensors with low power consumption, high performance in multipath fading channels and economic implementation. Good performance in multipath fading channels can be achieved with noncoherent chaotic modulation schemes such as chaos on-off keying (COOK) or differential chaos shift keying (DCSK). While COOK stands out in terms of power consumption, DCSK excels in noisy and multipath fading channels. This paper evaluates a combination of both schemes for autonomous indoor sensors. The simulation results show 50% less power consumption than DCSK and more than 3 dB of SNR gain in Rayleigh fading channels at BER = 10^-3 compared to COOK, making the scheme a promising candidate for low-power transmission in autonomous wireless indoor sensors. We further present an enhanced version of this scheme that yields another 1 dB of SNR improvement while still consuming 25% less power than DCSK.
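The noncoherent detection idea behind DCSK can be illustrated with a minimal sketch (not the paper's implementation; the chaotic map, spreading factor and parameters are illustrative assumptions): each bit is transmitted as a chaotic reference chunk followed by the same chunk (bit 1) or its inverse (bit 0), and the receiver decides by correlating the two halves, so no synchronized chaos replica is needed at the receiver.

```python
import random

def logistic_map(n, x0=0.3, r=3.99):
    """Generate n chaotic samples from the logistic map, centered to zero mean."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x - 0.5)
    return xs

def dcsk_modulate(bits, spread=32):
    """DCSK: each bit is a reference chunk followed by the same chunk
    (bit 1) or its negation (bit 0)."""
    signal = []
    for b in bits:
        ref = logistic_map(spread, x0=random.random() * 0.8 + 0.1)
        data = ref if b else [-s for s in ref]
        signal.extend(ref + data)
    return signal

def dcsk_demodulate(signal, spread=32):
    """Noncoherent detection: correlate each data chunk with its reference
    and decide the bit from the sign of the correlation."""
    bits, frame = [], 2 * spread
    for i in range(0, len(signal), frame):
        ref = signal[i:i + spread]
        data = signal[i + spread:i + frame]
        corr = sum(r * d for r, d in zip(ref, data))
        bits.append(1 if corr > 0 else 0)
    return bits
```

In a noiseless channel the correlator recovers the bit stream exactly; the paper's contribution concerns how this scheme and COOK trade off energy per bit against noise robustness, which the sketch does not model.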
Open Discovery Space
(2013)
Embodied artificial agents operating in dynamic, real-world environments need architectures that support their special requirements. Such architectures are not always designed from scratch and implemented all at once; rather, components are often integrated step by step to increase functionality. Our work aims to increase flexibility and robustness by integrating a task planner into an existing architecture and coupling the planning process with the preexisting execution and basic monitoring processes. This involved converting monolithic SMACH scenario scripts (state-machine execution scripts) into modular states that can be called dynamically based on the plan generated by the planning process. The procedural knowledge encoded in such state machines was used to model the planning domain for two RoboCup@Home scenarios on a Care-O-Bot 3 robot [GRH+08], using the JSHOP2 [IN03] hierarchical task network (HTN) planner. A component that iterates through a generated plan and calls the appropriate SMACH states [Fie11] was implemented, thus enabling the scenarios. Crucially, individual monitoring actions that let the robot monitor the execution of its actions were designed and included, providing additional robustness.
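The plan-execution loop described above can be sketched as follows (a hedged illustration, not the authors' code; the operator names, state functions and monitor signature are hypothetical): each HTN plan step names an operator that is dispatched to a modular state, and a monitoring hook inspects the outcome before the next step runs.

```python
# Hypothetical modular states standing in for the converted SMACH states.
def move_to(location):
    print(f"moving to {location}")
    return "succeeded"

def grasp(obj):
    print(f"grasping {obj}")
    return "succeeded"

# Maps a plan operator name to its modular state callable.
STATES = {"move_to": move_to, "grasp": grasp}

def execute_plan(plan, monitor=None):
    """Iterate through a generated plan, dispatch each step to its state,
    and abort if the monitoring hook rejects an outcome."""
    for operator, *args in plan:
        outcome = STATES[operator](*args)
        if monitor is not None and not monitor(operator, outcome):
            return "aborted"          # monitoring detected a problem
        if outcome != "succeeded":
            return "failed"           # the state itself reported failure
    return "succeeded"
```

For example, `execute_plan([("move_to", "kitchen"), ("grasp", "cup")])` runs both steps and returns `"succeeded"`; supplying a monitor that rejects an outcome aborts the plan, mirroring the coupling of execution and monitoring described in the abstract.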
We developed a scene text recognition system with active vision capabilities, namely auto-focus, adaptive aperture control and auto-zoom. Our localization system delimits text regions in images with complex backgrounds and is based on an attentional cascade, asymmetric AdaBoost, decision trees and Gaussian mixture models. We think that text could become a valuable source of semantic information for robots, and we aim to raise interest in it within the robotics community. Moreover, thanks to the robot's pan-tilt-zoom camera and the active vision behaviors, the robot can use its affordances to overcome hindrances to the perceptual task. Detrimental conditions such as poor illumination, blur and low resolution are very hard to deal with once an image has been captured, but can often be prevented beforehand. We evaluated the localization algorithm on a public dataset and on one of our own, with encouraging results. Furthermore, we offer an interesting experiment in active vision, which suggests that active sensing in general should be addressed early on when tackling complex perceptual problems in embodied agents.
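The attentional-cascade principle used for localization can be sketched briefly (a hedged illustration, not the paper's classifier; the stage tests and feature names are invented for the example): cheap stages reject most non-text regions early, so the expensive classifiers only ever see a small number of surviving candidates.

```python
def cascade_classify(region, stages):
    """Return True only if every stage accepts the region; reject as soon
    as any stage fails, so cheap stages filter out most candidates early."""
    for stage in stages:
        if not stage(region):
            return False
    return True

# Hypothetical stages, ordered cheap-to-expensive, operating on a dict
# of precomputed region features.
stages = [
    lambda r: r["edge_density"] > 0.2,   # cheap gradient test
    lambda r: 8 <= r["height"] <= 200,   # plausible character height
    lambda r: r["stroke_var"] < 0.5,     # stroke-width consistency
]
```

In the paper this role is played by asymmetric AdaBoost stages (biased to keep false negatives low, since a region rejected early is lost for good), followed by decision trees and Gaussian mixture models on the survivors.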
Using an Embroidery Machine to Achieve a Deeper Understanding of Electromechanical Applications
(2013)
The aim of our research is to preserve learners' initial motivation in educational settings by avoiding unnecessary conflicts that could diminish their joy of learning. To better understand the culture-related factors that could jeopardize learners' motivation in international e-learning scenarios, we developed the standardized questionnaire 'Learning Culture' and implemented it exemplarily in the higher-education contexts of Germany and South Korea. Regarding motivation, we analysed how students evaluated their own motivational predispositions towards external influences, their purpose of learning and affinity towards particular knowledge, and their strategies for dealing with educational tasks that appear unmanageable or too difficult for them.
One idea behind Open Educational Resources (OERs) is to open up access to learning resources for stakeholders who were not the originally targeted users. Even though making educational resources available to the public is already a remarkable achievement, their usefulness is often limited to a very particular context because their appropriateness for other contexts is unclear or unestablished. In this paper, contextual appropriateness is investigated as a special quality criterion for OERs. We introduce barriers against the use of OERs and demands from the educational community that need to be addressed in order to overcome these barriers. We show that the quality standards implemented so far for Technology Enhanced Learning do not yet fully support such particular demands, and we discuss which additional steps are required in the context of OERs. We conclude with an outlook and recommendations that can help open up the full potential of OERs.