Departments, institutes and facilities
- Fachbereich Informatik (77)
- Fachbereich Angewandte Naturwissenschaften (20)
- Fachbereich Ingenieurwissenschaften und Kommunikation (17)
- Fachbereich Wirtschaftswissenschaften (14)
- Institute of Visual Computing (IVC) (14)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (11)
- Institut für Verbraucherinformatik (IVI) (7)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (6)
- Institut für Sicherheitsforschung (ISF) (5)
- Fachbereich Sozialpolitik und Soziale Sicherung (4)
Document Type
- Report (201)
Keywords
- Robotik (5)
- Engineering (4)
- Cutting sticks-Problem (3)
- Teilsummenaufteilung (3)
- Virtuelle Realität (3)
- AAL-Technik (2)
- Benutzeroberfläche (2)
- Deep Learning (2)
- Forschungsbericht (2)
- Gravitation (2)
- Interaktion (2)
- Kosovo (2)
- Machine Learning (2)
- Mengenpartitionsproblem (2)
- Mensch-Maschine-Kommunikation (2)
- Perceptual Upright (2)
- Raumwahrnehmung (2)
- Verkehrssimulation (2)
- maschinelles Lernen (2)
- virtuelle Umgebungen (2)
- 3D Segmentation (1)
- 3D-Scanner (1)
- 802.11 (1)
- AOD (1)
- Adaptation (1)
- Adaptive Behavior (1)
- Adaptive Case Management (1)
- Agents (1)
- Algorithmische Informationstheorie (1)
- Allgegenwärtige Spiele (1)
- Altengerechte Technik (1)
- Altenpflege (1)
- Analyse (1)
- Analytische Chemie (1)
- Apprenticeship Learning (1)
- Arbeitsmigration (1)
- Architektur <Informatik> (1)
- Assistenzsystem (1)
- Aufrecht (1)
- Ausbildungspartnerschaft (1)
- Benchmark (1)
- Berufsausbildung (1)
- Bildverarbeitung (1)
- Biochemische Analyse (1)
- Bioinformatics (1)
- Biokompatibilität (1)
- Biokunststoff (1)
- Bioökonomie (1)
- Blasformen (1)
- Blockchain (1)
- Bodengängige Arbeitsmaschine für 6000 m Meerestiefe (1)
- Bubble-Chart (1)
- COD (1)
- Calibration (1)
- Centrifuge (1)
- Chaitin-Konstante (1)
- Chromatographische Analyse, Elektrophorese (1)
- Client-server-Konzept (1)
- Cloud Computing (1)
- Codierung (1)
- Collaboration/Cooperation (1)
- Comparative Analysis (1)
- Computer science (1)
- Computersicherheit (1)
- Concurrent Kleene Algebra (1)
- Context Metadata (1)
- Convexity (1)
- Created Gravity (1)
- Cryptography (1)
- Curriculum (1)
- CyberGlove (1)
- DCF (1)
- DNSSEC (1)
- Declarative Process Modeling (1)
- Demenz (1)
- Desinfektion (1)
- Diabetes mellitus (1)
- Disco (1)
- Distribution grid management (1)
- Domain-Specific Language (1)
- Domestic Robots (1)
- Duroplast (1)
- Dynamic Case Management (1)
- Educational Science (1)
- Eingebettetes System (1)
- Elektrische Simulation (1)
- Elektrohydraulische Fahr- und Lenkantriebe für die Tiefsee (1)
- Emotion (1)
- EnOcean (1)
- Energiemeteorologie (1)
- Erweiterte Realität (1)
- Erzeugungsprognose (1)
- Explosivstoff (1)
- FIVIS (1)
- FPGA (1)
- FS20 (1)
- Fahrradfahrsimulator (1)
- Fahrsimulator (1)
- Five Factor Model (1)
- Flüssigkristalline Polymere (1)
- Fuzzy Miner (1)
- Fährverkehr (1)
- GDDL (1)
- Gabor filters (1)
- Gasanalyse (1)
- Gassensor (1)
- Gefahrenprävention (1)
- Gefühl (1)
- Gestaltungsorientierte Wirtschaftsinformatik (1)
- Graph Convolutional Neural Networks (1)
- Grasp Domain Definition Language (1)
- Grasp Planner (1)
- Grasping (1)
- Handzeichenerkennung (1)
- Healthcare logistics (1)
- Hochleistungssport (1)
- Hochschule (1)
- Hochschule Bonn-Rhein-Sieg (1)
- HomeMatic (1)
- Human-Centered Robotics (1)
- Human-Computer-Interaction (HCI) (1)
- Humanoider Roboter (1)
- ICF (1)
- ISO9999 (1)
- Image Classification (1)
- Individualisierte Medizin (1)
- Inductive Visual Miner (1)
- Informationsgewinnung (1)
- Informationsverarbeitung (1)
- Infrarotmikroskopie (1)
- Inhaltsanalyse (1)
- Innovation (1)
- Instantaneous assignment (1)
- Inversion (1)
- Ionenbeweglichkeitsspektroskopie (1)
- KNX (1)
- Knochenersatz (1)
- Knowledge Graphs (1)
- Knowledge Worker (1)
- Knowledge-intensive Process (1)
- Kollaboration/Kooperation (1)
- Kolmogorov-Komplexität (1)
- Kompression und Zufälligkeit von Zeichenketten (1)
- Kultur (1)
- Kunststoffe (1)
- Künstliche Gravitation (1)
- Künstliche Intelligenz (1)
- LBP (1)
- LDP (1)
- LSTM (1)
- Laws of programming (1)
- Learning Context (1)
- Learning and Adaptive Systems (1)
- Lehrbeauftragter (1)
- Lehren (1)
- Leistungsdiagnostik (1)
- Leistungssport (1)
- Literaturstudie (1)
- Localization (1)
- Long-Term Autonomy (1)
- Luftfracht (1)
- METEOR score (1)
- Materials science (1)
- Mathematisch-naturwissenschaftlicher Unterricht (1)
- Mathematisches Modell (1)
- Mediendienste (1)
- Medienwirkungsforschung (1)
- Mengenpartitionierungsproblem (1)
- Mensch-Computer-Interaktion (1)
- Method of lines (1)
- Methodik (1)
- Migranten (1)
- Migration (1)
- Mikrogravitation (1)
- Mixed-Reality (MR) (1)
- Mobiler Roboter (1)
- Multi-robot systems (1)
- Naive physics (1)
- Natural Language Processing (1)
- OER (1)
- Object Segmentation (1)
- Open Educational Practices (1)
- Open source software (1)
- Orientierung (1)
- Out Of Distribution (OOD) data (1)
- Outer Space Research (1)
- Part Segmentation (1)
- Peer methods (1)
- Perception (1)
- Personality (1)
- Pervasive Gaming (1)
- Pflegeinformatik (1)
- Photovoltaik (1)
- Plagiat (1)
- Point Cloud Segmentation (1)
- Point Clouds (1)
- Polymerwerkstoffe (1)
- ProM (1)
- Probenahme (1)
- Process Automation (1)
- Process Mining (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Prozessautomation (1)
- Prozessmanagement (1)
- Qualitative reasoning (1)
- Quality (1)
- RGB-D (1)
- ROPOD (1)
- Radfahren (1)
- Raman-Spektroskopie (1)
- RapidMiner (1)
- Raumfahrt (1)
- Refinement (1)
- Reflektanz (1)
- Regionalentwicklung (1)
- Regionalwirtschaft (1)
- Robotic faults (1)
- Roleplaying Game (RPG) (1)
- Rollenspiel (1)
- Rollenspiele (1)
- SISAL (1)
- SPICE (1)
- Satellitenprodukte (1)
- Scene understanding through Deep Learning (1)
- Segmentation (1)
- Semantic Segmentation (1)
- Semantic models (1)
- Sensorik (1)
- Serviceroboter (1)
- Shallow water equations (1)
- Si reference cells (1)
- Si-Referenzzellen (1)
- Sicherheitsmaßnahme (1)
- Software (1)
- Spatio-Temporal (1)
- Spectral Analysis (1)
- Spectral Clustering (1)
- Spektraler Einfluss (1)
- Spielanalyse (1)
- Sprengstoffspürhund (1)
- Strahlung (1)
- Strahlungsvariabilität (1)
- Studienverlauf (1)
- Studium (1)
- Synergetik (1)
- Task allocation (1)
- Technik in Beziehung zu anderen Gebieten (1)
- Temporal constraints (1)
- Tiefsee-Simulator für 60 MPa Druck (1)
- Time extended assignment (1)
- Trace algebra (1)
- Trainingssteuerung (1)
- Transformers (1)
- Uncertainty Estimation (1)
- Unifying theories (1)
- Verkehrserziehung (1)
- Verkehrsnetz (1)
- Verkehrsnetzwerke (1)
- Verteilnetzbetriebsführung (1)
- Virtual Reality (1)
- WENO-schemes (1)
- Wahrnehmung (1)
- Warencode (1)
- Wasserverteilung (1)
- Weltraumforschung (1)
- Wettkampfanalyse (1)
- Wolkenparameter (1)
- ZWave (1)
- Zahnfüllung (1)
- Zentrifuge (1)
- ZigBee (1)
- adaptive user interfaces (1)
- assistive robots (1)
- automatic music generation (1)
- automatisierte Netzwerkgenerierung (1)
- binary classification (1)
- building automation (1)
- camera (1)
- cloud parameters (1)
- computer vision (1)
- constraint relaxation (1)
- control (1)
- control architectures (1)
- convex optimization (1)
- data glove (1)
- database (1)
- dynamics (1)
- energy (1)
- energy meteorology (1)
- energy saving (1)
- external faults (1)
- facial expression recognition (1)
- fiducial marker (1)
- generation forecast (1)
- grasp motions (1)
- grasping (1)
- hybrid dynamics solver (1)
- hybrid system (1)
- image captioning (1)
- immersive Visualisierung (1)
- infrared pattern (1)
- intelligente Agenten (1)
- intelligente virtuelle Agenten (1)
- interaction (1)
- inversion (1)
- long-distance modeling (1)
- machine learning for user modeling (1)
- metadata (1)
- migration (1)
- mobile manipulators (1)
- mobility assistance system (1)
- motion capture (1)
- music analysis (1)
- numerical weather prediction (1)
- numerische Wettervorhersage (1)
- optical character recognition (1)
- optical tracking (1)
- photovoltaics (1)
- prehensile motions (1)
- radiation (1)
- radiation variability (1)
- recommender systems (1)
- reflectance (1)
- representation learning (1)
- road (1)
- robot control (1)
- robot dynamics (1)
- robotic arm (1)
- robotic evaluation (1)
- robotics (1)
- satellite products (1)
- scene-segmentation (1)
- scenes (1)
- security (1)
- spectral influence (1)
- static friction (1)
- task models (1)
- taxonomie (1)
- technology mapping (1)
- text detection (1)
- text localization (1)
- traffic sign detection (1)
- traffic sign localization (1)
- user input (1)
- user interaction (1)
- vocational education (1)
- workflow management (1)
- Öffentlichkeit (1)
Self-propelled work machines for operation on the deep-sea floor at depths of, for example, 6000 m do not yet exist. The present investigations focus on the problem of propulsion and steering, with the aim of laying the groundwork for further development of an electro-hydraulic drive and steering system. To this end, a test unit is operated in a deep-sea simulator at an ambient pressure of 60 MPa (600 bar). The findings gained in the process are transferable to the further design development of such drives.
This thesis deals with the numerical treatment of differential-algebraic equations (DAEs). DAEs arise, for example, in modeling the dynamics of mechanical systems, in circuit simulation, and in chemical reaction kinetics. Rosenbrock-Wanner-type methods for their solution are derived and tested on technical models (a vehicle axle and an amplifier).
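For readers unfamiliar with this method class, the following sketch (notation assumed here, not taken from the thesis) shows the simplest Rosenbrock-type step, the linearly implicit Euler method, applied to a DAE written in mass-matrix form with a singular mass matrix:

```latex
% Semi-explicit DAE written with a singular mass matrix:
%   y' = f(y, z),  0 = g(y, z)   <=>   M u' = F(u),  u = (y, z),  M = diag(I, 0)
\[
  \left( M - h\,J \right)\,(u_{n+1} - u_n) = h\,F(u_n),
  \qquad J = \frac{\partial F}{\partial u}(u_n).
\]
% One linear solve per step replaces the Newton iteration of the implicit Euler
% method; higher-order Rosenbrock-Wanner schemes add further stages of the same form.
```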
SISAL: User manual
(1990)
Zur Perzentilberechnung
(1990)
Chloride in Mosel und Saar
(1992)
Technology Transfer in Developing Countries: Computer Integrated Manufacturing (CIM) in China
(1994)
Szenariogestützte Endauswahl
(1996)
CASTLE is a co-design platform developed at the GMD SET institute. It provides a number of design tools for configuring application-specific design flows. This paper presents a walk through the CASTLE co-design environment, following the design flow of a video processing system. The design methodology and the tool usage for this real-life example are described from a designer's point of view. The design flow starts with a C/C++ program and gradually derives a register-transfer level description of a processor hardware, as well as the corresponding compiler for generating the processor opcode. The main results of each design step are presented and the usage of the CASTLE tools at each step is explained.
Workflow management systems (WFMS) play a central role in the process-oriented redesign of business activities. However, a still inconsistent understanding of the basic architecture and tasks of a WFMS complicates the design of WFMS-supported business processes. In this situation, WFMS reference architectures can prove very helpful, as they bring order to the heterogeneous landscape of process-oriented concepts, architectures, and systems. Starting from two well-known WFMS reference architectures and a delineation of the functions of a WFMS, this contribution presents a general WFMS framework architecture based on a client/server model of workflow computing. To demonstrate its practical relevance, selected components of concrete WFMS are mapped onto the framework architecture.
Mobile-Commerce-Studie
(2000)
This report has been prepared by the SETAC Europe Scientific Task Group on Global And RegionaL Impact Categories (SETAC-Europe/STG-GARLIC) that is installed by the 2nd SETAC Europe working group on life cycle impact assessment (WIA-2). This document is background to a chapter written by the same authors under the title “Climate change, stratospheric ozone depletion, photo-oxidant formation, acidification and eutrophication” in Udo de Haes et al. (2002). The chapter summarises the work of the STG-GARLIC and aims to give a state-of-the-art review of the best available practice(s) regarding category indicators and lists of concomitant characterisation factors for climate change, stratospheric ozone depletion, photo-oxidant formation, acidification, and aquatic and terrestrial eutrophication. Backgrounds on each of the specific impact categories are given in another background report from Klöpffer and Potting (2001).
This background report provides details on a selection of general issues relevant to LCA and to the characterisation of impacts in LCA. The document starts with a short introduction to the LCA methodology and to impact assessment in LCA for non-LCA experts. LCA experts, on the other hand, will usually not be familiar in depth with the scientific and political backgrounds of the specific impact categories. A review of these is given. The report also discusses the position of the category indicator in the causality chain and the related issue of spatial differentiation. These two issues were among the core items for SETAC-Europe/STG-GARLIC.
Mit kühlem Kopf studieren
(2002)
The execution of large construction projects is usually awarded to a general contractor. The buildings are often erected under high time and cost pressure. Energy-conscious planning seems to complicate everything further, especially when diverse uses and the corresponding structural requirements have to be reconciled anyway.
The problem of filtering relevant information from the huge amount of available data is tackled by using models of the user's interest in order to discriminate interesting information from uninteresting data. As a consequence, Machine Learning for User Modeling (ML4UM) has become a key technique in recent adaptive systems. This article presents the novel approach of conceptual user models, which are easy to understand and which allow the system to explain its actions to the user. We show that ILP can be applied to the task of inducing user models from even sparse feedback by mutual sample enlargement. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole concept presented is realized in a meta web search engine, OySTER.
To estimate the exposure of surface waters to plant protection products, PEC values are determined using a probabilistic procedure. For this purpose, several regression analyses are first carried out to model spray drift. The selected drift distribution is then combined with different distribution approaches for the application rate and the water body volume.
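As a rough illustration of the probabilistic idea only (not the study's actual procedure; the distributions, parameter values, and the simple dilution formula below are placeholders), a Monte Carlo combination of a drift distribution with distributions for application rate and water depth could look like this:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo draws

# Placeholder distributions -- a real assessment would fit these to drift trials etc.
drift_fraction = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=n)   # fraction of dose drifting
application_rate = rng.normal(loc=200.0, scale=20.0, size=n)           # applied dose in g/ha
water_depth_m = rng.uniform(0.3, 1.0, size=n)                          # depth of receiving water body

# Deposit per area, then dilution over the water column:
deposit_g_per_m2 = application_rate * drift_fraction / 1e4             # g/ha -> g/m^2
pec_ug_per_l = deposit_g_per_m2 * 1000.0 / water_depth_m               # mg/m^2 / m = mg/m^3 = ug/L

# Report percentiles of the resulting PEC distribution
print({p: round(float(np.percentile(pec_ug_per_l, p)), 2) for p in (50, 90, 95)})
```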
New technological developments increasingly rest on a growing degree of mathematization, particularly in the engineering sciences. It has been observed, and not only since PISA, that the reliable mathematical foundations of many first-year students have declined in recent years. This contribution describes the resulting tension in which engineering mathematics finds itself, from the perspective of lecturers at universities of applied sciences. Starting from the educational goals of engineering mathematics, requirements for school mathematics are derived. These requirements are made concrete, by way of example, for the introduction of and work with the mathematical objects numbers, terms, equations, and functions. The aim is to sensitize mathematics teachers so that their school graduates are better equipped for future engineering studies.
Machine Learning seems to offer the solution to the central problem in recommender systems: learning to recommend interesting items from observations. However, one tends to run into similar problems each time one tries to apply out-of-the-box solutions from Machine Learning. This article relates the problem of recommendation by user modeling closely to the machine learning problem and explicates some inherent dilemmas. A few examples illustrate specific approaches and discuss underlying assumptions on the domain or how learned hypotheses relate to requirements on the user model. The article concludes with a tentative 'checklist' that one might like to consider when thinking about using Machine Learning in user-adaptive environments such as recommender systems.
Data management is a challenge in both scientific and technical environments. Therefore, researchers have developed a special interest in this field. Modern approaches (e.g. Subversion, CVS) already offer authoring and versioning in distributed systems. However, this might be insufficient in a vast number of scenarios, where not only the data resulting from a process, but also data which describes the process that generated those results, is crucial.
The aim of this study is a comprehensive survey of development-related research, teaching, and research-based consulting in North Rhine-Westphalia. Such a survey of scientific activities in the development field is significant insofar as developing countries and their (future) elites are of growing importance for the state of NRW as an actor in development policy and foreign trade. Knowing the trends, topics, cooperation partners, and networks of academia here and in developing countries can be helpful. In addition, conclusions can be drawn for science policy that may prove valuable for the internationalization of North Rhine-Westphalian universities and for innovative forms of cross-cutting networking between politics, academia, business, and society.
This thesis introduces and demonstrates a novel method for learning qualitative models of the world by an autonomous robot. The method makes possible the generation of qualitative models that can be used for prediction as well as for directing the experiments to improve the model. The qualitative models form the knowledge representation of the robot and consist of qualitative trees and a non-deterministic finite automaton. An efficient exploration algorithm that lets the robot collect the most relevant learning samples is also introduced. To demonstrate the use of the methodology, representation and algorithm, two experiments are described. The first experiment is conducted using a mobile robot and a ball, where the robot observes the ball and learns the effect of its actions on the observed attributes of the world. The second experiment is conducted using a mobile robot and five boxes, two non-movable boxes and three movable boxes. The robot experiments actively with the objects and observes the changes in the attributes of the world. The main difference between the two experiments is that the first one tries to learn by observation while the second tries to learn by experimentation. In both experiments the robot learns qualitative models from its actions and observations. Although the primary objective of the robot is to improve itself by being able to predict the outcome of its actions, the models learned were also used at each step of the learning process to direct the experiments so that the model converges to the final model as quickly as possible.
XPERSIF: a software integration framework & architecture for robotic learning by experimentation
(2008)
The integration of independently-developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independently of the integration process. The integration of individual applications into a fully-functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts. One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, thus allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.
„from stable to table“
(2008)
Publikation Umweltdaten
(2009)
Autonomous mobile robots need internal environment representations or models of their environment in order to act in a goal-directed manner, plan actions and navigate effectively. Especially in those situations where a robot can not be provided with a manually constructed model or in environments that change over time, the robot needs to possess the ability of autonomously constructing models and maintaining these models on its own. To construct a model of an environment multiple sensor readings have to be acquired and integrated into a single representation. Where the robot has to take these sensor readings is determined by an exploration strategy. The strategy allows the robot to sense all environmental structures and to construct a complete model of its workspace. Given a complete environment model, the task of inspection is to guide the robot to all modeled environmental structures in order to detect changes and to update the model if necessary. Informally stated, exploration and inspection provide the means for acquiring as much information as possible by the robot itself. Both exploration and inspection are highly integrated problems. In addition to the according strategies, they require for several abilities of a robotic system and comprise various problems from the field of mobile robotics including Simultaneous localization and Mapping (SLAM), motion planning and control as well as reliable collision avoidance. The goal of this thesis is to develop and implement a complete system and a set of algorithms for robotic exploration and inspection. That is, instead of focussing on specific strategies, robotic exploration and inspection are addressed as the integrated problems that they are. Given the set of algorithms a real mobile service robot has to be able to autonomously explore its workspace, construct a model of its workspace and use this model in subsequent tasks e.g. for navigating in the workspace or inspecting the workspace itself. The algorithms need to be reliable, robust against environment dynamics and internal failures and applicable online in real-time on a real mobile robot. The resulting system should allow a mobile service robot to navigate effectively and reliably in a domestic environment and avoid all kinds of collisions. In the context of mobile robotics, domestic environments combine the characteristics of being cluttered, dynamic and populated by humans and domestic animals. SLAM is going to be addressed in terms of incremental range image registration which provides efficient means to construct internal environment representations online while moving through the environment. Two registration algorithms are presented that can be applied on two-dimensional and three-dimensional data together with several extensions and an incremental registration procedure. The algorithms are used to construct two different types of environment representations, memory-efficient sparse points and probabilistic reflection maps. For effective navigation in the robot’s workspace, different path planning algorithms are going to be presented for the two types of environment representations. Furthermore, two motion controllers will be described that allow a mobile robot to follow planned paths and to approach a target position and orientation. 
Finally this thesis will present different exploration and inspection strategies that use the aforementioned algorithms to move the robot to previously unexplored or uninspected terrain and update the internal environment representations accordingly. These strategies are augmented with algorithms for detecting changes in the environment and for segmenting internal models into individual rooms. The resulting system performed very successfully in the 2008 and 2009 RoboCup@Home competitions.
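As background for the registration step described above (a generic sketch, not the thesis's own algorithms), one rigid alignment of two already-corresponded point sets can be computed in closed form via SVD; an ICP-style registration repeats this after re-estimating correspondences:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i]
    (Kabsch/Umeyama solution for corresponded 3D point sets)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# In an ICP-style loop, correspondences would be re-estimated (e.g. by nearest
# neighbours) and rigid_align applied repeatedly until the alignment converges.
```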
For many practical problems an efficient solution of the one-dimensional shallow water equations (Saint-Venant equations) is important, especially when large networks of rivers, channels or pipes are considered. In order to test and develop numerical methods, four test problems are formulated. These tests include the well-known dam break and hydraulic jump problems and two steady-state problems with varying channel bottom, channel width and friction.
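For reference, the 1D shallow water equations in the common conservative form for a rectangular channel of constant width read (notation assumed here, not quoted from the report):

```latex
\[
  \partial_t h + \partial_x (hu) = 0, \qquad
  \partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right)
    = -\,g\,h\,\partial_x b \;-\; g\,h\,S_f ,
\]
% h: water depth, u: velocity, b: bottom elevation, S_f: friction slope.
% Varying channel width adds further source terms; the dam-break and
% hydraulic-jump tests produce discontinuous solutions.
```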
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach for an application-specific image-processing problem it provides reliable results to measure gained performance and precision compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. For this problem a number of different solutions exist, but most of these are realized on GPP platforms, where resolution and processing speed define the performance barrier. With the opportunities for parallelization and the performance available in hardware, the application of FPGAs becomes interesting. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation in relation to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the user's relative position and orientation. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.
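As a software reference for the computation the FPGA performs (this is not the hardware design, just an illustrative equivalent using common library routines), BLOB detection and center-point extraction on a binary image can be sketched as follows:

```python
import numpy as np
from scipy import ndimage

def blob_centers(frame, threshold=200):
    """Binarize a grayscale frame, label connected bright regions (BLOBs),
    and return the center of mass of each region as (row, col)."""
    binary = frame > threshold
    labels, n_blobs = ndimage.label(binary)            # 4-connectivity by default
    return ndimage.center_of_mass(binary, labels, range(1, n_blobs + 1))

# Example: a synthetic 480x640 frame with two bright spots
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:105, 200:205] = 255
frame[300:308, 400:406] = 255
print(blob_centers(frame))   # approx. [(102.0, 202.0), (303.5, 402.5)]
```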
Vibrations are part of every machine with moving parts; sometimes they are insignificant, sometimes clearly noticeable. Monitoring a machine's vibrations can serve to check its function or to detect wear damage at an early stage. The state of the art is to measure vibrations with precise but expensive piezoelectric acceleration sensors and elaborate measurement systems. Such conventional systems are viable for high-priced machines and plants, e.g. wind turbines, but not for small and medium-sized machines. Here there is a need for truly low-cost measurement systems that are inexpensive enough to be permanently integrated into the machine and to monitor its vibrations continuously. In this research project, a kit of commercially available components was developed from which low-cost vibration measurement systems can be built.
Low Cost Displays
(2010)
This report describes the design, the implementation and the usage of a system for managing different systems for automated theorem proving and automatically generated proofs. In particular, we focus on a user-friendly web-based interface and a structure for collecting and cataloguing proofs in a uniform way. The second point hopefully helps to understand the structure of automatically generated proofs and builds a starting point for new insights for strategies for proof planning.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform has been evaluated by experimental approaches. The results and conclusions of that previous work build the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended by a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video image material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is the development of a passive tracking device for an immersive environment to improve user interaction and system usability. Therefore the detection of the user's position and orientation in relation to the projection surface is required. For a reliable estimation a robust and fast computation of the BLOBs' center points is necessary. This project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been applied to provide image material for the processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera, which is attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and the input device providing the image material.
Reversible logic synthesis is an emerging research topic with application areas such as low-power CMOS design, quantum computing and optical computing. The key motivation behind reversible logic synthesis is to address the heat dissipation that current architectures exhibit, by reducing it to theoretically zero [2].
This thesis work presents the implementation and validation of image processing problems in hardware to estimate the performance and precision gain. It compares the implementation for the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions the implementation costs for their development is an important aspect in the validation. The analysis of the flexibility and extendability that can be achieved by a modular implementation for the FPGA design was another major aspect. This work is based upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One addressed problem of this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for the FPGA platform and the GPP architecture. Both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group, which is located at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light emitting device to the user for tracking the created light dots on the projection surface of the immersive environment. Having the center points of those light dots would allow the estimation of the user’s position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real-time. Therefore, one major target for the implementation was to get a processing speed of more than 30 frames per second. This would allow the system to realize feedback to the user in a response time which is faster than the human visual perception. One problem that comes with the idea of using a light emitting device to represent the user, is the precision error. Dependent on the resolution of the tracked projection surface of the immersive environment, a pixel might have a size in cm2. Having a precision error of only a few pixels, might lead to an offset in the estimated user’s position of several cm. In this research work the development and validation of a detection and tracking system for BLOBs on a Cyclone II FPGA from Altera has been realized. The system supports different input devices for the image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding and sub-pixel precision for the computed BLOB center-points have been designed. For the comparison of the FPGA approach for BLOB tracking a similar implementation in software using a multi-threaded approach has been realized. The system can transmit the detection or tracking results on two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a similar precision for the BLOB detection and tracking as the software approach. One problem is the strong increase of the allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. 
Implementing the tracking approach in hardware required much more effort than the software solution; designing such high-level problems in hardware is in this case more expensive than a software implementation. The search and match steps of the tracking approach could be realized more efficiently and reliably in software. The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.
The so-called regional study bears the full title "Regionalwirtschaftliche Bedeutung der Hochschule Bonn-Rhein-Sieg und zukünftige Einbindung in die strategische Entwicklung der Region"; it was prepared in 2010 and published in 2011. It analyzes the regional economic significance of the university primarily from a quantitative perspective and from an external point of view. Many of the resulting suggestions for the university's strategic orientation were taken up in the university development plan while the field phase was still under way, and could therefore not be incorporated into the study itself.
The analysis shows what contribution the university can make to regional development, in concert with the municipalities, the regional authorities, and the companies based in the region.
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities in them. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with different types of faults which can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on the reasoning of physics novices about everyday physical phenomena. We also formalize some definitions of properties of objects into a small framework represented in first-order logic. These definitions represent naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. These reviews help in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular. Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept for the usefulness of naive physics for external fault reasoning in robotics.
forschung@h-brs
(2011)
This report presents the findings of a quantitative study on the use of Open Educational Resources (OER) and Open Educational Practices (OEP) in Higher Education and Adult Learning Institutions. The study is based on the results of an online survey targeted at four educational roles: educational policy makers; institutional policy makers/managers; educational professionals; and learners. The report encompasses five chapters and four annexes. Chapter I presents the survey and Chapter II discloses the main research questions and models. Chapter III characterises the universe of respondents. Chapter IV advances with a detailed survey analysis including an overview of key statistical data. Finally, Chapter V provides an exploratory in-depth analysis of some key issues: representations, attitudes and uses of OEP. The table of contents and the complete list of diagrams and tables can be found at the end of the report.
The roadmap for quality and innovation through open educational practices has been conceived as a number of steps, a conceptual document, which can be used by organisations, learners or professionals in order to improve their open educational practices. After the development of the core concept of the OPAL project, the guidelines for OEP, it became clear that these guidelines would have to play an important part in the roadmap exercise, because they represent the very essence of how to foster and stimulate open educational practices. The roadmap is therefore meant to be an instrument, a tool that helps the different stakeholders use the guidelines for their own context and purpose.
Following the study by Leidner and Kayworth (2006) on the treatment of culture in the Anglo-American discipline of Information Systems, a corresponding literature study was conducted for the design-oriented business informatics (Wirtschaftsinformatik) of the German-speaking world. The study examined, in the main publication outlets of the discipline, how frequently cultural influences on information technology were addressed, how these influences were treated, and which reference models and reference literature were used. After a brief description of the chosen approach, the results and limitations of the study are presented.
Having multiple talkers on a bus system raises the bandwidth used on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report shows an implementation of a monitoring system for the CAN bus utilizing the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with different partners, a prosthetic limb device, the UNB hand. Communication in this device is done via two CAN buses, which operate at a bit rate of 1 Mbit/s. The developed monitoring system has been completely designed in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of different modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC. A sample receiver programmed in C is also given. The evaluation of this system has been done by using the Microchip CAN Bus Analyzer Tool connected to the GPIO port of the development board, simulating CAN communication.
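As a plain-software illustration of the load computation only (the actual design is in Verilog HDL; the window length and bit rate below are assumptions), the bus load over a time window can be estimated like this:

```python
def bus_load(bits_observed, window_s=1.0, bit_rate=1_000_000):
    """Fraction of the window during which the CAN bus was occupied,
    given the number of bits observed on the bus in that window."""
    return bits_observed / (bit_rate * window_s)

# Example: 400,000 bits seen during a 1 s window on a 1 Mbit/s bus -> 40 % load
print(f"{bus_load(400_000):.0%}")
```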
Nowadays Field Programmable Gate Arrays (FPGA) are used in many fields of research, e.g. to create prototypes of hardware or in applications where hardware functionality has to be changed more frequently. Boolean circuits, which can be implemented by FPGAs are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool, which supports developers in the research of FPGA based applications and FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow, which compiles Verilog source code that targets specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing structure during the compilation and optimization processes, which can be helpful to research new FPGA architectures and improve the workflow.
A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on the actions of manipulators which involve releasing an object. The proposed approach, which is triggered after detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object.
Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when the object is released in the state corresponding to the example. The description vocabulary is also used in labeling the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest the state for the object in which releasing it avoids the occurrence of unknown external faults.
The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach we also test its prediction ability in addition to analyzing the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of the object to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.
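The abstract does not spell out N-Bins itself; the general idea of a binning-based binary classifier over a bounded parameter range can nevertheless be sketched as follows (purely illustrative, and not necessarily how the thesis's N-Bins algorithm works):

```python
import numpy as np

def fit_bins(values, labels, n_bins=10):
    """Histogram-style binary classifier over one bounded parameter.
    Each bin predicts the majority label of the training samples falling into it."""
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    idx = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    votes = np.zeros((n_bins, 2))
    for i, y in zip(idx, labels):
        votes[i, int(y)] += 1
    return edges, votes.argmax(axis=1)          # majority label per bin

def predict(edges, bin_labels, x):
    i = np.clip(np.digitize([x], edges)[0] - 1, 0, len(bin_labels) - 1)
    return int(bin_labels[i])

# Example: label 1 ("desired") only in the middle of the parameter range
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, 500)
y = ((v > 0.4) & (v < 0.6)).astype(int)
edges, bin_labels = fit_bins(v, y)
print(predict(edges, bin_labels, 0.5), predict(edges, bin_labels, 0.9))   # -> 1 0
```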
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. It explored the usability of the Kinect in a variety of environments as well as its ability to detect different classes of materials and objects. This was facilitated through the implementation of Random Sample Consensus (RANSAC) based algorithms and highly parallelized workflows in order to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for using the Kinect as a scientific research tool.
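A minimal sketch of the RANSAC idea applied to such depth data (a generic plane-fitting example, not the project's actual implementation) is shown below:

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.01, seed=1):
    """Fit a dominant plane to an (N, 3) point cloud.
    Returns ((normal, d), inlier_mask) with normal . p + d ~= 0 for inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Example: noisy points on the plane z = 0 plus some outliers
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, (900, 2)), rng.normal(0, 0.002, 900)])
outliers = rng.uniform(-1, 1, (100, 3))
model, inliers = ransac_plane(np.vstack([plane, outliers]))
print(model[0], inliers.sum())   # normal close to (0, 0, +/-1), roughly 900 inliers
```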
The work presented in this paper focuses on the comparison of well-known and new fault-diagnosis algorithms in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but also with external disturbances and errors from dynamic and complex environments. Based on a literature study covering fault-diagnosis algorithms, I selected four of these methods, based on both linear and non-linear models, and analysed and implemented them in a mathematical robot model representing a four-wheeled omnidirectional robot. In experiments I tested the ability of the algorithms to detect and identify abnormal behaviour and to optimize the model parameters for the given training data. The final goal was to point out the strengths of each algorithm and to figure out which method would best suit the demands of fault diagnosis for a particular robot.
The ability to detect people has become a crucial subtask, especially in robotic systems which aim at applications in public or domestic environments. Robots already provide their services, e.g. in real home improvement markets, and guide people to a desired product. In such a scenario many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people in a reasonable range in front of the robot. Challenges of such a real-world task are, for example, changing lighting conditions, a dynamic environment and different people shapes. In this thesis a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and to provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected with a rate of 87.29% and sitting people with 74.94% using a Random Forest classifier. Certain objects caused several false detections. To eliminate these, a verification step is proposed which further evaluates the person's shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz). Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.
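To illustrate the classification stage only (the feature columns below are made-up placeholders, not the feature set proposed in the thesis), training and evaluating a Random Forest on per-candidate feature vectors might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: one row per segmented candidate cluster; columns could be
# geometric features such as height, width, point count, curvature statistics, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                          # 8 hypothetical features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)  # 1 = person

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```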
Education is widely seen as an important means of addressing both national and international problems, such as political or religious extremism, poverty, and hunger. If publicly available educational resources (OERs) are to help overcome the educational gap, localization is one of the major issues we need to deal with. Educators as well as learners need support in determining adaptation needs. This paper provides a list of possible influence factors on educational scenarios, which are defined as context metadata. In the given form, the list is to be understood as an addendum to the paper entitled ‘Open Educational Resources: Education for the World?’ by Thomas Richter and Maggie McPherson, published in volume 3, issue 2 of the journal Distance Education in 2012.
Simulations within virtual environments usually require underlying semantics. For traffic simulations, defined traffic networks are typically used. These networks are mostly created by hand, which makes the process error-prone and time-consuming. This project was carried out within the AVeSi project, which researches the development of a realistic traffic simulation for virtual environments. The simulation approach pursued in the project is based on two levels of complexity, a microscopic and a mesoscopic one. To realize a transition between the simulation levels, the traffic networks must be linked, which is likewise very time-consuming. This report presents models for traffic networks on both levels. It then describes an approach that enables the automatic generation and linking of traffic networks for both models. Data in the OpenDRIVE® format serves as the basis for generating the networks. For the evaluation, real-world OpenStreetMap data was converted into OpenDRIVE® data sets using third-party software. It could be shown that the approach makes it possible to generate large traffic networks within a few minutes, on which simulations can be run immediately. However, the quality of the networks generated for the evaluation is not sufficient for environments requiring a high degree of realism, which makes an additional post-processing step necessary. The quality problems could be traced back to the fact that the level of detail of the underlying OpenStreetMap data was not high enough and that the conversion process is not sufficiently transparent.
Realism and plausibility of computer controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced which allows the transfer of findings from real-life personality studies to a computational model. This information is used for decision making. The introduction of dynamic event-based emotions enables adaptive behavior patterns. The advantages of this new model have been validated with a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. It has been shown that adding an adaptive dynamic factor to agents improves perceivable plausibility and realism. It also supports coping with extreme situations in a fair and understandable way.
This money-laundering study is intended to bring objectivity to the discussion of money-laundering opportunities in online poker in the German gambling market and to derive basic prevention measures. On this basis, the TÜV TRUST IT GmbH Unternehmensgruppe TÜV AUSTRIA, which commissioned the study, will develop measurable and assessable audit criteria and transfer them into an audit and certification procedure. For the first time, the market will thus have objectively comprehensible, scientifically founded, standardized criteria for dealing with the topic factually and for enabling market participants to define and comply with clear rules.
In the joint collaborative project, the IZNE analyzed the perception of health-related and financial value-creation aspects of corporate mobility management (BMM). For this purpose, 178 companies were surveyed in writing and 22 company managers were interviewed in person about workplace health promotion (BGF) measures, and 1,341 employees from 14 companies in the Bonn area were surveyed about their mobility behavior. Assessing the actual extent and the health-related and economic benefits of BMM was intended to reveal needs and potential for optimization.
Vielfalt ist unser Angebot
(2014)
The Department of Social Insurance (Fachbereich Sozialversicherung) of Hochschule Bonn-Rhein-Sieg looks back on ten successful years. Since the department was founded in 2003, researchers from various disciplines have been working closely together on the Hennef campus in the field of social insurance.
A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with findings from the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem based on it. Finding the globally optimal solution is an NP-complete problem; in order to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive the graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints in the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).
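The core spectral step, bipartitioning a weighted graph by the sign of the Fiedler vector of its graph Laplacian, can be sketched as follows (a generic illustration of spectral relaxation, not the exact algorithms proposed in the thesis):

```python
import numpy as np

def fiedler_bipartition(W):
    """Split a graph with symmetric weight matrix W (N x N) into two clusters by
    thresholding the eigenvector of the second-smallest eigenvalue of the
    unnormalized graph Laplacian L = D - W (the Fiedler vector)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return fiedler >= 0                        # boolean cluster assignment

# Example: two 4-node cliques joined by a single weak edge
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.1                        # weak link between the groups
print(fiedler_bipartition(W))                  # first four nodes vs. last four
```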
Business process infrastructures like BPMS (Business Process Management Systems) and WfMS (Workflow Management Systems) traditionally focus on the automation of processes predefined at design time. This approach is well suited for routine tasks which are processed repeatedly and which are described by a predefined control flow. In contrast, knowledge-intensive work is more goal- and data-driven and less control-flow-oriented. Knowledge workers need the flexibility to decide dynamically at run-time, based on current context information, on the best next process step to achieve a given goal. Obviously, in most practical scenarios, these decisions are complex and cannot be anticipated and modeled completely in a predefined process model. Therefore, adaptive and dynamic process management techniques are necessary to augment the control-flow-oriented part of process management (which is still needed also for knowledge workers) with flexible, context-dependent, goal-oriented support.