H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (58)
Document Type
- Conference Object (32)
- Article (12)
- Report (6)
- Book (monograph, edited volume) (3)
- Part of a Book (3)
- Conference Proceedings (1)
- Preprint (1)
Year of publication
- 2013 (58)
Keywords
- Three-dimensional displays (2)
- Virtuelle Realität (2)
- 3D real-time echocardiography (1)
- 3D user interface (1)
- ARRs (1)
- Adaptive Behavior (1)
- Agents (1)
- Boolesche Algebra (1)
- Component Models (1)
- Congenital heart disease (1)
- Datalog (1)
- Ecosystem simulation (1)
- Educational institutions (1)
- Edutainment (1)
- Einführung (1)
- Emotion (1)
- FDI (1)
- Five Factor Model (1)
- Grailog (1)
- Graphentheorie (1)
- Human-Robot Interaction (1)
- IEEE 802.21 (1)
- Instantiation (1)
- Interaction devices (1)
- Internet (1)
- Interoperability (1)
- JavaScript (1)
- Lehrbuch (1)
- Lineare Algebra (1)
- MPLS (1)
- Machine Learning (1)
- Mathematische Logik (1)
- Mengenlehre (1)
- Open source software (1)
- People Detection (1)
- Personality (1)
- Poisson Disc Distribution (1)
- Power Analysis (1)
- Pressure wire (1)
- Pressure-volume relation (1)
- QoS (1)
- RGB-D (1)
- Ray Tracing (1)
- Reusable Software (1)
- Robot kinematics (1)
- Robot sensing systems (1)
- Robotics (1)
- RuleML (1)
- SVG (1)
- Scene text recognition, active vision, domestic robot, pantilt, auto-zoom, auto-focus, adaptive aperture control (1)
- School experiments (1)
- Second Life (1)
- Semantics (1)
- Software Architectures (1)
- Standards (1)
- Support Vector Machine (1)
- Switched power electronic systems (1)
- Template Attacks (1)
- Terrain rendering (1)
- Therapy (1)
- Transforms (1)
- Uncertainty (1)
- VR-based systems (1)
- Verkehrsnetz (1)
- Verkehrsnetzwerke (1)
- Verkehrssimulation (1)
- Virtual Reality (1)
- Visualization (1)
- Wang-tiles (1)
- Wireless Backhaul Network (1)
- XML (1)
- XNA Game Studio (1)
- XSLT (1)
- atomic instructions (1)
- automatisierte Netzwerkgenerierung (1)
- computational logic (1)
- detection (1)
- directed hypergraphs (1)
- displacement measurement (1)
- estimation (1)
- full-body interface (1)
- graphs (1)
- human factors (1)
- intelligente Agenten (1)
- interactive distributed rendering (1)
- interface design (1)
- link calibration (1)
- multi-screen visualization environments (1)
- multiple Xbox 360 (1)
- multiple computer systems (1)
- multisensory interface (1)
- optical sensor (1)
- optical triangulation (1)
- opto-electronic protective device (1)
- parallel BFS (1)
- prognosis (1)
- redundant work (1)
- rules (1)
- security (1)
- self-configuration (1)
- self-management (1)
- signal processing algorithm (1)
- unique bond graph representation for all modes of operation (1)
- virtual environments (1)
- virtuelle Umgebungen (1)
Grailog provides a systematic approach to visualizing knowledge sources through graphical elements. Its main benefit is that the resulting visual presentations are easier for humans to read than the original symbolic source code. In this paper we introduce a methodology for handling the mapping from Datalog RuleML, serialized in XML, to an SVG representation of Grailog, also serialized in XML, via eXtensible Stylesheet Language Transformations (XSLT) 2.0; the SVG is then rendered visually by modern Web browsers. This initial mapping targets Grailog's "fully node copied" normal form. Elements can thus be translated one at a time, separating the fundamental Datalog-to-SVG translation concern from the concern of merging node copies for optimal (hyper)graph layout, and avoiding the latter's high computational complexity in this online tool. The resulting open-source Grailog Knowledge-Source Visualizer (Grailog KS Viz) supports Datalog RuleML with positional relations of arity n > 1. The on-the-fly transformation was shown to run on all recent major Web browsers and should be easy to understand, use, and extend.
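The paper realizes the mapping as an XSLT 2.0 stylesheet, which is not reproduced in the abstract. As a rough illustration of the per-element idea ("fully node copied" form, so each atom can be translated in isolation), the following Python sketch maps one Datalog-style atom to a labelled SVG box; the function name and the box layout are hypothetical, not the paper's actual Grailog rendering.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def atom_to_svg(predicate, args, x=10, y=10):
    """Map one Datalog atom, e.g. likes(alice, bob), to a labelled SVG box.

    A hypothetical simplification of the RuleML-to-SVG mapping: each atom
    becomes a rectangle labelled with the predicate and its positional
    arguments. In "fully node copied" form no node merging is attempted,
    so atoms can be translated one at a time, as the abstract describes.
    """
    g = ET.Element(f"{{{SVG_NS}}}g")
    ET.SubElement(g, f"{{{SVG_NS}}}rect",
                  x=str(x), y=str(y), width="120", height="30")
    text = ET.SubElement(g, f"{{{SVG_NS}}}text",
                         x=str(x + 5), y=str(y + 20))
    text.text = f"{predicate}({', '.join(args)})"
    return g

svg = ET.Element(f"{{{SVG_NS}}}svg")
svg.append(atom_to_svg("likes", ["alice", "bob"]))
print(ET.tostring(svg, encoding="unicode"))
```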
Embodied artificial agents operating in dynamic, real-world environments need architectures that support their special requirements. Such architectures are not always designed from scratch and implemented all at once; rather, components are often integrated step by step to increase functionality. Our work aims to increase flexibility and robustness by integrating a task planner into an existing architecture and coupling the planning process with the preexisting execution and basic monitoring processes. This involved converting monolithic SMACH scenario scripts (state-machine execution scripts) into modular states that can be called dynamically based on the plan generated by the planning process. The procedural knowledge encoded in these state machines was used to model the planning domain for two RoboCup@Home scenarios on a Care-O-Bot 3 robot [GRH+08], targeting the JSHOP2 [IN03] hierarchical task network (HTN) planner. A component that iterates through a generated plan and calls the appropriate SMACH states [Fie11] was implemented, thus enabling the scenarios. Crucially, individual monitoring actions that let the robot monitor the execution of its actions were designed and included, providing additional robustness.
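The plan-execution component described above can be sketched as a small dispatch loop: iterate over the generated plan, call the state implementing each action, and consult a monitor after every step. The data shapes and names below (plan as a list of action/argument tuples, a dict of state callables, a boolean monitor) are hypothetical stand-ins for the JSHOP2/SMACH components, not the actual interfaces.

```python
def execute_plan(plan, states, monitor):
    """Iterate through a generated plan, dispatching each step to the
    modular state that implements the action, and check a monitor after
    each step.

    `plan` is a list of (action_name, args) tuples, `states` maps action
    names to callables, and `monitor` returns True while execution is
    healthy. All interfaces are illustrative stand-ins for the SMACH and
    JSHOP2 components described in the abstract.
    """
    executed = []
    for action, args in plan:
        states[action](*args)
        executed.append(action)
        if not monitor():
            return executed, False   # abort, report partial execution
    return executed, True

# Toy usage: a two-step "fetch" scenario with a trivially healthy monitor.
log = []
states = {"move_to": lambda loc: log.append(f"move:{loc}"),
          "grasp": lambda obj: log.append(f"grasp:{obj}")}
done, ok = execute_plan([("move_to", ("table",)), ("grasp", ("cup",))],
                        states, monitor=lambda: True)
```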
Switched power electronic subsystems are widely used in various applications. A fault in one of their components may have a significant effect on the system's load or may even cause damage. Therefore, it is important to detect and isolate faults and to report true faults to a supervisory system in order to avoid malfunction of, or damage to, a load. If, in a model-based approach to fault detection and isolation of hybrid systems, switching devices are considered ideal switches, then some equations must be reformulated whenever devices have switched. In this paper, a fixed-causality bond graph representation of hybrid system models is used, i.e., computational causalities assigned according to the Standard Causality Assignment Procedure (SCAP) are independent of the system's modes of operation. The latter are taken into account by transformer moduli mi(t) ∈ {0, 1} ∀t ≥ 0 in a unique set of equations of motion. In a case study, this approach is used for fault diagnosis in a three-phase full-wave rectifier. Residuals of Analytical Redundancy Relations (ARRs), which hold for all modes of operation and serve as fault indicators, are computed in an offline simulation as part of a DAE system by using a bond graph model of the faulty system instead of the real one and by coupling it to a bond graph of the healthy system by means of residual sinks.
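The core idea — one set of equations valid in all switch states, with a residual comparing faulty behaviour against a healthy model — can be illustrated on a much simpler system than the paper's rectifier. The sketch below uses a toy first-order system with a 50% duty-cycle switch modulus m(t) ∈ {0, 1} and a degraded source voltage as the fault; the circuit, numbers, and function name are all illustrative assumptions, not taken from the paper.

```python
def simulate_residual(steps=200, dt=1e-3, tau=0.02,
                      v_healthy=10.0, v_faulty=7.0):
    """Toy ARR residual for a switched first-order system.

    Model: x' = (m(t)*V - x)/tau with transformer modulus m(t) in {0, 1}
    (here a fixed 50% duty cycle), so a single set of equations covers
    all modes of operation. The "real" system runs with a degraded
    source voltage v_faulty (the fault); the residual is the difference
    between its behaviour and the healthy model's prediction, and it
    vanishes whenever no fault is present or the switch is open.
    """
    x = 0.0
    residuals = []
    for k in range(steps):
        m = 1.0 if (k // 20) % 2 == 0 else 0.0    # switch modulus m(t)
        dx_real = (m * v_faulty - x) / tau        # measured behaviour
        dx_model = (m * v_healthy - x) / tau      # healthy-model prediction
        residuals.append(dx_real - dx_model)      # ARR residual
        x += dt * dx_real                         # explicit Euler step
    return residuals

res = simulate_residual()
```

With the fault present, the residual is strongly nonzero while the switch conducts and exactly zero while it is open; with `v_faulty=v_healthy` it stays zero throughout, which is the fault-indicator property the ARRs provide.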
Web-based Editor for YAWL
(2013)
This paper presents a web-based editor that offers YAWL editing capabilities and comprehensive support for the XML format of YAWL. The open-source project Signavio Core Components is extended with a graphical user interface (GUI) for parts of the YAWL language, and with an import/export component that converts between YAWL and the internal format of Signavio Core Components. This conversion, between the web-based editor and the official YAWL Editor, is lossless, so both tools may be used together. Compared to the official YAWL Editor, the web-based editor is missing some features, but it can still facilitate the use of the YAWL system in use cases that are not supported by a desktop application.
Issues in an issue tracking system contain different kinds of information, such as requirements, features, development tasks, bug reports, bug fixing tasks, refactoring tasks and so on. This information is generally accompanied by discussions or comments, which again contain different kinds of information (e.g. social interaction, implementation ideas, stack traces or error messages). We propose to improve the automatic categorization of this information and to use the categorized data to support software engineering tasks. We want to achieve improvements in two ways. Firstly, we want to obtain algorithmic improvements (e.g. via natural language processing techniques) to retrieve and use categorized auxiliary data. Secondly, we want to utilize multiple task-based categorizations to support different software engineering tasks.
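As a baseline for the kind of categorization the authors propose to improve upon, one can imagine a shallow keyword-based classifier over issue comments. The categories and cue patterns below are hypothetical examples, not the paper's actual taxonomy or method; the point of the proposed NLP techniques is precisely to do better than this.

```python
import re

# Hypothetical cue patterns per comment category; real issue trackers
# would need far richer features than these regular expressions.
CUES = {
    "stack_trace": [r"Traceback", r"\bat [\w.$]+\(", r"Exception"],
    "implementation_idea": [r"\bwe could\b", r"\brefactor\b", r"\bpatch\b"],
    "social": [r"\bthanks\b", r"\+1\b", r"\bwelcome\b"],
}

def categorize_comment(text):
    """Return the first category whose cues match the comment, else 'other'."""
    for category, patterns in CUES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            return category
    return "other"
```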
Computers will soon be powerful enough to simulate consciousness. The artificial life community should start trying to understand how consciousness could be simulated. The proposal is to build an artificial life system in which consciousness might be able to evolve. The idea is to develop an internet-wide artificial universe in which agents can evolve. Users play games by defining agents that form communities. The communities have to perform tasks, compete, or do whatever the specific game demands. The demands should be such that agents that are more aware of their universe are more likely to succeed. The agents reproduce and evolve within their user's machine, but can also sometimes transfer to other machines across the internet. Users will be able to choose the capabilities of their agents from a fixed list, but may also write their own powers for their agents.
Improving Robustness of Task Execution Against External Faults Using Simulation Based Approach
(2013)
Robots interacting in complex and cluttered environments may face unexpected situations, referred to as external faults, which prevent the successful completion of their tasks. In order to function more robustly, robots need to recognise these faults and learn how to deal with them in the future. We present a simulation-based technique to avoid external faults occurring during the execution of a robot's releasing actions. Our technique uses simulation to generate a set of labeled examples, which are used by a histogram algorithm to compute a safe region. A safe region consists of a set of releasing states of an object that correspond to successful executions of the action. The technique also suggests a general solution for avoiding external faults not only for the currently observed object but also for any other object of the same shape but different size.
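The histogram step can be sketched as follows: bin the labelled releasing states, and keep the bins whose empirical success rate is high enough. The sketch below uses a single scalar releasing parameter (think release height) and invented thresholds; the paper's actual state representation and parameters are not given in the abstract.

```python
def safe_region(samples, bins=5, lo=0.0, hi=1.0, min_success=0.9):
    """Estimate a safe region from labelled releasing states.

    `samples` is a list of (state, succeeded) pairs, where `state` is a
    1-D releasing parameter (e.g. release height). Each histogram bin
    whose empirical success rate reaches `min_success` belongs to the
    safe region. A 1-D stand-in for the paper's technique; the real
    releasing states are higher-dimensional.
    """
    width = (hi - lo) / bins
    counts = [[0, 0] for _ in range(bins)]     # [successes, total] per bin
    for state, ok in samples:
        b = min(int((state - lo) / width), bins - 1)
        counts[b][1] += 1
        if ok:
            counts[b][0] += 1
    return [(lo + b * width, lo + (b + 1) * width)
            for b, (s, n) in enumerate(counts)
            if n > 0 and s / n >= min_success]

# Simulated labelled examples: releases below height 0.4 succeed.
samples = [(h / 100, h < 40) for h in range(100)]
region = safe_region(samples)
```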
We developed a scene text recognition system with active vision capabilities, namely: auto-focus, adaptive aperture control and auto-zoom. Our localization system is able to delimit text regions in images with complex backgrounds, and is based on an attentional cascade, asymmetric adaboost, decision trees and Gaussian mixture models. We think that text could become a valuable source of semantic information for robots, and we aim to raise interest in it within the robotics community. Moreover, thanks to the robot's pan-tilt-zoom camera and the active vision behaviors, the robot can use its affordances to overcome hindrances to the perceptual task. Detrimental conditions such as poor illumination, blur and low resolution are very hard to deal with once an image has been captured, but can often be prevented beforehand. We evaluated the localization algorithm on a public dataset and on one of our own, with encouraging results. Furthermore, we offer an interesting experiment in active vision, which leads us to argue that active sensing in general should be taken into account early on when addressing complex perceptual problems in embodied agents.
This paper presents recent research on an active multispectral scanning sensor capable of classifying an object's surface material in order to distinguish between different kinds of materials and human skin. The sensor itself has already been presented in previous work and can be used in conjunction with safeguarding equipment at manually-fed machines or robot workplaces, for example. This work shows how an extended sensor system with advanced material classifiers can be used to provide additional value by distinguishing different materials of work pieces in order to suggest different tools or parameters for the machine (e.g. the use of a different saw blade or rotation speed at table saws). Additionally, a first implementation and evaluation of an active multispectral camera system addressing new safety applications is described. Both approaches intend to increase the productivity and the user's acceptance of the sensor technology.
Logics, sets, relations, functions, induction, and recursion are fundamental mathematical concepts and methods needed in all areas of computer science for describing problems and their solutions. Mastering these concepts and methods is a prerequisite for studying almost all further computer science modules, not only in mathematics and theoretical computer science, but also in applied computer science, e.g. programming, data structures, algorithms, and databases. The book introduces the fundamental notions, their properties, and their possible applications step by step. Understanding of the notions and of how they relate and interact is supported by, among other things, learning objectives, integrated exercises with model solutions, and margin notes; the book is suitable for self-study.
The internet, social networks, games, smartphones, DVDs, and digital radio and television work only because mathematically sound methods are available for their development and application. This book conveys insights into the fundamental concepts and methods of linear algebra on which these techniques are based. Using fault-tolerant coding as an introductory example, it shows how these concepts and methods are applied in practice, and the example of quantum algorithms, which may play a role in the future, makes clear that linear algebra provides time-invariant concepts, methods, and procedures with which IT technologies can be designed, implemented, applied, and developed further. Thanks to its didactic elements, such as stated learning objectives, summaries, margin notes, and a large number of exercises with model solutions, the book is suitable not only as companion reading for corresponding computer science and mathematics courses, but in particular also for self-study.
The reciprocal translocation t(12;21)(p13;q22), the most common structural genomic alteration in B-cell precursor acute lymphoblastic leukaemia in children, results in the chimeric transcription factor TEL-AML1 (ETV6-RUNX1). We identified directly and indirectly regulated target genes utilizing an inducible TEL-AML1 system derived from the murine pro-B-cell line BA/F3 and a monoclonal antibody directed against TEL-AML1. By integrating promoter binding identified with chromatin immunoprecipitation (ChIP)-on-chip, gene expression measured through microarray technology, and protein output measured through stable isotope labelling of amino acids in cell culture, we identified 217 directly and 118 indirectly regulated targets of the TEL-AML1 fusion protein. Directly, but not indirectly, regulated promoters were enriched in AML1-binding sites. The majority of promoter regions were specific for the fusion protein and not bound by native AML1 or TEL. Comparison with gene expression profiles from TEL-AML1-positive patients identified 56 concordantly misregulated genes with negative effects on proliferation and cellular transport mechanisms and positive effects on cellular migration and on stress responses, including immunological responses. In summary, this work for the first time gives comprehensive insight into how TEL-AML1 expression may directly and indirectly alter cells so that they become prone to leukemic transformation.
Die Vorstandsperspektive
(2013)
In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology and chemistry. The steps include the analysis of system requirements in general, the analysis of school experiments, and the analysis of the demands on input and output devices. Based on the results of these steps we present a taxonomy of school experiments and provide a comparison of several currently available devices which can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.
Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms
(2013)
Generating and visualizing large areas of vegetation that look natural makes terrain surfaces much more realistic. However, this is a challenging field in computer graphics, because ecological systems are complex and visually appealing plant models are geometrically detailed. This work presents Silva (System for the Instantiation of Large Vegetated Areas), a system to generate and visualize large vegetated areas based on their ecological surrounding. Silva generates vegetation on Wang-tiles with associated reusable distributions, enabling multi-level instantiation. This paper presents a method to generate Poisson Disc Distributions (PDDs) with variable radii on Wang-tile sets (without a global optimization) that is able to produce seamless tilings. Because Silva has a freely configurable generation pipeline and can consider plant neighborhoods, it is able to incorporate arbitrary abiotic and biotic components during generation. Based on multi-level instancing and nested kd-trees, the distributions on the Wang-tiles allow their acceleration structures to be reused during visualization. This enables Silva to visualize large vegetated areas of several hundred square kilometers with low render times and a small memory footprint.
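The distribution property at the heart of the method, that no two plants are closer than a locally varying minimum radius, can be sketched with a brute-force dart-throwing sampler. This illustrates only the variable-radius Poisson disc property; the paper's contribution of making such distributions seamless across Wang-tile boundaries without global optimization is not attempted here, and all parameters are invented.

```python
import math
import random

def poisson_disc(width, height, radius_for, attempts=2000, seed=1):
    """Dart-throwing sketch of a Poisson disc distribution with variable radii.

    `radius_for(x, y)` gives the local minimum spacing, mimicking
    vegetation density that varies with abiotic/biotic conditions. A
    candidate point is accepted only if it keeps at least the larger of
    the two local radii to every already-accepted point. Brute-force
    rejection sampling: simple, but far slower than the tile-based
    method the paper describes.
    """
    rng = random.Random(seed)
    points = []
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = radius_for(x, y)
        if all(math.hypot(x - px, y - py) >= max(r, radius_for(px, py))
               for px, py in points):
            points.append((x, y))
    return points

# Denser planting (smaller radius) in the lower half of the area.
pts = poisson_disc(10, 10, lambda x, y: 0.5 if y < 5 else 1.0)
```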
Application performance improvements through VM parameter modification after runtime analysis
(2013)
Traffic simulations are generally used to forecast traffic behavior or to simulate non-player characters in computer games and virtual environments. These systems are usually modeled in such a way that traffic rules are strictly followed. However, rule violations are a common part of real-life traffic and should therefore be integrated into such models.
YAWL Symposium 2013. Proceedings of the First YAWL Symposium, Sankt Augustin, Germany, June 7, 2013
(2013)
In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e. a symbol sequence representing an action) in terms of the relations between changes in the observations and the action instances. We present a method for inducing a theory (i.e. a semantic description) of the meaning of a command using a minimal set of background knowledge. All we have is a sequence of observations, from which we extract the kinds of effects caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.
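One minimal-background reading of this idea: represent each observation as a set of ground facts, take the added and deleted facts around each command execution, and keep the effects common to all executions. The set-of-facts representation and the example facts below are a hypothetical simplification, not the paper's actual formalism.

```python
def induce_effects(episodes):
    """Induce the semantics of a command from before/after observations.

    Each episode is a (state_before, state_after) pair, with states
    given as sets of ground facts. The command's meaning is approximated
    by the add effects and delete effects common to every episode; facts
    that changed only incidentally in one episode are filtered out by
    the intersection.
    """
    adds = None
    dels = None
    for before, after in episodes:
        ep_adds, ep_dels = after - before, before - after
        adds = ep_adds if adds is None else adds & ep_adds
        dels = ep_dels if dels is None else dels & ep_dels
    return adds, dels

# Two observed executions of a hypothetical "grasp(cup)" command:
episodes = [
    ({"on(cup, table)", "handempty"}, {"holding(cup)", "lifted"}),
    ({"on(cup, shelf)", "handempty"}, {"holding(cup)"}),
]
adds, dels = induce_effects(episodes)
```

Here the incidental fact "lifted" is discarded, leaving the invariant effect pair: the command adds `holding(cup)` and deletes `handempty`.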
The BRICS component model: a model-based development paradigm for complex robotics software systems
(2013)
Updating a shared data structure in a parallel program is usually done with some sort of high-level synchronization operation to ensure correctness and consistency. However, the underlying synchronization instructions in a processor architecture are costly and rather limited in their scalability on larger multi-core/multi-processor systems. In this paper, we examine work queue operations in which such costly atomic update operations are replaced with non-atomic modifiers (simple read+write). In this approach, we trade the exact amount of work performed with atomic operations against doing more, redundant work without atomic operations, while preserving the correctness of the algorithm. We show results for the application of this idea to the concrete scenario of parallel Breadth-First Search (BFS) algorithms for undirected graphs on two large NUMA shared-memory systems with up to 64 cores.
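The trade-off can be illustrated with a level-synchronous BFS in which a vertex may be discovered by several frontier vertices in the same round, as happens when parallel workers race on the visited set without atomic test-and-set. The duplicate discovery only causes redundant work; the computed levels remain correct. This sequential Python sketch is illustrative only: the paper's actual benefit, avoiding costly atomic instructions on large NUMA machines, materializes only in threaded native code.

```python
from collections import defaultdict

def bfs_levels(adj, source):
    """Level-synchronous BFS that tolerates duplicate discoveries.

    In the gather phase, every frontier vertex tests its neighbours
    against the *old* visited information, mimicking workers that race
    without an atomic compare-and-swap; a vertex reachable from two
    frontier vertices therefore appears twice among the candidates.
    The second appearance is detected later and counted as redundant
    work, and the level assigned to each vertex is still correct.
    """
    level = {source: 0}
    frontier = [source]
    duplicates = 0
    depth = 0
    while frontier:
        depth += 1
        # Gather: the candidate list may contain a vertex more than once.
        candidates = [v for u in frontier for v in adj[u] if v not in level]
        frontier = []
        for v in candidates:
            if v in level:
                duplicates += 1       # redundant work instead of an atomic op
            else:
                level[v] = depth
                frontier.append(v)
    return level, duplicates

# Diamond graph: vertex 3 is reachable from both 1 and 2.
adj = defaultdict(list)
for a, b in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    adj[a].append(b)
    adj[b].append(a)
levels, duplicates = bfs_levels(adj, 0)
```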
Information reliability and automatic computation are two important aspects that are continuously pushing the Web to become more semantic. Information uploaded to the Web should be reusable and automatically extractable by other applications, platforms, etc. Several tools exist to explicitly mark up Web content. Web services may also play a positive role in the automatic processing of Web content, especially when they act as flexible and agile agents. However, Web services themselves should be developed with semantics in mind: they should include and provide structured information to facilitate their use, reuse, composition, querying, etc. In this chapter, the authors focus on evaluating state-of-the-art semantic aspects and approaches in Web services. Ultimately, this contributes to the goal of Web knowledge management, execution, and transfer.
Realism and plausibility of computer-controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here, a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model, whose output is used for decision making. The introduction of dynamic, event-based emotions enables adaptive behavior patterns. The advantages of this new model were validated at a four-way crossroad in a traffic simulation: driving agents using the introduced model, enhanced by dynamics, were compared to agents based on static personality profiles and simple rule-based behavior. It was shown that adding an adaptive dynamic factor to agents improves perceivable plausibility and realism; it also supports coping with extreme situations in a fair and understandable way.
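The combination of a static trait and a decaying, event-based emotion can be sketched as follows. The trait and emotion names, the linear combination, and the decay factor are all illustrative assumptions; the paper's model of mapping personality-study findings onto driving decisions is not reproduced here.

```python
def decide(aggressiveness, frustration, threshold=1.0):
    """Combine a static personality trait with a dynamic emotion value."""
    return "go" if aggressiveness + frustration > threshold else "yield"

class Driver:
    """Driving agent with a static trait and a decaying, event-based emotion.

    A timid driver normally yields at the crossroad, but repeated
    blocking events raise its frustration until the combined value
    crosses the decision threshold -- a toy version of the adaptive
    dynamic factor described in the abstract.
    """
    def __init__(self, aggressiveness, decay=0.8):
        self.aggressiveness = aggressiveness   # static, from the profile
        self.frustration = 0.0                 # dynamic, event-based
        self.decay = decay

    def wait_at_crossroad(self):
        self.frustration += 0.4                # blocked: emotion builds up

    def step(self):
        action = decide(self.aggressiveness, self.frustration)
        self.frustration *= self.decay         # emotions fade over time
        return action

timid = Driver(aggressiveness=0.2)
actions = []
for _ in range(5):
    actions.append(timid.step())
    timid.wait_at_crossroad()                  # blocked every round
```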
Simulations within virtual environments usually require underlying semantics. In the case of traffic simulations, well-defined traffic networks are typically used. These networks are mostly created by hand, which is error-prone and time-consuming. This project was carried out as part of the AVeSi project, which investigates the development of a realistic traffic simulation for virtual environments. The simulation approach pursued in the project is based on two levels of complexity: a microscopic and a mesoscopic one. To realize transitions between the simulation levels, the traffic networks of the two levels must be linked, which is likewise very time-consuming. This report presents network models for both levels. It then describes an approach that enables the automatic generation and linking of traffic networks for both models. Data in the OpenDRIVE® format serves as the basis for network generation. For the evaluation, real-world OpenStreetMap data was converted into OpenDRIVE® data sets using third-party software. It was shown that the approach makes it possible to generate large traffic networks within a few minutes, on which simulations can be run immediately. However, the quality of the networks generated for the evaluation is not sufficient for environments that demand a high degree of realism, making an additional post-processing step necessary. The quality problems could be traced back to the fact that the level of detail of the OpenStreetMap data underlying the evaluation was not high enough and that the conversion process is not sufficiently transparent.