H-BRS Bibliography, Fachbereich Informatik, year of publication 2014 (68 entries)
Document types: Conference Object (43), Article (16), Report (5), Conference Proceedings (2), Book (monograph, edited volume) (1), Preprint (1)
Frequent keywords: parallel breadth-first search (3), BFS (2), Garbage collection (2), Java virtual machine (2), NUMA (2), data locality (2), memory bandwidth (2), path planning (2), ARRs (1), Adaptive Case Management (1)
We are happy to present the special issue on Best Practice in Robot Software Development of the Journal of Software Engineering for Robotics! The spark for this special issue came during the eighth Workshop on Software Development and Integration in Robotics (SDIR) at the 2013 IEEE International Conference on Robotics and Automation. The workshop focused on robot software architectures, and the fruitful discussions made it clear that the design, development, and deployment of robot software is always an interplay between competing aspects. These are often couched in antagonistic pairs, such as dependability versus performance, and prominently include quality attributes as well as functional, nonfunctional, and application requirements.
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem-domain decomposition, memory management, and rendering are required. The volume renderer Volt is used to show how CUDA can be utilized efficiently to manage the volume data and the GPU's memory, with the aim of producing low-opacity volume renderings of large volumes at interactive frame rates.
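The abstract does not spell out Volt's decomposition scheme, but the core bookkeeping of such an out-of-core renderer can be sketched: split the volume into fixed-size bricks and compute how many bricks can be resident on the GPU at once. This is a minimal illustration, not Volt's actual API; all names and the brick size are assumptions.

```python
import math

def brick_grid(volume_dims, voxel_bytes, gpu_budget_bytes, brick_edge=64):
    """Split a volume into cubic bricks of brick_edge voxels per axis and
    report how many bricks fit into the GPU memory budget at once."""
    bricks_per_axis = [math.ceil(d / brick_edge) for d in volume_dims]
    total_bricks = bricks_per_axis[0] * bricks_per_axis[1] * bricks_per_axis[2]
    brick_bytes = brick_edge ** 3 * voxel_bytes
    resident = max(1, gpu_budget_bytes // brick_bytes)  # bricks cached on GPU
    return bricks_per_axis, total_bricks, min(resident, total_bricks)

# Example: a 1024^3 16-bit volume (2 GiB of voxel data) against a 512 MiB
# working set on the GPU.
grid, total, resident = brick_grid((1024, 1024, 1024), 2, 512 * 2**20)
```

The renderer would then stream bricks in and out of the resident set, prioritizing bricks that contribute most to the current view.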
Current computer architectures are multi-threaded and make use of multiple CPU cores. Most garbage collection policies for the Java Virtual Machine include a stop-the-world phase, during which all threads are suspended. A considerable portion of the execution time of Java programs is spent in these stop-the-world garbage collections. To improve this behavior, thread-local allocation and garbage collection, which affects only a single thread, has been proposed. Unfortunately, only objects that are not accessible by other threads ("do not escape") are eligible for this kind of allocation. It is therefore necessary to reliably predict whether objects escape. The work presented in this paper analyzes the escaping of objects based on the line of code (program counter, PC) at which the object was allocated. The results show that on average 60-80% of the objects do not escape and can therefore be allocated locally.
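The idea of predicting escape behavior per allocation site can be sketched as a small profile-driven predictor: count, for each program counter, how many objects allocated there were later observed to escape, and allocate thread-locally only when the observed escape ratio is low. This is an illustrative sketch of the general approach, not the paper's implementation; the class, the threshold, and the PC values are assumptions.

```python
from collections import defaultdict

class EscapePredictor:
    """Predict, per allocation site (program counter), whether objects
    allocated there tend to escape their allocating thread."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold                 # max tolerated escape ratio
        self.stats = defaultdict(lambda: [0, 0])   # pc -> [allocated, escaped]

    def record(self, pc, escaped):
        entry = self.stats[pc]
        entry[0] += 1
        if escaped:
            entry[1] += 1

    def allocate_locally(self, pc):
        allocated, escaped = self.stats[pc]
        if allocated == 0:
            return False                           # no history: be conservative
        return escaped / allocated <= self.threshold

p = EscapePredictor()
for _ in range(9):
    p.record(0x42, escaped=False)
p.record(0x42, escaped=True)   # 10% of objects at this site escaped
```

With a 10% observed escape ratio at site 0x42, the predictor would choose thread-local allocation there, while an unseen site stays on the shared heap.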
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique people is needed to accurately count people in the real world. To accomplish the people counting task, a robust system for people detection, tracking and identification is needed.
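One standard building block of such a tracking pipeline is associating new detections with existing tracks. A minimal sketch, assuming greedy nearest-neighbour association with a distance gate (the abstract does not state which association method the system uses):

```python
import math

def associate(tracks, detections, gate=2.0):
    """Greedily match each track to its nearest unused detection.
    tracks: dict of track id -> (x, y); detections: list of (x, y);
    gate: maximum distance for a valid match."""
    pairs, used = {}, set()
    for tid, tpos in tracks.items():
        best, best_d = None, gate
        for i, dpos in enumerate(detections):
            if i in used:
                continue
            d = math.dist(tpos, dpos)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            pairs[tid] = best
            used.add(best)
    return pairs

# Two tracks, three detections; the third detection starts a new track.
pairs = associate({"A": (0, 0), "B": (5, 5)},
                  [(5.2, 5.1), (0.1, -0.2), (20, 20)])
```

Detections left unmatched (here the one at (20, 20)) would spawn new track hypotheses, which is what makes counting unique people possible.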
Realism and plausibility of computer-controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model. Adaptive behavior patterns are enabled by introducing dynamic, event-based emotions. The advantages of this model have been validated using a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. The results show that adding a dynamic factor to agents improves perceived plausibility and realism.
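The combination of a static trait with a decaying, event-driven emotion can be sketched as follows. This is a minimal illustration of the static-plus-dynamic idea, not the paper's model; the trait, the emotion, and all constants are assumptions.

```python
class DrivingAgent:
    """Agent with a static personality trait and a dynamic, event-based
    emotion that decays back toward neutral over time."""

    def __init__(self, aggressiveness):
        self.aggressiveness = aggressiveness  # static trait in [0, 1]
        self.frustration = 0.0                # dynamic emotion in [0, 1]

    def on_event(self, delta):
        # e.g. being blocked raises frustration, being yielded to lowers it
        self.frustration = min(1.0, max(0.0, self.frustration + delta))

    def tick(self, decay=0.1):
        self.frustration *= (1.0 - decay)     # emotions fade without events

    def effective_aggressiveness(self):
        # behaviour blends the fixed trait with the current emotional state
        return min(1.0, self.aggressiveness + 0.5 * self.frustration)

a = DrivingAgent(aggressiveness=0.3)
a.on_event(0.6)                   # blocked at the crossroad
peak = a.effective_aggressiveness()
a.tick()                          # one simulation step later, emotion fades
```

A purely static agent would always behave at its baseline trait; the decay term is what makes the dynamic agent's behavior vary plausibly over time.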
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context, we focus on specific faults that arise from the interaction of a robot with its real-world environment. Even a well-modelled robot may fail to perform its tasks successfully due to unexpected situations that occur during interaction. These situations manifest as deviations of properties of the objects manipulated by the robot from their expected values. Hence, the robot experiences them as external faults.
Robots that are able to carry out their tasks robustly in real-world environments are not only desirable but necessary if they are to be accepted by a wider audience. Yet they often fail to execute their actions successfully because of insufficient information about the behaviour of the objects used in those actions.
Unexpected Situations in Service Robot Environment: Classification and Reasoning Using Naive Physics
(2014)
The contribution of the most common reciprocal translocation in childhood B-cell precursor leukemia, t(12;21)(p13;q22), to leukemia development is still under debate. Direct as well as secondary, indirect effects of the TEL-AML1 fusion protein are commonly recorded using cell lines and patient samples, often bearing the TEL-AML1 fusion protein for decades. To identify direct targets of the fusion protein, a short-term induction of TEL-AML1 is needed. Here we describe in detail the experimental procedure, quality controls, and contents of the ChIP, mRNA expression, and SILAC datasets associated with the study published by Linka and colleagues in the Blood Cancer Journal [1], which utilized a short-term induction of TEL-AML1 in an inducible precursor B-cell line model.
Breadth-First Search is a graph traversal technique used as a building block in many applications, e.g., to systematically explore a search space or to determine single-source shortest paths in unweighted graphs. For modern multicore processors, and as application graphs get larger, well-performing parallel algorithms are desirable. In this paper, we systematically evaluate an important class of parallel algorithms for this problem and discuss programming optimization techniques for their implementation on parallel systems with shared memory. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. Our results show that for small core counts many of these algorithms exhibit rather similar performance. For large core counts and large graphs, however, there are considerable differences in performance and scalability, influenced by several factors including graph topology. The paper closes with advice on which algorithm should be used under which circumstances.
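The level-synchronous structure the paper evaluates can be sketched in a few lines: the frontier is processed one level at a time, and it is this per-level structure that parallel variants exploit by partitioning the frontier among threads with a barrier between levels. A sequential sketch (the parallel details, marked in comments, are where the evaluated algorithms differ):

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the frontier one level at a time.
    adj: dict mapping vertex -> list of neighbours."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:             # parallel versions partition this loop
            for v in adj[u]:
                if v not in level:     # needs an atomic claim when parallel
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier       # implicit barrier between levels
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
levels = bfs_levels(adj, 0)
```

The choices hidden in the two comments, such as how the frontier is partitioned and how visited vertices are claimed, are precisely the factors that drive the performance differences the paper reports at large core counts.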
Software repository data, for example in issue tracking systems, include natural language text and technical information, which ranges from log files via code snippets to stack traces. However, data mining is often only interested in one of the two types, e.g., in natural language text when doing text mining. Regardless of which type is being investigated, any technique used has to deal with noise caused by fragments of the other type, i.e., methods interested in natural language have to deal with technical fragments and vice versa. This paper proposes an approach to classify unstructured data, e.g. development documents, into natural language text and technical information using a mixture of text heuristics and agglomerative hierarchical clustering. The approach was evaluated using 225 manually annotated text passages from developer emails and issue tracker data. Using white-space tokenization as a basis, the overall precision of the approach is 0.84 and the recall is 0.85.
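The flavor of such text heuristics can be illustrated with a single-feature sketch: score each white-space token for code-like traits and call the passage technical if that share is high. This is a deliberately crude stand-in for the paper's combination of heuristics and clustering; the regexes and the threshold are assumptions.

```python
import re

def looks_technical(passage, threshold=0.25):
    """Crude heuristic: a passage is 'technical' if a large share of its
    tokens look like code (camelCase, code punctuation, hex literals)."""
    tokens = passage.split()              # white-space tokenization
    if not tokens:
        return False
    technical = sum(
        1 for t in tokens
        if re.search(r'[A-Za-z][a-z]+[A-Z]', t)    # camelCase identifier
        or re.search(r'[/\\(){};=<>_]', t)         # code punctuation
        or re.fullmatch(r'0x[0-9a-fA-F]+', t)      # hex literal
    )
    return technical / len(tokens) >= threshold

is_tech = looks_technical("at java.lang.Thread.run(Thread.java:748)")
is_prose = looks_technical("Thanks for the report, I will look into it.")
```

A full pipeline would feed such per-passage features into the agglomerative clustering step rather than a fixed threshold.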
Dysregulation of IL12 Signaling As a Novel Cause of an Autoimmune Lymphoproliferative like Syndrome
(2014)
This article describes an approach to rapidly prototype the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them, and proposes the best alternative to the user. The user can, without any prior knowledge about the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
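The benchmark-and-select loop at the core of such a toolkit can be sketched as follows. This is an illustrative skeleton, not the toolkit described in the article: the candidate flag lists are examples (`-Xmx` and `-Xgcpolicy` are real J9 options, but the values here are arbitrary), and the stand-in benchmark replaces a real `java` launch.

```python
def best_parameter_set(run_benchmark, candidate_sets):
    """Try each candidate VM parameter set, benchmark it, and return the
    fastest (elapsed_seconds, params) pair. run_benchmark(params) would
    launch the application, e.g. via subprocess with 'java <params> ...',
    and return the elapsed time in seconds."""
    results = []
    for params in candidate_sets:
        elapsed = run_benchmark(params)
        results.append((elapsed, params))
    results.sort(key=lambda r: r[0])      # fastest run first
    return results[0]

# Illustration with a stand-in benchmark instead of a real JVM launch:
candidates = [["-Xmx256m"], ["-Xmx512m", "-Xgcpolicy:gencon"]]
fake_times = {tuple(c): t for c, t in zip(candidates, [4.2, 2.8])}
elapsed, winner = best_parameter_set(lambda p: fake_times[tuple(p)], candidates)
```

The toolkit's added value over this bare loop lies in generating the candidate sets from rules matched against patterns in the VM's output, rather than enumerating them by hand.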