H-BRS Bibliography
Conference Objects, 2014 (63 results)
Realism and plausibility of computer-controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here, a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model. Adaptive behavior patterns are enabled by introducing dynamic, event-based emotions. The advantages of this model were validated using a four-way crossroad in a traffic simulation: driving agents using the introduced model, enhanced by dynamics, were compared to agents based on static personality profiles and simple rule-based behavior. The results show that adding a dynamic factor to agents improves perceived plausibility and realism.
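The combination the abstract describes — a fixed personality trait plus an emotion that rises on events and decays over time — can be sketched as follows. All names, parameters, and formulas here are illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class DriverAgent:
    """Driving agent combining a static personality trait with a
    dynamic, event-based emotion (trait/emotion names are hypothetical)."""
    aggressiveness: float        # static personality component, 0..1
    frustration: float = 0.0     # dynamic, event-based emotion, 0..1
    decay: float = 0.9           # per-step emotional decay factor

    def on_event(self, intensity: float) -> None:
        # Events (e.g. being forced to wait at the crossroad) raise frustration.
        self.frustration = min(1.0, self.frustration + intensity)

    def step(self) -> None:
        # Emotions fade over time; the personality stays fixed.
        self.frustration *= self.decay

    def accept_gap(self) -> float:
        # Willingness to take a small gap in traffic: static base plus
        # an emotion-driven dynamic component (weighting is illustrative).
        return min(1.0, self.aggressiveness + 0.5 * self.frustration)

agent = DriverAgent(aggressiveness=0.3)
base = agent.accept_gap()   # purely personality-driven
agent.on_event(0.6)         # forced to wait -> frustration rises
assert agent.accept_gap() > base
```

An agent with only the static profile would always return the same `accept_gap()` value; the dynamic component is what makes its behavior change plausibly over the course of the simulation.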
Perception is one of the most important cognitive capabilities of an entity, since it determines how the entity perceives its environment. The presented work focuses on providing cost-efficient but realistic perceptual processes for intelligent virtual agents (IVAs) or NPCs, with the goal of providing a sound information basis for the entities' decision-making processes. In addition, an agent-central perception process should provide a common interface for developers to retrieve data from the IVAs' environment. The overall process is evaluated by applying it to a scenario demonstrating its benefits. The evaluation indicates that such a realistically simulated perception process provides a powerful instrument to enhance the (perceived) realism of an IVA's simulated behavior.
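A minimal sketch of such an agent-central perception interface, assuming a simple 2D visual model (viewing range plus field of view — the actual process in the paper is richer than this):

```python
import math
from dataclasses import dataclass

@dataclass
class Percept:
    """An object in the environment as the agent could perceive it."""
    name: str
    x: float
    y: float

def perceive(agent_pos, view_dir_deg, fov_deg, max_range, objects):
    """Cost-efficient visual filter: an object is perceivable if it lies
    within the agent's viewing range and field of view. Illustrative
    stand-in for a common perception interface, not the paper's code."""
    visible = []
    for obj in objects:
        dx, dy = obj.x - agent_pos[0], obj.y - agent_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_range or dist == 0.0:
            continue
        angle = math.degrees(math.atan2(dy, dx))
        # Signed angular difference, normalized to [-180, 180).
        diff = (angle - view_dir_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            visible.append(obj)
    return visible

objects = [Percept("car", 5.0, 0.0), Percept("sign", 0.0, 5.0),
           Percept("truck", 20.0, 0.0)]
# Agent at the origin, looking along +x, 90 degree FOV, 10 unit range:
seen = perceive((0.0, 0.0), 0.0, 90.0, 10.0, objects)
assert [o.name for o in seen] == ["car"]
```

The point of funneling all environment queries through one such function is that the agent's decision making only ever sees what the agent could plausibly perceive, rather than reading world state directly.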
Rendering techniques for design evaluation and review, or for visualizing large volume data, often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates. A view-direction-based rendering technique renders the user's central field of view in high quality, whereas the surroundings are rendered with a level-of-detail approach depending on their distance to the user's central field of view, which offers an opportunity to increase rendering efficiency. We present a prototype implementation and evaluation of a focus-based rendering technique built on a hybrid ray tracing/sparse voxel octree rendering approach.
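The core of such a focus-based scheme is the per-tile quality decision. A minimal sketch, assuming screen-space distance to the gaze point and hypothetical radius/falloff parameters (the paper's actual selection logic may differ):

```python
def lod_for_tile(tile_center, gaze_center, full_quality_radius, falloff):
    """Pick a level of detail for a screen tile from its distance to the
    user's gaze point: tiles inside the central field of view get the
    highest quality (LOD 0), more distant tiles get coarser levels.
    Illustrative sketch of focus-based selection, capped at LOD 3."""
    dx = tile_center[0] - gaze_center[0]
    dy = tile_center[1] - gaze_center[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= full_quality_radius:
        return 0
    # One coarser level per `falloff` pixels beyond the central region.
    return min(3, 1 + int((dist - full_quality_radius) / falloff))

# Gaze at the origin, 100 px full-quality radius, 200 px falloff:
assert lod_for_tile((50, 0), (0, 0), 100, 200) == 0   # central FOV
assert lod_for_tile((150, 0), (0, 0), 100, 200) == 1  # near periphery
assert lod_for_tile((600, 0), (0, 0), 100, 200) == 3  # far periphery
```

In a hybrid renderer, LOD 0 tiles would go to the expensive ray tracer while higher LODs fall back to coarser sparse-voxel-octree traversal.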
Against the background of scarce resources, the growing demand for rehabilitation, and the political debate about a demographic adjustment of rehabilitation budgets, demonstrating the outcome quality of medical rehabilitation services continues to gain central importance (e.g. Haaf, 2005; Steiner et al., 2009). The continuous, cross-clinic comparison of treatment outcomes is, moreover, an important building block of a functioning quality management system (Schmidt et al., in press). It enables "learning from the best" and leads to organizational learning processes (Toepler et al., 2010).
The perceived direction of “up” is determined by gravity, visual information, and an internal estimate of body orientation (Mittelstaedt, 1983; Dyde et al., 2006). Is the gravity level found on other worlds sufficient to maintain gravity’s contribution to this perception? Difficulties with stability reported anecdotally by astronauts on the lunar surface (NASA, 1972) suggest that the Moon’s gravity may not be sufficient, even though it is far above the threshold for detecting linear acceleration. Knowing how much gravity is needed to provide a reliable orientation cue is required for training and preparing astronauts for future missions to the Moon, Mars, and beyond.
Application systems are often advertised with features, and features are used heavily for requirements management. However, software manufacturers often have only incomplete information about the features of their software. The information is distributed over different sources, such as requirements documents, issue trackers, user manuals, and code. In this paper, we investigate the occurrence of feature information in open source software engineering data. We report on a case study with three open source systems. We analyze what information about features can be found in issue trackers and user documentation. Furthermore, we study the abstraction levels on which the features are described and how feature information is related, and we discuss the possibility of discovering such information semi-automatically. To mirror the diversity of software development contexts, we chose open source systems that differ considerably, e.g., in the rigor of issue tracker usage. The results differ accordingly. One main result is that, measured against a provided feature list, the user documentation did not provide more accurate information than the issue tracker. The results also give hints on how the management of feature-relevant information can be supported.
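Semi-automatic discovery of feature mentions could, at its simplest, be keyword matching between a feature list and issue texts. A deliberately naive sketch (the study itself analyzed real trackers and manuals; this only illustrates the idea):

```python
def link_features_to_issues(features, issues):
    """Naive semi-automatic linking: a feature counts as mentioned in an
    issue when every word of the feature name occurs in the issue text.
    `features` is a list of feature names, `issues` maps issue id -> text.
    Real approaches would need stemming, synonyms, and ranking."""
    links = {}
    for feature in features:
        words = feature.lower().split()
        links[feature] = [
            issue_id for issue_id, text in issues.items()
            if all(w in text.lower() for w in words)
        ]
    return links

issues = {"#12": "Improve spell checking speed",
          "#13": "Crash on startup"}
assert link_features_to_issues(["spell checking"], issues) == \
    {"spell checking": ["#12"]}
```

Even this crude matcher makes the paper's core difficulty visible: features described on a high abstraction level rarely share exact wording with issue titles, which is why purely lexical linking stays incomplete.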
Software repository data, for example in issue tracking systems, include natural language text and technical information, which covers anything from log files via code snippets to stack traces. However, data mining is often interested in only one of the two types, e.g., in natural language text when doing text mining. Regardless of which type is being investigated, any technique used has to deal with noise caused by fragments of the other type, i.e., methods interested in natural language have to deal with technical fragments and vice versa. This paper proposes an approach to classify unstructured data, e.g., development documents, into natural language text and technical information using a mixture of text heuristics and agglomerative hierarchical clustering. The approach was evaluated using 225 manually annotated text passages from developer emails and issue tracker data. Using white-space tokenization as a basis, the overall precision of the approach is 0.84 and the recall is 0.85.
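One heuristic from the family such an approach might combine can be sketched in a few lines: technical tokens (stack frames, paths, identifiers) tend to contain non-letter characters far more often than prose does. The rule and the threshold below are illustrative assumptions, not the paper's actual heuristics:

```python
import re

def looks_technical(passage, threshold=0.35):
    """Classify a whitespace-tokenized passage as technical information
    when the fraction of tokens containing non-letter characters exceeds
    `threshold`. A single illustrative heuristic; the paper combines
    several heuristics with agglomerative hierarchical clustering."""
    tokens = passage.split()
    if not tokens:
        return False
    technical = sum(1 for t in tokens if re.search(r"[^A-Za-z]", t))
    return technical / len(tokens) > threshold

assert looks_technical("at java.lang.Thread.run(Thread.java:748)")
assert not looks_technical("Please fix the crash when saving a file")
```

A clustering step on top of per-passage features like this one lets similar passages be labeled together instead of judging each line in isolation.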
This article describes an approach to rapidly prototype the parameters of a Java application running on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them, and proposes the best alternative to the user. The user can thus, without any prior knowledge of the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
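The toolkit's benchmark-and-propose loop can be sketched abstractly as follows. The `run_benchmark` callable is a stub standing in for launching the application under the given VM flags and timing it; the flag names and timings are purely illustrative:

```python
def pick_best_parameters(candidate_sets, run_benchmark, repeats=3):
    """Core loop of an automated tuning toolkit: benchmark each candidate
    VM parameter set several times and propose the fastest one.
    `run_benchmark(params)` is assumed to run the application with those
    flags and return its runtime in seconds (in the real toolkit this
    would spawn the JVM; here it is injected as a stub)."""
    best_params, best_time = None, float("inf")
    for params in candidate_sets:
        avg = sum(run_benchmark(params) for _ in range(repeats)) / repeats
        if avg < best_time:
            best_params, best_time = params, avg
    return best_params, best_time

# Stub benchmark with made-up runtimes for two hypothetical flag sets:
timings = {"-Xmn64m": 2.0, "-Xmn256m": 1.2}
best, avg = pick_best_parameters(timings.keys(), lambda p: timings[p])
assert best == "-Xmn256m"
```

Repeating each measurement and averaging guards against run-to-run jitter, which matters when the differences between parameter sets are small.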
In general, mathematics plays a central role in our lives, because today mathematics regulates our everyday life through techniques, technologies, and procedures, for example coding techniques for credit cards or the drafting of curves and surfaces for construction procedures [5]. Obviously, mathematics continues to be an important element of engineering education, and it still represents a major obstacle for students. Gaps in the knowledge of several topics, changing learning behavior, and inadequate conditions at universities for the repetition of school mathematics have been cited as causes of the constantly increasing gap between the level of mathematics required at the start of university and the prior knowledge of first-semester students [2].
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem-domain decomposition, memory management, and rendering have to be used. The volume renderer Volt is used to show how CUDA can be utilised efficiently to manage the volume data and a GPU's memory, with the aim of producing low-opacity volume renderings of large volumes at interactive frame rates.
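The problem-domain decomposition step can be illustrated with a simple bricking sketch: split the volume into axis-aligned bricks small enough to stream into GPU memory one at a time. This is an illustrative outline only; Volt's CUDA memory manager is considerably more involved:

```python
def brick_volume(shape, brick_size):
    """Split a volume of `shape` (nx, ny, nz) voxels into bricks of at
    most `brick_size` voxels per axis, so each brick fits into a fixed
    GPU memory budget and can be uploaded and rendered independently.
    Returns a list of (origin, extent) tuples in voxel coordinates."""
    bricks = []
    for z in range(0, shape[2], brick_size):
        for y in range(0, shape[1], brick_size):
            for x in range(0, shape[0], brick_size):
                # Edge bricks are clipped to the volume bounds.
                extent = (min(brick_size, shape[0] - x),
                          min(brick_size, shape[1] - y),
                          min(brick_size, shape[2] - z))
                bricks.append(((x, y, z), extent))
    return bricks

bricks = brick_volume((100, 64, 64), 64)
assert len(bricks) == 2
assert bricks[1] == ((64, 0, 0), (36, 64, 64))
```

A renderer would then sort bricks front to back along the view ray and skip those whose accumulated opacity contribution is negligible, which is what makes low-opacity renderings of out-of-core volumes feasible at interactive rates.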