We present a multi-user virtual reality (VR) setup that aims to provide novel training tools for paramedics that enhance current learning methods. The hardware setup consists of a two-user full-scale VR environment with head-mounted displays for two interactive trainees and an additional desktop PC for one trainer participant. The software provides a connected multi-user environment, showcasing a paramedic emergency simulation with focus on anaphylactic shock, a representative critical medical case that occurs too rarely to be reliably encountered within a regular curricular term of vocational training. The prototype offers hands-on experience with multi-user VR in an applied scenario, generating discussion around the current state and future development of four important research areas: (a) user navigation, (b) interaction, (c) level of visual abstraction, and (d) level of task abstraction.
Simulating eye movements for virtual humans or avatars can improve social experiences in virtual reality (VR) games, especially when wearing head-mounted displays. While other researchers have already demonstrated the importance of simulating meaningful eye movements, we compare three gaze models with different levels of fidelity regarding realism: (1) a base model with static fixation and saccadic movements, (2) a proposed simulation model that extends the saccadic model with gaze shifts based on a neural network, and (3) a user's real eye movements recorded by a proprietary eye tracker. Our between-groups design study with 42 subjects evaluates the impact of eye movements on social VR user experience regarding perceived quality of communication and presence. The tasks include free conversation and two guessing games in a co-located setting. Results indicate that a high quality of communication in co-located VR can be achieved without extending gaze behavior models beyond saccadic simulation. Users might have to gain more experience with VR technology before being able to notice subtle details in gaze animation. Remote VR collaboration involving different tasks requires further investigation in the future.
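The base gaze model, (1), with static fixations and saccadic movements, can be sketched as a simple stochastic process. The durations and amplitudes below are illustrative placeholders, not values from the study:

```python
import math
import random

def simulate_saccades(duration_s, seed=0):
    """Toy base gaze model: alternating fixations and instantaneous saccades.

    Returns a list of (event, time, (yaw_deg, pitch_deg)) tuples. Fixation
    durations and saccade amplitudes are drawn from plausible ranges; the
    numbers are illustrative assumptions, not parameters from the paper.
    """
    rng = random.Random(seed)
    t, gaze = 0.0, (0.0, 0.0)
    events = []
    while t < duration_s:
        events.append(("fixate", t, gaze))
        t += rng.uniform(0.2, 0.6)            # typical fixation: 200-600 ms
        amplitude = rng.uniform(1.0, 15.0)    # saccade amplitude in degrees
        angle = math.radians(rng.uniform(0.0, 360.0))
        gaze = (gaze[0] + amplitude * math.cos(angle),
                gaze[1] + amplitude * math.sin(angle))
    return events
```

The neural-network extension, (2), would replace the uniform saccade target with learned gaze-shift predictions.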
Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior
(2017)
Simulations and serious games require realistic behavior of multiple intelligent agents in real-time. One particular issue is how attention and multi-modal sensory memory can be modeled in a natural but effective way, such that agents controllably react to salient objects or are distracted by other multi-modal cues from their current intention. We propose a conceptual framework that provides a solution with adherence to three main design goals: natural behavior, real-time performance, and controllability. As a proof of concept, we implement three major components and showcase effectiveness in a real-time game engine scenario. Within the exemplified scenario, a visual sensor is combined with static saliency probes and auditory cues. The attention model weighs bottom-up attention against intention-related top-down processing, controllable by a designer using memory and attention inhibitor parameters. We demonstrate our case and discuss future extensions.
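A minimal sketch of the weighing step, assuming a single scalar inhibitor in [0, 1]; the linear weighting and function names are illustrative assumptions, not the paper's actual model:

```python
def attention_score(saliency, relevance, inhibitor=0.5, familiarity=0.0):
    """Combine bottom-up saliency with top-down task relevance.

    `inhibitor` in [0, 1] suppresses bottom-up distraction while the agent
    pursues its current intention; `familiarity` models memory-based
    habituation to already-known stimuli. The linear weighting is an
    illustrative assumption.
    """
    bottom_up = saliency * (1.0 - inhibitor) * (1.0 - familiarity)
    top_down = relevance * inhibitor
    return bottom_up + top_down

def select_focus(stimuli, inhibitor=0.5):
    """Pick the stimulus with the highest combined attention score."""
    return max(stimuli, key=lambda s: attention_score(
        s["saliency"], s["relevance"], inhibitor, s.get("familiarity", 0.0)))
```

With a high inhibitor, a salient but task-irrelevant auditory cue loses against a task-relevant object; lowering the inhibitor lets the distraction win.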
Populating virtual worlds with intelligent agents can drastically improve a user's sense of presence. Applying these worlds to virtual training, simulations, or (serious) games often requires multiple agents to be simulated in real time. The process of generating believable agent behavior starts with providing a plausible perception and attention process that is both efficient and controllable. We describe a conceptual framework for synthetic perception that specifically considers the mentioned requirements: plausibility, real-time performance, and controllability. A sample implementation will focus on sensing, attention, and memory to demonstrate the framework's capabilities in a real-time game engine scenario. A combination of dynamic geometric sensing and false coloring with static saliency information is provided to exemplify the collection of environmental stimuli. The subsequent attention process handles both bottom-up processing and task-oriented, top-down factors. Behavioral results can be influenced by controlling memory and attention parameters. The example case is demonstrated and discussed alongside future extensions.
Inventory design in games is crucial when it comes to managing in-game items efficiently. In multi-user settings, an additional goal is to support awareness concerning a co-player's inventory and his/her available actions. Especially in virtual reality (VR), presence and immersion are vital aspects of the experience, suggesting real-world metaphors for interface design. The presented work examines two basic inventory paradigms: an abstract menu-based inventory and a metaphoric virtual belt. Both systems are implemented in a serious game prototype for paramedic training in VR, then evaluated in a between-group design study with paramedic trainees inexperienced in VR technology. While both solutions offer comparable usability and presence scores, the results suggest directions for future optimization.
We present an approach towards automatic parameter identification for personality models from traffic simulation telemetry. To this end, we compare the behavior data of human and artificial drivers in the same virtual environment. We record the driving behaviors of human subjects in a car simulator and use evolutionary strategies to infer parameters of models of artificial drivers from the recorded data. We evaluate our approach in several prototypic traffic situations in which we compare the resulting artificial agents against human drivers as well as against simple baseline implementations of artificial drivers. As a result, we show that particular ranges of parameters of a driver profile can be inferred for which the simulated driving behavior does not change. We further show that precision depends on the amount of data and the scenarios in which these were recorded. The proposed method can also be applied to compare human and artificial driving behavior.
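The inference step can be illustrated with a minimal (1+1) evolution strategy; `fitness` stands in for the error between recorded and simulated telemetry, which is not reproduced here:

```python
import random

def evolve_parameters(fitness, initial, sigma=0.5, steps=200, seed=1):
    """Minimal (1+1) evolution strategy with elitist selection.

    `fitness` should return the discrepancy between simulated and recorded
    driving behavior (lower is better); here it is any callable, since the
    actual telemetry comparison is not reproduced.
    """
    rng = random.Random(seed)
    parent = list(initial)
    best = fitness(parent)
    for _ in range(steps):
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        err = fitness(child)
        if err <= best:            # keep the child only if it is no worse
            parent, best = child, err
    return parent, best
```

The paper's observation that some parameter ranges leave behavior unchanged corresponds to flat regions of this fitness landscape.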
Data from current panorama cameras lends itself to the prototypical creation of virtual reality (VR) scenes based on real environments. However, such data is not directly suitable for integration into a game engine. We therefore present a projection-based method with which images and videos in fisheye format, as produced e.g. by the Ricoh Theta 360 camera, can be visualized in real time with the Unity game engine without prior conversion. We further show that, with this method, a panorama image can easily be extended manually with coarse depth information, so that a VR presentation conveys a rough spatial impression of the scene for simple prototypical applications.
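The core of such a projection-based lookup can be sketched as mapping a 3D view direction to fisheye texture coordinates under an equidistant lens model, a common assumption for 360 cameras; the actual Unity shader is not reproduced here:

```python
import math

def fisheye_uv(direction, fov_deg=190.0):
    """Map a 3D view direction to texture coordinates of one equidistant
    fisheye image. In a projection-based approach, a shader performs this
    lookup per fragment, so the fisheye texture is sampled directly without
    converting it to an equirectangular panorama first.
    """
    x, y, z = direction
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    theta = math.acos(z)                      # angle from the optical axis (+z)
    r = theta / math.radians(fov_deg / 2.0)   # equidistant: r proportional to theta
    phi = math.atan2(y, x)
    u = 0.5 + 0.5 * r * math.cos(phi)
    v = 0.5 + 0.5 * r * math.sin(phi)
    return u, v
```

The optical axis maps to the image center (0.5, 0.5); directions near the edge of the field of view map toward the image border.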
Traffic simulations for virtual environments are concerned with the behavior of individual traffic participants. The behavior in these simulations is often kept rather simple to abide by the constraints of processing resources. In sophisticated traffic simulations, the behavior of individual traffic participants is also modeled, but the focus lies on the overall behavior of the entire system, e.g. to identify possible bottlenecks of traffic flow [8].
Typically, virtual traffic networks are enhanced by semantic information to simplify certain processes in traffic simulations such as navigation or decision making. In this paper, we introduce an extendable model representing road network logics (RNL) which allows the integration of such semantic information. For our specific application, we focused on the following elements: road geometry, paths leading through junctions, right-of-way priorities, and road features. Furthermore, we describe our traffic simulation approach, which contains two layers of complexity, and show how these two layers are combined within the RNL. Since the manual setup of such RNL is time-consuming and error-prone, an approach for automatically generating RNL based on standardized OpenDRIVE data is introduced. For evaluation purposes, generated RNL were integrated into 3D scenes. The mapping of the RNL to the environment was examined and simulations were run for empirical validation. As a result, the automatically generated RNL could be successfully matched with the created 3D scenes, allowing our traffic simulation approach to be run on the final scenes. Additionally, a comparison between generation times of the RNL and estimates of the time required to manually create RNL showed significant time savings, in one case of more than 300 hours.
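A minimal sketch of an RNL data structure covering the listed elements; the field names are assumptions, and in the described pipeline they would be populated automatically from OpenDRIVE data rather than by hand:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class JunctionPath:
    """A drivable path through a junction; lower `priority` is assumed to
    mean right of way (an illustrative convention)."""
    from_road: str
    to_road: str
    priority: int

@dataclass
class RoadNetworkLogic:
    """Toy RNL container: road geometry as polylines, paths through
    junctions with right-of-way priorities, and per-road features."""
    roads: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)
    junction_paths: List[JunctionPath] = field(default_factory=list)
    features: Dict[str, List[str]] = field(default_factory=dict)

    def paths_from(self, road_id):
        """All junction paths a vehicle on `road_id` may take."""
        return [p for p in self.junction_paths if p.from_road == road_id]
```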
Traffic simulations are typically concerned with modeling human behavior as closely as possible to create realistic results. In conventional traffic simulations used for road planning or traffic jam prediction only the overall behavior of an entire system is of interest. In virtual environments, like digital games, simulated traffic participants are merely a backdrop to the player’s experience and only need to be “sufficiently realistic”. Additionally, restricted computational resources, typical for virtual environment applications, usually limit the complexity of simulated behavior in this field. More importantly, two integral aspects of real-world traffic are not considered in current traffic simulations from both fields: misbehavior and risk taking of traffic participants. However, for certain applications like the FIVIS bicycle simulator, these aspects are essential.
Perception is one of the most important cognitive capabilities of an entity since it determines which information about the environment is available to the entity. The presented work focuses on providing cost-efficient but realistic perceptual processes for intelligent virtual agents (IVAs) or NPCs with the goal of providing a sound information basis for the entities' decision-making processes. In addition, an agent-central perception process should provide a common interface for developers to retrieve data from the IVAs' environment. The overall process is evaluated by applying it to a scenario demonstrating its benefits. The evaluation indicates that such a realistically simulated perception process provides a powerful instrument to enhance the (perceived) realism of an IVA's simulated behavior.
Realism and plausibility of computer controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model. Adaptive behavior patterns are enabled by introducing dynamic event-based emotions. The advantages of this model have been validated using a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. The results show that adding a dynamic factor to agents improves perceivable plausibility and realism.
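The dynamic, event-based component can be sketched as a decaying scalar emotion that events push away from neutral, e.g. frustration growing with repeated waiting events at the crossroad. The decay rate and impact values are illustrative assumptions:

```python
def update_emotion(value, events, decay=0.9):
    """One tick of a toy event-based emotion: the scalar decays toward the
    neutral state 0 and each event impact shifts it, clamped to [-1, 1].
    Decay rate and impacts are illustrative, not the validated model.
    """
    value *= decay
    for impact in events:
        value += impact
    return max(-1.0, min(1.0, value))
```

Behavior rules can then read the emotion value, e.g. a highly frustrated agent accepting smaller gaps at the crossroad than its static profile alone would allow.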
Perception is an important aspect of cognition since it forms the basis for further decision-making processes. In this contribution, the overall architecture of our synthetic perception for agents framework (SynPeA) for simulating a virtual entity's perception is presented. We discuss aspects of modeling visual sensation and propose mechanisms for virtual sensors and memory. Different visual sensing approaches are compared by applying them to an artificial evaluation scenario. The evaluations show promising results with respect to performance and quality.
To improve the plausibility of driving and interaction as well as the perceived realism of agents in interactive media, we extend cognitive traffic agents based on personality profiles with emotions. As a proof of concept, a scenario with a narrowing road was evaluated. To enable agents to handle such scenarios, an existing lane change model was adapted to model the required decision processes and incorporate the driving style defined by static and dynamic aspects of the agents.
Traditionally, traffic simulations are used to predict traffic jams, plan new roads or highways, and estimate road safety. They are also used in computer games and virtual environments. There are two general concepts of modeling traffic: macroscopic and microscopic modeling. Macroscopic traffic models take vehicle collectives into account and do not consider individual vehicles. Parameters like average velocity and density are used to model the flow of traffic. In contrast, microscopic traffic models consider each vehicle individually. Therefore, vehicle-specific parameters are of importance, e.g. current velocity, desired velocity, velocity difference to the lead vehicle, and individual time gap.
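The listed microscopic parameters are exactly those of the Intelligent Driver Model (IDM), a standard car-following model, sketched here with typical textbook defaults (the text does not claim IDM specifically):

```python
def idm_acceleration(v, v_desired, gap, dv, a_max=1.5, b=2.0, s0=2.0, T=1.5):
    """Intelligent Driver Model (IDM) acceleration in m/s^2.

    v: current velocity, v_desired: desired velocity, gap: distance to the
    lead vehicle, dv: velocity difference to the lead vehicle, T: the
    driver's preferred time gap. Defaults are typical textbook values.
    """
    # Desired dynamic gap: minimum distance plus time-gap and braking terms.
    s_star = s0 + max(0.0, v * T + (v * dv) / (2.0 * (a_max * b) ** 0.5))
    return a_max * (1.0 - (v / v_desired) ** 4 - (s_star / gap) ** 2)
```

On a free road the acceleration is positive until the desired velocity is approached; when closing in fast on a lead vehicle it turns strongly negative.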
Underlying semantics are important to facilitate traffic simulations in virtual environments. The work presented here introduces an universal and extensible model for the representation of road network logics. Additionally, setup processes of road network logics following the model are described, and the integration of different traffic simulation approaches is discussed. For evaluation, scenes according to different scenarios were created and semantics were integrated. Results indicate significant time savings from automatic generation of road network logics.
Along with the success of the digitally revived stereoscopic cinema, other events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
To save computational resources, traffic simulations in virtual environments and digital games often remove entities from the simulation once they leave the user's visual field of view. This can lead to inconsistencies within the simulated world and break the immersive effect. To counter this effect, we propose a system consisting of a regular microscopic simulation around the user and an additional, less detailed simulation layer beyond the user's immediate surroundings. The new layer performs a mesoscopic simulation based on the FastLane model, utilizing elements from queuing theory. With this hybrid approach, it is possible to simulate a reasonable number of detailed traffic participants as well as a much larger number of less detailed, but persistent traffic participants. As a result, it is possible to simulate entities with complex behavior close to the user while maintaining reasonable traffic densities throughout the entire system.
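The idea of the less detailed layer can be illustrated with a toy queue-based link update in the spirit of queuing-theory models like FastLane; names and parameters are illustrative, not the FastLane formulation itself:

```python
import collections

def mesoscopic_step(link_queue, now, free_flow_time, outflow_capacity):
    """Release vehicles from a FIFO link queue, capacity-limited per step.

    A vehicle may leave once it has spent at least the free-flow travel
    time on the link; at most `outflow_capacity` vehicles leave per step,
    so congestion emerges as vehicles queue up without individual movement.
    """
    released = []
    while link_queue and len(released) < outflow_capacity:
        vehicle_id, entry_time = link_queue[0]
        if now - entry_time >= free_flow_time:
            link_queue.popleft()
            released.append(vehicle_id)
        else:
            break  # FIFO: the head vehicle has not finished the link yet
    return released
```

Vehicles released near the user would be handed over to the detailed microscopic layer, keeping them persistent across both layers.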
Traffic simulations in current open world video games and driving simulators are still limited with respect to the complexity of the behavior of simulated agents. These limitations are typically due to scarce computational resources, but also to the applied methodologies. We suggest adding cognitive components to traffic agents in order to achieve more realistic behavior, such as opting for risky actions or occasionally breaking traffic rules. To achieve this goal, we start by adding a personality profile to each agent, which is based on the “Five Factor Model” from psychology. We test our enhancement on a specific traffic scenario where simplistic behaviors would lead to a complete standstill of traffic. Our results show that the approach resolves critical situations and keeps traffic flowing.
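One way such a profile can influence driving is sketched below: mapping traits to a gap-acceptance threshold. The weights are illustrative assumptions; the text only states that Five Factor profiles modulate risky actions and occasional rule breaking:

```python
def risk_threshold(profile):
    """Map a Five Factor ("OCEAN") profile with traits in [0, 1] to a
    gap-acceptance threshold in seconds (smaller = riskier driving).

    The weights are illustrative; agreeableness is ignored in this toy
    mapping.
    """
    base = 3.0  # cautious default time gap in seconds
    # High neuroticism and conscientiousness increase caution,
    # high extraversion and openness increase risk-taking.
    caution = 0.5 * profile["neuroticism"] + 1.0 * profile["conscientiousness"]
    daring = 0.8 * profile["extraversion"] + 0.4 * profile["openness"]
    return max(0.5, base + caution - daring)
```

In a deadlocked scenario, agents with low thresholds would eventually accept a small gap and get traffic flowing again.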
Using virtual environment systems for road safety education requires a realistic simulation of road traffic. Current traffic simulations are either too restricted in their complexity of agent behavior or focus on aspects not important in virtual environments. More importantly, none of them are concerned with modeling misbehavior of traffic participants, which is part of everyday traffic and should therefore not be neglected in this context. We present a concept for a traffic simulation that addresses the need for more realistic agent behavior with regard to road safety education. The two major components of this concept are a simulation of persistent agents which minimizes computational overhead and a model of cognitive processes of human drivers combined with psychological personality profiles to allow for individual behavior and misbehavior.
Traffic simulations are generally used to forecast traffic behavior or to simulate non-player characters in computer games and virtual environments. These systems are usually modeled in such a way that traffic rules are strictly followed. However, rule violations are a common part of real-life traffic and thus should be integrated into such models.
Recent advances in commercial technology increase the use of stereoscopy in games. While current applications display existing games in real-time rendered stereoscopic 3D, future games will also feature S3D video as part of the virtual game world, in interactive S3D movies, or for new interaction methods. Compared to the rendering of 2D video within a 3D game scene, displaying S3D video poses technical challenges related to rendering and to adapting the depth range. Until now, such rendering has been possible exclusively on professional hardware not appropriate for gaming. Our approach, Multi-pass Stereoscopic Video Rendering (MSVR), allows presenting stereoscopic video streams within game engines on consumer graphics boards. We further discuss aspects of performance and occlusion of virtual objects. This allows developers and other researchers to easily apply S3D video with current game engines to explore new innovations in S3D gaming.
In our current work, we explore novel gameplay opportunities in stereoscopic 3D (S3D) gaming. Our game prototype, YouDash3D, showcases first results regarding the following challenges: (1) how can stereoscopic gameplay differ from current gameplay in a way that is especially effective with S3D displays, and (2) how can S3D video contribute to interactive gameplay? In conclusion, we propose entertaining S3D video effects and depth-based game mechanics.
The introduction of gestures as a supplementary input modality has become of increasing interest to human computer interaction design, especially for 3D computer environments. This thesis describes the concepts and development of a gesture recognition system based on the machine learning technique of Hidden Markov Models. Well-known from the field of speech recognition, this statistical method is employed in this thesis to represent and recognize predefined gestures. Within this work, gestures are defined as symbols, such as simple geometric shapes or Roman letters. They are extracted from a stream of three-dimensional optical tracking data which is resampled, reduced to 2D and quantized to be used as input to discrete Hidden Markov Models. A set of prerecorded training data is used to learn the parameters of the models and recognition is achieved by evaluating the trained models. The devised system was used to augment an existing virtual reality prototype application which serves as a demonstration and development platform for the VRGeo consortium. The consortium's goal is to investigate and utilize the benefits of virtual reality technology for the oil and gas industry. An isolated test of the system with seven gestures showed accuracies of up to 98.57% and the review from experts in the fields of virtual reality and geophysics was predominantly positive.
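The evaluation step of such a recognizer can be sketched with the standard forward algorithm for discrete HMMs: each quantized gesture sequence is scored against every trained model, and the most likely model wins. The matrices in the test are toy values, not the thesis's trained parameters:

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Score an observation sequence under a discrete HMM (forward algorithm).

    start[i]: initial state probabilities, trans[i][j]: transition
    probabilities from state i to j, emit[i][o]: probability of emitting
    symbol o in state i. Returns the log-likelihood of `obs`.
    """
    # Initialize with the first observation.
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    # Propagate forward through the remaining observations.
    for o in obs[1:]:
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j]
                                  for i in range(len(alpha)))
                 for j in range(len(alpha))]
    total = sum(alpha)
    return math.log(total) if total > 0.0 else float("-inf")
```

A production implementation would work in log space or with scaling to avoid underflow on long sequences, which this short sketch omits.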
Agent systems have many applications, yet current implementations are primarily capable of reproducing rule-conforming or scripted behavior, even when randomized methods are employed. A realistic representation, however, also requires deviations from the rules that occur not randomly but depending on context. Within this research project, a realistic road traffic simulator was implemented that generates these irregular behaviors by means of a precisely defined system for cognitive agents, thereby simulating realistic traffic behavior for use in VR applications. By extending the agents with psychological personality profiles based on the "Five Factor Model", the agents exhibit individualized yet consistent behavior patterns. A dynamic emotion model additionally provides situation-dependent adaptation of behavior, e.g. during long waiting times. Since the detailed simulation of cognitive processes, personality influences, and emotional states demands considerable computational resources, a multi-layered simulation approach was developed that allows the level of detail of each agent's computation and rendering to be adjusted in stages during the simulation, so that all agents in the system can be simulated consistently. In several evaluation iterations within an existing VR application, the applicant's FIVIS bicycle simulator, it was demonstrated that the implemented concepts solve the originally formulated research questions convincingly and efficiently.