Perception is a central aspect of cognition, since it forms the basis for subsequent decision-making processes. In this contribution, we present the overall architecture of our synthetic perception for agents framework (SynPeA) for simulating a virtual entity's perception. We discuss aspects of modeling visual sensation and propose mechanisms for virtual sensors and memory. Different visual sensing approaches are compared on an artificial evaluation scenario, and the evaluations show promising results with respect to both performance and quality.
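The abstract mentions virtual sensors for visual sensation without fixing a concrete mechanism. A common, inexpensive approximation in agent simulations is a view-cone test: an object is sensed when it lies within a maximum distance and within half the field-of-view angle of the gaze direction. The sketch below illustrates this idea; all names and parameters are illustrative assumptions, not part of SynPeA itself.

```python
import math

def in_view_cone(agent_pos, gaze_dir, obj_pos, max_dist=20.0, fov_deg=120.0):
    """Return True if obj_pos is within the agent's view cone.

    Hypothetical sketch: 2D positions, a gaze direction vector, a
    maximum sensing distance, and a full field-of-view angle in degrees.
    """
    dx = obj_pos[0] - agent_pos[0]
    dy = obj_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True  # object at the agent's own position counts as seen
    if dist > max_dist:
        return False
    # Angle between the gaze direction and the direction to the object.
    gx, gy = gaze_dir
    glen = math.hypot(gx, gy)
    cos_angle = (dx * gx + dy * gy) / (dist * glen)
    cos_angle = max(-1.0, min(1.0, cos_angle))  # guard against rounding
    return math.degrees(math.acos(cos_angle)) <= fov_deg / 2.0
```

Such a geometric test is cheap enough to run per agent per frame, which matters once many agents perceive a shared scene; more faithful sensing (e.g. occlusion checks via ray casts) can be layered on top of it.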
Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior
(2017)
Simulations and serious games require realistic behavior of multiple intelligent agents in real-time. One particular issue is how attention and multi-modal sensory memory can be modeled in a natural but effective way, such that agents controllably react to salient objects or are distracted from their current intention by other multi-modal cues. We propose a conceptual framework that provides a solution with adherence to three main design goals: natural behavior, real-time performance, and controllability. As a proof of concept, we implement three major components and showcase their effectiveness in a real-time game engine scenario. Within this scenario, a visual sensor is combined with static saliency probes and auditory cues. The attention model weighs bottom-up attention against intention-related top-down processing, which a designer can control via memory and attention-inhibitor parameters. We demonstrate the approach and discuss future extensions.
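The abstract describes weighing bottom-up saliency against intention-related top-down relevance, gated by designer-tunable memory and inhibitor parameters. One minimal way to express such a trade-off is a convex combination with a memory-decay factor, as in the sketch below; the function name, formula, and parameter ranges are illustrative assumptions, not the paper's actual model.

```python
def attention_score(saliency, relevance, inhibitor=0.5,
                    memory_decay=0.9, age=0):
    """Combine bottom-up and top-down attention for one stimulus.

    Hypothetical sketch: `saliency` is the stimulus's bottom-up
    strength, `relevance` its intention-related top-down weight,
    `inhibitor` in [0, 1] shifts the balance toward top-down focus,
    and `memory_decay ** age` fades stimuli that have lingered in
    sensory memory for `age` update ticks.
    """
    bottom_up = (1.0 - inhibitor) * saliency
    top_down = inhibitor * relevance
    return (bottom_up + top_down) * (memory_decay ** age)

# A task-focused agent (high inhibitor) attends less to a loud but
# irrelevant cue than a distractible agent (low inhibitor) does:
focused = attention_score(saliency=0.9, relevance=0.1, inhibitor=0.9)
distracted = attention_score(saliency=0.9, relevance=0.1, inhibitor=0.1)
```

In this formulation the inhibitor is the single knob a designer turns to make agents more or less distractible, matching the controllability goal stated above.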