Benches and Caves
(1998)
The Latest Modeling and Implementation Techniques for an Extended Production Management System
(1998)
Get a KISS - communication infrastructure for streaming services in a heterogeneous environment
(1998)
UTRAN Internet Access
(1999)
Dual Dynamics (DD) is a mathematical model of a behavior control system for mobile autonomous robots. Behaviors are specified through differential equations, forming a global dynamical system made of behavior subsystems which interact in a number of ways. DD models can be directly compiled into executable code. The article (i) explains the model; (ii) sketches the Dual Dynamics Designer (DDD) environment that we use for design, simulation, implementation and documentation; and (iii) illustrates our approach with the example of kicking a moving ball into a goal.
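To make the idea of interacting behavior subsystems concrete, here is a minimal sketch: each behavior has an activation variable governed by a differential equation, and the subsystems interact through mutual inhibition. All names, constants and coupling terms are invented for illustration and are not taken from the DD model or the DDD environment.

```python
import math

TAU = 0.5   # time constant of the activation dynamics (assumed)
DT = 0.01   # Euler integration step (assumed)

def step(act_approach, act_kick, ball_dist):
    """One Euler step of two coupled behavior activations."""
    # 'approach' is excited while the ball is far, 'kick' while it is close;
    # each behavior inhibits the other (the interaction between subsystems)
    d_approach = (-act_approach + math.tanh(ball_dist - 0.3) - 0.5 * act_kick) / TAU
    d_kick     = (-act_kick + math.tanh(0.3 - ball_dist) - 0.5 * act_approach) / TAU
    return act_approach + DT * d_approach, act_kick + DT * d_kick

a, k = 0.0, 0.0
for _ in range(2000):               # simulate 20 s with the ball close by
    a, k = step(a, k, ball_dist=0.05)
print(a < k)                        # prints True: kick dominates near the ball
```

Compiling DD models to executable code amounts to emitting exactly this kind of integration loop for the specified equations.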
The World Wide Web (WWW) offers a huge number of documents dealing with nearly any topic. Thus, search engines and meta search engines are currently the key to finding information. Search engines with crawler-based indexes vary in recall and offer very poor precision. Meta search engines try to overcome these shortcomings with simple methods for information extraction, information filtering and integration of heterogeneous information resources. Only few search engines employ intelligent techniques in order to increase precision.
Frame-Rate Converter and high speed SDRAM Memory Controller for digital Multi-Media-Projectors
(1999)
The Internet Engineering Task Force (IETF) is currently working on the development of Differentiated Services (DiffServ). DiffServ seems to be a promising technology for next-generation IP networks supporting Quality-of-Service (QoS). Emerging applications such as IP telephony and time-critical business applications can benefit significantly from the DiffServ approach, since the current Internet often cannot provide the required QoS. This paper describes an implementation of Differentiated Services for Linux routers and end systems. The implementation is based on the Linux traffic control package and is therefore very flexible. It can be used in different network environments as a first-hop, boundary or interior router for Differentiated Services. In addition to the implementation architecture, the paper describes performance results demonstrating the usefulness of the DiffServ concept in general and the implementation in particular.
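One of the traffic-conditioning building blocks a DiffServ boundary router relies on is a token-bucket meter, which classifies packets as in- or out-of-profile. A minimal sketch of the mechanism (the class name and the rate/burst values are invented, not taken from the Linux traffic control package):

```python
class TokenBucket:
    """Token-bucket meter: tokens accrue at a fixed rate up to a burst limit."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # maximum token credit
        self.tokens = burst_bytes         # start with a full bucket
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        """True if the packet is in-profile at time `now` (seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)   # 1 kB/s, 1500 B burst
print(tb.conforms(0.0, 1500))   # True: the initial burst admits one full packet
print(tb.conforms(0.1, 1500))   # False: only 100 B of tokens have refilled
```

Out-of-profile packets would then be dropped, delayed or remarked to a lower-priority codepoint, depending on the configured per-hop behavior.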
We report the status of a search for pulsars in the Galactic Centre, using a completely revised and improved high-sensitivity double-horn system at 4.85 GHz. We also present calculations about the success rate of periodicity searches for such a survey, showing that, in contrast to conclusions in recent literature, pulsars can indeed be detected at the chosen search frequency.
Providing seamless handoffs is an important task of cellular systems. A user of a real-time conversation on a mobile terminal should not notice when moving from one base station to another. In this paper we address handoff procedures in a scenario where the radio access network is assumed to be IP-based, i.e., IP is used up to the base stations, and the mobile terminal runs a Mobile IP client. First we motivate the need to differentiate between fast handoffs and seamless handoffs. Then we survey some previously proposed micro-mobility extensions, addressing the question of what degree of micro-mobility support is needed in the typical structure of a radio access network. The main part of this paper then discusses network-initiated/assisted handoffs in combination with Mobile IP. Here, we aim to bring together ideas of 2G/3G systems and of IP-based approaches.
Recent developments in the standardization of the future Internet (driven by the IETF) and next generation telecom networks (driven by 3GPP) show a convergence towards each other. While it is currently unknown if and to what extent this development leads to a unified technical approach (in terms of signaling, routing, mobility management, charging and security) for both real-time (voice/video) and non-real-time (data) networks, the vision of an All-IP-based communication environment for all classes of traffic is one relevant option to look at.
Multimedia projectors require sophisticated image processing realized on limited board space. An architecture is presented that combines available components and a dedicated display controller into flexible, compact and cost-efficient display electronics. A basic version of the display controller is available as an ASIC; an advanced version has been prototyped as an FPGA.
This paper presents the current stage of an IP-based architecture for heterogeneous environments, covering UMTS-like W-CDMA wireless access technology, wireless and wired LANs, that is being developed under the aegis of the IST Moby Dick project. This architecture treats all transmission capabilities as basic physical and data-link layers, and attempts to replace all higher-level tasks by IP-based strategies.
In this paper we present a new storytelling approach, called Hypermedia Novel (HYMN), that extends the classical narration concept of a story. We develop an underlying modular concept, the narration module, that facilitates a new manner of reception as well as creation of a story. HYMN addresses both the recipient in his role of consuming a story and a heterogeneous group of creative authors, by providing narration modules and their interfaces without defining the granularity of the modules. Using several kinds of multimedia elements and a hyperlink structure, we present a first demonstrator that implements this new concept. We also discuss improvements, e.g. MPEG-4/7, that support both reception by the audience and the process of creating the story by a dispersed team of authors.
GMD-Robots
(2001)
We present a model checking algorithm for ∀CTL (and full CTL) which uses an iterative abstraction refinement strategy. It terminates at least for all transition systems M that have a finite simulation or bisimulation quotient. In contrast to other abstraction refinement algorithms, we always work with abstract models whose sizes depend only on the length of the formula θ (but not on the size of the system, which might be infinite).
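For a finite transition system, the bisimulation quotient that guarantees termination can be computed by partition refinement. A naive sketch of that computation (the function name and the unlabeled-transition setting are simplifications of my own, not the paper's algorithm):

```python
def bisim_quotient(states, trans):
    """Coarsest bisimulation partition of a finite, unlabeled transition system.

    trans: dict mapping each state to its set of successor states."""
    blocks = [set(states)]            # start with one block containing everything
    changed = True
    while changed:
        changed = False
        for splitter in list(blocks):
            new_blocks = []
            for b in blocks:
                # split b by whether a state can step into the splitter block
                hit = {s for s in b if trans[s] & splitter}
                miss = b - hit
                if hit and miss:
                    new_blocks += [hit, miss]
                    changed = True
                else:
                    new_blocks.append(b)
            if changed:               # a block was split: restart with the
                blocks = new_blocks   # refined partition
                break
    return blocks

# a and b loop to each other, c is a deadlocked sink: {a, b} collapse, c stays apart
trans = {"a": {"b"}, "b": {"a"}, "c": set()}
quotient = bisim_quotient({"a", "b", "c"}, trans)
print(sorted(sorted(b) for b in quotient))   # [['a', 'b'], ['c']]
```

Each split strictly increases the number of blocks, so the loop terminates after at most |states| refinements; efficient implementations (e.g. Paige-Tarjan) do the same refinement with better splitter bookkeeping.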
3rd Generation networks as proposed by 3GPP claim to follow the path towards fixed-mobile convergence and full support of Internet services. Although the providers have obviously recognised the dynamics of the Internet, their attempt to provide IP services over the system has led to a circuit-switched architecture. This forthcoming infrastructure will be a sophisticated, complicated, and quite expensive network, with some IP equipment in the middle (core network). From an IETF-biased engineer's view, some parts of this network and its protocols could be dropped, were they not needed for backward compatibility. But since backward compatibility and protection of investment are major concerns of the standardising bodies, the evolving architectures carry a big burden.
A way of combining a relatively new sensor technology, namely optical analog VLSI devices, with a standard digital omni-directional vision system is investigated. The sensor used is a neuromorphic analog VLSI sensor that estimates the global visual image motion. The sensor provides two analog output voltages that represent the components of the global optical flow vector. The readout is guided by an omni-directional mirror that maps the location of the ball and directs the robot to align its position so that a sensor-actuator module that includes the analog VLSI optical flow sensor can be activated. The purpose of the sensor-actuator module is to operate with a higher update rate than the standard vision system and thus increase the reactivity of the robot for very specific situations. This paper will demonstrate an application example where the robot is a goalkeeper with the task of defending the goal during a penalty kick.
GMD-Robots
(2002)
Clusters of commodity PCs are widely considered as the way to go to improve rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for real-time rendering applications. Major design objectives for our system are: usage of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with a particularly high quality.
Optical distortions, resulting from lens characteristics, non-aligned projection and variations in the light source, decrease the quality of projection displays. Knowledge of the sources and characteristics of these distortions allows their electronic correction. The integration of electronic image correction in the display controller IC allows high quality projection without additional components.
Since the late 1970s, an impressive number of innovative electronic payment systems have been developed and tested commercially. However, the resulting variety and complexity of the systems has turned out to be one of the obstacles to broad acceptance of electronic payment. In this paper we propose a process- and service-oriented framework, which offers a structural and conceptual orientation in the field of electronic payment. It enables an integral view of electronic payment that goes beyond the frame of an individual system. To this end, we have generalized the systems-oriented approaches to a phase-oriented payment model. Using this model, requirements and supporting services for electronic payment can be classified systematically and described from both the customers' and the merchants' viewpoint. Providing integrated payment processes and services is proving to be a difficult task. With this paper we outline the necessity for a Payment Service Provider to act as a mediator between suppliers and users of electronic payment systems.
The MoMoSat service will enable mobile end-users to view, manage, annotate, and communicate map-based information in the field. The handled information consists of a huge volume of raster data (satellite or aerial images) and vector data (e.g. street networks, cadastral maps or points of interest), as well as geo-referenced textual notes (the so-called 'GeoNotes') and real-time voice.
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
Augmented Perception - AuPer
(2004)
In interactive graphics it is often necessary to introduce large changes in the image in response to updated information about the state of the system. Updating the local state immediately would lead to a sudden transient change in the image, which could be perceptually disruptive. However, introducing the correction gradually using smoothing operations increases latency and degrades precision. It would be beneficial to be able to introduce graphic updates immediately if they were not perceptible. In this paper, saccade-contingent updates are exploited to hide graphic updates during the period of visual suppression that accompanies a rapid, or saccadic, eye movement. Sensitivity to many visual stimuli is known to be reduced during a change in fixation compared to when the eye is still. For example, motion of a small object is harder to detect during a rapid eye movement (saccade) than during a fixation. To evaluate whether these findings generalize to large scene changes in a virtual environment, gaze behavior in a 180-degree hemispherical display was recorded and analyzed. This data was used to develop a saccade detection algorithm adapted to virtual environments. The detectability of trans-saccadic scene changes was evaluated using images of high-resolution real-world scenes. The images were translated by 0.4, 0.8 or 1.2 degrees of visual angle during horizontal saccades. The scene updates were rarely noticeable for saccades with a duration greater than 58 ms. The detection rate for the smallest translation was just 6.25%. Qualitatively, even when trans-saccadic scene changes were detectable, they were much less disturbing than equivalent changes in the absence of a saccade.
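Saccade detection of the kind described above is commonly done by thresholding gaze velocity. An illustrative sketch, not the algorithm the paper develops; the sampling rate and threshold value are assumptions:

```python
SAMPLE_RATE_HZ = 250            # assumed eye-tracker sampling rate
VEL_THRESHOLD_DEG_S = 100.0     # gaze speed above this counts as a saccade

def detect_saccades(gaze_deg):
    """Return (start, end) sample-index pairs of detected saccades.

    gaze_deg: list of (x, y) gaze positions in degrees of visual angle."""
    saccades, start = [], None
    for i in range(1, len(gaze_deg)):
        dx = gaze_deg[i][0] - gaze_deg[i - 1][0]
        dy = gaze_deg[i][1] - gaze_deg[i - 1][1]
        speed = (dx * dx + dy * dy) ** 0.5 * SAMPLE_RATE_HZ   # deg/s
        if speed > VEL_THRESHOLD_DEG_S:
            start = i if start is None else start   # saccade begins (or continues)
        elif start is not None:
            saccades.append((start, i))             # saccade ended at sample i
            start = None
    if start is not None:
        saccades.append((start, len(gaze_deg)))
    return saccades

# fixation, a 10-degree horizontal jump spread over 5 samples, then fixation
trace = [(0.0, 0.0)] * 20 + [(2.0 * j, 0.0) for j in range(1, 6)] + [(10.0, 0.0)] * 20
print(detect_saccades(trace))   # [(20, 25)]
```

A renderer would apply the pending scene update only while such a detected saccade is in progress, so the change lands inside the window of saccadic suppression.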
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil & gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations, due to the limited scalability of their graphics performance. Recently there is a trend to use networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory: all data and commands have to be sent across the network. It turns out that removing the network bottleneck is a challenging problem in this context. In this article we present some approaches for network-aware parallel rendering on commodity hardware. These strategies are technological as well as algorithmic solutions.
Most VE frameworks try to support many different input and output devices. They do not concentrate so much on the rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers. This includes sound renderers, physics renderers and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and MacOS X.
Motion parameters estimation of moving objects and ego motion applying an active camera system
(2004)
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
This paper describes the work done at our lab to improve visual and other quality of Virtual Environments. To achieve better quality we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework. Although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho to have a small kernel and several plug-ins.
We present basho, a lightweight and easily extendable virtual environment (VE) framework. Key benefits of this framework are independence of the scene element representation and the rendering API. The main goal was to make VE applications flexible without the need to change them, not only by being independent of input and output devices. As an example, with basho it is possible to switch from local illumination models to ray tracing by just replacing the renderer, or to replace the graphical representation of the scene elements without changing the application. Furthermore it is possible to mix rendering technologies within a scene. This paper emphasises the abstraction of the scene element representation.
We present a universal modular robot architecture. A robot consists of the following intelligent modules: central control unit (CCU), drive, actuators, a vision unit and sensor input unit. Software and hardware of the robot fit into this structure. We define generic interface protocols between these units. If the robot has to solve a new application and is equipped with a different drive, new actuators and different sensors, only the program for the new application has to be loaded into the CCU. The interfaces to the drive, the vision unit and the other sensors are plug-and-play interfaces. The only constraint for the CCU-program is the set of commands for the actuators.
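The plug-and-play idea above can be sketched as a small routing layer: the CCU program depends only on a generic command interface, and concrete drive or actuator modules are registered at startup. All class and method names here are illustrative inventions, not the interface protocols defined in the paper:

```python
class Module:
    """Generic interface every intelligent module implements."""
    def handle(self, command, *args):
        raise NotImplementedError

class DifferentialDrive(Module):
    def handle(self, command, *args):
        if command == "move":
            speed, turn = args
            return f"wheels set to speed={speed}, turn={turn}"
        raise ValueError(f"unknown drive command: {command}")

class GripperActuator(Module):
    def handle(self, command, *args):
        if command == "grip":
            return "gripper closed"
        raise ValueError(f"unknown actuator command: {command}")

class CCU:
    """Central control unit: routes commands to whatever modules are plugged in."""
    def __init__(self):
        self.modules = {}

    def plug_in(self, name, module):
        self.modules[name] = module      # plug-and-play registration

    def send(self, name, command, *args):
        return self.modules[name].handle(command, *args)

ccu = CCU()
ccu.plug_in("drive", DifferentialDrive())
ccu.plug_in("actuator", GripperActuator())
print(ccu.send("drive", "move", 0.5, 0.1))   # wheels set to speed=0.5, turn=0.1
print(ccu.send("actuator", "grip"))          # gripper closed
```

Swapping in a different drive or sensor then means registering a new module under the same name; the CCU application program is untouched, which is exactly the constraint the abstract describes.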