H-BRS Bibliography
In this paper we present a new storytelling approach, called Hypermedia Novel (HYMN), that extends the classical narration concept of a story. We develop an underlying modular concept – the narration module – that facilitates a new manner of reception as well as creation of a story. The HYMN focuses both on the recipient in the role of consuming a story and on a heterogeneous group of creative authors, by providing narration modules and their interfaces without prescribing the granularity of the modules. Using several kinds of multimedia elements and a hyperlink structure, we present a first demonstrator that implements this new concept. We also discuss improvements, e.g. based on MPEG-4/7, that support both reception by the audience and the process of creating the story by a dispersed team of authors.
Clusters of commodity PCs are widely considered as the way to go to improve rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for real-time rendering applications. Major design objectives for our system are: usage of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with a particularly high quality.
Since the late 1970s, an impressive number of innovative electronic payment systems have been developed and tested commercially. However, the resulting variety and complexity of these systems has turned out to be one of the obstacles to the broad acceptance of electronic payment. In this paper we propose a process- and service-oriented framework which offers structural and conceptual orientation in the field of electronic payment. It enables an integral view of electronic payment that goes beyond the frame of an individual system. To this end, we have generalized the system-oriented approaches into a phase-oriented payment model. Using this model, requirements and supporting services for electronic payment can be classified systematically and described from both the customers' and the merchants' viewpoints. Providing integrated payment processes and services is proving to be a difficult task. With this paper we outline the necessity for a Payment Service Provider to act as a mediator between suppliers and users of electronic payment systems.
The MoMoSat service will enable mobile end-users to view, manage, annotate, and communicate map-based information in the field. The handled information consists of a huge volume of raster data (satellite or aerial images) and vector data (e.g. street networks, cadastral maps or points of interest), as well as geo-referenced textual notes (the so-called 'GeoNotes') and real-time voice.
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
Augmented Perception - AuPer
(2004)
In interactive graphics it is often necessary to introduce large changes in the image in response to updated information about the state of the system. Updating the local state immediately would lead to a sudden transient change in the image, which could be perceptually disruptive. However, introducing the correction gradually using smoothing operations increases latency and degrades precision. It would be beneficial to be able to introduce graphic updates immediately if they were not perceptible. In this paper, saccade-contingent updates are exploited to hide graphic updates during the period of visual suppression that accompanies a rapid, or saccadic, eye movement. Sensitivity to many visual stimuli is known to be reduced during a change in fixation compared to when the eye is still. For example, motion of a small object is harder to detect during a rapid eye movement (saccade) than during a fixation. To evaluate whether these findings generalize to large scene changes in a virtual environment, gaze behavior in a 180-degree hemispherical display was recorded and analyzed. This data was used to develop a saccade detection algorithm adapted to virtual environments. The detectability of trans-saccadic scene changes was evaluated using images of high-resolution real-world scenes. The images were translated by 0.4, 0.8 or 1.2 degrees of visual angle during horizontal saccades. The scene updates were rarely noticeable for saccades with a duration greater than 58 ms. The detection rate for the smallest translation was just 6.25%. Qualitatively, even when trans-saccadic scene changes were detectable, they were much less disturbing than equivalent changes in the absence of a saccade.
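The saccade detection step described above can be sketched with a minimal velocity-threshold detector. The function name, the sample format (timestamps in seconds, gaze angles in degrees) and the 100°/s threshold are illustrative assumptions, not the paper's tuned algorithm:

```python
def detect_saccades(timestamps, gaze_angles, velocity_threshold=100.0):
    """Label each inter-sample interval as saccadic when the angular
    gaze velocity (deg/s) exceeds a fixed threshold.  This is the
    classic velocity-threshold scheme; a production detector would
    also smooth the signal and enforce minimum saccade durations."""
    saccadic = []
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]
        velocity = abs(gaze_angles[i] - gaze_angles[i - 1]) / dt
        saccadic.append(velocity > velocity_threshold)
    return saccadic
```

A trans-saccadic scene update would then be applied only while the detector reports a saccade in progress.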
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil and gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations. This is due to the limited scalability of their graphics performance. Recently there has been a trend to use networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory: all data and commands have to be sent across the network. It turns out that removing the network bottleneck is a challenging problem in this context. In this article we present some approaches to network-aware parallel rendering on commodity hardware. These strategies comprise technological as well as algorithmic solutions.
Most VE frameworks try to support many different input and output devices. They do not concentrate so much on the rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers. This includes sound renderers, physics renderers and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and MacOS X.
Motion parameters estimation of moving objects and ego motion applying an active camera system
(2004)
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
This paper describes the work done at our lab to improve the visual and other quality of Virtual Environments. To achieve better quality, we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework. Although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho with a small kernel and several plug-ins.
We present basho, a lightweight and easily extendable virtual environment (VE) framework. Key benefits of this framework are independence from the scene element representation and from the rendering API. The main goal was to make VE applications flexible without the need to change them, not only by being independent of input and output devices. As an example, with basho it is possible to switch from local illumination models to ray tracing by just replacing the renderer, or to replace the graphical representation of the scene elements without the need to change the application. Furthermore it is possible to mix rendering technologies within a scene. This paper emphasises the abstraction of the scene element representation.
A generic approach to describing shape and topography of arbitrary objects is presented, using linguistic variables to combine different features in one fuzzy descriptor. Although the origin of the method lies in molecular visualization and drug design, it can be applied in principle to any surface represented by a polygon mesh. Two approaches to shape description are presented that both lead to linguistic variables that can be used for surface segmentation by means of shape: One approach is based on the calculation of canonical curvatures, the other describes the "embeddedness" of a surface area related to the overall geometry of a 3D object.
The objective of the presented approach is to develop a 3D-reconstruction method for micro-organisms from sequences of microscopic images obtained by varying the level of focus. The approach is limited to translucent silicate-based marine and freshwater organisms (e.g. radiolarians). The proposed 3D-reconstruction method exploits the connectivity of similarly oriented and spatially adjacent edge elements in consecutive image layers. This yields a 3D mesh representing the global shape of the objects together with details of the inner structure. Possible applications can be found in comparative morphology or hydrobiology, where e.g. deficiencies in growth and structure during incubation in toxic water or gravity effects on metabolism have to be determined.
We present herein a new class of resin formulations for stereolithography, named FlexSL, with a broad bandwidth of tunable mechanical properties. The novel polyether(meth)acrylate-based material class has outstanding material characteristics in combination with the advantages of being a biocompatible (meth)acrylate-based processing material. FlexSL shows very promising results in several initial biocompatibility tests. This emphasizes its non-toxic behavior in a biomedical environment, caused mainly by the (meth)acrylate-based core components. A short overview of mechanical and processing properties is given at the end. The novel FlexSL materials presented herein show a significantly lower cytotoxicity than commercially applied acrylic stereolithography resins. Further biocompatibility tests according to ISO 10993 protocols are planned. On the one hand, there are technical applications for this material (e.g. flaps, tubes, hoses, cables, sealing parts, connectors and other technical rubber-like applications); on the other hand, broad fields of potential biomedical applications in which the FlexSL materials can be beneficial are obvious, especially small-series production of medical products with special flexible material requirements. In addition, the usage for individual soft hearing aid shells, intra-operative planning services and tools like intra-op cutting templates and sawing guides is very attractive. The possibility to modify the FlexSL resins for high-resolution applications as well now makes it possible to manufacture very flexible micro-prototypes with outstanding material characteristics and very fine structures with a minimum resolution of 20 µm and a minimum layer thickness of 5 µm. These resin formulations are applicable and adjustable to other stereolithographic equipment available on the market.
The paper proposes a bond graph approach to model-based fault detection and isolation (FDI) that uses residual sinks. These elements couple a reference model of a process engineering system to a bond graph model of the system that is subject to disturbances caused by faults. In this paper it is assumed that two faults do not appear simultaneously. The underlying mathematical model is a system of Differential Algebraic Equations (DAEs). The approach is illustrated by means of the frequently used hydraulic two-tank system.
Bond Graph Modelling and Simulation of Mechatronic Systems: An Introduction into the Methodology
(2006)
This paper introduces a graphical, computer-aided modelling methodology that is particularly suited to the concurrent design of mechatronic systems, viz. engineering systems with mechanical, electrical, hydraulic or pneumatic components, including interactions of physical effects from various energy domains. Beyond the introduction, bond graph modelling of multibody systems, as an example of an advanced topic, is briefly addressed in order to demonstrate the potential of this powerful approach to modelling mechatronic systems. It is outlined how models of multibody systems, including flexible bodies, can be built in a systematic manner.
Swedish wheeled mobile robots have remarkable mobility properties allowing them to rotate and translate at the same time. Being holonomic systems, their kinematics model results in the possibility of designing separate and independent position and heading trajectory tracking control laws. Nevertheless, if these control laws should be implemented in the presence of unaccounted actuator saturation, the resulting saturated linear and angular velocity commands could interfere with each other thus dramatically affecting the overall expected performance. Based on Lyapunov’s direct method, a position and heading trajectory tracking control law for Swedish wheeled robots is developed. It explicitly accounts for actuator saturation by using ideas from a prioritized task based control framework.
In this paper, we describe an approach to academic teaching in computer science using storytelling as a means for background research to hypermedia and virtual reality topics. It is shown that narrative activity within the context of a Hypermedia Novel related to educational content can enhance motivation for self-conducted learning and in parallel lead to an edutainment system of its own. The narrative practice and background research as well as the resulting product can supplement lecture material with comparable success to traditional academic teaching approaches.
In this contribution a machine vision inspection system is presented which is designed as a length-measuring sensor. It has been developed to be applied to a range of heat shrink tubes varying in length, diameter and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it is to be deployed in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for a contact-free and fully automatic control of 100% of the produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of more than 99.7%.
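The multi-measurement strategy, measuring each segment repeatedly while it crosses the field of view, pays off statistically: averaging n independent measurements shrinks the standard error by a factor of √n. A minimal sketch of such a fusion step (the function name is a hypothetical illustration, not part of the described system):

```python
from statistics import mean, stdev

def fuse_measurements(samples):
    """Combine repeated length measurements (in mm) of one tube
    segment.  Returns the averaged length and an estimate of its
    standard error, which drops as 1/sqrt(n)."""
    n = len(samples)
    avg = mean(samples)
    std_err = stdev(samples) / n ** 0.5 if n > 1 else float("nan")
    return avg, std_err
```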
Phase Space Rendering
(2007)
GL-Wrapper for Stereoscopic Rendering of Standard Applications for a PC-based Immersive Environment
(2007)
The article presents a solution for detecting the rotor position at standstill for all types of permanent magnet brushless dc motors. The solution provides a both secure and fast method for starting the brushless motor that is independent of the sensorless control scheme used. Nonlinearities found in standard three-phase permanent magnet dc motors are used to derive the rotor position at standstill. The described solution assumes that the neutral point of the three-phase star motor windings is accessible.
In the presented project, new approaches for the prevention of hand movements leading to hazards and for the non-contact detection of fingers are intended to permit comprehensive and economical protection on circular saws. The basic principles may also be applied to other machines with manual loading and/or unloading. Two new detection principles are explained. The first is the distinction between skin and wood or other materials by spectral analysis in the near-infrared region. Using LEDs and photodiodes it is possible to detect fingers and hands reliably. With a kind of light curtain, intrusion into the dangerous zone near the blade can be prevented. The second principle is video image processing to detect persons, arms and fingers. In the first stage of development, the detection of upper limb extremities within a defined hazard area by means of computer-based video image analysis is investigated.
Programmable logic controllers (PLCs) have traditionally been used on numerous machines to control the various processing and manufacturing steps. In response to an initiative by the Expert Committee Chemical Industry (FA CH), suitable metrics have been defined in order to permit a rapid static analysis of comprehensive PLC programs. A tool was developed for this purpose which is also able to determine quality criteria. In addition to seven of the new metrics, five metrics derived from Halstead and McCabe are used to determine the quality criteria: testability, self-description, legibility and simplicity. New concepts were developed to determine and calculate these criteria. The results are documented in hyper-language files with links to the called modules and control graphs for each function. The tool was validated on three voluminous PLC programs, two of which stem from industrial projects.
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close to reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualization environment which provides visual data relative to the motion and activity of the test bicycle. This implies the bike rider has to pedal and steer the bicycle as they would a traditional bicycle, while forward motion is recorded and processed to control the visualization. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, using a feedback system makes the movements of the platform reflect the virtual environment and the reaction of the driver (e.g. steering angle, step rate).
In this paper we present an ongoing research work dedicated to a Virtual-Reality-based product customization application development. The work is addressing the problem of flexible and quick customization of products from a great number of parts. Our application is an effective instrument that can be simultaneously used by two users for rapid assembly tasks, allowing engineers and designers to work collaboratively. Furthermore, it is directly connected to a manufacturing environment, which is able to produce the product right after customization. In the paper we describe the architecture of the application, our interaction and assembly techniques, and explain how the system can be integrated into a manufacturing environment.
Video surveillance is at the center of research due to the high importance of safety and security issues. Usually, humans have to monitor an area, often for 24 hours a day. Thus, it would be desirable to have automatic surveillance systems that support this job. The system described in this paper is such an automatic surveillance system; it has been developed to detect several dangerous situations in a subway station. This paper discusses the high-level module of the system, in which an expert system is used to detect events.
Today's Virtual Environment frameworks use scene graphs to represent virtual worlds. We believe that this is a proper technical approach, but a VE framework should model its application area as accurately as possible. Therefore a scene graph is not the best way to represent a virtual world. In this paper we present an easily extensible model to describe entities in the virtual world. Furthermore, we show how this model drives the design of our VE framework and how it is integrated.
A Low-Cost Based 6 DoF Head Tracker for Usability Application Studies in Virtual Environments
(2008)
In this paper, we present XPERSim, a 3D simulator built on top of open source components that enables users to quickly and easily construct an accurate and photo-realistic simulation for robots of arbitrary morphology and their environments. While many existing robot simulators provide a good dynamics simulation, they often lack the high quality visualization that is now possible with general-purpose hardware. XPERSim achieves such visualization by using the Object-Oriented Graphics Rendering Engine 3D (Ogre) engine to render the simulation whose dynamics are calculated using the Open Dynamics Engine (ODE). Through XPERSim’s integration into a component-based software integration framework used for robotic learning by experimentation, XPERSIF, and the use of the scene-oriented nature of the Ogre engine, the simulation is distributed to numerous users that include researchers and robotic components, thus enabling simultaneous, quasi-realtime observation of the multiple-camera simulations.
Parallel systems leverage parallel file systems to efficiently perform I/O to shared files. These parallel file systems utilize different client-server communication and file data distribution strategies to optimize the access to data stored in the file system. In many parallel file systems, clients access data that is striped across multiple I/O devices or servers. Striping, however, results in poor access performance if the application generates a different stride pattern. This work analyzes optimization approaches of different parallel file systems and proposes new strategies for the mapping of clients to servers and the distribution of file data with special respect to strided data access. We evaluate the results of a specific approach in a parallel file system for main memory.
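The mismatch between striping and strided access that this work targets is easy to see from the standard round-robin mapping of a byte offset to a server. The sketch below is a generic illustration of that mapping, not the specific file system discussed:

```python
def locate_block(offset, stripe_size, num_servers):
    """Map a byte offset in a striped file to (server index, offset
    inside that server's local file) under round-robin striping."""
    stripe_index = offset // stripe_size
    server = stripe_index % num_servers
    local_offset = (stripe_index // num_servers) * stripe_size + offset % stripe_size
    return server, local_offset
```

With this layout, a client whose access stride equals stripe_size * num_servers lands on the same server every time, serializing what should be parallel I/O; that is the kind of pathological pattern the proposed mapping and distribution strategies aim to avoid.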
The goal of this work is to develop an integration framework for a robotic software system which enables robotic learning by experimentation within a distributed and heterogeneous setting. To meet this challenge, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Additionally, the successful integration of the XPERSim simulator into the system has enabled simultaneous quasi-realtime observation of the simulation by numerous, distributed users.
Results Obtained with a Semi-Lagrangian Mass-Integrating Transport Algorithm by Using the GME Grid
(2008)
Nearest Neighbor Search (NNS) is employed by many computer vision algorithms. The computational complexity is large and constitutes a challenge for real-time capability. The basic problem is in rapidly processing a huge amount of data, which is often addressed by means of highly sophisticated search methods and parallelism. We show that NNS based vision algorithms like the Iterative Closest Points algorithm (ICP) can achieve real-time capability while preserving compact size and moderate energy consumption as it is needed in robotics and many other domains. The approach exploits the concept of general purpose computation on graphics processing units (GPGPU) and is compared to parallel processing on CPU. We apply this approach to the 3D scan registration problem, for which a speed-up factor of 88 compared to a sequential CPU implementation is reported.
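At the core of ICP sits the nearest-neighbor query that this work offloads to the GPU. A minimal sequential reference in pure Python (the function name is a hypothetical illustration) makes the O(n) per-query cost explicit:

```python
from math import dist

def nearest_neighbor(query, points):
    """Return (index, distance) of the point in `points` closest to
    `query`.  ICP issues one such query per source point in every
    iteration, which is why this loop dominates the runtime and is
    worth parallelizing on a GPU."""
    best_i = min(range(len(points)), key=lambda i: dist(query, points[i]))
    return best_i, dist(query, points[best_i])
```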
Keeping planning problems as small as possible is a must in order to cope with complex tasks and environments. Earlier, we have described a method for cascading Description Logic (dl) representation and reasoning on the one hand, and Hierarchical Task Network (htn) action planning on the other. The planning domain description as well as the fundamental htn planning concepts are represented in dl and can therefore be subject to dl reasoning. From these representations, concise planning problems are generated for htn planners. We show by way of case study that this method yields significantly smaller planning problem descriptions than regular representations do in htn planning. The method is presented through a case study of a robot navigation domain and the blocks world domain. We present the benefits of using this approach in comparison with a pure htn planning approach.
This paper presents an approach to estimate the ego-motion of a robot while moving. The employed sensor is a Time-of-Flight (ToF) camera, the SR3000 from Mesa Imaging. ToF cameras provide depth and reflectance data of the scene at high frame rates. The proposed method utilizes the coherence of depth and reflectance data of ToF cameras by detecting image features on reflectance data and estimating the motion on depth data. The motion estimate of the camera is fused with inertial measurements to gain higher accuracy and robustness. The result of the algorithm is benchmarked against reference poses determined by matching accurate 2D range scans. The evaluation shows that fusing the pose estimate with the data from the IMU improves the accuracy and robustness of the motion estimate against distorted measurements from the sensor.
Ray Tracing, accurate physical simulations with collision detection, particle systems and spatial audio rendering are only a few components that become more and more interesting for Virtual Environments due to the steadily increasing computing power. Many components use geometric queries for their calculations. To speed up those queries spatial data structures are used. These data structures are mostly implemented for every problem individually resulting in many individually maintained parts, unnecessary memory consumption and waste of computing power to maintain all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination it is possible to simulate interactively effects from geometric optics, like correct reflections and refractions.
3D tracking using multiple Nintendo Wii Remotes: a simple consumer hardware tracking approach
(2009)
An easy to build and cost-effective 3D tracking solution is presented, using Nintendo Wii Remotes acting as cameras. As the hardware differs from usual tracking cameras, the calibration and tracking process has to be adapted accordingly. The tracking approach described could be used for tracking the user's motions in video games based upon physical activity (sports, fighting or dancing games), allowing the player to interact with the game in a more intuitive way than by just pressing buttons.
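With two calibrated Wii Remote cameras, a marker position follows from intersecting the two viewing rays; since measured rays rarely intersect exactly, a common choice is the midpoint of the shortest segment between them. A sketch under that assumption (camera centres p and ray directions d are taken as already calibrated; the function name is illustrative, not from the described system):

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation of two 3D rays p + t*d: find the
    closest points on each ray and return the midpoint between
    them.  Fails for (near-)parallel rays, where denom -> 0."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    w0 = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * u for p, u in zip(p1, d1))
    q2 = tuple(p + s * u for p, u in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```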
How Does Self-Perception Influence the Choice of Study? E-Portfolio and Gender Issues in Informatics
(2009)
Construction kit for low-cost vibration analysis systems based on low-cost acceleration sensors
(2009)
Kinetic Inductance Detectors with Integrated Antennas for Ground and Space-Based Sub-mm Astronomy
(2009)
Very large arrays of Microwave Kinetic Inductance Detectors (MKIDs) have the potential to revolutionize ground- and space-based astronomy. They can offer in excess of 10,000 pixels with large dynamic range and very high sensitivity, in combination with very efficient frequency-division multiplexing at GHz frequencies. In this paper we present the development of a 400-pixel MKID demonstration array, including optical coupling, sensitivity measurements, beam pattern measurements and readout. The design presented can be scaled to any frequency between 80 GHz and >5 THz because there is no need for superconducting structures that become lossy at frequencies above the gap frequency of the materials used; the latter would limit the frequency coverage to below 1 THz for relatively high-gap materials such as NbTiN. The individual pixels of the array consist of a distributed aluminium CPW MKID with an integrated twin slot antenna at its end. The antenna is placed in the second focus of an elliptical high-purity Si lens. The lens-antenna coupling design allows room for the MKID resonator outside of the focal point of the lens. The best dark noise equivalent power of these devices is measured to be NEP = 7×10⁻¹⁹ W/√Hz, and the optical coupling efficiency is around 30%, with no antireflection coating used on the Si lens. For the readout we use a commercial arbitrary waveform generator and a 1.5 GHz FFTS. We show that using this concept it is possible to read out in excess of 400 pixels with one board and one pair of coaxial cables.
GREAT, the German REceiver for Astronomy at THz frequencies, has successfully passed its pre-shipment acceptance review conducted by DLR and NASA on December 4-5, 2008. Shipment to DAOF/Palmdale, home of the SOFIA observatory, has been released; airworthiness was stated by NASA. Since then, due to schedule slips at the SOFIA project level, the first science flights with GREAT have been delayed to mid-2010. Here we present GREAT's short science flight configuration: two heterodyne channels will be operated simultaneously in the frequency ranges of 1.25-1.50 and 1.82-1.91 THz, respectively, driven by solid-state local oscillator systems and supported by a wide suite of back-ends. The receiver was extensively tested for about six months in the MPIfR labs, showing performance compliant with specifications. This short science configuration will be available to the interested SOFIA user communities in collaboration with the GREAT PI team during SOFIA's upcoming Basic Science flights.
We review the development of our digital broadband Fast Fourier Transform Spectrometers (FFTS). In just a few years, FFTS back-ends - optimized for a wide range of radio astronomical applications - have become a new standard for heterodyne receivers, particularly in the mm and sub-mm wavelength range. They offer high instantaneous bandwidths with many thousands of spectral channels on a small electronic board (100 x 160 mm). Our FFT spectrometers make use of the latest versions of GHz analog-to-digital converters (ADCs) and the most complex field-programmable gate array (FPGA) chips commercially available today. These state-of-the-art chips have made it possible to build digital spectrometers with instantaneous bandwidths up to 1.8 GHz and 8192 spectral channels.