Departments, institutes and facilities
- Fachbereich Informatik (30)
- Institute of Visual Computing (IVC) (15)
- Institut für Cyber Security & Privacy (ICSP) (6)
- Fachbereich Ingenieurwissenschaften und Kommunikation (4)
- Institut für Verbraucherinformatik (IVI) (3)
- Institut für Sicherheitsforschung (ISF) (2)
- Fachbereich Angewandte Naturwissenschaften (1)
- Fachbereich Wirtschaftswissenschaften (1)
- Institut für Detektionstechnologien (IDT) (1)
- Institut für funktionale Gen-Analytik (IFGA) (1)
Document Type
- Conference Object (64)
Year of publication
- 2012 (64)
Has Fulltext
- no (64)
Keywords
- Bag of Features (2)
- classifier combination (2)
- clustering (2)
- feature extraction (2)
- machine learning (2)
- object categorization (2)
- All-Swap Algorithm (1)
- CUDA (1)
- Cloud Security (1)
- Cloud Standards (1)
XML Encryption and XML Signature are fundamental security standards that form the core of many applications which need to process XML-based data. With the increased use of XML in distributed systems and platforms such as SOA and Cloud settings, the demand for robust and effective security mechanisms has grown as well. Recent research, however, has uncovered substantial vulnerabilities in these standards as well as in the vast majority of available implementations. Among them, the so-called XML Signature Wrapping attack is one of the most relevant. With the many possible instances of this attack type, it is feasible to defeat security systems relying on XML Signature and to gain access to protected resources, as has lately been demonstrated for various Cloud infrastructures and services. This paper contributes a comprehensive approach to robust and effective XML Signatures for SOAP-based Web Services. An architecture is proposed which integrates the required enhancements to ensure fail-safe and robust signature generation and verification. Following this architecture, a hardened XML Signature library has been implemented. The evaluation results show that the developed concept and library provide the targeted robustness against all known kinds of XML Signature Wrapping attacks. Furthermore, the empirical results underline that these security merits come at low efficiency and performance costs while remaining compliant with the underlying standards.
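The wrapping problem can be illustrated with a small sketch. This is not the paper's library: the guard logic and the `Id` attribute name are assumptions for illustration only. The idea is that after cryptographic verification, the verifier must make sure the Id referenced by the signature resolves to exactly one element, and that this element is the one the application actually processes:

```python
import xml.etree.ElementTree as ET

def find_by_id(root, ref_id):
    """Collect every element carrying the referenced Id attribute.

    A wrapping attack typically duplicates or relocates the signed element,
    so the verifier and the application logic resolve the same Id to
    different elements.
    """
    return [el for el in root.iter() if el.get("Id") == ref_id]

def safe_dereference(root, ref_id, processed_element):
    """Reject ambiguous Id references and verifier/application mismatches."""
    matches = find_by_id(root, ref_id)
    if len(matches) != 1:
        raise ValueError("ambiguous or missing Id reference: %s" % ref_id)
    if matches[0] is not processed_element:
        raise ValueError("signed element is not the element being processed")
    return matches[0]
```

Against a wrapped message, where a second element reuses the signed element's Id, the ambiguity check fires before the attacker-controlled copy can be processed.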
The documentation requirements for data published in long-term archives have grown significantly over the last decade. At WDCC, the data publishing process is assisted by “Atarrabi”, a web-based workflow system in which data authors and the publication agent review and edit metadata. The system ensures high metadata quality for long-term use of the data with persistent identifiers (DOI/URN). Through these well-defined references (DOIs), credit can properly be given to the data producers in any publication.
In this paper we summarize our research on international educational contexts and transfer the results to the context of urban life-long learning. We show that collecting and providing relevant data can help instructors as well as learners to raise their awareness of contextual differences, to develop a higher level of acceptance of those differences, and thus, in the long term, to avoid frustration in educational processes and reduce drop-out rates.
In the context of Internet-based e-Learning, addressing an international audience is a logical consequence. However, due to uncertainty regarding foreign learners, e-Learning programs are often limited to local or national participants. Understanding learners' different expectations regarding instructor support is one step towards enabling providers of educational services to tailor programs that fit the requirements of an international audience. We asked university students in five countries about their expectations of instructor support and found major differences between the investigated countries.
For learners, feedback can be a strong motivator, but when it fails its purpose it can just as well be a strong reason for frustration and dropouts. Do we have to change our locally implemented feedback strategies when adapting learning content from national to international settings? In our study, we investigated learners' understanding and preferences regarding feedback in higher-education scenarios across five national contexts: Bulgaria, Germany, South Korea, Turkey, and Ukraine.
This presentation shows that students in different cultural contexts have different perceptions of time management and work organization. Particularly in group-work scenarios, such differences can be frustrating for students from other cultural contexts because, for example, expectations are not met. Educators who are aware of such differences between learners in a culturally heterogeneous educational scenario can prevent frustration by preparing their students for them and providing more specific instructions.
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in the 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
Traffic simulations for virtual environments are concerned with the behavior of individual traffic participants. The behavior in these simulations is often kept rather simple to abide by the constraints of limited processing resources. Sophisticated traffic simulations also model the behavior of individual traffic participants, but their focus lies on the overall behavior of the entire system, e.g. to identify possible bottlenecks in traffic flow [8].
At previous SIAS conferences, we presented a novel opto-electronic safety sensor system for skin detection at circular saws, jointly developed with the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA). This work presents the results of our subsequent research on a prototype sensor system for more general production machine applications, including robot workplaces. The system uses off-the-shelf LEDs and photodiodes in combination with dedicated optics and a microcontroller system to implement a so-called spectral light curtain.
Traffic simulations are typically concerned with modeling human behavior as closely as possible to create realistic results. In conventional traffic simulations used for road planning or traffic jam prediction only the overall behavior of an entire system is of interest. In virtual environments, like digital games, simulated traffic participants are merely a backdrop to the player’s experience and only need to be “sufficiently realistic”. Additionally, restricted computational resources, typical for virtual environment applications, usually limit the complexity of simulated behavior in this field. More importantly, two integral aspects of real-world traffic are not considered in current traffic simulations from both fields: misbehavior and risk taking of traffic participants. However, for certain applications like the FIVIS bicycle simulator, these aspects are essential.
Traditionally, traffic simulations are used to predict traffic jams, plan new roads or highways, and estimate road safety. They are also used in computer games and virtual environments. There are two general concepts for modeling traffic: macroscopic and microscopic. Macroscopic traffic models consider vehicle collectives rather than individual vehicles; parameters like average velocity and density are used to model the flow of traffic. In contrast, microscopic traffic models consider each vehicle individually, so vehicle-specific parameters are of importance, e.g. current velocity, desired velocity, velocity difference to the lead vehicle, and individual time gap.
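The microscopic parameters listed above are exactly those of the Intelligent Driver Model (IDM), a widely used car-following model; a minimal sketch follows (the IDM is chosen here for illustration and the parameter values are generic defaults, not taken from this work):

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v_desired=13.9,  # desired velocity [m/s] (~50 km/h)
                     t_gap=1.5,       # desired time gap to the lead vehicle [s]
                     s0=2.0,          # minimum standstill gap [m]
                     a_max=1.0,       # maximum acceleration [m/s^2]
                     b=1.5):          # comfortable deceleration [m/s^2]
    """Intelligent Driver Model: acceleration of a following vehicle.

    v      -- current velocity of this vehicle [m/s]
    v_lead -- current velocity of the lead vehicle [m/s]
    gap    -- current bumper-to-bumper gap to the lead vehicle [m]
    """
    dv = v - v_lead  # approach rate: positive while closing in
    # Desired dynamic gap: standstill gap + time-gap term + braking term.
    s_star = s0 + v * t_gap + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v_desired) ** 4
                    - (max(s_star, 0.0) / gap) ** 2)
```

On a free road the model accelerates towards the desired velocity; with a slow lead vehicle at a short gap, the braking term dominates and the acceleration turns strongly negative.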
This paper describes the development of a Pedelec controller whose performance level (PL) conforms to the European standard on safety of machinery [9] and whose software is verified against the EPAC standard [6] by means of a software verification technique called model checking. In compliance with the standard [9], the hardware needs to implement the required properties corresponding to categories "C" and "D". The latter applies if the brakes are not able to bring the vehicle with a broken motor controller to a full stop. The controller therefore needs to implement a test unit which verifies the functionality of the components and, in case of an emergency, shuts the whole hardware down to prevent injury to the cyclist. The MTTFd can be determined from a failure graph, which results from an FMEA, and can be used to prove that the Pedelec controller meets the regulations of the system specification. The analysis of the system in compliance with [9] usually treats the software as a black box, ignoring its inner workings and validating its correctness by means of testing. In this paper we present a temporal logic specification according to [6], based on which the software for the Pedelec controller is implemented, and verify rather than merely test its functionality. By means of model checking [1] we prove that the software fulfills all requirements laid down in its specification.
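The verification idea can be illustrated with a toy explicit-state model check. Everything below is invented for illustration: the controller states, the transition rule, and the safety property are not from the paper, which verifies the actual EPAC-derived specification with a real model checker. The sketch exhaustively explores the reachable state space and either proves a safety property or returns a counterexample path:

```python
from collections import deque
from itertools import product

# Toy controller state: (pedaling, over_speed_limit, fault, motor_on)
def successors(state):
    """All next states: the environment flips freely, the controller
    rule decides the motor output deterministically."""
    nxt = []
    for pedaling, fast, fault in product([False, True], repeat=3):
        # Illustrative rule: assist only while pedaling, below the
        # speed limit, and with no fault detected by the test unit.
        motor = pedaling and not fast and not fault
        nxt.append((pedaling, fast, fault, motor))
    return nxt

def check_safety(initial, bad):
    """Breadth-first search over the reachable states; returns a
    counterexample path to a bad state, or None if the property holds."""
    queue = deque([[initial]])
    seen = {initial}
    while queue:
        path = queue.popleft()
        if bad(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                queue.append(path + [s])
    return None

# Safety property: the motor is never energised while a fault is present.
counterexample = check_safety(
    initial=(False, False, False, False),
    bad=lambda s: s[3] and s[2])
```

Here `counterexample` is `None`, i.e. the property holds for every reachable state; checking a deliberately false property (e.g. "the motor is never on") returns a concrete path that violates it.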
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
(2012)
This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g. live editing and real-time alignment of visual information into 3D footage.
We present a study that investigates user performance benefits of playing video games using 3D motion controllers in 3D stereoscopic vision in comparison to monoscopic viewing. Using the PlayStation 3 game console coupled with the PlayStation Move Controller, we explored five different games that combine 3D stereo and 3D spatial interaction. For each game, quantitative and qualitative measures were taken to determine if users performed better and learned faster in the experimental group (3D stereo display) than in the control group (2D display). A game expertise pre-questionnaire was used to classify participants into beginners and expert game player categories to analyze a possible impact on performance differences. The results show two cases where the 3D stereo display did help participants perform significantly better than with a 2D display. For the first time, we can report a positive effect on gaming performance based on stereoscopic vision, although reserved to isolated tasks and depending on game expertise. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of 3D stereoscopic vision and 3D motion control to enhance game experiences.
Recent advances in digital game technology are making stereoscopic games more popular. Stereoscopic 3D graphics promise a better gaming experience but this potential has not yet been proven empirically. In this paper, we present a comprehensive study that evaluates player experience of three stereoscopic games in comparison with their monoscopic counterparts. We examined 60 participants, each playing one of the three games, using three self-reporting questionnaires and one psychophysiological instrument. Our main results are (1) stereoscopy in games increased experienced immersion, spatial presence, and simulator sickness; (2) the effects strongly differed across the three games and for both genders, indicating more affect on male users and with games involving depth animations; (3) results related to attention and cognitive involvement indicate more direct and less thoughtful interactions with stereoscopic games, pointing towards a more natural experience through stereoscopy.
In the realm of service robots, recovery from faults is indispensable to foster user acceptance. Here, fault is to be understood not in the sense of robot-internal failures, but as interaction faults occurring while the robot is situated in and interacting with an environment (aka external faults). We reason along the most frequent failures in typical scenarios which we observed during real-world demonstrations and competitions with our Care-O-bot III robot. They take place in an apartment-like environment, which is a known, closed world. We suggest four different, for now ad hoc, fault categories caused by disturbances, imperfect perception, inadequate planning, or the chaining of action sequences. The faults are categorized and then mapped to a handful of partly known, partly extended fault handling techniques. Among them, we applied qualitative reasoning, the use of simulation as an oracle, learning for planning (aka enhancement of plan operators) or, in future, case-based reasoning. Having laid out this frame, we mainly ask open questions related to the applicability of the presented approach, amongst them: how to find new categories, how to extend them, how to assure disjointness, and how to identify old and label new faults on the fly.
The work presented in this paper focuses on the comparison of well-known and new techniques for designing robust fault diagnosis schemes in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but with external disturbances and errors from dynamic and complex environments as well.
From 2012 onwards, the development of safety-related control functions must follow the standards EN ISO 13849-1 or EN 62061, which specify requirements for both hardware and software. Until a few years ago, the software requirements played hardly any role, since safety functions were preferably implemented in hardware. Today, however, it is very common to realize safety functions with a suitable programmable safety PLC. Besides the quantification of hardware failure rates of safety functions, the new standards on the safe control of machinery also demand a management of the safety functions. This includes managing the software development for safety functions in order to minimize systematic errors. This software development management is essentially represented by the V-model. For the machine-building industry, this management process must not be too elaborate, otherwise it will hardly be accepted in practice. One way of working through the V-model is presented. This approach is probably still too elaborate for industry.
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the JamaicaVM (JJVM). The basic difference between the two architectures is that the JJVM uses fixed-size blocks for allocating objects on the heap. This means that objects have to be split into several connected blocks if they are bigger than the specified block size, while even a small object occupies a full block. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines; the experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
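The described fixed-size block scheme implies a simple overhead arithmetic, sketched below. The per-block link pointer and its size are assumptions for illustration, not the actual JamaicaVM heap layout:

```python
import math

def blocks_needed(obj_size, block_size, link_bytes=4):
    """Blocks a fixed-block heap needs for an object of obj_size bytes.

    Assumed layout (illustrative only): an object spanning several blocks
    reserves link_bytes in each block for the pointer chaining them; an
    object that fits in one block occupies that block whole.
    """
    if obj_size <= block_size:
        return 1  # small objects still occupy a full block
    payload = block_size - link_bytes  # usable bytes per chained block
    return math.ceil(obj_size / payload)

def overhead_bytes(obj_size, block_size, link_bytes=4):
    """Allocated minus requested bytes: internal fragmentation plus links."""
    return (blocks_needed(obj_size, block_size, link_bytes) * block_size
            - obj_size)
```

For example, with 32-byte blocks a 16-byte object wastes 16 bytes (a full block is allocated), while a 100-byte object chains across 4 blocks and wastes 28 bytes. These are the two overhead sources the abstract mentions: rounding up small objects and chaining large ones.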
This paper describes adaptive time-frequency analysis of EEG signals, both in theory and in practice. A momentary frequency estimation algorithm is discussed and applied to EEG time series of test subjects performing a concentration experiment. The motivation for deriving and implementing a time-frequency estimator is the assumption that an emotional change implies a transient in the measured EEG time series, which in turn is superimposed with biological white noise as well as artifacts. It is shown how accurately and robustly the estimator detects the transient even under such complicated conditions.
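One standard momentary (instantaneous) frequency estimator is the phase derivative of the analytic signal; a minimal NumPy sketch, assuming this FFT-based approach as a stand-in for the paper's estimator (which may differ):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)  # spectral weights: keep DC, double positive freqs
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def momentary_frequency(x, fs):
    """Instantaneous frequency [Hz] from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 256.0                        # a typical EEG sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t)  # clean 10 Hz test tone ("alpha band")
f_est = momentary_frequency(x, fs)
```

On this clean tone the estimate sits at 10 Hz; on real EEG, the noise and artifacts mentioned in the abstract make the raw phase derivative jittery, which is why a robust, adaptive estimator is needed.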
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies, such as interactive 3D games, are becoming attractive for movie theater operators. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is on gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene, which we use to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
This article concerns the design and development of information and communication technology, in particular computer systems, with regard to the demographic transition that will influence user capabilities. It is questionable whether currently deployed computer systems are able to meet the requirements of altered user groups with diversified capabilities. Such an enquiry is necessary given current forecasts suggesting that the average age of employees in enterprises will increase significantly within the next 50-60 years, while the percentage of computer-aided business tasks operated by human individuals rises from year to year. This development will have specific consequences for enterprises regarding the design and application of computer systems: if systems are not adapted to altered user requirements, efficient and productive utilisation could be negatively affected. These consequences motivate extending traditional design methodologies to ensure computer systems that are usable independent of user capabilities.