H-BRS Bibliography
Conference Objects, year of publication 2012 (35 entries, no fulltext available)
This paper compares the memory allocation of two Java virtual machines: the Oracle Java HotSpot VM 32-bit (OJVM) and the Jamaica JamaicaVM (JJVM). The basic architectural difference is that the JJVM allocates objects on the heap in fixed-size blocks. Objects larger than the block size must therefore be split across several connected blocks, while even a small object occupies a full block. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines. The experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
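The per-object overhead of fixed-size block allocation can be sketched as follows. This is a minimal illustration of the rounding effect the abstract describes, not the actual JamaicaVM accounting: the 32-byte block size is an assumption, and per-block link pointers are ignored.

```python
import math

def block_overhead(object_size: int, block_size: int = 32) -> int:
    """Bytes wasted when an object is stored in fixed-size blocks.

    Large objects are split across ceil(size / block_size) connected
    blocks; small objects still occupy one full block. The block size
    here is an illustrative assumption, not JamaicaVM's actual value.
    """
    blocks = max(1, math.ceil(object_size / block_size))
    return blocks * block_size - object_size

# A 40-byte object needs two 32-byte blocks: 24 bytes of overhead.
# A 5-byte object still occupies one full block: 27 bytes of overhead.
```

Under this model the relative overhead is worst for objects just above a block boundary, which is exactly the case a theoretical analysis based on the VM specification has to bound.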
In the realm of service robots, recovery from faults is indispensable to foster user acceptance. Here, fault is to be understood not in the sense of robot-internal errors, but rather as interaction faults that occur while the robot is situated in and interacting with an environment (a.k.a. external faults). We reason along the most frequent failures in typical scenarios, which we observed during real-world demonstrations and competitions using our Care-O-bot III robot. They take place in an apartment-like environment, which is known as a closed world. We suggest four different, for now ad hoc, fault categories caused by disturbances, imperfect perception, inadequate planning, or chaining of action sequences. The faults are categorized and then mapped to a handful of partly known, partly extended fault-handling techniques. Among them, we applied qualitative reasoning, use of simulation as an oracle, and learning for planning (a.k.a. enhancement of plan operators), or, in future work, case-based reasoning. Having laid out this frame, we mainly ask open questions related to the applicability of the presented approach, amongst them: how to find new categories, how to extend them, how to assure disjointness, and how to identify old and label new faults on the fly.
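The category-to-technique mapping described above can be sketched as a simple lookup. The names below are hypothetical labels for the four categories and techniques named in the abstract, not the authors' actual API; unmatched faults fall through as candidates for a new category, touching on the open questions raised.

```python
# Hypothetical mapping of external-fault categories to handling techniques,
# following the four ad hoc categories named in the abstract.
FAULT_HANDLERS = {
    "disturbance": "qualitative reasoning",
    "imperfect_perception": "simulation as oracle",
    "inadequate_planning": "learning for planning (plan-operator enhancement)",
    "action_chaining": "case-based reasoning (future work)",
}

def select_technique(fault_category: str) -> str:
    """Map an observed external fault to a handling technique.

    Faults outside the known categories are flagged rather than
    silently handled, since labeling new faults on the fly is an
    open question in the approach.
    """
    return FAULT_HANDLERS.get(
        fault_category, "unknown: candidate for a new fault category"
    )
```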
Traffic simulations are typically concerned with modeling human behavior as closely as possible to create realistic results. In conventional traffic simulations used for road planning or traffic jam prediction only the overall behavior of an entire system is of interest. In virtual environments, like digital games, simulated traffic participants are merely a backdrop to the player’s experience and only need to be “sufficiently realistic”. Additionally, restricted computational resources, typical for virtual environment applications, usually limit the complexity of simulated behavior in this field. More importantly, two integral aspects of real-world traffic are not considered in current traffic simulations from both fields: misbehavior and risk taking of traffic participants. However, for certain applications like the FIVIS bicycle simulator, these aspects are essential.
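Misbehavior and risk taking, the two aspects the abstract identifies as missing, can be modeled as a per-agent probability of violating a traffic rule. The sketch below is an assumed minimal model for illustration, not the FIVIS simulator's actual behavior model; the class and parameter names are hypothetical.

```python
import random

class TrafficAgent:
    """Minimal sketch of a simulated traffic participant that
    occasionally misbehaves, e.g. runs a red light, with a
    configurable probability (an illustrative assumption)."""

    def __init__(self, misbehavior_rate, rng=None):
        self.misbehavior_rate = misbehavior_rate  # probability in [0, 1]
        self.rng = rng or random.Random()

    def act(self, light_is_red):
        """Decide an action at a traffic light for one simulation step."""
        if light_is_red and self.rng.random() < self.misbehavior_rate:
            return "run_red_light"  # risk-taking behavior
        return "stop" if light_is_red else "go"
```

A rate of 0 reproduces the fully rule-abiding agents of conventional simulations; small positive rates yield the "sufficiently realistic" backdrop behavior a virtual environment needs without per-agent planning cost.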
Traffic simulations for virtual environments are concerned with the behavior of individual traffic participants. The behavior in these simulations is often kept rather simple to abide by the constraints of processing resources. In sophisticated traffic simulations, the behavior of individual traffic participants is also modeled, but the focus lies on the overall behavior of the entire system, e.g. to identify possible bottlenecks in traffic flow [8].
This article concerns the design and development of Information and Communication Technology, in particular computer systems, with regard to the demographic transition that will influence user capabilities. It is questionable whether currently deployed computer systems can meet the requirements of altered user groups with diversified capabilities. Such an enquiry is necessary given current forecasts suggesting that the average age of employees in enterprises will increase significantly within the next 50-60 years, while the percentage of computer-aided business tasks operated by human individuals rises from year to year. This development will have specific consequences for enterprises regarding the design and application of computer systems. If computer systems are not adapted to altered user requirements, efficient and productive utilisation could be negatively affected. These consequences motivate extending traditional design methodologies and thereby ensuring the application of computer systems that are usable independent of user capabilities.
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in the 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
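Clustering approximate clones can be illustrated with a greedy, threshold-based pass over the fragments. This is a generic sketch under assumed names (the similarity function and threshold are placeholders), not the tool's actual algorithm, which the paper does not detail here.

```python
def cluster_fragments(fragments, similarity, threshold=0.8):
    """Greedy single-pass clustering of process fragments.

    Each fragment joins the first cluster whose representative (its
    first member) is at least `threshold` similar; otherwise it starts
    a new cluster. A sketch only; the tool's algorithm may differ.
    """
    clusters = []  # each cluster is a list; element 0 is its representative
    for frag in fragments:
        for cluster in clusters:
            if similarity(frag, cluster[0]) >= threshold:
                cluster.append(frag)
                break
        else:
            clusters.append([frag])
    return clusters

def jaccard(a, b):
    """Toy similarity: Jaccard index over whitespace-separated tokens."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0
```

For example, with `threshold=0.5`, the fragments "check order" and "check order status" land in one cluster (Jaccard 2/3), while "ship item" starts its own.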