H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (58)
Document Type
- Conference Object (30)
- Article (14)
- Report (6)
- Part of a Book (3)
- Book (monograph, edited volume) (2)
- Doctoral Thesis (1)
- Master's Thesis (1)
- Other (1)
Year of publication
- 2012 (58)
Keywords
- 3D-Scanner (2)
- ARRs (2)
- Bag of Features (2)
- FDI (2)
- Hybrid systems (2)
- classifier combination (2)
- clustering (2)
- feature extraction (2)
- machine learning (2)
- object categorization (2)
- virtual reality (2)
- All-Swap Algorithm (1)
- Artificial Intelligence and Natural Language Processing (1)
- Assistenzsystem (1)
- Augmented Reality (1)
- Automation (1)
- Background music (1)
- CUDA (1)
- Cognition (1)
- DHF-Reduktase (1)
- DOI (1)
- Data Publication (1)
- DataCite (1)
- Databases and Data Mining (1)
- Design automation (1)
- Digital Object Identifier (1)
- Domestic service robots (1)
- EEG (1)
- ERP system (1)
- Electromagnetic Fields (1)
- Environmental Data (1)
- Field programmable gate arrays (1)
- Fuzzy-Logik (1)
- Givens Rotations (1)
- Graphical user interfaces (1)
- Graphics Cards (1)
- Hardware (1)
- Human robot interaction (1)
- Hybrid models of engineering systems (1)
- Integrated circuit interconnections (1)
- Knowledge Management (1)
- Lattice Basis Reduction (1)
- Media in education (1)
- Memory (1)
- Meteorological Data (1)
- Mobiler Roboter (1)
- Molekulare Deskriptoren (1)
- Natural scene text (1)
- PDE-5A (1)
- Parallelization (1)
- RE (1)
- Raman spectroscopy (1)
- Robotik (1)
- SOA (1)
- Semantic scene understanding (1)
- Software Architecture (1)
- Software Framework (1)
- Virtual Reality (1)
- Virtual reality (1)
- Visualization (1)
- Workflow Management (1)
- adaptive binarization (1)
- adaptive fault thresholds (1)
- adaptive filters (1)
- affective computing (1)
- analysis (1)
- analytical redundancy relation residuals (1)
- averaged bond graph models (1)
- binary classification (1)
- bond graphs (1)
- brain computer interfaces (1)
- bus load (1)
- can bus (1)
- computer games (1)
- data logging (1)
- data visualisation (1)
- direct feedback (1)
- distance perception (1)
- distributed processing (1)
- e-Research (1)
- emotion computing (1)
- enterprise software (1)
- external faults (1)
- fault scenarios (1)
- fault detection (1)
- fpga (1)
- free and open source software (1)
- incremental bond graphs (1)
- isolation (1)
- knowledge engineering (1)
- light curtains (1)
- microcomputers (1)
- microcontroller (1)
- mobile manipulators (1)
- momentary frequency (1)
- monitoring (1)
- object identification (1)
- operation mode independent causalities (1)
- optic flow (1)
- optical safeguard sensor (1)
- optoelectronic (1)
- plasma-enhanced CVD (PECVD) (deposition) (1)
- power electronic systems (1)
- regression testing (1)
- rendering (computer graphics) (1)
- residual sinks (1)
- screens (display) (1)
- self-motion perception (1)
- skin detection (1)
- software testing (1)
- software-based feedback agents (1)
- support vector machine (1)
- switched three-phase power inverter (1)
- synthetic dataset (1)
- system mode independent bond graph representation (1)
- teaching (1)
- test case reduction (1)
- time series processing (1)
- vection (1)
- web services (1)
The documentation requirements for data published in long-term archives have grown significantly over the last decade. At WDCC, the data publishing process is assisted by “Atarrabi”, a web-based workflow system with which data authors and the publication agent review and edit metadata. The system ensures high metadata quality for long-term use of the data with persistent identifiers (DOI/URN). These well-defined references (DOIs) allow credit to be properly given to the data producers in any publication.
Software testing in a web services environment faces different challenges than testing in traditional software environments. Regression testing activities are triggered by software changes or evolutions. In web services, evolution is not a choice for service clients: they always have to use the current, updated version of the software. In addition, test execution or invocation is expensive in web services, so algorithms that optimize test case generation and execution are vital. In this environment, we proposed several approaches for test case selection in web services regression testing. Testing in this new environment should evolve to become part of the service contract. Service providers should provide data or usage sessions that help service clients reduce testing expenses by optimizing the selected and executed test cases.
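The abstract does not spell out its selection algorithms, but the core idea of test case selection — running only the tests that exercise changed service operations — can be sketched as a simple greedy set-cover heuristic. The function name, the coverage mapping, and the operation names below are all illustrative assumptions, not the paper's actual method:

```python
def select_tests(test_coverage, changed_ops):
    """Greedily pick a small set of tests covering all changed operations.

    test_coverage: dict mapping test name -> set of service operations it invokes
    changed_ops:   set of operations modified in the new service version
    (Illustrative greedy set-cover heuristic; hypothetical names throughout.)
    """
    remaining = set(changed_ops)
    selected = []
    while remaining:
        # Pick the test that covers the most still-uncovered changed operations.
        best = max(test_coverage, key=lambda t: len(test_coverage[t] & remaining))
        gain = test_coverage[best] & remaining
        if not gain:
            break  # no test exercises the remaining operations
        selected.append(best)
        remaining -= gain
    return selected
```

A client holding such a coverage map could thereby invoke two tests instead of the whole suite when only `search` and `pay` changed, which matters when each invocation incurs real service-call cost.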
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working with large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
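The paper's clustering algorithm is not described in the abstract, but the general shape of approximate clone clustering — grouping fragments whose similarity exceeds a threshold — can be sketched with Jaccard similarity over fragment elements and single-link merging via union-find. Everything here (fragment representation, threshold, function name) is an assumption for illustration, not the tool's actual technique:

```python
def cluster_fragments(fragments, threshold=0.5):
    """Group process fragments whose Jaccard similarity meets a threshold.

    fragments: dict mapping fragment name -> set of elements (e.g. task/edge ids)
    Returns a list of clusters (lists of fragment names).
    (Single-link clustering via union-find; a simplification for illustration.)
    """
    names = list(fragments)
    parent = {n: n for n in names}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Merge every pair whose Jaccard similarity reaches the threshold.
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            inter = len(fragments[a] & fragments[b])
            union = len(fragments[a] | fragments[b])
            if union and inter / union >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for n in names:
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())
```

The pairwise loop is quadratic in the number of fragments; a tool aimed at large repositories would need an index or blocking scheme to avoid comparing every pair.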
In a research project funded by the German Research Foundation, meteorologists, data publication experts, and computer scientists optimised the publication process of meteorological data and developed software that supports metadata review. The project group placed particular emphasis on scientific and technical quality assurance of primary data and metadata. At the end of the process, the software automatically registers a Digital Object Identifier at DataCite. The software has been successfully integrated into the infrastructure of the World Data Center for Climate, but a key goal was to make the results applicable to data publication processes in other sciences as well.
YAWL User Group
(2012)
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. It explored the usability of the Kinect in a variety of environments, as well as its ability to detect different classes of materials and objects. This was facilitated by implementing Random Sample Consensus (RANSAC)-based algorithms and highly parallelized workflows to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for using the Kinect as a scientific research tool.
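A typical RANSAC-based step for RGBD data is fitting the dominant plane (e.g. a floor or tabletop) in a point cloud: repeatedly sample three points, hypothesise a plane, and keep the hypothesis with the most inliers. The following is a minimal pure-Python sketch of that generic technique, not the project's implementation; the function name, threshold, and iteration count are assumptions:

```python
import random

def ransac_plane(points, n_iters=300, threshold=0.01, seed=0):
    """Find the dominant plane in a 3-D point cloud via RANSAC.

    points: list of (x, y, z) tuples. Returns the indices of the inliers
    of the best plane found. (Illustrative sketch, not the project's code.)
    """
    rng = random.Random(seed)
    best = []
    for _ in range(n_iters):
        p, q, r = rng.sample(points, 3)
        # Plane normal = cross product of two edge vectors of the sample.
        u = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
        v = (r[0] - p[0], r[1] - p[1], r[2] - p[2])
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = (n[0] / norm, n[1] / norm, n[2] / norm)
        # Inliers: points whose distance to the plane is below the threshold.
        inliers = [i for i, s in enumerate(points)
                   if abs((s[0] - p[0]) * n[0]
                          + (s[1] - p[1]) * n[1]
                          + (s[2] - p[2]) * n[2]) < threshold]
        if len(inliers) > len(best):
            best = inliers
    return best
```

In practice the per-hypothesis inlier test is what gets parallelized for time-sensitive use, since each point's plane distance is independent of the others.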
Traffic simulations for virtual environments are concerned with the behavior of individual traffic participants. The behavior in these simulations is often kept rather simple to stay within the constraints of available processing resources. Sophisticated traffic simulations also model the behavior of individual traffic participants, but their focus lies on the overall behavior of the entire system, e.g. to identify possible bottlenecks in traffic flow [8].