H-BRS Bibliography
The Global Compact for Safe, Orderly and Regular Migration defines Global Skill Partnerships (GSP) as an innovative means of strengthening skills development in countries of origin and countries of destination in a mutually beneficial manner. However, GSPs are very limited in number and scope, and empirical analyses of them are, to date, relatively rare. This study helps fill this gap by presenting and examining existing GSPs or GSP-like approaches (e.g., transnational training partnerships). The aim of the study is to take stock of the various conceptual discourses on, and practical experience with, transnational skill partnerships. Using Kosovo as a case study, it details the structure of such partnerships and the processes they entail. It documents the experience of those involved and catalogues the factors contributing to success. On this basis, the authors propose a means of categorizing the various practices that will help structure the empirical diversity of such approaches and make them conceptually tractable: Transnational Skills and Mobility Partnerships (TSMP).
The increasing ubiquity of Artificial Intelligence (AI) has significant political consequences. The rapid proliferation of AI over the past decade has prompted legislators and regulators to attempt to contain its technological consequences. For Germany, relevant design requirements have been expressed by the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at the national level, by the German government’s Data Ethics Commission (DEK) as well as the German Bundestag’s Commission of Inquiry on Artificial Intelligence (EKKI).
In this paper, we provide a participatory design study of a mobile health platform for older adults that offers an integrative perspective on health data collected from different devices and apps. We illustrate the diversity and complexity of older adults’ perspectives in the context of health and technology use, the challenges this poses for the design of mobile health platforms that support active and healthy ageing (AHA), and our approach to addressing these challenges through a participatory design (PD) process. Interviews were conducted with older adults aged 65+ in a two-month study with the goal of understanding their perspectives on health and on technologies for AHA support. We identified challenges and derived design ideas for a mobile health platform called “My-AHA”. For researchers in this field, the structured documentation of our procedures and results, as well as the implications we derive, provides valuable insights for the design of mobile health platforms for older adults.
Designing consumption feedback to support sustainable behavior is an active research topic. In recent years, relevant work has suggested a variety of possible design strategies. Addressing the more recent developments in this field, this paper presents a structured literature review, providing an overview of current information design approaches and highlighting open research questions. We suggest a literature-based taxonomy of the strategies, data sources and output media used, with a special focus on design. In particular, we analyze which visual forms are used in current research to reach the identified strategy goals. Our survey reveals a trend towards more complex and contextualized feedback, and that almost every design within sustainable HCI adopts common visualization forms. Furthermore, adopting more advanced visual forms and techniques from information visualization research is helpful when dealing with the ever-increasing data sources at home. Yet so far, this combination has often been neglected in feedback design.
Low Cost Displays
(2010)
Tracelets and Specifications
(2017)
In the accompanying paper [1] the authors study a model of concurrent programs in terms of events and a dependence relation, i.e., a set of arrows between them. Two simplifying interface models are also presented there; they abstract in different ways from the intricate network of internal points and arrows of program components. This report supplements [1] by presenting full proofs of the properties of the interface models, in particular that both models exhibit homomorphic behaviour w.r.t. sequential and concurrent composition. [1] B. Möller, C.A.R. Hoare, M.E. Müller, G. Struth: A discrete geometric model of concurrent program execution. In H. Zhu, J. Bowen: Proc. UTP 16. LNCS 10134. Springer 2017, 1-25
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. It explored the usability of the Kinect in a variety of environments as well as its ability to detect different classes of materials and objects. This was facilitated through the implementation of Random Sample Consensus (RANSAC) based algorithms and highly parallelized workflows in order to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for the use of the Kinect as a scientific research tool.
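RANSAC-based plane extraction of the kind applied to RGBD data can be sketched as follows. This is an illustrative stand-in using synthetic depth points, not the project's actual implementation or parameters:

```python
# Minimal RANSAC plane-fitting sketch on synthetic 3-D points
# (hypothetical data and thresholds, for illustration only).
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, seed=None):
    """Fit a plane n.x = d to 3-D points; return (normal, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane through them.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ sample[0]
        # Consensus set: points within `threshold` of the plane.
        inliers = np.abs(points @ normal - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic depth data: 500 points on a floor plane z = 0 plus 50 outliers.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-1, 1, (500, 2)), np.zeros(500)])
outliers = rng.uniform(-1, 1, (50, 3))
pts = np.vstack([floor, outliers])

normal, d, inliers = ransac_plane(pts, seed=0)
```

On this synthetic data the consensus plane is the floor, recovered despite roughly ten percent outliers; the same principle lets a Kinect pipeline separate dominant surfaces from clutter.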
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. Boolean circuits, which can be implemented on FPGAs, are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and in FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing structure during the compilation and optimization processes, which can help in researching new FPGA architectures and improving the workflow.
Having multiple talkers on a bus system raises the load on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus using the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick, together with different partners, is currently developing a prosthetic limb device, the UNB hand. Communication in this device is done via two CAN buses, which operate at a bit rate of 1 Mbit/s. The monitoring system has been designed entirely in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of individual modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC. A sample receiver programmed in C is also provided. The system was evaluated using the Microchip CAN Bus Analyzer Tool connected to the GPIO port of the development board, simulating CAN communication.
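The overall-load figure such a monitor computes amounts to counting transmitted bits against the bit rate. A minimal back-of-the-envelope sketch, assuming standard 11-bit-identifier data frames and ignoring stuff bits (real frames add up to roughly 20% stuff bits; this is not the report's Verilog design):

```python
# Rough CAN bus-load estimate for standard (11-bit ID) data frames.
BITRATE = 1_000_000  # 1 Mbit/s, as on the UNB hand's buses

def frame_bits(dlc):
    # SOF(1) + ID(11) + RTR(1) + IDE(1) + r0(1) + DLC(4) + data(8*dlc)
    # + CRC(15) + CRC delim(1) + ACK(2) + EOF(7) + IFS(3) = 47 + 8*dlc
    return 47 + 8 * dlc

def bus_load(frames_per_second, dlc):
    """Fraction of the bus occupied by a stream of identical frames."""
    return frames_per_second * frame_bits(dlc) / BITRATE

# e.g. 1000 full 8-byte frames per second:
load = bus_load(1000, 8)   # 111 bits * 1000 / 1e6 = 0.111 (11.1 % load)
```

This shows why multiple talkers quickly eat into a 1 Mbit/s budget: nine such senders would already saturate the bus.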
Reversible logic synthesis is an emerging research topic with application areas such as low-power CMOS design and quantum and optical computing. The key motivation behind reversible logic synthesis is to mitigate the heat dissipation exhibited by current architectures, reducing it in theory to zero [2].
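The link between reversibility and heat dissipation rests on reversible gates being bijections: no input information is erased, so by Landauer's principle no energy need, in principle, be dissipated. A small illustrative check on the Toffoli (CCNOT) gate, a standard universal reversible gate (not code from the cited work):

```python
# A logic function is reversible iff it is a bijection on its input space.
from itertools import product

def toffoli(a, b, c):
    # Target bit c flips iff both control bits a and b are 1.
    return (a, b, c ^ (a & b))

inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

# Bijective: every output state occurs exactly once, so no
# information (and hence, ideally, no energy) is lost.
is_reversible = len(set(outputs)) == len(inputs)

# The Toffoli gate is also self-inverse: applying it twice
# restores the original input.
is_involution = all(toffoli(*toffoli(*bits)) == bits for bits in inputs)
```

By contrast, an irreversible gate such as AND maps four input states to two output states, erasing one bit and incurring the corresponding thermodynamic cost.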
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated through experimental approaches. The results and conclusions of that previous work form the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Several algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended by a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is the development of a passive tracking device for an immersive environment to improve user interaction and system usability. This requires detecting the user's position and orientation relative to the projection surface. For a reliable estimation, a robust and fast computation of the BLOBs' center points is necessary. This project covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been used to provide image material for processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for ground-truth validation and further processing. The evaluation compares precision and performance gain depending on the computation method applied and the input device providing the image material.
This report describes the design, implementation and usage of a system for managing different automated theorem provers and automatically generated proofs. In particular, we focus on a user-friendly web-based interface and a structure for collecting and cataloguing proofs in a uniform way. The latter is intended to aid understanding of the structure of automatically generated proofs and to form a starting point for new insights into proof-planning strategies.
For many practical problems an efficient solution of the one-dimensional shallow water equations (Saint-Venant equations) is important, especially when large networks of rivers, channels or pipes are considered. In order to test and develop numerical methods, four test problems are formulated. These include the well-known dam break and hydraulic jump problems and two steady-state problems with varying channel bottom, channel width and friction.
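For reference, a common conservative form of the one-dimensional Saint-Venant equations, with water depth h, velocity u, gravitational acceleration g, bottom slope S_0 and friction slope S_f (the report's test problems with varying channel width use a cross-section-averaged variant of this form):

```latex
\partial_t h + \partial_x (h u) = 0, \qquad
\partial_t (h u) + \partial_x \!\left( h u^2 + \tfrac{1}{2} g h^2 \right) = g h \,(S_0 - S_f)
```

The dam break and hydraulic jump tests exercise the hyperbolic (shock-capturing) behaviour of a scheme, while the two steady-state tests probe the balance between the flux term and the source term g h (S_0 - S_f).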
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image-processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions to this problem exist, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With the opportunity for parallelization and hardware-level performance, the application of FPGAs becomes interesting. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation relative to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the user's relative position and orientation. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.
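Conceptually, the center-point computation is the centroid of each connected foreground region in a binary frame. A plain-Python sketch of that step, as a software stand-in for the FPGA pipeline (not the report's design):

```python
# Label 4-connected foreground regions in a binary image and
# return the centroid (center point) of each BLOB.
from collections import deque

def blob_centers(image):
    """image: list of rows of 0/1 pixels. Returns [(row, col), ...]."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    centers = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one connected component (BLOB).
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centroid = mean pixel coordinate of the component.
                centers.append((sum(p[0] for p in pixels) / len(pixels),
                                sum(p[1] for p in pixels) / len(pixels)))
    return centers

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 0, 1]]
cs = blob_centers(img)   # two BLOBs: (1.5, 1.5) and (2.5, 4.0)
```

An FPGA implementation typically computes the same sums (pixel count, row sum, column sum per label) in a single streaming pass over the frame, which is what makes the hardware version fast enough for continuous video.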
Data management is a challenge in both scientific and technical environments, and researchers have therefore developed a special interest in this field. Modern approaches (e.g., Subversion, CVS) already offer authoring and versioning in distributed systems. However, this may be insufficient in many scenarios where not only the data resulting from a process but also data describing the process that generated those results is crucial.