„from stable to table“
(2008)
The Global Compact for Safe, Orderly and Regular Migration defines Global Skill Partnerships (GSPs) as an innovative means of strengthening skills development in countries of origin and countries of destination in a mutually beneficial manner. However, GSPs are very limited in number and scope, and empirical analyses of them are, to date, relatively rare. This study helps fill this gap by presenting and examining existing GSPs and GSP-like approaches (e.g., transnational training partnerships). The aim of the study is to take stock of the various conceptual discourses on, and practical experience with, transnational skill partnerships. Using Kosovo as a case study, it details the structure of such partnerships and the processes they entail, documents the experience of those involved, and catalogues the factors contributing to success. On this basis, the authors propose a means of categorizing the various practices that helps structure the empirical diversity of such approaches and render them conceptually tractable: Transnational Skills and Mobility Partnerships (TSMPs).
The increasing ubiquity of Artificial Intelligence (AI) has significant political consequences. The rapid proliferation of AI over the past decade has prompted legislators and regulators to attempt to contain its technological consequences. For Germany, relevant design requirements have been formulated by the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at the national level, by the German government’s Data Ethics Commission (DEK) as well as the German Bundestag’s Commission of Inquiry on Artificial Intelligence (EKKI).
This report has been prepared by the SETAC Europe Scientific Task Group on Global And RegionaL Impact Categories (SETAC-Europe/STG-GARLIC), which was established by the 2nd SETAC Europe working group on life cycle impact assessment (WIA-2). It provides background to a chapter written by the same authors under the title “Climate change, stratospheric ozone depletion, photo-oxidant formation, acidification and eutrophication” in Udo de Haes et al. (2002). That chapter summarises the work of the STG-GARLIC and aims to give a state-of-the-art review of the best available practice(s) regarding category indicators and lists of concomitant characterisation factors for climate change, stratospheric ozone depletion, photo-oxidant formation, acidification, and aquatic and terrestrial eutrophication. Backgrounds on each of the specific impact categories are given in another background report by Klöpffer and Potting (2001).
This background report provides details on a selection of general issues relevant to LCA and to the characterisation of impact in LCA. The document starts with a short introduction to the LCA methodology and to impact assessment in LCA for non-experts. LCA experts, on the other hand, will usually not be familiar in depth with the scientific and political backgrounds of the specific impact categories; a review of these is given. The report also discusses the position of the category indicator in the causality chain and the related issue of spatial differentiation. These two issues proved to be core items for SETAC-Europe/STG-GARLIC.
In this paper, we provide a participatory design study of a mobile health platform for older adults that offers an integrative perspective on health data collected from different devices and apps. We illustrate the diversity and complexity of older adults’ perspectives in the context of health and technology use, the challenges this poses for the design of mobile health platforms that support active and healthy ageing (AHA), and our approach to addressing these challenges through a participatory design (PD) process. Interviews were conducted with older adults aged 65+ in a two-month study with the goal of understanding their perspectives on health and on technologies for AHA support. We identified challenges and derived design ideas for a mobile health platform called “My-AHA”. For researchers in this field, the structured documentation of our procedures and results, as well as the implications derived, provides valuable insights for the design of mobile health platforms for older adults.
Designing consumption feedback to support sustainable behavior is an active research topic. In recent years, relevant work has suggested a variety of possible design strategies. Addressing the more recent developments in this field, this paper presents a structured literature review, providing an overview of current information design approaches and highlighting open research questions. We suggest a literature-based taxonomy of the strategies, data sources, and output media in use, with a special focus on design. In particular, we analyze which visual forms are used in current research to reach the identified strategy goals. Our survey reveals that the trend is towards more complex and contextualized feedback, and that almost every design within sustainable HCI adopts common visualization forms. Adopting more advanced visual forms and techniques from information visualization research would help in dealing with the ever-increasing data sources at home; yet so far, this combination has often been neglected in feedback design.
Education is widely seen as an important means of addressing both national and international problems, such as political or religious extremism, poverty, and hunger. If open educational resources (OERs) are to help overcome the educational gap, localization is one of the major issues we need to deal with. Educators as well as learners need support in determining adaptation needs. This paper provides a list of possible influence factors on educational scenarios, defined as context metadata. In the given form, the list is to be understood as an addendum to the paper entitled ‘Open Educational Resources: Education for the World?’ by Thomas Richter and Maggie McPherson, published in volume 3, issue 2 of the journal Distance Education in 2012.
This report presents the findings of a quantitative study on the use of Open Educational Resources (OER) and Open Educational Practices (OEP) in Higher Education and Adult Learning Institutions. The study is based on the results of an online survey targeted at four educational roles: educational policy makers; institutional policy makers/managers; educational professionals; and learners. The report encompasses five chapters and four annexes. Chapter I presents the survey and Chapter II discloses the main research questions and models. Chapter III characterises the universe of respondents. Chapter IV advances a detailed survey analysis, including an overview of key statistical data. Finally, Chapter V provides an exploratory in-depth analysis of some key issues: representations, attitudes, and uses of OEP. The table of contents and the complete list of diagrams and tables can be found at the end of the report.
The roadmap for quality and innovation through open educational practices has been conceived as a series of steps, a conceptual document, which can be used by organisations, learners, or professionals to improve their open educational practices. After the development of the core concept of the OPAL project, the guidelines for OEP, it became clear that these guidelines would have to play an important part in the roadmap exercise, because they represent the very essence of how to foster and stimulate open educational practices. The roadmap is therefore meant to be an instrument, a tool which helps the different stakeholders apply the guidelines in their own context and for their own purpose.
Low Cost Displays
(2010)
Tracelets and Specifications
(2017)
In the accompanying paper [1] the authors study a model of concurrent programs in terms of events and a dependence relation, i.e., a set of arrows, between them. Two simplifying interface models are also presented there; they abstract in different ways from the intricate network of internal points and arrows of program components. This report supplements [1] by presenting full proofs of the properties of the interface models, in particular, that both models exhibit homomorphic behaviour w.r.t. sequential and concurrent composition. [1] B. Möller, C.A.R. Hoare, M.E. Müller, G. Struth: A discrete geometric model of concurrent program execution. In H. Zhu, J. Bowen: Proc. UTP 16. LNCS 10134. Springer 2017, 1-25
Formal concept analysis (FCA) as introduced in [4] deals with contexts and concepts. Roughly speaking, a context is an environment that is equipped with some kind of "knowledge". Such contexts are also known as information or knowledge representation systems, where the knowledge consists of (intensional) descriptions relating sets of objects to sets of properties. Given extensional and intensional descriptions (the latter in terms of binary attributes), they can be arranged in a taxonomy or concept lattice.
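The derivation operators at the heart of FCA can be sketched on a toy binary context. The objects, attributes, and function names below are illustrative assumptions for exposition, not taken from the report:

```python
# A toy formal context: objects related to binary attributes.
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = {"flies", "swims", "hunts"}

def intent(objects):
    """Derivation operator ': attributes shared by all given objects."""
    objs = list(objects)
    if not objs:
        return set(attributes)
    common = set(context[objs[0]])
    for o in objs[1:]:
        common &= context[o]
    return common

def extent(attrs):
    """Dual derivation: objects possessing all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B;
# all such pairs, ordered by inclusion of extents, form the concept lattice.
A = extent({"flies"})        # {"duck", "eagle"}
B = intent(A)                # {"flies"}
print(sorted(A), sorted(B))  # ['duck', 'eagle'] ['flies']
```

Applying the two operators in succession is a closure operation, which is what makes the set of concepts a complete lattice.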
This report summarises and integrates two different tracks of research for the purpose of envisioning and preparing a joint research project proposal. Soft- and hardware systems have become increasingly complex and act "concurrently", both with respect to memory access (i.e. information flow) and computational resources (i.e. "services"). The software development metaphor of cloud storage, cloud computing, and service-oriented design was anticipated by artificial intelligence (AI) research at least 30 years ago (parallel and distributed computation dates back to the 1950s and 1970s). What is known as a "service" today is what in AI is known as the capability of an agent; and the problem of information flow and consistency has been a cornerstone of information processing ever since. Based on a real-world robotics application, we demonstrate how an increasingly abstract description of collaborating or competing agents corresponds to a set of concurrent processes.
Machine Learning seems to offer the solution to the central problem in recommender systems: learning to recommend interesting items from observations. However, one tends to run into similar problems each time one tries to apply out-of-the-box solutions from Machine Learning. This article relates the problem of recommendation by user modeling closely to the machine learning problem and explicates some inherent dilemmas. A few examples illustrate specific approaches and discuss underlying assumptions about the domain and about how learned hypotheses relate to requirements on the user model. The article concludes with a tentative 'checklist' that one might consider when thinking about using Machine Learning in user-adaptive environments such as recommender systems.
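The framing of recommendation as learning from observations can be sketched with a minimal collaborative-filtering toy. The users, items, ratings, and helper names are invented for illustration and do not come from the article:

```python
# Toy observation log: user -> {item: rating}. All names are illustrative.
ratings = {
    "ann": {"a": 5, "b": 3, "c": 4},
    "bob": {"a": 4, "b": 3, "c": 5, "d": 4},
    "eve": {"a": 1, "b": 5, "d": 2},
}

def similarity(u, v):
    """Overlap-based similarity: inverse of mean absolute rating difference."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    diff = sum(abs(ratings[u][i] - ratings[v][i]) for i in common) / len(common)
    return 1.0 / (1.0 + diff)

def recommend(user):
    """Rank unseen items by similarity-weighted ratings of other users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # ['d'] — the one item ann has not yet rated
```

Even this toy exposes the dilemmas the article discusses: the learned "hypothesis" is only as good as the similarity assumption, and sparse feedback (users with no overlap) yields no signal at all.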
The problem of filtering relevant information from the huge amount of available data is tackled by using models of the user's interest in order to discriminate interesting information from uninteresting data. As a consequence, Machine Learning for User Modeling (ML4UM) has become a key technique in recent adaptive systems. This article presents the novel approach of conceptual user models, which are easy to understand and which allow the system to explain its actions to the user. We show that ILP can be applied to the task of inducing user models from even sparse feedback by mutual sample enlargement. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole concept presented is realized in a meta web search engine, OySTER.
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. It explored the usability of the Kinect in a variety of environments as well as its ability to detect different classes of materials and objects. This was facilitated through the implementation of Random Sample Consensus (RANSAC) based algorithms and highly parallelized workflows in order to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for the use of the Kinect as a scientific research tool.
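A RANSAC plane fit of the kind used to segment depth data can be sketched as follows. The synthetic points, tolerance, iteration count, and function names are illustrative assumptions, not the project's actual implementation:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three points: unit normal n = (p2-p1) x (p3-p1), offset d = -n.p1."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Repeatedly fit minimal samples; keep the plane with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# Synthetic "depth" data: a 10x10 floor patch at z = 0 plus three outliers.
pts = [(x / 10, y / 10, 0.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 0.7), (0.2, 0.8, -0.4), (0.9, 0.1, 1.1)]
plane, inliers = ransac_plane(pts)
print(len(inliers))  # all 100 floor points recovered, outliers rejected
```

The minimal-sample-and-vote structure is what makes RANSAC robust to outliers and, since iterations are independent, straightforward to parallelize in the way the project describes.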