H-BRS Bibliography
Chloride in Mosel und Saar
(1992)
This thesis deals with the numerical treatment of differential-algebraic equations (DAEs). DAEs arise, for example, in modelling the dynamics of mechanical systems, in circuit simulation, and in chemical reaction kinetics. Rosenbrock-Wanner-type methods for their solution are derived and tested on technical models (a vehicle axle and an amplifier).
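To illustrate what "numerical treatment of a DAE" means in practice, the following is a minimal sketch, not the thesis' Rosenbrock-Wanner scheme: a backward-Euler step with a Newton iteration for a semi-explicit index-1 DAE of the form y' = f(y, z), 0 = g(y, z). The example system (f(y, z) = -y + z, g(y, z) = y + z - 2) is made up purely for illustration.

```python
def solve_dae(f, g, y0, z0, h, steps):
    """Backward Euler for y' = f(y, z), 0 = g(y, z), via 2x2 Newton."""
    y, z = y0, z0
    for _ in range(steps):
        # Residuals of the implicit step:
        #   F1 = y_new - y - h*f(y_new, z_new) = 0
        #   F2 = g(y_new, z_new)               = 0
        yn, zn = y, z                  # Newton start values
        for _ in range(50):
            F1 = yn - y - h * f(yn, zn)
            F2 = g(yn, zn)
            # Jacobian by finite differences (2x2)
            eps = 1e-8
            a = (yn + eps - y - h * f(yn + eps, zn) - F1) / eps
            b = (yn - y - h * f(yn, zn + eps) - F1) / eps
            c = (g(yn + eps, zn) - F2) / eps
            d = (g(yn, zn + eps) - F2) / eps
            det = a * d - b * c
            dy = (d * F1 - b * F2) / det
            dz = (-c * F1 + a * F2) / det
            yn, zn = yn - dy, zn - dz
            if abs(F1) + abs(F2) < 1e-12:
                break
        y, z = yn, zn
    return y, z

y, z = solve_dae(lambda y, z: -y + z, lambda y, z: y + z - 2.0,
                 y0=0.0, z0=2.0, h=0.01, steps=1000)
# Both y and z converge towards the steady state (1, 1).
```

The key point the thesis addresses is that the algebraic constraint g must be enforced inside every implicit step; explicit methods cannot do this, which is why implicit schemes such as Rosenbrock-Wanner methods are used.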
Zur Perzentilberechnung
(1990)
This report has been prepared by the SETAC Europe Scientific Task Group on Global And RegionaL Impact Categories (SETAC-Europe/STG-GARLIC), which was installed by the 2nd SETAC Europe working group on life cycle impact assessment (WIA-2). This document provides the background for a chapter written by the same authors under the title “Climate change, stratospheric ozone depletion, photo-oxidant formation, acidification and eutrophication” in Udo de Haes et al. (2002). The chapter summarises the work of the STG-GARLIC and aims to give a state-of-the-art review of best available practice regarding category indicators and lists of the accompanying characterisation factors for climate change, stratospheric ozone depletion, photo-oxidant formation, acidification, and aquatic and terrestrial eutrophication. Background on each of the specific impact categories is given in another background report by Klöpffer and Potting (2001).
This background report provides details on a selection of general issues relevant to LCA and to the characterisation of impacts in LCA. The document starts with a short introduction to the LCA methodology and to impact assessment in LCA for non-experts. LCA experts, on the other hand, will usually not be familiar in depth with the scientific and political background of the specific impact categories; a review of this is given. The report also discusses the position of the category indicator in the causality chain and the related issue of spatial differentiation. These two issues turned out to be core items for SETAC-Europe/STG-GARLIC.
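The characterisation step that category indicators and characterisation factors feed into can be sketched in a few lines: an inventory of emissions m_i is aggregated into one indicator as the sum of m_i times the characterisation factor CF_i. The factors below are illustrative placeholders, not the report's recommended values.

```python
# Illustrative GWP-style characterisation factors (kg CO2-eq per kg emitted).
# These numbers are placeholders, not values recommended by the report.
gwp_factors = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def category_indicator(inventory, factors):
    """Sum characterised emissions; substances without a factor are skipped."""
    return sum(m * factors[s] for s, m in inventory.items() if s in factors)

inventory = {"CO2": 120.0, "CH4": 0.5, "SO2": 0.3}   # kg per functional unit
print(category_indicator(inventory, gwp_factors))    # 120 + 0.5*28 = 134.0
```

The position of the indicator in the causality chain, discussed above, determines what the factors CF_i actually express (e.g. radiative forcing at midpoint versus damage at endpoint); the arithmetic of the characterisation step itself stays the same.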
Following the study by Leidner and Kayworth (2006) on the treatment of culture in the Anglo-Saxon discipline of Information Systems, a corresponding literature study was carried out for the design-oriented business informatics (Wirtschaftsinformatik) of the German-speaking area. The study examined, in the discipline's main publication outlets, how frequently cultural influences on information technology were addressed, how these influences were treated, and which reference models and reference literature were used. After a short description of the chosen approach, the results and limitations of the study are presented.
Education is widely seen as an important means of addressing both national and international problems, such as political or religious extremism, poverty, and hunger. If publicly available open educational resources (OERs) are to help overcome the educational gap, localization is one of the major issues we need to deal with. Educators as well as learners need support in determining adaptation needs. This paper provides a list of possible influence factors on educational scenarios, which are defined as context metadata. In the given form, the list should be understood as an addendum to the paper entitled ‘Open Educational Resources: Education for the World?’ by Thomas Richter and Maggie McPherson, published in volume 3, issue 2 of the journal Distance Education in 2012.
SISAL: User manual
(1990)
CASTLE is a co-design platform developed at the GMD SET institute. It provides a number of design tools for configuring application-specific design flows. This paper presents a walk through the CASTLE co-design environment, following the design flow of a video processing system. The design methodology and the tool usage for this real-life example are described from a designer's point of view. The design flow starts with a C/C++ program and gradually derives a register-transfer-level description of the processor hardware, as well as the corresponding compiler for generating the processor opcode. The main results of each design step are presented, and the usage of the CASTLE tools at each step is explained.
This report summarises and integrates two different tracks of research for the purpose of envisioning and preparing a joint research project proposal. Soft- and hardware systems have become increasingly complex and act "concurrently", both with respect to memory access (i.e. information flow) and computational resources (i.e. "services"). The software-development metaphor of cloud storage, cloud computing and service-oriented design was anticipated by artificial intelligence (AI) research at least 30 years ago (parallel and distributed computation already dates back to the 1950s and 1970s). What is known as a "service" today is what AI knows as the capability of an agent, and the problem of information flow and consistency has been a cornerstone of information processing ever since. Based on a real-world robotics application, we demonstrate how an increasingly abstract description of collaborating or competing agents corresponds to a set of concurrent processes.
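The agents-as-concurrent-processes view can be sketched in a few lines; this is a generic illustration, not the report's architecture. Each "agent" runs as its own thread and offers one capability, and shared queues play the role of the information flow between them.

```python
import queue
import threading

tasks, results = queue.Queue(), queue.Queue()

def agent(name, capability):
    # Each agent is a concurrent process offering one capability ("service").
    while True:
        item = tasks.get()
        if item is None:          # poison pill: shut the agent down
            break
        results.put((name, capability(item)))

workers = [threading.Thread(target=agent, args=(f"agent-{i}", lambda x: x * x))
           for i in range(2)]
for w in workers:
    w.start()
for n in range(4):                # the shared "information flow"
    tasks.put(n)
for _ in workers:                 # one poison pill per agent
    tasks.put(None)
for w in workers:
    w.join()

out = sorted(v for _, v in (results.get() for _ in range(4)))
print(out)  # [0, 1, 4, 9]
```

The consistency problem mentioned above shows up here in miniature: the queues serialise access to shared state, which is exactly the kind of coordination that both service-oriented systems and multi-agent systems must provide.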
Formal concept analysis (FCA), as introduced in [4], deals with contexts and concepts. Roughly speaking, a context is an environment equipped with some kind of "knowledge". Such contexts are also known as information or knowledge representation systems, where the knowledge consists of (intensional) descriptions relating sets of objects to sets of properties. Given extensional and intensional descriptions (the latter in terms of binary attributes), they can be arranged in a taxonomy or concept lattice.
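The object/attribute machinery of FCA can be made concrete with a toy context (invented for illustration, not taken from [4]). A formal concept is a pair (A, B) of an object set A and an attribute set B with A' = B and B' = A, where ' denotes the derivation operators below; enumerating all such closed pairs yields the concept lattice.

```python
from itertools import combinations

# A toy formal context: objects mapped to their binary attributes.
context = {
    "duck":  {"swims", "flies"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}

def intent(objects):
    """Attributes shared by all given objects (A' in FCA notation)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set.union(*context.values())

def extent(attributes):
    """Objects possessing all given attributes (B')."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Brute-force enumeration: close every object subset to a concept (A, B).
concepts = set()
objs = sorted(context)
for r in range(len(objs) + 1):
    for combo in combinations(objs, r):
        B = intent(set(combo))     # derive the shared attributes
        A = extent(B)              # close back to the full object set
        concepts.add((frozenset(A), frozenset(B)))

print(len(concepts))  # 8 concepts, from (all objects, {}) to ({}, all attributes)
```

Brute force over all object subsets is exponential and only suitable for tiny contexts; dedicated algorithms such as NextClosure enumerate the concepts far more efficiently.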
The problem of filtering relevant information from the huge amount of available data is tackled by using models of the user's interest in order to discriminate interesting information from uninteresting data. As a consequence, Machine Learning for User Modeling (ML4UM) has become a key technique in recent adaptive systems. This article presents the novel approach of conceptual user models, which are easy to understand and which allow the system to explain its actions to the user. We show that ILP can be applied to the task of inducing user models even from sparse feedback by mutual sample enlargement. Results are evaluated independently of domain knowledge within a clear machine-learning problem definition. The whole concept presented is realized in a meta web search engine, OySTER.
Machine Learning seems to offer the solution to the central problem in recommender systems: learning to recommend interesting items from observations. However, one tends to run into similar problems each time one tries to apply out-of-the-box solutions from Machine Learning. This article relates the problem of recommendation by user modeling closely to the machine-learning problem and explicates some inherent dilemmas. A few examples illustrate specific approaches and discuss underlying assumptions about the domain and about how learned hypotheses relate to requirements on the user model. The article concludes with a tentative 'checklist' that one might consider when thinking about using Machine Learning in user-adaptive environments such as recommender systems.