Refine
Departments, institutes and facilities
- Fachbereich Informatik (46)
- Präsidium (35)
- Institut für funktionale Gen-Analytik (IFGA) (26)
- Fachbereich Wirtschaftswissenschaften (25)
- Institute of Visual Computing (IVC) (18)
- Fachbereich Angewandte Naturwissenschaften (17)
- Fachbereich Ingenieurwissenschaften und Kommunikation (17)
- Fachbereich Sozialpolitik und Soziale Sicherung (10)
- Institut für Cyber Security & Privacy (ICSP) (10)
- Institut für Verbraucherinformatik (IVI) (5)
Document Type
- Article (65)
- Conference Object (60)
- Part of Periodical (33)
- Part of a Book (23)
- Book (monograph, edited volume) (12)
- Report (12)
- Master's Thesis (7)
- Conference Proceedings (4)
- Bachelor Thesis (3)
- Contribution to a Periodical (2)
Year of publication
- 2011 (225)
Keywords
- Unternehmen (3)
- Business Ethnography (2)
- CUDA (2)
- Emergency support system (2)
- Finite-Elemente-Methode (2)
- Global Software Engineering (2)
- Mobile sensors (2)
- Robotik (2)
- 3D Crosstalk (1)
- 3D Display (1)
"Innovation Journalism ist die Politikberichterstattung der Zukunft" Interview mit David Nordfors
(2011)
A method for minimum range extension with improved accuracy in triangulation laser range finder
(2011)
In the field of accessing and visualizing mobile sensors and their recorded data, different approaches have been realized. The OGC Sensor Observation Service (SOS) supplies a standard for accessing this information, stored on servers. To be able to access these servers, an interface must be developed and implemented. The result should be a configurable development framework for web-based GIS clients supporting the OGC Sensor Observation Service. In particular, the framework should allow continuous position updates of mobile sensors. Visualization features such as charts, bounding boxes of sensors, and data series should be included.
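As a minimal sketch of what such a client interface must produce, the following builds a KVP GetObservation request for an SOS server. The endpoint URL, offering, and observed-property identifiers are illustrative placeholders, not taken from the thesis:

```python
from urllib.parse import urlencode

def build_get_observation_url(endpoint, offering, observed_property, procedure=None):
    """Build an OGC SOS 1.0 GetObservation request URL (KVP binding)."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    if procedure:
        # Restrict the response to a single sensor (procedure).
        params["procedure"] = procedure
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and identifiers, for illustration only:
url = build_get_observation_url(
    "http://example.org/sos",
    "GPS_OFFERING",
    "urn:ogc:def:property:position",
)
print(url)
```

A web-based GIS client would issue such a request periodically to obtain the updated positions of mobile sensors.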
At present, data publication is one of the most dynamic topics in e-Research. While the fundamental problems of electronic text publication have been solved in the past decade, standards for the external and internal organisation of data repositories are advanced in some research disciplines but underdeveloped in others. We discuss the differences between an electronic text publication and a data publication and the challenges that result from these differences for the data publication process. We place the data publication process in the context of the human knowledge spiral and discuss key factors for the successful acquisition of research data from the point of view of a data repository. For the relevant activities of the publication process, we list some of the measures and best practices of successful data repositories.
This thesis work presents the implementation and validation of image processing problems in hardware to estimate the performance and precision gain. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation costs for their development are an important aspect in the validation. The analysis of the flexibility and extendability that can be achieved by a modular implementation of the FPGA design was another major aspect. This work is based upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light emitting device to the user and track the created light dots on the projection surface of the immersive environment. Having the center points of those light dots allows the estimation of the user's position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real time. Therefore, one major target for the implementation was to reach a processing speed of more than 30 frames per second. This allows the system to give feedback to the user with a response time faster than human visual perception.
One problem that comes with the idea of using a light emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a single pixel may cover an area on the order of square centimeters; a precision error of only a few pixels may therefore lead to an offset of several centimeters in the estimated user position. In this research work, a detection and tracking system for BLOBs on a Cyclone II FPGA from Altera has been developed and validated. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding and for sub-pixel precision of the computed BLOB center points have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation has been realized. The system can transmit the detection or tracking results on two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution: designing such high-level functionality in hardware is, in this case, more expensive than implementing it in software, and the search and match steps of the tracking approach could be realized more efficiently and reliably in software.
The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.
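The run-length encoding used for compressing the image data can be illustrated with a minimal software sketch; a hardware module would stream pixels rather than hold a whole row, and the row data here is made up:

```python
def run_length_encode(row):
    """Encode a binary pixel row as (start, length) runs of foreground pixels."""
    runs, start = [], None
    for i, px in enumerate(row):
        if px and start is None:
            start = i                      # a foreground run begins
        elif not px and start is not None:
            runs.append((start, i - start))  # the run just ended
            start = None
    if start is not None:
        runs.append((start, len(row) - start))  # run reaches the row's end
    return runs

row = [0, 1, 1, 1, 0, 0, 1, 1, 0]
print(run_length_encode(row))  # → [(1, 3), (6, 2)]
```

Since BLOBs are contiguous foreground regions, such runs are a far more compact representation than raw pixels, which is what makes the encoding attractive as a pre-processing step.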
Adaptability as a Special Demand on Open Educational Resources: The Cultural Context of e-Learning
(2011)
Producing and providing Open Educational Resources (OERs) is driven by the concepts of openness and sharing. Although a lot of free high-quality resources are already available, practitioners often prefer to rewrite learning resources rather than creatively embed (and thus reuse) existing OERs. In this paper, we analyse the reasons for this in two different educational contexts. As a result of this analysis, we found that uncertainty about possible adaptation needs is one of the major barriers. In order to overcome this barrier and make different learning contexts comparable, we analysed the context of learners; in particular, in the research project ‘Learning Culture’, we investigated the field of culturally motivated expectations and attitudes of learners. This paper shows the results of this research project and discusses which cultural issues should be taken into consideration when OERs are adapted from one cultural context to another.
The development of pulmonary edema can be considered as a combination of alveolar flooding via increased fluid filtration, impaired alveolar-capillary barrier integrity, and disturbed resolution due to decreased alveolar fluid clearance. An important mechanism regulating alveolar fluid clearance is sodium transport across the alveolar epithelium. Transepithelial sodium transport is largely dependent on the activity of sodium channels in alveolar epithelial cells. This paper describes how sodium channels contribute to alveolar fluid clearance under physiological conditions and how deregulation of sodium channel activity might contribute to the pathogenesis of lung diseases associated with pulmonary edema. Furthermore, sodium channels as putative molecular targets for the treatment of pulmonary edema are discussed.
The task of this thesis is to develop an OGC-compliant Sensor Observation Service (SOS), a component of the SWE, for GPS-related sensor data in this context. In contrast to existing implementations, it should support full mobility of the sensors and be configurable with respect to adding different kinds of sensors. In particular, mobile phones should be considered as sensors, which transmit their data to the SOS server through the transactional SOS interface.
This contribution describes an optical laser-based user interaction system designed for virtual reality (VR) environments. The project's objective is to realize a 6-DoF user input device for interaction with VR applications running in CAVE-type visualization environments with flat projection walls. In the case of a back-projection VR system, in contrast to optical tracking systems, no camera has to be placed within the visualization environment. Instead, cameras observe patterns of laser beam projections from behind the screens. These patterns are emitted by a hand-held input device. The system is robust with respect to partial occlusion of the laser pattern. An inertial measurement unit is integrated into the device in order to improve robustness and precision.
This paper picks up on one of the ways reported in the literature to represent hybrid models of engineering systems by bond graphs with static causalities. The representation of a switching device by means of a modulated transformer (MTF) controlled by a Boolean variable in conjunction with a resistor has been used so far to build a model for simulation. In this paper, it is shown that it can also constitute an approach to bond graph based quantitative fault detection and isolation in hybrid system models. Advantages are that Analytical Redundancy Relations (ARRs) do not need to be derived again after a switch state has changed. ARRs obtained from the bond graph are valid for all system modes. Furthermore, no adaption of the standard sequential causality assignment procedure (SCAP) with respect to fault detection and isolation (FDI) is needed.
Arbeitsmarktintegration - eine Aufgabe der medizinischen Rehabilitation Abhängigkeitskranker?
(2011)
This report presents an approach to quadrotor dynamics stabilization based on ICP SLAM. Because the quadrotor lacks sensory information to detect its horizontal drift, an additional sensor, a Hokuyo UTM laser scanner, has been used to perform on-line ICP-based SLAM. The obtained position estimates were used in control loops to maintain the desired position and orientation of the vehicle. Attitude parameters such as height, yaw, and position in space were controlled based on the laser data. As a result, the quadrotor demonstrated two capabilities significant for autonomous navigation: performing on-line SLAM on a flying vehicle and maintaining a desired position in 3D space. A visual approach to optical flow based on the pyramidal Lucas-Kanade algorithm has been explored and tested under different environmental conditions, though it has not been integrated into the control loop. The performance of the Hokuyo laser scanner and the associated ICP SLAM algorithm has also been tested under different environmental conditions: indoors, outdoors, and in the presence of smoke. Results are presented and discussed. The requirement to perform the SLAM algorithm on-line and to carry the rather heavy equipment it needs made it necessary to increase the payload and computational power of the quadrotor. New hardware and distributed software architectures are therefore presented in the report.
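The report's control loops are not specified in this abstract; a generic PID loop of the kind commonly used to hold a height setpoint from laser-derived estimates might look like the following sketch. The gains and the toy first-order plant model are illustrative assumptions, not taken from the report:

```python
class PID:
    """Minimal discrete PID controller; gains are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold a 1.0 m altitude with a crude toy model of the vehicle dynamics.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
height = 0.0
for _ in range(200):
    thrust = pid.update(1.0, height, dt=0.05)
    height += thrust * 0.05  # toy dynamics: climb rate proportional to command
print(round(height, 2))
```

In the real system the `measurement` would come from the ICP-based pose estimate rather than a perfectly known state, and similar loops would run for yaw and horizontal position.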
In an explorative study, we investigated how German schoolteachers use, reuse, produce and manage Open Educational Resources. The main questions in this research have been what their motivators and barriers are in their use of Open Educational Resources, what others can learn from their Open Educational Practices, and what we can do to raise the dissemination level of OER in schools.
This study presents the findings of a quantitative study on the use of Open Educational Resources (OER) and Open Educational Practices (OEP) in Higher Education and Adult Learning Institutions. The study is based on the results of an online survey targeted at four educational roles: educational policy makers; institutional policy makers/managers; educational professionals; and learners. The report encompasses five chapters and four annexes. Chapter I presents the survey and Chapter II discloses the main research questions and models. Chapter III characterises the universe of respondents. Chapter IV advances with a detailed survey analysis including an overview of key statistical data. Finally, Chapter V provides an exploratory in-depth analysis of some key issues: representations, attitudes and uses of OEP. The table of contents and the complete list of diagrams and tables can be found at the end of the report.
Negative headlines about climate change, the waste of non-renewable resources, and environmental pollution are spread daily by the media. Ever new food scandals have unsettled consumers; their trust in producers and retailers has been shaken. A growing number of consumers try to integrate environmentally conscious and sustainable behaviour into their everyday lives. The social responsibility of companies, too, has never been highlighted and discussed in public as actively as today. Both developments contribute to the fact that the term "Bio" (organic) is currently on everyone's lips. Beyond food, it is also applied in sectors such as cosmetics and fashion. The aim of this thesis is to derive recommendations for action for retailers to exploit their existing market potential, in particular to retain existing customers and win new ones. In addition, possible fields of cooperation between retail and industry are systematised, and recommendations are developed that contribute to improving cooperation in category management for organic products.
In the past decade, computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies will be introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. In order to reach the goal of individualising the model, a chain of tools including medical imaging, image acquisition and processing, mesh generation, material modelling and finite element simulation (possibly on parallel computer architectures) becomes necessary. The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.
Bond Graph Modelling of Engineering Systems: Theory, Applications and Software Support invites readers to consider the potential and the state of the art of bond graph modelling of engineering systems with respect to theory, applications and software support. Bond graph modelling is a physical modelling methodology based on first principles that is particularly suited for modelling multidisciplinary or mechatronic systems. This book covers theoretical issues and methodology topics that have been the subject of ongoing research in past years, presents promising new applications such as the bond graph modelling of fuel cells, and illustrates how bond graph modelling and simulation of mechatronic systems can be supported by software. This up-to-date, comprehensive presentation of various topics has been made possible by the cooperation of a group of authors who are experts in various fields and share the “bond graph way of thinking.”
The Web has become an indispensable prerequisite of everyday life, and the Web browser is the most used application on a variety of distinct devices. The content delivered by the Web has changed drastically from static pages to media-rich and interactive Web applications offering nearly the same functionality as native applications, a trend further pushed by the Cloud and more specifically the Cloud’s SaaS layer. In the light of this development, the security and performance of Web browsing have become a crucial issue.
The future of CCS technology depends largely on the development possibilities the legislator grants it. The draft bill adopted by the German federal government on 13 April 2011 approaches the use of CCS technology with great caution. Development as well as planning and investment certainty have had to take a back seat to concerns about the short- and long-term safety of CO2 storage. The following article discusses legal problems concerning the approval of CO2 capture plants, transport pipelines, and storage sites, as well as hazard prevention, aftercare, and liability.
Software offshoring has been established as an important business strategy over the last decade. While research on such forms of Global Software Development (GSD) has mainly focused on the situation of large enterprises, small enterprises are increasingly engaging in offshoring, too. Representing the biggest share of the German software industry, small companies are known to be important innovators and market pioneers. They often regard their flexibility and customer-orientation as core competitive advantages. Unlike large corporations, their small size allows them to adopt software development approaches that are characterized by a high agility and flat hierarchies. At the same time, their distinct strategies make it unlikely that they can simply adopt management strategies that were developed for larger companies.
Flexible development approaches like the ones preferred by small companies have proven problematic in the context of offshoring, as their strong dependency on constant communication is affected by the various barriers of international cooperation between companies. Cooperating closely across company borders, in different time zones, and in culturally diverse teams poses complex obstacles for flexible management approaches. It is still a matter of discussion in fields like Software Engineering and Computer-Supported Cooperative Work how these obstacles can be tackled and how they affect companies in the long term. Hence, it is agreed that we need a more detailed understanding of distributed software development practices in order to arrive at feasible technological and organizational solutions.
This dissertation presents results from two ethnographically-informed case studies of software offshoring in small German enterprises. By adopting Anselm Strauss’ concept of articulation work, we want to deepen the understanding of managing distributed software development in flexible, customer-oriented organizations. In doing so, we show how practices of coordinating inter-organizational software development are closely related to aspects of organizational learning in small enterprises. By means of interviews with developers and project managers from both parties of the cooperation, we not only take into account the multiple perspectives of the cooperation, but also include the socio-cultural background of international software development projects in our analysis.
Green IT (Green IS, Green ICT) is a concept for saving energy in order to reduce IT costs. A current survey shows that only few companies in German-speaking countries consider this aspect in their daily business. This is striking given the worldwide cost-saving efforts during the current economic crisis. This paper introduces Green IT and presents an IT management and controlling concept. The main results of a recently presented survey are then used to modify the concept. Finally, an agenda for future research is given.
Based on our reconfigurable FPGA spectrometer technology, we have developed a read-out system, operating in the frequency domain, for arrays of Microwave Kinetic Inductance Detectors (MKIDs). The readout consists of a combination of two digital boards: a programmable DAC/FPGA board (tone generator) to stimulate the MKID detectors and an ADC/FPGA unit to analyze the detector response. Laboratory measurements show no deterioration of the noise performance compared to low-noise analog mixing. Thus, this technique allows capturing several hundred detector signals with just one pair of coaxial cables.
With its decision of 7 April 2010, the First Senate of the BFH referred to the Grand Senate the question of whether the subjective error concept should be retained with respect to facts that become known only after the preparation of the balance sheet, but abandoned with respect to better legal insight gained after its preparation. The Grand Senate faces a difficult decision, since substantive tax accounting law, procedural law, and commercial law overlap in this question. As early as 1991, this area was described as a hopeless and tangled labyrinth in which it is difficult to discern the principles. In addition, the Grand Senate must weigh, from the standpoint of continuity of case law and legal certainty, whether it is justified to abandon case law that has stood for 50 years. The decision of the Grand Senate was delayed by the change of president at the BFH, but is now expected in the near future.
Small and remote households in northern regions demand thermal energy rather than electricity. A wind turbine in such places can be used to convert wind energy directly into thermal energy using a heat generator based on the principle of the Joule machine. A heat generator driven by a wind turbine can reduce the cost of energy for a heating system. However, the optimal performance of the system depends on the torque-speed characteristics of the wind turbine and the heat generator: to achieve maximum efficiency of operation, both characteristics should be matched. In this article, the condition for optimal performance is derived and an example of the system operating at maximum efficiency is simulated.
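The matching idea can be illustrated as follows: along its maximum-power locus, a turbine held at its optimal tip speed ratio produces torque proportional to the square of rotor speed, and a fluid-friction (Joule-machine) heat generator has the same quadratic torque law, so the two match at every speed when the coefficients are equal. A sketch with made-up parameters, not taken from the article:

```python
import math

# Illustrative parameters (assumptions, not from the article):
rho = 1.2        # air density, kg/m^3
R = 2.0          # rotor radius, m
Cp_max = 0.4     # peak power coefficient
lam_opt = 6.0    # optimal tip speed ratio

# At the optimal tip speed ratio, wind speed is v = omega*R/lam_opt, so rotor
# power P = 0.5*rho*A*Cp_max*v^3 becomes P = k*omega^3 and torque T = k*omega^2.
A = math.pi * R**2
k = 0.5 * rho * A * Cp_max * (R / lam_opt)**3

def turbine_torque(omega):
    """Aerodynamic torque along the maximum-power locus: T = k*omega^2."""
    return k * omega**2

# A heat generator acting as a fluid-friction brake with T_load = c*omega^2
# matches the turbine at every rotor speed when c = k.
c = k
def load_torque(omega):
    return c * omega**2
```

With matched coefficients the turbine stays on its maximum-power curve over the whole wind speed range, without active speed control.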
DNA Sequencing
(2011)
This paper addresses the special skills that learners in Internet-based learning scenarios need. In self-directed learning scenarios, as most Internet-based learning scenarios are designed to be, learners bear the responsibility for their own learning progress. To ease this task, institutions could prepare learners for a situation that may be quite different from their previous learning experiences. Based on a Delphi study conducted with experts from the e-learning sector in Germany, Austria, and Switzerland, the basic requirements have been determined.
This book is not a textbook in the strict sense. Developed from lecture notes, it aims to offer students of economics and business at universities of applied sciences and universities a systematic learning aid, focused on the essentials and including exercises, for preparing for undergraduate examinations in the subject of business taxation („Unternehmensbesteuerung“).
The work done in this thesis enhances the MMD algorithm in multi-core environments. The MMD algorithm, a transformation-based algorithm for reversible logic synthesis, is based on the work of Maslov, Miller and Dueck and their original, sequential implementation. It synthesises a formal function specification, provided as a truth table, into a reversible network and can perform several optimization steps after the synthesis. This work concentrates on one of these optimization steps, template matching. This approach reduces the size of a reversible circuit by replacing a sequence of gates that matches a template with an implementation of the same function that uses fewer gates. Smaller circuits have several benefits, since they need less area and are less costly. The template matching approach introduced in the original work is computationally expensive, since it tries to match a library of templates against the given circuit. For each template at each position in the circuit, a number of different combinations have to be evaluated at runtime, resulting in long execution times, especially for large circuits. In order to make the template matching approach more efficient and usable, it has been reimplemented to take advantage of modern multi-core architectures such as the Cell Broadband Engine or a Graphics Processing Unit. For this work, two algorithmically different approaches, each designed to play to one multi-core architecture's strengths, have been analyzed and improved. For the analysis, these approaches were cross-implemented on the two target hardware architectures and compared to the original parallel versions. Important metrics for this analysis are the execution time of the algorithm and the result of the minimization with the template matching approach. It could be shown that the algorithmically different approaches produce the same minimization results, independent of the hardware architecture used.
However, both cross-implementations also show significantly higher execution times, which makes them practically irrelevant. The results of this first analysis and comparison led to the decision to enhance only the original parallel approaches. Using the same metrics for successful enhancements as above, it could be shown that improving the algorithmic concepts and exploiting the capabilities of the hardware leads to better execution times and minimization results compared to the original implementations.
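The core idea behind template matching can be sketched in a greatly simplified form; the actual MMD approach also considers gate reorderings and partial template application, and the gate names below are invented for illustration:

```python
def apply_template(circuit, pattern, replacement):
    """Scan a gate list for a contiguous occurrence of `pattern` and replace
    it with the shorter, functionally equivalent `replacement`."""
    n = len(pattern)
    for i in range(len(circuit) - n + 1):
        if circuit[i:i + n] == pattern:
            return circuit[:i] + replacement + circuit[i + n:]
    return circuit  # no match: circuit unchanged

# Toy example: two identical self-inverse gates cancel to the identity.
circuit = ["CNOT(a,b)", "NOT(a)", "NOT(a)", "TOFFOLI(a,b,c)"]
reduced = apply_template(circuit, ["NOT(a)", "NOT(a)"], [])
print(reduced)  # → ['CNOT(a,b)', 'TOFFOLI(a,b,c)']
```

The cost that motivates the parallelization is visible even here: every template in the library has to be tried at every position, and the real algorithm additionally evaluates many gate combinations per position.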
This bachelor's thesis develops a characterisation concept for an autostereoscopic 3D display. The resulting parameter set forms the basis for the requirements of a 3D characterisation measurement station. The thesis shows the selection of suitable components and methods for building such a measurement station. Using the novel setup, an autostereoscopic display is characterised. After evaluating the acquired measurement data, the determined parameter set is compared against the requirements specification. Possible error sources in the setup and in the driving of the display are localised and, where possible, eliminated.
Erstellung des Online-Tutorials "Einführung in Fachdatenbanken und Fachportale der Niederlandistik"
(2011)
The smart home of the future is typically researched in lab settings or in apartments built from scratch. However, comparing the lifecycles of buildings and of information technology, it is evident that modernization strategies and technologies are needed to empower residents to modify and extend their homes to make them smarter. In this paper, we describe a case study on the deployment, adaptation, and adoption of tailorable home energy management systems in 7 private households. Based on this experience, we discuss how hardware and software technologies should be designed so that people can build their own smart home with high usability and a good user experience.
Nowadays, we input text not only on stationary devices, but also on handheld devices while walking, driving, or commuting. Text entry on the move, which we term nomadic text entry, is generally slower. This is partially due to the need for users to move their visual focus from the device to their surroundings for navigational purposes and back. To investigate whether better feedback about users' surroundings on the device can improve performance, we present a number of new and existing feedback systems: textual, visual, textual & visual, and textual & visual via a translucent keyboard. Experimental comparisons between the conventional technique and these techniques established that increased ambient awareness for mobile users enhances nomadic text entry performance. Results showed that the textual and the textual & visual via translucent keyboard conditions increased text entry speed by 14% and 11%, respectively, and reduced the error rate by 13% compared to the regular technique. The two methods also significantly reduced the number of collisions with obstacles.
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite the perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) that could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on how physics novices reason about everyday physical phenomena. We also formalize some definitions of properties of objects in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular.
Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept for the usefulness of naive physics in external fault reasoning in robotics.
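As a loose illustration of this style of reasoning (not the thesis's first-order framework; the object properties, fault hypotheses, and payload figure are invented), a qualitative check over object properties can generate candidate external-fault hypotheses for a failed action:

```python
def diagnose_lift_failure(obj, robot_payload_kg):
    """Return candidate external-fault hypotheses for a failed lift action,
    derived from qualitative object properties and a simple physical rule:
    an object can be lifted only if it is free and light enough."""
    hypotheses = []
    if obj["fixed"]:
        hypotheses.append("object is attached to its support")
    if obj["mass_kg"] > robot_payload_kg:
        hypotheses.append("object is too heavy")
    return hypotheses or ["no external fault found among modeled properties"]

cup = {"fixed": False, "mass_kg": 12.0}
print(diagnose_lift_failure(cup, robot_payload_kg=5.0))  # → ['object is too heavy']
```

The point of the naive-physics approach is that such rules are stated over qualitative properties (fixed, heavy) rather than precise dynamics, so the same diagnosis machinery applies across objects and situations.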
forschung@h-brs
(2011)
The topic of lateral thinking („Querdenken“) is so topical, exciting, and interesting that, after our first conversations, Prof. Dr. Jens Böcker of the Hochschule Bonn-Rhein-Sieg decided to assign a project on this topic to his students. Prof. Dr. Böcker is a professor of business administration with a focus on marketing in the Fachbereich Wirtschaftswissenschaften. The „Forschungsprojekt Querdenken“ was carried out under Prof. Dr. Böcker in the 2010 summer semester by a group of students led by Manuel Hammes as project manager.
The complexity of decisions in fleet management has increased significantly in the recent past. This raises the demands on the fleet controller to support the fleet manager with decision-relevant information in the sense of an internal service provider. Dynamic carbon accounting makes it possible to address strategic, structural, and cultural requirements of fleet controlling instrumentally by combining activity-based costing, target costing, life cycle costing, and the ideas of carbon accounting. Depending on the importance of sustainability for corporate success, the associated payments can be recorded in an even more differentiated manner. It is conceivable, for example, to integrate the external costs of emissions of NOx, non-methane hydrocarbons, and particulates, as well as of noise and accidents, into the calculation. In this way, each vehicle's contribution to achieving emission targets is made transparent and, through targeted integration into the company's controlling process, can be planned and steered. Since the complexity of economic activity can be expected to increase further, the practical need for dynamic, market-oriented controlling instruments will continue to grow, both in general and specifically in fleet controlling.
Fuzzelarbeit: Identifizierung unbekannter Sicherheitslücken und Software-Fehler durch Fuzzing
(2011)
Fuzzing, the tool-supported identification of security vulnerabilities, is typically employed in the final stage of software development. It is suitable for finding vulnerabilities in any kind of software. In fuzzing, the robustness of the target software is tested with targeted, unexpected input data. The article describes the fuzzing process as well as the taxonomy of fuzzers, which are divided into "dumb" and "intelligent" fuzzers. Vulnerabilities or bugs in the target software are identified through comprehensive monitoring (debuggers, profilers, trackers). The usually large number of identified weaknesses and vulnerabilities makes it necessary to assess each one, because for economic reasons not all of them can normally be fixed. Important assessment parameters include: discoverability by third parties, reproducibility, exploitability, required access rights and the damage that can be caused. Around 300 tools are available on the Internet, but the quality of a fuzzer cannot be stated in general terms: a fuzzer's effectiveness and suitability depend on the target software and the tester's individual requirements.
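The "dumb" fuzzing approach described in the abstract can be sketched minimally as random mutation of a seed input plus crash monitoring. The names (`mutate`, `fuzz`, the toy target) are hypothetical illustrations, not part of any tool discussed in the article:

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """'Dumb' fuzzing step: randomly overwrite bytes, with no
    knowledge of the input format."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000) -> list[bytes]:
    """Feed mutated inputs to the target and record every input that
    makes it crash (here: raise an exception) for later assessment."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes
```

An "intelligent" fuzzer would replace `mutate` with mutations that respect the target's input grammar; the monitoring loop stays the same.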
Superconducting heterodyne receivers have played a vital role in high-resolution spectroscopy for astronomy and atmospheric research up to 2 THz. The NbN hot electron bolometer (HEB) mixer, the most sensitive mixer above 1.5 THz, has been used in the Herschel space telescope for 1.4-1.9 THz and has also shown ultra-high sensitivity up to 5.3 THz. Combining an HEB mixer with a novel THz quantum cascade laser (QCL) as local oscillator (LO), such an all-solid-state heterodyne receiver provides a technology usable for any balloon-, air- or space-borne heterodyne instrument above 2 THz. Here we report the first high-resolution heterodyne spectroscopy measurement with such an HEB-QCL receiver using a gas cell. The receiver employs a 2.9 THz metal-metal waveguide QCL as LO and an NbN HEB as mixer. Using a gas cell filled with methanol (CH3OH) gas in combination with hot/cold blackbody loads as signal source, we successfully recorded the methanol emission line around 2.918 THz. Spectral lines at different pressures and at different QCL frequencies are studied.
Since the introduction of diagnosis-related flat rates (Fallpauschalen) in 2004, competition in the German hospital market has intensified rapidly. Across all bed-size classes, the number of hospitals with economic problems has risen. According to the Krankenhaus-Barometer, one in four hospitals closed 2009 with a loss. Mid-sized hospitals (300 to 599 beds) performed best economically: they most frequently posted an annual surplus (67.8 percent) and least frequently an annual deficit (9.9 percent). Small hospitals (50 to 299 beds) tend to be in the red more often (23.5 percent) and less often achieve profits (59.2 percent) or balanced results (15.3 percent). A clear strategic positioning in the market is therefore becoming increasingly important, particularly for them, in order to secure market share in the long term. A suitable instrument for creating transparency and clarity with regard to current and long-term regional market conditions is geographic market analysis based on geocoding, which was analysed and deployed in a pilot project at the Dreifaltigkeits-Krankenhaus Wesseling.
Given the growing media coverage of social networks, the aim of this thesis was to analyse the business model of social networks in more detail. The thesis shows that online social networks belong to those Internet services that have existed for some time but achieved their real breakthrough only in recent years. Initially used as pure communication platforms, they now serve general leisure activities and are increasingly integrated into everyday life. The thesis deals with the economic particularities of online social networks, analysing network effects, supply and demand behaviour, critical-mass phenomena, tippy markets, network laws, lock-in effects and switching costs. It examines whether and to what extent clearly identifiable business models lie behind online social networks. Building on a critical examination of the variety of existing business models, a viable approach of its own is developed. On this basis, existing online networks are analysed and their degree of innovation assessed.
Governance and Sustainability in Information Systems. Managing the Transfer and Diffusion of IT
(2011)
As universities of applied sciences, the Fachhochschulen have changed considerably since their founding in the early 1970s. The range of subjects at many Fachhochschulen is now comparable to that of the universities, and in some subjects Fachhochschulen even train the majority of graduates. Application-oriented cutting-edge research is part of the self-image of many Fachhochschulen. Against this background, it is incomprehensible, and damaging to future economic viability, that Fachhochschulen still face clear competitive disadvantages in the further qualification of young researchers. This is all the more true when private universities comparable to Fachhochschulen are granted the right to award doctorates.
We present GrIP, a graph-based framework for post-processing filters that allows compatible filters to be arranged and connected in a directed acyclic graph for real-time image manipulation. Whole filter graphs can thus be constructed through an external interface, avoiding a recompilation cycle after changes to the post-processing. Filter graphs are implemented as XML files containing a collection of filter nodes with their parameters as well as linkage (dependency) information. Implemented methods include (but are not restricted to) depth of field, depth darkening and an implementation of screen-space shadows, all applicable in real time, with manipulable parameterizations.
We present the extensible post-processing framework GrIP, usable for experimenting with screen-space graphics algorithms in arbitrary applications. The user can easily implement new ideas as well as add known operators as components to existing ones. Through a well-defined interface, operators are realized as plugins that are loaded at run-time. Operators can be combined by defining a post-processing graph (PPG) in a specific XML format, where nodes are the operators and edges define their dependencies. User-modifiable parameters can be manipulated through an automatically generated GUI. In this paper we describe our approach, show some example effects and give performance numbers for some of them.
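The core idea of both GrIP abstracts, that operators form a dependency graph evaluated in topological order, can be sketched as follows. The operators (`blur`, `darken`) and the function `run_ppg` are hypothetical stand-ins for illustration; the actual framework loads its operators as run-time plugins configured via XML, not as Python functions:

```python
from graphlib import TopologicalSorter

# Hypothetical stand-ins for post-processing operators; each maps an
# "image" (here just a list of pixel intensities) to a new image.
def blur(img):
    return [v // 2 for v in img]

def darken(img):
    return [max(v - 10, 0) for v in img]

def run_ppg(image, nodes, edges):
    """Evaluate a post-processing graph.

    nodes maps operator name -> operator function; edges maps operator
    name -> the names it depends on. Operators run in dependency order,
    each consuming its predecessor's output (source image if none).
    """
    results = {}
    for name in TopologicalSorter(edges).static_order():
        deps = edges.get(name, ())
        src = image if not deps else results[deps[0]]
        results[name] = nodes[name](src)
    return results
```

Because the graph is data (in GrIP: an XML file), a new filter chain needs only a changed configuration, not a recompilation, which is exactly the property the abstract emphasizes.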