H-BRS Bibliography
Indoor spaces exhibit microbial compositions that are distinctly dissimilar from one another and from outdoor spaces. Unique in this regard, and a topic that has only recently come into focus, is the microbiome of hospitals. While knowing exactly which microorganisms propagate how and where in hospitals would undoubtedly be beneficial for preventing hospital-acquired infections, there are, to date, no standardized procedures for studying the hospital microbiome. Our study aimed to investigate the microbiome of hospital sanitary facilities, outlining the extent to which hospital microbiome analyses differ according to sample-preparation protocol. For this purpose, fifty samples were collected from two separate hospitals—from three wards and one hospital laboratory—using two different storage media, from which DNA was extracted with two different extraction kits and sequenced with two different primer pairs (V1–V2 and V3–V4). There were no observable differences between the sample-preservation media, small differences in detected taxa between the DNA extraction kits (mainly concerning Propionibacteriaceae), and large differences in detected taxa between the two primer pairs V1–V2 and V3–V4. The analysis also showed that microbial occurrences and compositions can vary greatly from toilets to sinks to showers and across wards and hospitals. In surgical wards, patient toilets appeared to be characterized by lower species richness and diversity than staff toilets. Which sampling sites are best suited for which assessments should be analyzed in more depth. The fact that the sample-processing methods we investigated (apart from the choice of primers) changed the results only slightly suggests that comparing hospital microbiome studies is a realistic option.
The observed differences in species richness and diversity between patient and staff toilets should be further investigated, as these, if confirmed, could be a result of excreted antimicrobials.
The corporate landscape is experiencing increasing change in business models due to digitization. The growing availability of data along business processes enhances the opportunities for process automation. Technologies such as Robotic Process Automation (RPA) are widely used for business process optimization, but as a side effect an increase in stand-alone solutions and a lack of holistic approaches can be observed. Intelligent Process Automation (IPA) is said to support more complex processes and enable automated decision-making, but the lack of connectors makes its implementation difficult. RPA marketplaces can be a bridging technology that helps companies implement Intelligent Process Automation. This paper explores the drivers of and challenges to the adoption of RPA marketplaces to realize IPA. For this purpose, we conducted ten expert interviews with decision-makers and IT staff from the process automation sector.
Trust-Building in Peer-to-Peer Carsharing: Design Case Study for Algorithm-Based Reputation Systems
(2023)
Peer-to-peer sharing platforms are becoming increasingly important in the platform economy. From an HCI perspective, this development is of high interest, as these platforms mediate between different users. Such mediation entails dealing with various social issues, e.g., building trust between peers online without any physical presence. Peer ratings have proven to be an important mechanism in this regard. At the same time, scoring via car telematics is becoming more common for risk assessment by car insurers. Since user ratings face crucial problems such as fake or biased ratings, we conducted a design case study to determine whether algorithm-based scoring has the potential to improve trust-building in P2P carsharing. We started with 16 problem-centered interviews to examine how people understand algorithm-based scoring, then co-designed an app with scored profiles, and finally evaluated it with 12 participants. Our findings show that scoring systems can support trust-building in P2P carsharing and give insights into how they should be designed.
Dynamic Programming
(2024)
Driven by ongoing digitalization and the big data trend, ever more data is becoming available. This creates many opportunities, especially for companies. The ability to handle and analyze this data is reflected in the role of the data scientist, currently one of the most sought-after professions. However, integrating data into corporate strategy and culture is a major challenge: complex data and analysis results must also be communicated to stakeholders without a data background. This is where data storytelling plays a decisive role, because to effect change with data, understanding of and motivation for the subject matter must first be created in a target-group-specific way. Data storytelling, however, is still a niche topic. Using a systematic literature review, this thesis derives the success factors of data storytelling for effective and efficient communication of data, in order to support data scientists in research and practice in communicating their data and results.
Due to ongoing digitalization, more and more cloud services are finding their way into companies. In this context, data integration across the various software solutions, which are provided both on-premise (software used or licensed for local operation) and as a service, is of great importance. Integration Platform as a Service (IPaaS) models aim to support companies as well as software providers with data integration by providing connectors that enable data flow between different applications and systems, along with other integration services. Since previous research has mostly focused on technical or legal aspects of IPaaS, this article focuses on deriving integration practices as well as design-related barriers and drivers regarding the adoption of IPaaS. To this end, we conducted 10 interviews with experts from different Software-as-a-Service vendors. Our results show that the main factors in the adoption of IPaaS are the standardization of data models, the usability and variety of the connectors provided, and issues regarding data privacy, security, and transparency.
Data has emerged as a central success factor for companies seeking to benefit from digitization. However, the skills needed to successfully create value from data – especially at the management level – are not always well developed. To address this problem, several canvas models have already been designed. Canvas models are usually created to write down an idea in a structured way and thus promote transparency and traceability. However, some existing data science canvas models mainly address developers and are thus unsuitable for decision-makers and for communication within interdisciplinary teams. Based on a literature review, we identified influencing factors that are essential for the success of data science projects. With the information gained, the Data Science Canvas was developed in an expert workshop and finally evaluated by practitioners to find out whether such an instrument could support data-driven value creation.
In the course of growing online retailing, recommendation systems that derive recommendations from customers' purchase histories have become established. Recommending suitable food products can represent lucrative added value for food retailers, but at the same time challenges them to make good predictions for repeated food purchases. Repeat-purchase recommendations, which predict when a product will be purchased again by a customer, have received little attention in the literature. They are especially important for food recommendations, since it is not the frequency of the same item in the shopping basket that is relevant for determining repeat-purchase intervals, but rather the time between purchases. In this paper, in addition to critically reflecting on classical recommendation systems in the repeat-purchase context, two models for online product recommendations are derived from the literature, validated, and discussed for the food context using real transaction data from a German brick-and-mortar food retailer.
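The simplest repeat-purchase predictor implied above estimates the next purchase date of a (customer, product) pair from the mean inter-purchase interval of its history. A minimal Python sketch of that generic baseline (not one of the two models derived in the paper, whose details the abstract does not give):

```python
from datetime import date, timedelta
from statistics import mean

def next_purchase_estimate(purchase_dates):
    """Estimate when a product will be bought again from the mean
    inter-purchase interval of one (customer, product) history."""
    dates = sorted(purchase_dates)
    if len(dates) < 2:
        return None  # no interval information yet
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return dates[-1] + timedelta(days=round(mean(gaps)))

# A customer who buys the same product weekly:
history = [date(2024, 1, 1), date(2024, 1, 8), date(2024, 1, 15)]
print(next_purchase_estimate(history))  # 2024-01-22
```

This illustrates why interval differences, not basket frequency, drive the prediction: only the gaps between dates enter the estimate.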
Sequencing Problems
(2024)
This article describes an approach to rapidly prototyping the parameters of a Java application running on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit, which iteratively cycles through multiple possible parameter sets, benchmarks them, and proposes the best alternative to the user. The user can thus, without any prior knowledge of the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
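The iterative cycle of generating parameter sets, benchmarking them, and proposing the best one can be sketched as below. The parameter space, option strings, and scoring stub are illustrative stand-ins, not the toolkit's actual pattern rules:

```python
import itertools

# Hypothetical candidate VM options; a real tuning run would cover GC
# policy, heap sizes, JIT settings, etc.
PARAM_SPACE = {
    "-Xgcpolicy:": ["gencon", "optthruput"],
    "-Xmx": ["512m", "1024m"],
}

def candidate_sets(space):
    """Enumerate all combinations of VM parameters."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield [k + v for k, v in zip(keys, values)]

def tune(benchmark, space):
    """Benchmark every parameter set and return the fastest one."""
    best_params, best_time = None, float("inf")
    for params in candidate_sets(space):
        elapsed = benchmark(params)  # e.g. wall-clock time of one JVM run
        if elapsed < best_time:
            best_params, best_time = params, elapsed
    return best_params, best_time

# Stand-in benchmark: a real toolkit would spawn `java <params> ...`
# and time it; here we just score combinations deterministically.
def fake_benchmark(params):
    return 10.0 - len([p for p in params if "1024m" in p or "gencon" in p])

print(tune(fake_benchmark, PARAM_SPACE))
```

Exhaustive enumeration works for small spaces; the article's pattern-matching rules would instead prune the candidates before benchmarking.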
Improving the Performance of Parallel SpMV Operations on NUMA Systems with Adaptive Load Balancing
(2018)
For a parallel Sparse Matrix-Vector Multiply (SpMV) on a multiprocessor, rather simple and efficient work distributions often produce good results. In cases where they do not, adaptive load balancing can improve both balance and performance. This paper introduces a low-overhead framework for adaptive load balancing of parallel SpMV operations. It uses statistical filters to gather relevant runtime performance data and to detect imbalance situations. Three different algorithms that adaptively balance the load with high quality and low overhead were compared. Results show that for sparse matrices where adaptive load balancing was enabled, our best algorithm achieved an average speedup of 1.15 (in total execution time) across four different matrix formats and two different NUMA systems.
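The feedback loop behind such a framework can be illustrated with a minimal Python sketch: an SpMV over a row partition, plus a greedy rebalancing step driven by measured per-partition times. The statistical filters and the three algorithms from the paper are not reproduced here; all names are illustrative:

```python
def spmv_rows(indptr, indices, data, x, rows):
    """CSR sparse matrix-vector product restricted to a set of rows
    (the work assigned to one thread/partition)."""
    out = {}
    for r in rows:
        s = 0.0
        for k in range(indptr[r], indptr[r + 1]):
            s += data[k] * x[indices[k]]
        out[r] = s
    return out

def rebalance(partitions, times):
    """Move one row from the slowest to the fastest partition.
    A real framework would use filtered runtime statistics; this
    greedy step just illustrates the adaptation loop."""
    slow = max(range(len(times)), key=times.__getitem__)
    fast = min(range(len(times)), key=times.__getitem__)
    if slow != fast and len(partitions[slow]) > 1:
        partitions[fast].append(partitions[slow].pop())
    return partitions

# Toy 3x3 CSR matrix [[1,0,2],[0,3,0],[4,0,5]]:
indptr, indices = [0, 2, 3, 5], [0, 2, 1, 0, 2]
data, x = [1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 1.0, 1.0]
print(spmv_rows(indptr, indices, data, x, [0, 1, 2]))
```

In a real implementation the per-partition times would come from timing repeated SpMV iterations, and rebalancing would run only when the measured imbalance exceeds its own overhead.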
The development of technical products strives for acceptance by the market. But the abstract construct of the market is shaped by human decisions: users either like working with a technical system or they reject it to a greater or lesser extent. Sooner or later, rejection by users also leads to rejection by the decision-makers in companies and other institutions who, together with private users, constitute the market. The economic success of developing and marketing technical products therefore stands or falls with the degree of user acceptance achieved.
This paper presents the methodical approach and early findings of the project SEN-TAF (Technology Acceptance by the Elderly to Increase Independence). The project aims to examine the acceptance of robotic systems by elderly people and to make early recommendations on the features such systems should contain. Based on theoretical approaches to technology acceptance and an empirical study examining the elderly's general need for support, we developed several scenarios of robot applications. These scenarios are then visualized in animations and simulations to check the preliminarily defined acceptance model. Besides these scenarios, we survey several other factors which might have an impact on overall acceptance, e.g., the appearance of the robotic systems (humanoid vs. technical appearance) and the interaction mode (speaking vs. non-speaking). In addition to these animations and simulations, we survey the acceptance of the robotic dog AIBO as an early placeholder for future developments in animal-like robotic systems that could serve as a resource against boredom.
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly utilized to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic must be highly scalable and are therefore denoted Ultra-Large-Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems: in his doctoral thesis, Fielding introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is another crucial quality factor besides high scalability.
The MoMoSat service will enable mobile end-users to view, manage, annotate, and communicate map-based information in the field. The handled information consists of a huge volume of raster data (satellite or aerial images) and vector data (e.g., street networks, cadastral maps, or points of interest), as well as geo-referenced textual notes (so-called 'GeoNotes') and real-time voice.
Knowledge-Based Instrumentation and Control for Competitive Industry-Inspired Robotic Domains
(2016)
The present invention relates to an analysis system and a library-independent analysis method for the qualitative detection and classification of energetic materials, in particular for the detection of explosives and of complex material compositions used in IEDs (Improvised Explosive Devices).
The maternity record (Mutterpass) was introduced in paper form in the early 1960s as an important preventive-care instrument for pregnant women. It is used in 90% of all pregnancies. Since its introduction in 1968, however, the complexity of prenatal examinations has increased, as have the circumstances accompanying a pregnancy. This motivated the development of an electronic counterpart to the paper-based maternity record, in order to meet the grown requirements of medical documentation and evaluation. A major challenge in designing and developing the electronic maternity record was the definition of a structured, machine-readable exchange format. In addition, globally unique identifiers had to be newly developed to represent the maternity record electronically. After the prototypical realization of a complete version, piloting began in the Rhine-Neckar metropolitan region in spring 2008.
Despite positive business development, the Volksbanken and Raiffeisenbanken (German cooperative banks) face increasing competitive pressure, which the institutions also want to counter by expanding their lending business. For this reason, they must move to active sales management, including in the lending business. However, the credit risk associated with lending, measured both by its absolute size and by its effect on results, already constitutes a central risk for cooperative credit institutions, one that must be made manageable through consistent reporting, not least for regulatory reasons. The aim of this thesis is to develop such a credit reporting system for the Raiffeisenbank Rheinbach-Voreifel eG, one that could also be applied at other cooperative institutions. It is intended to ensure efficient management of the lending business based on the management perspectives of risk, earnings, and process. At the same time, the system is kept user-friendly through a manageable number of key figures.
The development of advanced robotic systems is challenging, as expertise from multiple domains needs to be integrated conceptually and technically. Model-driven engineering promises an efficient and flexible approach to developing robotics applications that copes with this challenge. Domain-specific modeling makes it possible to describe robotics concerns with concepts and notations closer to the respective problem domain. This raises the level of abstraction and results in models that are easier to understand and validate. Furthermore, model-driven engineering makes it possible to increase the level of automation, e.g., through code generation, and to bridge the gap between modeling and implementation. The anticipated results are improved efficiency and quality in the robotics systems engineering process. In this contribution, we survey the available literature on domain-specific modeling and languages that target core robotics concerns. In total, 137 publications were identified that comply with a set of defined criteria which we consider essential for contributions in this field. With the presented survey, we provide an overview of the state of the art of domain-specific modeling approaches in robotics. The surveyed publications are investigated from the perspective of users and developers of model-based approaches in robotics along a set of quantitative and qualitative research questions. The presented quantitative analysis clearly indicates the rising popularity of applying domain-specific modeling approaches to robotics in the academic community. Beyond this statistical analysis, we map the selected publications to a defined set of robotics subdomains and typical development phases in robotic systems engineering as a reference for potential users.
Furthermore, we analyze these contributions from a language engineering viewpoint and discuss aspects such as the methods and tools used for their implementation as well as their documentation status, platform integration, typical use cases and the evaluation strategies used for validation of the proposed approaches. Finally, we conclude with recommendations for discussion in the model-driven engineering and robotics community based on the insights gained in this survey.
This study deals with the in-situ detection of volume fractions of melt in labradorite and basalt at 0.3 GPa pressure and temperatures ranging from 400–1500 °C. The methods used were frequency-dependent electrical conductivity (EC) and energy-dispersive X-ray diffraction (EDX). These techniques allowed melt-fraction determination under in-situ pressure and temperature conditions, while microscopic analysis (SEM) was performed on quenched samples. EC allowed detecting melt fractions as low as 0.03 due to changes in dielectric properties. Increasing melt fractions caused the formerly isolated melt bubbles to interconnect along grain boundaries, thus increasing the bulk conductivity. Electrical conductivity thus provides a measure for both the formation of melt (dielectric property) and the degree of interconnection of melt (bulk conductivity). Energy-dispersive X-ray diffraction experiments provided an additional measure for the volume fraction of melt; the EDX diffraction data were used to calculate the melt fraction on the basis of the peak-to-background ratio. In a final step, the experimental data (SEM, EC, EDX) were compared with geometric models of melt distribution, namely the Archie, cube, tube, and Hashin-Shtrikman HS+ and HS− models. The electrical "polarisability" data closely fit the HS+ model, while the bulk conductivity data were found to be less sensitive for melt-fraction detection.
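Two of the mixing models compared in the study have simple closed forms. A minimal Python sketch of Archie's law and the two-phase Hashin-Shtrikman bounds, with illustrative parameter values rather than the fitted values from the experiments:

```python
def archie(sigma_melt, phi, c=1.0, n=2.0):
    """Archie's law: bulk conductivity from melt fraction phi.
    c and n are empirical fit parameters (values here are illustrative)."""
    return c * sigma_melt * phi ** n

def hashin_shtrikman(sigma_solid, sigma_melt, phi):
    """Hashin-Shtrikman lower (HS-) and upper (HS+) bounds for the bulk
    conductivity of a two-phase mixture with melt fraction phi, assuming
    sigma_melt > sigma_solid."""
    f_s, f_m = 1.0 - phi, phi
    lower = sigma_solid + f_m / (1.0 / (sigma_melt - sigma_solid)
                                 + f_s / (3.0 * sigma_solid))
    upper = sigma_melt + f_s / (1.0 / (sigma_solid - sigma_melt)
                                + f_m / (3.0 * sigma_melt))
    return lower, upper

# Illustrative conductivities (S/m) for a resistive solid and conductive melt:
print(hashin_shtrikman(0.001, 1.0, 0.1))
```

The HS+ bound corresponds to melt forming the interconnected phase, which is why it matches an interconnected melt network better than the lower bound; both bounds collapse to the single-phase values at phi = 0 and phi = 1.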
Not only the practice buyer but also the physician handing over the practice must address the requirements of a practice sale or purchase in good time. A well-arranged succession is usually a building block of the current owner's retirement provision. The topic is particularly timely because banks already make the credit rating of 50-year-old practice owners, and thus their continued access to credit, dependent on a coherent succession concept.
Against the background of scarce resources, growing demand for rehabilitation, and the political debate on demographically adjusting rehabilitation budgets, demonstrating the outcome quality of medical rehabilitation services continues to gain central importance (e.g., Haaf, 2005; Steiner et al., 2009). Continuous, clinic-comparative monitoring of treatment outcomes is, moreover, an important building block of a functioning quality management system (Schmidt et al., in press). It enables "learning from the best" and leads to organizational learning processes (Toepler et al., 2010).