H-BRS Bibliography
Robots applied in therapeutic scenarios, for instance in the therapy of individuals with Autism Spectrum Disorder, are sometimes used for imitation learning activities in which a person needs to repeat motions performed by the robot. To simplify the task of incorporating new types of motions that a robot can perform, it is desirable that the robot can learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Given a sequence of observations of various joints, the joint positions are mapped to match the configuration of a robot before being executed by a PID position controller. We evaluate the method, in particular the reproduction error, in a study with QTrobot in which the robot acquired different upper-body dance moves from multiple participants. The results indicate the method's overall feasibility, but also that the reproduction quality is affected by noise in the skeleton observations.
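The mapping and control pipeline described above can be sketched as follows. This is our own minimal illustration, not the paper's implementation; the joint ranges and controller gains are placeholder assumptions:

```python
# Illustrative sketch: rescale an observed human joint angle into a robot's
# joint limits, then track the target with a discrete PID position controller.
# Ranges and gains below are invented placeholder values.

def map_to_robot_range(angle, human_range=(-3.14, 3.14), robot_range=(-2.0, 2.0)):
    """Linearly rescale a human joint angle into the robot's joint limits."""
    h_lo, h_hi = human_range
    r_lo, r_hi = robot_range
    t = (angle - h_lo) / (h_hi - h_lo)
    return r_lo + t * (r_hi - r_lo)

class PID:
    """Discrete PID position controller with placeholder gains."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.1, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, current):
        """Return the control command for one timestep."""
        error = target - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a setup like the one in the paper, one such controller would run per robot joint, tracking the remapped skeleton trajectory.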
This thesis introduces and demonstrates a novel method by which an autonomous robot learns qualitative models of the world. The method makes it possible to generate qualitative models that can be used for prediction as well as for directing experiments to improve the model. The qualitative models form the knowledge representation of the robot and consist of qualitative trees and a non-deterministic finite automaton. An efficient exploration algorithm that lets the robot collect the most relevant learning samples is also introduced. To demonstrate the use of the methodology, representation, and algorithm, two experiments are described. The first experiment is conducted with a mobile robot and a ball: the robot observes the ball and learns the effect of its actions on the observed attributes of the world. The second experiment is conducted with a mobile robot and five boxes, two non-movable and three movable. The robot actively experiments with the objects and observes the changes in the attributes of the world. The main difference between the two experiments is that the first learns by observation while the second learns by experimentation. In both experiments the robot learns qualitative models from its actions and observations. Although the primary objective of the robot is to improve itself by being able to predict the outcome of its actions, the learned models were also used at each step of the learning process to direct the experiments so that the model converges to the final model as quickly as possible.
Compliant manipulation is a crucial skill for robots when they are supposed to act as helping hands in everyday household tasks. Nowadays, however, those skills are hand-crafted by experts, which frequently requires labor-intensive manual parameter tuning; moreover, some tasks are too complex to be fully captured by a task specification. Learning these skills, by contrast, requires a high number of costly and potentially unsafe interactions with the environment. We present a compliant manipulation approach using reinforcement learning guided by the Task Frame Formalism, a task specification method. This allows us to specify the easy-to-model knowledge about a task while the robot learns the unmodeled components by reinforcement learning. We evaluate the approach on a compliant manipulation task with a KUKA LWR 4+ manipulator. The robot was able to learn force control policies directly on the robot without using any simulation.
In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e., a symbol sequence representing an action) in terms of the relations between changes in the observations and the action instances. We present a method for inducing a theory (i.e., a semantic description) of the meaning of a command from a minimal set of background knowledge. The only input is a sequence of observations, from which we extract the kinds of effects that were caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
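The kind of reward shaping and policy learning discussed above can be illustrated with a short sketch. This is our own illustration, not the paper's model; the weights, state names, and action names are invented:

```python
# Hypothetical sketch: combine a user's engagement estimate and activity
# performance into a scalar reward, and apply one tabular Q-learning update
# to a robot behaviour policy. All constants are illustrative assumptions.

def reward(engagement, performance, w_eng=0.5, w_perf=0.5):
    """Scalar reward from engagement and performance, both in [0, 1]."""
    return w_eng * engagement + w_perf * performance

def q_update(q, state, action, r, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict mapping (state, action) -> value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (r + gamma * best_next - old)
```

A learned user model, as in strategy i), would supply the engagement estimates used to compute such a reward during simulated interactions, before the policy is transferred to real users.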
This technical report is the final report of a cooperation project between the LANUV and the Internationales Zentrum für Nachhaltige Entwicklung (Hochschule Bonn-Rhein-Sieg), carried out on behalf of the Ministry for Environment, Agriculture, Nature and Consumer Protection, investigating the quantities of and reasons for food losses of fruit, vegetables, and potatoes and developing avoidance strategies in the winter of 2016/2017.
Demographic change and its accompanying problems, such as skills shortages, an ageing workforce, and a continuous loss of know-how, are no longer unfamiliar terms. The same cannot be said of possible solutions. In her book, Ms Kramer presents one possible solution: life-phase-oriented HR policy. Using practical examples, she outlines two different concepts. The idea behind it: whereas traditional HR policy narrowly focuses on the first 20 years of working life, life-phase-oriented HR policy extends this view to the entire working lifetime.
With the start of teaching in the winter semester of 1995/96, the business departments in Sankt Augustin and Rheinbach set themselves the goal of continuous quality assurance and improvement of education. Evaluation of teaching and studies was implemented early on. The department regards this teaching and evaluation report as an instrument of self-directed quality assurance.
Teaching at Universities
(2015)
The Servicestelle Lehrbeauftragtenpool is a joint project of the universities Bonn-Rhein-Sieg, Düsseldorf, Niederrhein, and Rhein-Waal, funded by the Federal Ministry of Education and Research (BMBF). Our aim is to improve the quality of teaching by bringing people with professional experience to the universities and into teaching.
Since 1989, the IT-Controlling special interest group within the Business Informatics division of the Gesellschaft für Informatik e. V. has brought together executives from information and IT management, IT controlling, business and IT consultants, and researchers to discuss methods, applications, and challenges of IT controlling. In the German-speaking world, the group is the central expert body for the controlling of business information processing (currently commonly referred to as IT controlling and IV controlling; largely synonymous terms include informatics controlling, information-system controlling, and information controlling).
Less is Often More: Header Whitelisting as Semantic Gap Mitigation in HTTP-Based Software Systems
(2021)
The web is the most widespread digital system in the world and is used for many crucial applications. This makes web application security extremely important and, although many security measures already exist, new vulnerabilities are constantly being discovered. One reason for some of the recent discoveries lies in the presence of intermediate systems (e.g., caches, message routers, and load balancers) on the way between a client and a web application server. The implementations of such intermediaries may interpret HTTP messages differently, which leads to a semantically different understanding of the same message. This so-called semantic gap can cause weaknesses in the entire HTTP message processing chain.
In this paper we introduce the header whitelisting (HWL) approach to address the semantic gap in HTTP message processing pipelines. The basic idea is to normalize and reduce an HTTP request header to the minimum required fields using a whitelist before processing it in an intermediary or on the server, and then restore the original request for the next hop. Our results show that HWL can avoid misinterpretations of HTTP messages in the different components and thus prevent many attacks rooted in a semantic gap including request smuggling, cache poisoning, and authentication bypass.
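The basic idea can be illustrated in a few lines of code. The whitelist below is a hypothetical minimal set, not the one used in the paper:

```python
# Minimal illustration of header whitelisting (HWL): reduce request headers
# to a normalized whitelisted view before processing, keeping the originals
# aside so the request can be restored for the next hop.

ALLOWED = {"host", "content-length", "content-type", "authorization"}

def whitelist_headers(headers):
    """Split headers into a normalized, whitelisted view and the stashed rest."""
    kept, stashed = {}, {}
    for name, value in headers.items():
        key = name.strip().lower()          # normalize field names
        if key in ALLOWED:
            kept[key] = value.strip()       # normalized view for processing
        else:
            stashed[name] = value           # kept aside, untouched
    return kept, stashed

def restore_headers(kept, stashed):
    """Recombine both parts so the original request continues to the next hop."""
    restored = dict(kept)
    restored.update(stashed)
    return restored
```

An intermediary would process only the `kept` view, so a field outside the whitelist (for instance, an ambiguous `Transfer-Encoding` variant used in request smuggling) cannot change its interpretation of the message.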
The curricula of all degree programs at H-BRS include many practice-oriented activities and focus on hands-on learning. In labs and small classrooms (30–60 persons), students get a personalized learning environment, complemented by many individual and group projects that foster collaborative work situations. There are several main areas in which students learn from working with industry, local organizations, or public institutions.
A reference model is always developed to support a specific purpose, with the development environment setting the broader context. Limitations are set not only by the size and experience of the modeler team or by budget and time constraints; the intended usage scenario also defines the fundamental contours of a reference model. During practical work with reference models, a range of key issues has emerged for increasing their suitability for daily use. As the result of many projects, the authors summarize these key issues and formulate critical success factors for reference modeling projects.
The aim of this study is to investigate the effects of user experience (UX) on shopping mall customers' intention to use a social robot. To this end, we used a Wizard of Oz approach that enabled data collection in situ. Quantitative data was obtained from a questionnaire completed by shopping mall customers who interacted with a social robot. The data was used in a regression analysis in which user experience factors served as predictors for robot use in retail. The regression model explains up to 23.2% of the variance in customers' intention to use a social robot. In addition, we collected qualitative data on human-robot interactions and used it to complement the interpretation of the statistical results. Our findings suggest that only hedonic qualities significantly contribute to the prediction of customers' intention, that shopping mall customers are reluctant to grant pragmatic qualities to social robots, and that UX evaluation in HRI requires additional predictors.
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and the LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and allows integrating LBM simulations with PyTorch's deep learning and automatic differentiation facilities. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer, and then transferred to a different flow: decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
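For background, the stream-and-collide pattern the abstract refers to can be shown in a toy example. The following 1D diffusion sketch is our own illustration and does not use Lettuce or its API:

```python
# Toy D1Q3 lattice Boltzmann solver for 1D diffusion, illustrating the
# stream-and-collide pattern: relax populations toward local equilibrium
# (BGK collision), then shift each population along its lattice velocity.

def lbm_diffusion(rho0, steps, tau=1.0):
    """Evolve a 1D density field on a periodic grid; returns the final density."""
    w = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]   # lattice weights
    c = [0, 1, -1]                          # lattice velocities
    n = len(rho0)
    # populations f[i][x], initialized at equilibrium
    f = [[wi * r for r in rho0] for wi in w]
    for _ in range(steps):
        rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]  # density moment
        # BGK collision: relax toward the local (diffusive) equilibrium
        for i in range(3):
            for x in range(n):
                feq = w[i] * rho[x]
                f[i][x] += (feq - f[i][x]) / tau
        # streaming: each population moves one cell along its velocity
        f = [[f[i][(x - c[i]) % n] for x in range(n)] for i in range(3)]
    return [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
```

Because both steps act locally and uniformly on arrays, the same loop maps naturally onto tensor operations, which is what makes the LBM a good fit for frameworks like PyTorch.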
Level-Synchronous Parallel Breadth-First Search Algorithms For Multicore and Multiprocessor Systems
(2014)
Breadth-First Search (BFS) is a graph traversal technique used as a building block in many applications, e.g., to systematically explore a search space. For modern multicore processors and as application graphs get larger, well-performing parallel algorithms are desirable. In this paper, we systematically evaluate an important class of parallel BFS algorithms and discuss program optimization techniques for their implementation. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. Our results show that for small core counts, many of these algorithms behave rather similarly. For large core counts and large graphs, however, there are considerable differences in performance and scalability, influenced by several factors. This paper gives advice on which algorithm should be used under which circumstances.
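The level-synchronous structure these algorithms share can be sketched sequentially; the comments mark the points that a parallel implementation would distribute across cores and synchronize:

```python
# Sequential sketch of a level-synchronous BFS: the graph is processed one
# frontier (level) at a time, which is the structure the parallel variants
# distribute across threads.

def bfs_levels(adj, source):
    """Return {vertex: level} for all vertices reachable from source."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:            # parallel versions split this loop over cores
            for v in adj[u]:
                if v not in level:    # parallel versions claim v atomically here
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier      # barrier: the level-synchronization point
    return level
```

The per-level barrier is what the name refers to: no vertex of level d+1 is expanded before every vertex of level d has been processed.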
LiDAR-based Indoor Localization with Optimal Particle Filters using Surface Normal Constraints
(2023)
Dear Readers
(2022)
Dear Readers!
(2023)
IT performance measurement is often associated by chief executive officers with IT cost cutting, although it actually protects business processes from rising IT costs; indiscriminate IT cost cutting only endangers a company's efficiency. This view stigmatizes those who perform IT performance measurement in companies as bean-counters. The present paper describes an integrated reference model for IT performance measurement based on a life-cycle model and a performance-oriented framework. The model was created from a practical point of view; it is lean compared with other known concepts and is well suited to small and medium-sized enterprises (SMEs).
Today, more than 70 million tons of lignin are produced by the pulp and paper industry every year. However, the utilization of lignin as a source for chemical synthesis is still limited due to the complex and heterogeneous lignin structure. The purpose of this study was the selective photodegradation of industrially available kraft lignin in order to obtain appropriate fragments and building-block chemicals for further utilization, e.g. polymerization. Thus, kraft lignin obtained from softwood black liquor by acidification was dissolved in sodium hydroxide and irradiated at a wavelength of 254 nm with and without the presence of titanium dioxide in various concentrations. Analyses of the irradiated products via SEC showed decreasing molar masses and decreasing polydispersity indices over time. At the end of the irradiation period, the lignin was depolymerised into fragments as small as the lignin monomers. TOC analyses showed minimal mineralisation due to the depolymerisation process.
Renewable resources are gaining increasing interest as a source for environmentally benign biomaterials, such as drug encapsulation/release compounds, and scaffolds for tissue engineering in regenerative medicine. Being the second largest naturally abundant polymer, the interest in lignin valorization for biomedical utilization is rapidly growing. Depending on its resource and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release as well as lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Lignin is an aromatic biopolymer found in the cell walls of plants. It is mainly built from three so-called monolignols (p-hydroxyphenyl (H), guaiacyl (G), and syringyl (S) units), which can be linked by various types of bonds, and it contains a large number of functional groups. Of particular interest for lignin utilization are the many phenolic hydroxyl groups, which can serve as a starting point for the synthesis of new products and are also responsible for its antioxidant properties. Since structure and properties depend on many factors, such as the biomass and the pulping process, a detailed characterization of lignins is necessary to elucidate structure-property relationships and thus come a step closer to possible material use. This work investigates the influence of the biomass, including the particle size used, and of the organosolv pulping process on the monomer composition, molecular weight, and antioxidant activity of the isolated lignins.
The raw materials for lignin production are the three perennial, lignocellulose-rich low-input crops Miscanthus x giganteus, Silphium perfoliatum, and Paulownia tomentosa, which are currently used mainly for energy generation. Within the framework of the European Union's bioeconomy strategy, however, future biorefineries are to focus on the holistic use of biomass, including material use. In addition to these three plants, organosolv lignins are also isolated from wheat straw and beech wood, biomasses already well described in the literature, and two softwood kraft lignins are used for comparison. The results show that the type of biomass mainly influences the monomer composition: grasses consist of all three monolignols, hardwoods predominantly of S and G units, while softwoods are built only from G units. The wood lignins also have higher molecular weights and better antioxidant properties than the grass and herb lignins. Finer grinding of the biomass can influence the monomer composition: using smaller particle sizes leads to lignins with a higher content of H units, for both Miscanthus and Paulownia. Furthermore, for Paulownia, the yield can be increased and an increase in molecular weight observed when the smallest sieve fraction is used for the organosolv pulping. Autohydrolysis and the organosolv pulping process itself have a greater influence than the degree of grinding. While the monomer composition barely changes, since the biomass is the same, the bond types between the monolignols do: with higher process severity (time, temperature, ethanol concentration), ether bonds are cleaved, which increases the share of phenolic hydroxyl groups and thus the antioxidant activity.
In addition to this depolymerization, recondensation reactions are also partially observed.
The results obtained contribute to understanding the relationship between lignin source and isolation and the resulting lignin structure and antioxidant activity, and thus provide a basis for the transition from energetic use to sustainable material use of this renewable biopolymer. In particular, structure and antioxidant activity can be influenced in a targeted way through the choice of pulping parameters, which should be a focus of future studies.
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time, as well as the plant component (leaf versus stem), the correlations between structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented of lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with regard to their monolignol ratios and important linkages. The results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. The data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which also depends strongly on the crop portion: both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum is M. sinensis Sin2 with over 30%), whereas in the leaf-derived lignins the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
Antioxidant activity is an essential aspect of oxygen-sensitive merchandise and goods, such as food and corresponding packaging, cosmetics, and biomedicine. Technical lignin has not yet been applied as a natural antioxidant, mainly due to its complex heterogeneous structure and polydispersity. This report presents antioxidant capacity studies conducted using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. The influence of purification on lignin structure and activity was investigated. The purification procedure showed that two-fold selective extraction is the most efficient (confirmed by ultraviolet-visible (UV/Vis), Fourier transform infrared (FTIR), heteronuclear single quantum coherence (HSQC) and 31P nuclear magnetic resonance spectroscopy, size exclusion chromatography, and X-ray diffraction), resulting in fractions of very narrow polydispersity (3.2–1.6) showing up to four distinct absorption bands in UV/Vis spectroscopy. According to differential scanning calorimetry measurements, the glass transition temperature increased from 123 to 185 °C for the purest fraction. Antioxidant capacity is discussed with regard to the biomass source, pulping process, and degree of purification. Lignins obtained from industrial black liquor are compared with beech wood samples: the antioxidant activities (DPPH inhibition) of the kraft lignin fractions were 62–68%, whereas beech and spruce/pine-mixed lignin showed values of 42% and 64%, respectively. The total phenol content (TPC) of the isolated kraft lignin fractions varied between 26 and 35%, whereas that of beech and spruce/pine lignin was 33% and 34%, respectively. Storage decreased the TPC values but increased the DPPH inhibition.
Antioxidant activity is an essential feature required for oxygen-sensitive merchandise and goods, such as food and corresponding packaging as well as materials used in cosmetics and biomedicine. For example, vanillin, one of the most prominent antioxidants, is fabricated from lignin, the second most abundant natural polymer in the world. Antioxidant potential is primarily related to the termination of oxidation propagation reactions through hydrogen transfer. The application of technical lignin as a natural antioxidant has not yet been implemented in the industrial sector, mainly due to the complex heterogeneous structure and polydispersity of lignin. Thus, current research focuses on various isolation and purification strategies to improve the compatibility of lignin material with substrates and to enhance its stabilizing effect.
The Participation Act, introduced in the Netherlands in 2015, puts into practice the idea that every individual has to make a contribution in a participatory society. The Act includes aspects of income support, compulsory activities in return for benefits, and labour market reintegration. Drawing on 45 interviews, we provide insights into interactions between the individual financial and social situation, an individual’s position in society, and reintegration activities. The narratives show the fundamental need for individual freedom and societal meaning, recognition, and appreciation, as well as the complex circumstances in which social assistance recipients make decisions. Conflicts between those needs and the Act lead to the question of how personal and societal objectives can be reconciled.
Linear Optimization
(2024)
The recent explosion of available audio-visual media is a new challenge for information retrieval research. Automatic speech recognition (ASR) systems translate spoken content into the text domain. There is a need for searching and indexing this data, which lacks logical structure. One possible way to structure it at a high level of abstraction is by finding topic boundaries. Two unsupervised topic segmentation methods were evaluated with real-world data in the course of this work. The first, TSF, models topic shifts as fluctuations in the similarity function over the transcript. The second, LCSeg, treats topic changes as the places where lexical chains overlap least. Only LCSeg performed close to results reported on a similar real-world corpus; other reported results could not be outperformed. Topic analysis based on repeated word usage renders topic changes more ambiguous than expected. This issue has more impact on segmentation quality than the state-of-the-art ASR word error rate. It can be concluded that it is advisable to develop topic segmentation algorithms with real-world data to avoid potential biases toward artificial data. Unlike the evaluated approaches based on word-usage analysis, methods operating on local contexts can be expected to perform better by emulating semantic dependencies.
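The similarity-fluctuation idea behind a method like TSF can be sketched as follows; this is our own illustrative scoring function, not the evaluated system. Each gap between sentence windows is scored by lexical cosine similarity, and low-similarity valleys suggest topic boundaries:

```python
# Illustrative sketch: score each gap between sentence windows by the cosine
# similarity of their word-count vectors. Valleys in the score sequence are
# candidate topic boundaries. (Window size and tokenization are simplistic.)

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two word-count dictionaries."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def gap_scores(sentences, window=2):
    """Similarity between the windows before and after each sentence gap."""
    scores = []
    for i in range(1, len(sentences)):
        left = Counter(w for s in sentences[max(0, i - window):i] for w in s.split())
        right = Counter(w for s in sentences[i:i + window] for w in s.split())
        scores.append(cosine(left, right))
    return scores
```

A segmenter built on this would place boundaries at sufficiently deep local minima of the score sequence; the thesis's observation is that repeated-word statistics alone often make those minima ambiguous.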
The internet, social networks, games, smartphones, DVDs, and digital radio and television work only because mathematically sound methods are available for their development and application. This book conveys insights into the fundamental concepts and methods of linear algebra on which these methods are based. Using error-tolerant coding as an introductory example, it shows how these concepts and methods are used in practice, and the example of quantum algorithms, which may play a role in the future, makes clear that linear algebra provides time-invariant concepts, methods, and procedures with which IT technologies can be designed, implemented, applied, and further developed. Thanks to its didactic elements, such as stated learning objectives, summaries, margin notes, and a large number of exercises with model solutions, the book is suitable not only as accompanying reading for corresponding computer science and mathematics courses but also, in particular, for self-study.
The Living Lab concept is an innovation and research methodology recognized in academia. In a business context, however, especially in small and medium-sized enterprises (SMEs), it has so far hardly been used. To explore its use in the commercial smart home context, the research project SmartLive is currently building a Living Lab in which companies, researchers, and about 30 participating households investigate the everyday use of commercial as well as experimentally developed solutions and jointly develop new interaction concepts. In addition, interviews were conducted with the participating companies about their development processes, their attitudes toward usability and user experience (UUX), and the potential and possibilities of a Living Lab for SMEs. The aim of the interviews is to identify, on this basis, UUX services that can be offered around a commercially operated Living Lab. The competence network was first highlighted as an important asset of a Living Lab, since it fosters project-based cooperation. The need for flexible, modular services also became clear, with whose help innovative concepts can be tested both at short notice and sustainably, marketing strategies can be developed, and prototype developments can be evaluated with regard to UUX and technical quality.
Information and communication technology (ICT) in the smart home and smart living domains is shaped by the growing interconnection of the domestic application field with the digitization of the power grid, alternative means of generating and storing energy, and new mobility concepts, and it has become an indispensable part of both private and business activity.
Many controllers are seeing rising IT costs, caused in part by license fees. Cost savings are often not realized due to inefficient or absent license management; in addition, numerous companies increasingly face the problem of unbudgeted post-licensing payments as software vendors step up their license-audit activities. This article outlines the problems of IT license management and fundamental approaches to solving them. A reference model demonstrates how cost reductions and process improvements can be achieved in practice.
Login Data Set for Risk-Based Authentication
Synthesized login feature data of >33M login attempts and >3.3M users on a large-scale online service in Norway. Original data collected between February 2020 and February 2021.
This data set aims to foster research and development of <a href="https://riskbasedauthentication.org">Risk-Based Authentication (RBA)</a> systems. The data was synthesized from the real-world login behavior of more than 3.3M users at a large-scale single sign-on (SSO) online service in Norway.
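RBA systems of this kind are commonly described as comparing the features of a login attempt (country, browser, and so on) against the user's login history. As a rough, hypothetical illustration only, and not the scoring used to build this data set, such feature-based risk scoring might be sketched like this:

```python
from collections import Counter

def risk_score(history, attempt):
    """Naive feature-based risk score: the rarer the attempt's feature
    values are in the user's login history, the higher the score.
    Illustrative sketch; feature names and smoothing are assumptions."""
    score = 1.0
    for feature, value in attempt.items():
        counts = Counter(login.get(feature) for login in history)
        total = len(history)
        # Laplace-smoothed probability of having seen this value before
        p_seen = (counts[value] + 1) / (total + len(counts) + 1)
        score *= 1.0 / p_seen
    return score

history = [{"country": "NO", "browser": "Firefox"}] * 9 + \
          [{"country": "NO", "browser": "Chrome"}]
familiar = risk_score(history, {"country": "NO", "browser": "Firefox"})
unusual = risk_score(history, {"country": "US", "browser": "Opera"})
```

A login from an unseen country and browser yields a higher score than a familiar one, which is the behavioral signal such a data set makes available for research.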
Logistikmarkt Russland
(2011)
Russia is one of the major emerging economic regions, and one that poses particular logistical challenges. This study analyzes and describes the Russian logistics market, which is highly attractive to European, and especially German, logistics service providers both because of its size and proximity to the EU and because of its enormous growth potential. The aim of this treatise is to identify relevant trends and developments, both to help logistics service providers assess the strategic market potential of Russia and to examine the economic opportunities and risks in this market.
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during, and after 21 days of 6° HDBR in 10 participants. The methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
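The reported recovery (a time constant of 7.28 days, with return to pre-bed-rest levels within 14 days) is the kind of exponential relaxation that can be recovered from measurements by a log-linear least-squares fit. The sketch below runs on synthetic data that merely assumes the reported time constant; it is illustrative only and uses none of the study's actual measurements:

```python
import math
import random

random.seed(0)
true_tau = 7.28  # days, as reported in the abstract

# Synthetic normalized excess of visual reliance, decaying toward baseline,
# with small multiplicative measurement noise.
t_days = list(range(0, 15))
y = [math.exp(-t / true_tau) * (1 + random.gauss(0, 0.02)) for t in t_days]

# Log-linear least-squares fit: ln(y) = -t / tau, so tau = -1 / slope.
n = len(t_days)
sx = sum(t_days)
sy = sum(math.log(v) for v in y)
sxx = sum(t * t for t in t_days)
sxy = sum(t * math.log(v) for t, v in zip(t_days, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
tau_fit = -1.0 / slope
```

With 15 daily samples and 2% noise, the fitted time constant lands close to the assumed 7.28 days, showing why a two-week post-bed-rest observation window suffices to characterize such a recovery.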
Long-term variability of solar irradiance and its implications for photovoltaic power in West Africa
(2020)
This paper addresses long-term changes in solar irradiance for West Africa (3° N to 20° N and 20° W to 16° E) and its implications for photovoltaic power systems. Here we use satellite irradiance (Surface Solar Radiation Data Set-Heliosat, Edition 2.1, SARAH-2.1) to derive photovoltaic yields. Based on 35 years of data (1983–2017) the temporal and regional variability as well as long-term trends of global and direct horizontal irradiance are analyzed. Furthermore, at four locations a detailed time series analysis is undertaken. The dry and the wet season are considered separately.
Long-term variability of solar irradiance and its implications for photovoltaic power in West Africa
(2020)
West Africa is one of the least developed regions in the world in terms of energy availability and energy security. Located close to the equator, West Africa receives high amounts of global horizontal irradiance (GHI). Solar power, and especially photovoltaic (PV) systems, therefore seems a promising way to provide electricity with low environmental impact. To plan and dimension a PV power system, climatological data for GHI and its variability need to be taken into account. However, ground-based measurements of irradiance are not available continuously and cover only a few discrete locations.
Preleukemic clones carrying BCR-ABLp190 oncogenic lesions are found in neonatal cord blood, where the majority of preleukemic carriers do not convert into precursor B-cell acute lymphoblastic leukemia (pB-ALL). However, the critical question of how these preleukemic cells transform into pB-ALL remains undefined. Here we model a BCR-ABLp190 preleukemic state and show that limiting BCR-ABLp190 expression to hematopoietic stem/progenitor cells (HS/PC) in mice (Sca1-BCR-ABLp190) causes pB-ALL at low penetrance, which resembles the human disease. pB-ALL blast cells were BCR-ABL-negative and transcriptionally similar to pro-B/pre-B cells, suggesting disease onset upon reduced Pax5 functionality. Consistent with this, double Sca1-BCR-ABLp190+Pax5+/- mice developed pB-ALL with shorter latencies, 90% incidence, and accumulation of genomic alterations in the remaining wild-type Pax5 allele. Mechanistically, the Pax5-deficient leukemic pro-B cells exhibited a metabolic switch towards increased glucose utilization and energy metabolism. Transcriptome analysis revealed that metabolic genes (IDH1, G6PC3, GAPDH, PGK1, MYC, ENO1, ACO1) were upregulated in Pax5-deficient leukemic cells, and a similar metabolic signature could be observed in human leukemia. Our studies unveil the first in vivo evidence that the combination of Sca1-BCR-ABLp190 and the metabolic reprogramming imposed by reduced Pax5 expression is sufficient for pB-ALL development. These findings might help to prevent the conversion of BCR-ABLp190 preleukemic cells into pB-ALL.
Low Cost Displays
(2010)
Cost-efficient energy monitoring in existing large buildings calls for autonomous indoor sensors with low power consumption, high performance in multipath fading channels, and economical implementation. Good performance in multipath fading channels can be achieved with noncoherent chaotic modulation schemes such as chaos on-off keying (COOK) or differential chaos shift keying (DCSK). While COOK stands out in terms of power consumption, DCSK excels in noisy and multipath fading channels. This paper evaluates a combination of both schemes for autonomous indoor sensors. The simulation results show 50% less power consumption than DCSK and more than 3 dB SNR gain in Rayleigh fading channels at a BER of 10⁻³ compared to COOK, making the scheme a promising candidate for low-power transmission in autonomous wireless indoor sensors. We further present an enhanced version of this scheme showing a further 1 dB SNR improvement while consuming 25% less power than DCSK.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
Statins are a group of hypolipidemic drugs that act by competitive inhibition of the HMGR enzyme. They are generally considered effective and safe but are claimed to have side effects on skeletal muscles. A molecular side effect of statins is the blocking of terpene biosynthesis and hence of dolichol, which is involved in N-glycosylation and O-mannosylation of proteins. Defects in O-mannosylation lead to α-dystroglycan (α-DG) hypoglycosylation and a series of hereditary dystroglycanopathies. The current project aims to gain insight into the molecular pathomechanisms induced by statins in mammalian muscle cells and to unravel a potential link between these effects and statin-induced decreases in α-DG O-mannosylation. The study was based on mass spectrometric proteomics supported by western blot analysis to reveal the effects of rosuvastatin on cellular pathways under high (micromolar) or low (nanomolar) conditions. Differential proteomics revealed stronger statin effects on muscle cell function at micromolar than at nanomolar concentrations, the latter being the level reached in a patient's plasma. We demonstrated distinct and partially overlapping patterns of fold-changed proteins under high and low statin conditions. Gene ontology term enrichment (GOTE) analyses of fold-changed proteins revealed that cellular pathways related to muscle function and development are affected even under the low statin conditions typically reached in a patient's plasma during prophylactic medication.
Low-frequency vibrational excitations in zeolite ZSM-5 and its partially crystalline derivatives
(2004)
Lignocellulose feedstock (LCF) provides a sustainable source of components for producing bioenergy, biofuels, and novel biomaterials. Besides hardwood and softwood, so-called low-input plants such as Miscanthus are interesting crops to investigate as potential feedstock for the second-generation biorefinery. The status quo regarding the availability and composition of different plants, including grasses and fast-growing trees (i.e., Miscanthus, Paulownia), is reviewed here. The second focus of this review is the potential of multivariate data processing for biomass analysis and quality control. Experimental data obtained by spectroscopic methods such as nuclear magnetic resonance (NMR) and Fourier-transform infrared spectroscopy (FTIR) can be processed with computational techniques to characterize the 3D structure and energetic properties of the feedstock building blocks, including complex linkages. Here, we provide a brief summary of recently reported experimental data for the structural analysis of LCF biomasses and give our perspectives on the role of chemometrics in understanding and elucidating LCF composition and lignin 3D structure.
Luxusgut Wohnen
(2019)
Löten oder schreiben?
(2005)
We present a new interface for interactively comparing more than two alternative documents in the context of a generative design system that uses generative data-flow networks defined via directed acyclic graphs. To better show differences between such networks, we emphasize added, deleted, and (un)changed nodes and edges. We highlight differences in the output as well as in parameters, and we enable post-hoc merging of a parameter's state across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability and keep visual clutter low. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that the similarities outweigh the differences, which motivates the use of subtractive encoding. We ran a user study to evaluate the two main proposed difference-visualization encodings and found them equally effective.
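Classifying nodes and edges of two such data-flow graphs as added, deleted, or unchanged reduces to set operations over their node and edge sets. A minimal sketch of that comparison, with illustrative names not taken from the system described above:

```python
def diff_graphs(nodes_a, edges_a, nodes_b, edges_b):
    """Classify nodes and edges of two directed graphs (given as sets of
    node names and (source, target) edge tuples) as added, deleted, or
    unchanged when going from version A to version B."""
    return {
        "nodes": {
            "added":     nodes_b - nodes_a,
            "deleted":   nodes_a - nodes_b,
            "unchanged": nodes_a & nodes_b,
        },
        "edges": {
            "added":     edges_b - edges_a,
            "deleted":   edges_a - edges_b,
            "unchanged": edges_a & edges_b,
        },
    }

# Two hypothetical alternatives of a small data-flow network.
a_nodes = {"src", "transform", "out"}
a_edges = {("src", "transform"), ("transform", "out")}
b_nodes = {"src", "filter", "out"}
b_edges = {("src", "filter"), ("filter", "out")}
d = diff_graphs(a_nodes, a_edges, b_nodes, b_edges)
```

Here the "transform" node is classified as deleted and "filter" as added, while "src" and "out" are unchanged; a difference visualization can then color or fade each class accordingly.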
Macht in Unternehmen
(2012)
Hardly any concept is as discredited as that of power. It is associated with abuse, arbitrariness, and irrationality; the mere striving for it is considered suspect. In the process, the regulating and shaping function of power for and within organizations has been lost from view, as has the question of the origins and regulatory mechanisms of power. The book examines this topic from the perspectives of scholars from various disciplines and of "practitioners of power" from business and politics.
Machtperspektiven
(2012)
The digitization of financial activities in consumers' lives is increasing, and the digitalization of invoicing processes is expected to play a significant role, although this area is not well understood regarding the private sector. Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW) research have a long history of analyzing the socio-material and temporal aspects of work practices that are relevant for the domestic domain. The socio-material structuring of invoicing work and the working styles of consumers must be considered when designing effective consumer support systems. In this ethnomethodologically-informed, design-oriented interview study, we followed 17 consumers in their daily practices of dealing with invoices to make the invisible administrative work involved in this process visible. We identified and described the meaningful artifacts that were used in a spatial-temporal process within various storage locations such as input, reminding, intermediate (for postponing cases) buffers, and archive systems. Furthermore, we identified three different working styles that consumers exhibited: direct completion, at the next opportunity, and postpone as far as possible. This study contributes to our understanding of household economics and domestic workplace studies in the tradition of CSCW and has implications for the design of electronic invoicing systems.
Management der Rehabilitation: Case Management im Handlungsfeld Rehabilitation. Band 1 - Grundlagen
(2017)
With the paradigm shift in the understanding of rehabilitation, away from a purely deficit-oriented medical perspective toward self-determined participation in society, a shift finally completed in Germany with the entry into force of SGB IX in 2001, the ratification of the UN Convention on the Rights of Persons with Disabilities in 2009, the further development of disability equality law, and the passage of the Federal Participation Act (Bundesteilhabegesetz) in 2016, the demands on the structures and processes of the institutions involved in organizing, delivering, and financing rehabilitation have changed. Rehabilitation is thus evolving from a downstream (partial) service into one of the key strategies of health care and social security.
The management of occupational safety and health includes the prevention of excessive mental strain and mental illness. The legal basis for the prevention activities of the statutory accident insurance is presented. The article then explains in which fields of action and with which instruments the statutory accident insurance supports companies in preventing mental illness. As a theoretical framework for prevention measures, the three-level model of mental strain (employee, company, society) and the three-level intervention model for mental illness are outlined. Ideally, workplace prevention is based on concrete in-house rules. This is described in more detail using the examples of handling e-mail and rules on availability outside working hours.
Management von Unternehmen
(2021)
A functioning partnership between clients and consultants depends crucially on both parties responsibly fulfilling the roles they assume in the consultant-client relationship: management in its role as decision-maker acting in the interest of the company and its stakeholders, and consultants in their role as creators of meaning and problem solvers. Trust and ethical conduct decisively shape the success of the consultant-client relationship. Where these preconditions are absent, governance measures in the sense of corporate governance become particularly relevant from the perspective of consultants and their clients.
Many management concepts that today count among the "classics" of corporate management were originally developed by management consultants to offer innovative solutions to operational and, above all, strategic management problems. Not infrequently, their ideas shaped the business paradigm of an entire era. The Boston Consulting Group's portfolio concept, for example, revolutionized the thinking and actions of many executives in the early 1970s; today it is part of the basic repertoire of every strategic planner or controller.
“Building Bridges Across Continents” (BBAC) is an intercultural and student-centered project that seeks to promote international communication and helps students develop competencies in entrepreneurship, international trade and global cultural awareness. The project, which is in its fourth phase of implementation, connects students from the United States, Germany, Ghana and Kenya with the help of Information Communication Technologies (ICT) in order to work on a common research assignment for a period of ten calendar weeks. The main ICTs used in the project are Skype, Facebook, wiki, email and WhatsApp. This paper describes and analyzes the background, structure, and results of the project.
Culture is at the core of any social, economic, and business interactions and relationships. The way people perceive the culture of others influences their decision to collaborate socially, politically, and economically with them. It is therefore imperative that students appreciate the dynamics of cross-cultural interactions and collaborations, since this exposes them to a wider view of the world. In doing so, it is important that they (the students) be allowed to explore as much as possible, with little interference from their teachers. Through the project, students went through a real-life experience of self-directed enquiry and, in the process, were taught to solve problems encountered during the learning process. The focus of the intercultural communication project was to understand how people from different cultures speak, interact, and perceive others' cultures. It was found that students innovate when allowed to explore a phenomenon on their own. Furthermore, face-to-face meetings between people in the different countries can be arranged using these Web 2.0 tools. Based on the experience from the project, it was observed that the success of a collaborative international project depends on an understanding of the cross-cultural dynamics of the partners. For such collaborations, it is imperative to establish personal relationships, to be flexible and adaptable to situations and change, and to be swift in resolving potential conflict situations.
Mergers and acquisitions take place all over the world and in many industries, typically motivated by corporate politics. While IT management is often not involved in the decision-making, it has to solve a wide range of problems in the post-merger phase. Indeed, merging two or more companies implies not only merging their core businesses, but also creating a new and efficiently integrated IT organisation from the individual ones, since persistence of the current IT organisations usually does not make sense. In addition, corporate management frequently imposes constraints, e.g., cost reductions, on the IT infrastructure. The principal critical success factor when merging IT organisations is the uninterrupted operation of the IT business, because a service gap is acceptable neither for in-house functional departments nor for external customers. Therefore, the IT rebuilding phase has to focus on IT services that facilitate the processes of functional departments, support processes, and processes of customers and suppliers, so that any transformation work is transparent to internal and external customers. In this article we describe a real-world but anonymized case study. Our goals are to highlight the points important for merging IT organisations and to help decision-makers, particularly in the areas of IT organisation and IT personnel. We focus on the organisational and non-technical issues that arise, from a management perspective, i.e., the CIO's view, and provide checklists intended to help IT managers address the most pressing issues. To assist CIOs in surviving the post-merger phase, we provide checklists for merging IT organisations, for merging IT human resources, and for IT budgets and reporting, and we assess activities in a merger scenario. IT hardware, software, and IT infrastructure, as well as running IT projects, are not considered in this paper.
Managing the Work-Nonwork Interface: Personal Social Media Use as a Tool to Craft Boundaries?
(2021)
Process-induced changes in the thermo-mechanical viscoelastic properties and the corresponding morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blown-film blends modified with four multifunctional chain-extending cross-linkers (CECL) were investigated. Introducing CECL significantly modified the properties of the reference PBAT/PLA blend. Thermal analysis showed that the chemical reactions were incomplete after compounding and that film blowing extended them. SEM investigations of the fracture surfaces of blown extrusion films reveal the significant effect of CECL on the morphology formed during processing. The anisotropic morphology introduced during film blowing proved to affect the degradation processes as well. Furthermore, the reactions of CECL with PBAT/PLA induced by processing depend on the deformation directions. The blow-up ratio was varied to investigate further process-induced changes, which proved to act in synergy with the mechanical and morphological features. In blown-film extrusion, elongational behavior is a very important characteristic. Its evaluation is often problematic, but with the SER Universal Testing Platform it was possible to determine changes in the duration of the time intervals corresponding to the rupture of elongated samples.
Augmented reality (AR) has many fields of application today. By overlaying virtual information onto the real environment, the technology is particularly well suited to supporting users in technical maintenance or repair tasks. For the virtual data to be correctly registered with the real world, the position and orientation of the camera must be determined by a tracking method. In this thesis, a markerless, model-based tracking system was implemented for this purpose. During an initialization phase, the camera pose is determined using calibrated reference images, so-called keyframes. In a subsequent tracking phase, the target object is continuously tracked. The system was evaluated on the 1:1 training model of the Biolab biological research laboratory, provided by the European Space Agency (ESA).