H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Wirtschaftswissenschaften (1243)
- Fachbereich Informatik (1148)
- Fachbereich Angewandte Naturwissenschaften (766)
- Fachbereich Ingenieurwissenschaften und Kommunikation (636)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (480)
- Präsidium (403)
- Fachbereich Sozialpolitik und Soziale Sicherung (402)
- Institute of Visual Computing (IVC) (313)
- Institut für funktionale Gen-Analytik (IFGA) (241)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (195)
Document Type
- Article (1603)
- Conference Object (1119)
- Part of a Book (690)
- Part of Periodical (410)
- Book (monograph, edited volume) (370)
- Report (145)
- Preprint (88)
- Working Paper (87)
- Contribution to a Periodical (83)
- Doctoral Thesis (70)
Keywords
- Textbook (85)
- Germany (27)
- Sustainability (27)
- Controlling (23)
- Companies (23)
- Digitalisation (17)
- Management (17)
- Business Administration (16)
- Machine Learning (16)
- Corporate Social Responsibility (15)
Over the past three decades, the world had grown accustomed to major economic crises originating in developing countries. This was the case with Mexico (1995), Thailand (1997, the trigger of the spreading Asian crisis) and the deep economic upheavals in Argentina (2001). All the greater was the mental shock wave when the most recent economic crisis – above all the first global one in eighty years – originated in the USA.
Wirtschaft und Entwicklung: Die Bedeutung der Privatwirtschaft in der Entwicklungszusammenarbeit
(2013)
Die sozialen Herausforderungen der Zukunft und die gesellschaftspolitische Rolle von Unternehmen
(2012)
Recent finds in South Africa have once again underlined that the earliest humans evidently originated in Africa. Historically, this continent therefore has a very special significance. However, its history in more recent times, especially from the mid-19th century onwards, was strongly shaped by colonisation by European states. Many deep wounds from that time still affect society as a whole today. At the same time, the continent is currently confronted with a greater number of challenges of a different nature.
On the one hand, Africa is trying to strengthen internal cohesion by means of a number of regional organisations and the African Union as a globally active institution; on the other hand, the continent has been marked by political and military conflicts between neighbouring states over recent decades right up to the present. In addition, individual countries regularly experience internal social upheavals due to violent or manipulated political change.
Yet the continent could well be on a good development path, since it has a large number of important raw materials - also in comparison to other continents. However, the individual African states - and especially their citizens - often do not benefit from this to an adequate extent. The result is a social imbalance in large parts of the continent, which leads to considerable internal tensions. To make matters worse, Africa is the continent most affected by climate change.
A closer look at the sometimes very different economic, political and social situations across the large continent (data collection until the end of June 2023) leads to an overall predominantly critical assessment of Africa's further development, which the final chapter sets out in more detail with regard to the foreseeable consequences for the continent.
Dealing properly with criticism is still a major challenge in many companies. Supervisors often lack any sensitivity to the subject, so most employees shy away from taking part in a critical dialogue within the company. Yet such a dialogue could tap important reserves of creativity in business and society and strengthen their internal stability.
By means of a summarising matrix, the book offers both young and experienced employees as well as supervisors a guide to their own behaviour in the event of conflict. In addition to a historical review, the author draws on the experience of a long professional life in an internationally active institution.
These are very troubled times. Not only do wars and political unrest prevail in different regions of the world, but corruption and fraud have also reached an incredible dimension. Societies seem to have largely lost values in which they formerly believed. This may be the background to why Corporate Social Responsibility (CSR) as a voluntary commitment is currently being discussed so intensively in public. However, one gets the impression that this discussion is often rather superficial. It is therefore time for in-depth research to identify whether there is real substance behind it. Latin America is a big continent with a considerable number of countries going through difficult times with regard to corruption and fraud. Consequently, the author studied the policy of the central employers' association Consejo Empresarial de America Latina (CEAL) with respect to the role of CSR. On the basis of regularly published statements, news and study results, conclusions were drawn as to what extent social and environmental aspects, along the lines of ISO 26000, play a relevant role.
To avoid too narrow a view of the issue, a holistic approach to the general situation of Latin America was chosen, using parameters such as economic growth, population growth, poverty, inequality and global responsibility for the environment. Furthermore, apart from the central organization CEAL, regional and national institutions with a specific mission to spread and implement CSR, as well as two communal projects, were analyzed. The paper concludes that there are some CSR "lighthouses" but that there is an urgent need to spread the idea of CSR more intensively across the continent. Corresponding recommendations on how to increase the relevance of CSR in Latin America are given at the end of the paper.
The application of Raman and infrared (IR) microspectroscopy yields hyperspectral data containing complementary information about the molecular composition of a sample. The classification of hyperspectral data from the individual spectroscopic approaches is already state-of-the-art in several fields of research. However, more complexly structured samples and difficult measuring conditions can negatively affect classification accuracy and make a successful classification of the sample components challenging. This contribution presents a comprehensive comparison in supervised pixel classification of hyperspectral microscopic images, showing that a combined approach of Raman and IR microspectroscopy has a high potential to improve classification rates through a meaningful extension of the feature space. The complementary information in spatially co-registered hyperspectral images of polymer samples can be accessed using different feature extraction methods and, once fused on the feature level, is in general more accurately classifiable in a pattern recognition task than data derived from the individual spectroscopic approaches.
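A minimal sketch of the feature-level fusion described above, assuming spatially co-registered hyperspectral cubes; the array shapes, placeholder data, and the SVM classifier are illustrative assumptions, not details from the publication:

    # Feature-level fusion of co-registered Raman and IR hyperspectral images
    # for supervised pixel classification (all data here are placeholders).
    import numpy as np
    from sklearn.svm import SVC

    H, W = 64, 64                               # spatial size of the co-registered images
    raman = np.random.rand(H, W, 200)           # 200 Raman bands per pixel
    ir = np.random.rand(H, W, 120)              # 120 IR bands per pixel
    labels = np.random.randint(0, 3, (H, W))    # per-pixel class labels (3 components)

    # Fusion on the feature level: concatenate the two spectra of each pixel,
    # extending the feature space from 200 (or 120) to 320 dimensions.
    fused = np.concatenate([raman, ir], axis=-1).reshape(-1, 200 + 120)
    y = labels.reshape(-1)

    clf = SVC(kernel="rbf").fit(fused[:2000], y[:2000])   # train on a pixel subset
    print("training accuracy:", clf.score(fused[:2000], y[:2000]))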
What is behind the hype about the cloud? While Gartner Research already speaks of a downward trend, Prof. Alda and Prof. Bonne of Hochschule Bonn-Rhein-Sieg see many profitable application scenarios in practice. The advantages of a cloud solution carry over particularly well to the finance and accounting domain.
The Anomalous X-ray Pulsar 4U 0142+61 is the only neutron star around which one of the long-sought 'fallback' disks is believed to have been detected, in the mid-IR by Wang et al. [1] using Spitzer. Such a disk originates from material falling back onto the neutron star after the supernova. We search for cold circumstellar material in the 90 GHz continuum using the Plateau de Bure Interferometer. No millimeter flux is detected at the position of 4U 0142+61; the upper flux limit is 150 μJy, corresponding to the 3σ rms noise level. The re-processed Spitzer MIPS 24 μm data presented previously by Wang et al. [2] show some indication of a flux enhancement at the position of the neutron star, albeit below the 3σ statistical significance limit. At far-infrared wavelengths the source flux densities are probably below the Herschel confusion limits.
We report on submillimetre bolometer observations of the isolated neutron star RX J1856.5−3754 using the Large Apex Bolometer Camera array on the Atacama Pathfinder Experiment telescope. No cold dust continuum emission peak was detected at the position of RX J1856.5−3754. The 3σ flux density upper limit of 5 mJy translates into a cold dust mass limit of a few Earth masses. We use the new submillimetre limit, together with a previously obtained H-band limit, to constrain the presence of a gaseous, circumpulsar disc. Adopting a simple irradiated disc model, we obtain an upper limit on the mass accretion rate and a maximum outer disc radius of ∼10¹⁴ cm. By examining the projected proper motion of RX J1856.5−3754, we speculate about a possible encounter of the neutron star with a dense fragment of the CrA molecular cloud a few thousand years ago.
The Poverty Reduction Effect of Social Protection: The Pros and Cons of a Multidisciplinary Approach
(2022)
There is a growing body of knowledge on the complex effects of social protection on poverty in Africa. This article explores the pros and cons of a multidisciplinary approach to studying social protection policies. Our research aimed to study the interaction between cash transfers and social health protection policies in terms of their impact on inclusive growth in Ghana and Kenya. It also explored the policy reform context over time to unravel programme dynamics and outcomes. The analysis combined econometric and qualitative impact assessments with national- and local-level political economic analyses. Dynamic effects and an improved understanding of processes in particular are well captured by this approach, pushing the understanding of implementation challenges over and beyond a 'technological fix', as has been argued before by Niño-Zarazúa et al. (World Dev 40:163–176, 2012). However, multidisciplinary research puts considerable demands on data and data handling. Finally, some poverty reduction effects play out over a longer time, requiring consistent longitudinal data that are still scarce.
This paper analyzes the complex effects and risks of social protection programmes in Ghana and Kenya on poor people's human wellbeing, voice and empowerment, and their interactions with the social protection regulatory framework and policy instruments. For this purpose, it adopts a comprehensive Inclusive Development framework to systematically explore the complex effects of cash transfers and health insurance at the individual, household and community level. The findings highlight the positive provisionary and preventive effects of social protection, but also illustrate that the poorest are still excluded and that promotive effects, in the form of enhanced productivity, manifest themselves mainly for people who are less resource poor. They can build more effectively upon an existing asset base, capabilities, power and social relations to counter the exclusionary mechanisms of the system, address inequity concerns and offset the transaction costs of accessing and benefitting from social protection. The Inclusive Development framework makes it possible to lay these complex effects and interactions bare, and points to areas that require more longitudinal and mixed-methodology research.
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that can produce high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternative regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated by irregular octagons and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of boundary-adhering object instances is warranted. However, current object detection techniques seem to ignore this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering to generate the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space defined by two coordinate axes representing the natural log transformations of the width and height of the ground-truth bounding boxes, as sketched below.
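A minimal sketch of the log-space anchor clustering proposed above; the ground-truth box sizes and the k-means call are placeholders, not the thesis code:

    # Cluster ground-truth box shapes in log(width)/log(height) space so that
    # the clustering metric matches the log-space encoding used by anchor-based
    # detectors such as SqueezeDet and YOLOv2.
    import numpy as np
    from sklearn.cluster import KMeans

    wh = np.random.uniform(8, 256, size=(5000, 2))   # placeholder box widths/heights (px)

    log_wh = np.log(wh)                              # natural-log transform of w and h
    km = KMeans(n_clusters=9, n_init=10).fit(log_wh)

    anchors = np.exp(km.cluster_centers_)            # cluster centers back in pixel units
    print(np.round(anchors, 1))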
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by a substantial ≈ 6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all the layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈ 0.77 mAP. In addition, the forward-pass latencies of both SqueezeDet and SqueezeDetOcta are ≈ 19 ms. Accounting for boundary adhesion during training improved the baseline SqueezeDet network by ≈ 2.62 mAP. A SqueezeDet network paired with logarithmically extracted anchors improved the performance of the baseline SqueezeDet network by ≈ 1.85 mAP.
In summary, this work demonstrates that, given sufficient fine instance-annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations while having the same forward-pass latency as the bounding-box-predicting network. The results justify the merits of logarithmically extracted anchors for boosting the performance of any anchor-based object detection network. They also show that special handling of object instances adhering to the image boundary produces more performant object detectors.
In Artificial Intelligence, numerous learning paradigms have been developed over the past decades. In most cases of embodied and situated agents, the learning goal for the artificial agent is to "map" or classify the environment and the objects therein [1, 2], in order to improve navigation or the execution of some other domain-specific task. Dynamic environments and changing tasks still pose a major challenge for robotic learning in real-world domains. In order to intelligently adapt its task strategies, the agent needs cognitive abilities to understand its environment and the effects of its actions more deeply. To approach this challenge within an open-ended learning loop, the XPERO project (http://www.xpero.org) explores the paradigm of Learning by Experimentation to increase the robot's conceptual world knowledge autonomously. In this setting, tasks selected by an action-selection mechanism are interrupted by a learning loop whenever the robot identifies learning as necessary for solving a task or for explaining observations. It is important to note that our approach targets unsupervised learning, since no oracle is available to the agent, nor does it have access to a reward function providing direct feedback on the quality of its learned model, as, e.g., in reinforcement learning approaches. In the following sections we present our framework for integrating autonomous robotic experimentation into such a learning loop. In section 1 we explain the different modules for the stimulation and design of experiments and their interaction. In section 2 we describe our implementation of these modules and how we applied them to a real-world scenario to gather target-oriented data for learning conceptual knowledge. There we also indicate how the goal-oriented data generation enables machine learning algorithms to revise the failed prediction model.
Domestic Robotics
(2008)
Domestic Robotics
(2016)
High-dimensional and multi-variate data from dynamical systems such as turbulent flows and wind turbines can be analyzed with deep learning due to its capacity to learn representations in lower-dimensional manifolds. Two challenges of interest arise from data generated from these systems, namely, how to anticipate wind turbine failures and how to better understand air flow through car ventilation systems. There are deep neural network architectures that can project data into a lower-dimensional space with the goal of identifying and understanding patterns that are not distinguishable in the original dimensional space. Learning data representations in lower dimensions via non-linear mappings allows one to perform data compression, data clustering (for anomaly detection), data reconstruction and synthetic data generation.
In this work, we explore the potential of variational autoencoders (VAEs) to learn low-dimensional data representations in order to tackle the problems posed by the two dynamical systems mentioned above. A VAE is a neural network architecture that combines the mechanisms of the standard autoencoder and variational Bayes. The goal is to train a neural network to minimize a loss function defined by a reconstruction term together with a variational term given by a Kullback-Leibler (KL) divergence.
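A minimal sketch of this objective, assuming a diagonal Gaussian posterior and a mean-squared-error reconstruction term (the particular reconstruction loss is an assumption, not stated in the report):

    # VAE loss: reconstruction term plus KL divergence between the approximate
    # posterior q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I).
    import torch
    import torch.nn.functional as F

    def vae_loss(x, x_rec, mu, logvar):
        rec = F.mse_loss(x_rec, x, reduction="sum")                   # reconstruction term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # closed-form KL term
        return rec + kl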
The report discusses the results obtained for the two different data domains: wind turbine time series and turbulence data from computational fluid dynamics (CFD) simulations.
We report on the reconstruction, clustering and unsupervised anomaly detection of wind turbine multi-variate time series data using a variant of the VAE called the Variational Recurrent Autoencoder (VRAE). We trained a VRAE to cluster normal and abnormal wind turbine series (two-class problem) as well as normal and multiple abnormal series (multi-class problem). We found that the model is capable of distinguishing between normal and abnormal cases by reducing the dimensionality of the input data and projecting it to two dimensions using techniques such as Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). A set of anomaly scoring methods is applied on top of these latent vectors to compute unsupervised clusterings, as sketched below. We achieved an accuracy of up to 96% with the KMeans++ algorithm.
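A minimal sketch of that clustering step, using random placeholder latent vectors; note that sklearn's KMeans uses k-means++ initialization by default:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans

    z = np.random.randn(500, 32)                      # placeholder VRAE latent vectors

    z_pca = PCA(n_components=2).fit_transform(z)      # linear 2-D projection
    z_tsne = TSNE(n_components=2).fit_transform(z)    # non-linear 2-D embedding

    labels = KMeans(n_clusters=2).fit_predict(z_pca)  # normal vs. abnormal clusters
    print(labels[:20])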
We also report data reconstruction and generation results for two-dimensional turbulence slices from a CFD simulation of an HVAC air duct. For this, we trained a Convolutional Variational Autoencoder (CVAE). We found that the model is capable of reconstructing laminar flows up to a certain degree of resolution, as well as generating synthetic turbulence data from the learned latent distribution.
Ice accumulation on the blades of wind turbines can cause them to rotate anomalously or not at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection on multi-variate time series. Our approach has two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We evaluated our approach on a custom wind turbine time series dataset; for the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multi-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such a problem. The code to reproduce this work can be found at https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
It is well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, they cannot always generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four different domain adaptation techniques for image classification tasks: DeepCORAL, Deep Domain Confusion, CDAN and CDAN+E. These techniques are unsupervised in that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository for this report can be found at https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
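For illustration, a minimal sketch of the CORAL loss on which DeepCORAL builds (aligning the second-order statistics of source and target feature batches); this is a generic textbook version, not the repository's code:

    import torch

    def coral_loss(fs, ft):
        # fs, ft: (batch, d) feature matrices from the source and target domains.
        d = fs.size(1)

        def cov(x):
            xm = x - x.mean(dim=0, keepdim=True)
            return (xm.t() @ xm) / (x.size(0) - 1)

        # Squared Frobenius norm of the covariance difference, scaled by 1/(4 d^2).
        return ((cov(fs) - cov(ft)) ** 2).sum() / (4 * d * d)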
Self-supervised learning has proved to be a powerful approach to learning image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms that improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create large public labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
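A minimal sketch of one of these pretext tasks (RotNet-style rotation prediction); the input patches are placeholders, and the snippet only generates the self-supervised labels:

    import numpy as np

    def rotation_batch(images):
        # images: (N, H, W) array of square patches, e.g. sonar image crops.
        rotated, labels = [], []
        for img in images:
            k = np.random.randint(4)          # 0..3 quarter turns (0/90/180/270 deg)
            rotated.append(np.rot90(img, k))
            labels.append(k)                  # the rotation index is the pretext label
        return np.stack(rotated), np.array(labels)

    x, y = rotation_batch(np.random.rand(8, 64, 64))
    print(x.shape, y)   # a classifier is then trained to predict y from x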
The sustainable organisation of transport should aim at cost-effective, environmentally friendly and more user-friendly local transport concepts that invite as many citizens as possible to use public transport. Against this background, the aim of this article is to identify instrumental starting points for sustainability controlling in public transport companies. To this end, the following are outlined: the consideration of sustainability in investment decisions, carbon accounting (transparency about CO2 emissions), the integration of the ecological, economic and social dimensions of sustainability in reporting, and the embedding of these instruments in a management system. The sustainability dimensions of ecology, economy and social affairs can be well anchored in the organisation with the help of multidimensional, integrated management systems and systematically embedded in internal structures and processes. Integrated management systems can thus be an important prerequisite for efficient sustainability controlling.
Corporate Design Leitfaden
(2014)
The ProfiBot modular robot kit from the Fraunhofer Institute IAIS is being further developed into a mobile robot platform for teaching at universities, in cooperation with the Hochschule für Technik und Wirtschaft (HTW Saarbrücken) and the company HighTec EDV-Systeme GmbH of Saarbrücken. In this diploma thesis, the given embedded system and a real-time operating system from HighTec EDV-Systeme GmbH are used to implement the control of the motors and the vehicle movements. A user can control the mobile robot platform and query its current state via an application programming interface (API) based on physical quantities. To allow flexible use of the platform, additional electronics adapt the digital and analogue inputs and outputs of the embedded system to robotics applications. In addition to a program start switch and status LEDs, four safety edges for collision detection and analogue sensors with a standard signal can be connected. Finally, the controller structure is calibrated and tested.
Problem Fersenbeinfraktur
(2007)
Doping a nematic liquid crystal with a chiral substance induces a helically structured phase that reflects incident light in a wavelength-selective manner. When the dopant reacts with a gaseous analyte, the pitch of this structure changes and with it the reflected wavelength. If this lies in the range of visible light, a colour change can be observed with the naked eye. It is useful to encapsulate the liquid crystal, e.g. in a polymer, to protect it from mechanical and environmental influences. One encapsulation option is coaxial electrospinning. Its advantages include a large surface area and a very small wall thickness of the protective shell, which allows gases to diffuse through the wall. To test the functionality of such a sensor, a CO2-sensitive liquid crystal was used. It was spun into a shell of polyvinylpyrrolidone (PVP) and its reaction with CO2 was analysed spectroscopically.
Optical gas sensors based on chiral-nematic liquid crystals (N* LCs) forming one-dimensional photonic crystals do not require electrical energy and have considerable potential to supplement established types of sensors. A chiral-nematic phase with tunable selective reflection is induced in a nematic host LC by adding reactive chiral dopants. The selective chemical reaction between dopant and analyte is capable of varying the pitch length (the lattice constant) of the soft, self-assembled, one-dimensional photonic crystal. The progress of the ongoing chemical reaction can be observed even with the naked eye because the color of the samples varies. In this work, we encapsulate the responsive N* LC in microscale polyvinylpyrrolidone (PVP) fibers via coaxial electrospinning. The sensor is thus given a solid form and has improved stability against unavoidable environmental influences. The reaction behavior of encapsulated and non-encapsulated N* LC toward a gaseous analyte is compared systematically. The encapsulation is an important step toward improving the applicability.
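For reference, the optics behind the color change (standard chiral-nematic theory, not spelled out in the abstract): the selective reflection band is centered at λ₀ = n̄·p and has a width Δλ = Δn·p, where p is the helical pitch, n̄ the average refractive index and Δn the birefringence of the LC. A reaction that lengthens or shortens the pitch p therefore shifts the reflected color proportionally, which is the sensing mechanism described above.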
The analysis of Δ9-tetrahydrocannabinol (THC) and its metabolites 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH) from blood serum is a routine task in forensic toxicology laboratories. For the examination of consumption habits, the concentration of the phase I metabolite THC-COOH is used. Recommendations for the interpretation of analysis values in medical-psychological assessments (regranting of driver's licenses, Germany) include threshold values for free, unconjugated THC-COOH. Using a fully automated two-step liquid-liquid extraction, THC, 11-OH-THC and free, unconjugated THC-COOH were extracted from blood serum, silylated with N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA), and analyzed by GC/MS. The automation was carried out by an x-y-z sample robot equipped with modules for shaking, centrifugation and solvent evaporation. The method was based on a previously developed manual sample preparation method. Validation guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh) were fulfilled for both methods, with the focus of this article being the automated one. Limits of detection and quantification were 0.3 and 0.6 μg/L for THC, 0.1 and 0.8 μg/L for 11-OH-THC, and 0.3 and 1.1 μg/L for THC-COOH, when extracting only 0.5 mL of blood serum. Therefore, the limit of quantification of 1 μg/L required for THC in driving-under-the-influence-of-cannabis cases in Germany (and other countries) can be reached and the method can be employed in that context. Real and external control samples were analyzed, and a round robin test was passed successfully. To date, the method is employed in daily routine at the Institute of Legal Medicine in Giessen, Germany. Automation helps to avoid errors during sample preparation and reduces the workload of the laboratory personnel. Due to its flexibility, the analysis system can be employed for other liquid-liquid extractions as well. To the best of our knowledge, this is the first publication on a comprehensively automated classical liquid-liquid extraction workflow in the field of forensic toxicological analysis.
In order to achieve the highest possible performance, the ray traversal and intersection routines at the core of every high-performance ray tracer are usually hand-coded, heavily optimized, and implemented separately for each hardware platform—even though they share most of their algorithmic core. The results are implementations that heavily mix algorithmic aspects with hardware and implementation details, making the code non-portable and difficult to change and maintain.
In this paper, we present a new approach that offers the ability to define in a functional language a set of conceptual, high-level language abstractions that are optimized away by a special compiler in order to maximize performance. Using this abstraction mechanism we separate a generic ray traversal and intersection algorithm from its low-level aspects that are specific to the target hardware. We demonstrate that our code is not only significantly more flexible, simpler to write, and more concise but also that the compiled results perform as well as state-of-the-art implementations on any of the tested CPU and GPU platforms.
Nearest Neighbor Search (NNS) is employed by many computer vision algorithms. Its computational complexity is large and constitutes a challenge for real-time capability. The basic problem lies in rapidly processing a huge amount of data, which is often addressed by means of highly sophisticated search methods and parallelism. We show that NNS-based vision algorithms like the Iterative Closest Point algorithm (ICP) can achieve real-time capability while preserving the compact size and moderate energy consumption needed in robotics and many other domains. The approach exploits the concept of general-purpose computation on graphics processing units (GPGPU) and is compared to parallel processing on the CPU. We apply this approach to the 3D scan registration problem, for which a speed-up factor of 88 compared to a sequential CPU implementation is reported.
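For illustration, a minimal sketch of the nearest-neighbor step at the core of ICP; this brute-force O(N·M) kernel is exactly the kind of data-parallel workload the GPGPU approach accelerates (the point clouds here are random placeholders):

    import numpy as np

    def nearest_neighbors(src, dst):
        # For each source point, the index of the closest destination point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
        return d2.argmin(axis=1)

    src = np.random.rand(1000, 3)   # placeholder 3D scan points
    dst = np.random.rand(1200, 3)
    idx = nearest_neighbors(src, dst)
    print(idx[:10])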
In a research project funded by the German Research Foundation, meteorologists, data publication experts and computer scientists optimised the publication process for meteorological data and developed software that supports metadata review. The project group placed particular emphasis on scientific and technical quality assurance of primary data and metadata. At the end of the process, the software automatically registers a Digital Object Identifier with DataCite. The software has been successfully integrated into the infrastructure of the World Data Center for Climate, but a key aim was to make the results applicable to data publication processes in other sciences as well.