H-BRS Bibliography
Conference Objects (1119 results)
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
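The motion representation used above can be illustrated with a minimal one-dimensional dynamic movement primitive. This is a sketch, not the authors' implementation; the gains, basis-function placement and rollout horizon are illustrative assumptions:

```python
import math

def dmp_rollout(y0, g, weights, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate a 1-D discrete dynamic movement primitive from y0 toward goal g.

    The forcing term is a weighted sum of Gaussian basis functions over the
    phase variable x, scaled by (g - y0) so the learned shape generalizes to
    new start and goal positions (e.g., new hand-over positions).
    """
    n = len(weights)
    centers = [math.exp(-alpha_x * i / max(n - 1, 1)) for i in range(n)]
    widths = [n ** 1.5 / c for c in centers]
    y, dy, x, t = y0, 0.0, 1.0, 0.0
    while t < 1.5 * tau:
        psi = [math.exp(-h * (x - c) ** 2) for h, c in zip(widths, centers)]
        s = sum(psi)
        f = x * (g - y0) * sum(w * p for w, p in zip(weights, psi)) / (s + 1e-10)
        ddy = (alpha * (beta * (g - y) - dy) + f) / tau ** 2  # spring-damper + forcing
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt  # canonical system decays the phase
        t += dt
    return y

# With zero weights the DMP reduces to a critically damped spring toward g.
final = dmp_rollout(y0=0.0, g=0.35, weights=[0.0] * 10)
```

With nonzero weights (learned from a demonstration) the same rollout reproduces the demonstrated shape while still converging to whatever goal the policy specifies.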
Entrepreneurship education serves as a conduit for new venture creation, as it provides the knowledge and skills needed to increase the self-efficacy of individuals to start and run new businesses and to grow existing ones. This study therefore sought to assess the relationship between the approaches to the teaching of entrepreneurship and entrepreneurial intention in a cohort of 292 respondents consisting of students who have studied entrepreneurship at three selected universities. A structured questionnaire was used to obtain data randomly from students. The canonical correlation results indicate that education for and through entrepreneurship is the best approach to promoting entrepreneurial intensity among university students, if the aim of teaching entrepreneurship is to promote start-up activities. The findings provide valuable insights for institutions of higher learning and policy makers in Ghana with respect to the appropriate methodologies to be adopted in the teaching of entrepreneurship in our universities.
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including features such as reliable and precise navigation, flexible manipulation and robust object recognition.
Despite perfect functioning of its internal components, a robot can be unsuccessful in performing its tasks because of unforeseen situations. These situations occur when the behavior of the objects in the robot’s environment deviates from its expected values. For robots, such deviations are exhibited in the form of unknown external faults which prohibit them from performing their tasks successfully. In this work, we propose to use naive physics knowledge to reason about such faults in the robotics domain. We propose an approach that uses naive physics concepts to find information about the situations which result in a detected unknown fault. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties for reasoning about the detected fault. By interpreting the reasoning results, the robot finds information about the situations which can cause the fault. We apply the proposed approach to scenarios in which a robot performs manipulation tasks of picking and placing objects. Results of this application show that naive physics holds great promise for reasoning about unknown external faults in robotics.
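The qualitative reasoning idea can be sketched as a small rule base over symbolic object properties. The property names and rules below are hypothetical stand-ins for the logical formalization described in the paper:

```python
def diagnose_fault(obj):
    """Return candidate explanations for an unknown external fault by applying
    qualitative physical rules to symbolic object properties.

    The properties and rules are illustrative, not the paper's formalization.
    """
    causes = []
    if obj.get("weight") == "heavy" and obj.get("grip_force") == "low":
        causes.append("object slipped: weight exceeds grasp force")
    if obj.get("surface") == "smooth" and obj.get("gripper") == "rigid":
        causes.append("insufficient friction between gripper and object surface")
    if obj.get("support") == "unstable":
        causes.append("object toppled: placement surface not level")
    return causes or ["no qualitative explanation found"]

report = diagnose_fault({"weight": "heavy", "grip_force": "low",
                         "surface": "smooth", "gripper": "rigid"})
print(report)
```

A failed pick of a heavy, smooth object would thus yield two candidate situations for the robot to investigate further.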
Mergers and acquisitions take place all over the world and in many industries, typically motivated by corporate politics. While IT management is often not involved in the decision-making, it has to solve a wide range of problems in the post-merger phase. Indeed, merging two or more companies implies not only merging their core businesses, but also creating a new and efficiently integrated IT organisation from the individual ones, since persistence of the current IT organisations usually does not make sense. In addition, corporate management frequently imposes constraints, e.g., cost reductions, on the IT infrastructure. The principal critical success factor when merging IT organisations is the uninterrupted operation of the IT business, because a service gap is acceptable neither for in-house functional departments nor for external customers. Therefore, the IT rebuilding phase has to focus on IT services that facilitate the processes of functional departments, support processes, and processes of customers and suppliers, so that any transformation work is transparent to internal and external customers. In this article we describe a real-world but anonymised case study. Our goals are to highlight the points important for merging IT organisations, and to help decision-makers, particularly in the areas of IT organisation and IT personnel. We focus on the arising organisational and non-technical issues from a management perspective, i.e., the CIO's view, and provide checklists intended to help IT managers address the most pressing issues. To assist CIOs in surviving the post-merger phase, we provide checklists for merging IT organisations, for merging IT human resources, and for IT budgets and reporting, and we assess activities in a merger scenario. IT hardware, software and IT infrastructure as well as running IT projects are not considered in this paper.
Are small and medium-sized enterprises (SMEs) already prepared for the digital transformation?
(2018)
A study conducted by the authors found clear indications that many small and medium-sized enterprises (SMEs) do not yet have sufficient maturity for the digital transformation. To address this problem, it is proposed to develop an agile IT management concept in order to steer the IT function dynamically and without the formal ballast of classical IT management.
While the corporate working world is shifting ever further towards agility, IT controlling remains stuck in old, classical structures. This paper examines whether and to what extent agile approaches can be applied in IT controlling. This contribution is a modified version of the article "Agiles IT-Controlling" published in the journal "HMD Praxis der Wirtschaftsinformatik" (https://link.springer.com/article/10.1365/s40702-022-00837-0).
IT performance measurement is often associated by chief executive officers with IT cost cutting, although it actually protects business processes from rising IT costs; indiscriminate IT cost cutting only endangers the company’s efficiency. This perception stigmatises those who perform IT performance measurement in companies as bean-counters. The present paper describes an integrated reference model for IT performance measurement based on a life cycle model and a performance-oriented framework. The model was created from a practical point of view, is lean compared with other known concepts, and is well suited to small and medium enterprises (SMEs).
A plethora of architectural patterns and elements for developing service-oriented applications can be gathered from the state of the art. Most of these approaches are only applicable to single-tenant applications. However, less methodical support is provided for scenarios in which multiple tenants with varying requirements access the same application stack concurrently. In order to fill this gap, both novel and existing architectural patterns, architectural elements and fundamental design decisions must be considered and integrated into a framework that leverages the development of multi-tenant applications. This paper addresses this demand and presents the SOAdapt framework. It promotes the development of adaptable multi-tenant applications based on a service-oriented architecture that is capable of incorporating specific requirements of new tenants in a flexible manner.
The digital transformation is massively changing international cooperation between universities. Beyond the possibilities of virtual mobility, new topic areas are emerging that change, complement, or newly enable international learning and teaching experiences with digital support. In the area of internationalisation funding (DAAD, Erasmus+, BMBF, among others), projects and funding formats have emerged that combine digitalisation and internationalisation and address these new topics, e.g., didactic formats, administrative processes (including in the context of the German Online Access Act (OZG) and the GDPR), virtual and hybrid mobility, international project and team formats, and finally also content that combines international, intercultural and interdisciplinary competencies with digital competencies. The proposed workshop aims to bring together relevant projects and to structure the topics, in order to provide an overview of the developments and thus contribute to defining the topic area of "Digitalisation & Internationalisation".
The need for innovation around the control functions of inverters is great. PV inverters were initially expected to be passive followers of the grid and to disconnect as soon as abnormal conditions occurred. Since future power systems will be dominated by generation and storage resources interfaced through inverters, these converters must move from following to forming and sustaining the grid. As “digital natives”, PV inverters can also play an important role in the digitalisation of distribution networks. In this short review we identified a large potential to make the PV inverter the smart local hub in a distributed energy system. At the micro level, costs and coordination can be improved with bidirectional inverters between the AC grid and PV production, stationary storage, car chargers and DC loads. At the macro level, the distributed nature of PV generation means that the same devices will support both the local distribution network and the global stability of the grid. Much success has been obtained in the former; the latter remains a challenge, in particular in terms of scaling. Yet there is some urgency in researching and demonstrating such solutions. And while digitalisation offers promise in all control aspects, it also raises significant cybersecurity concerns.
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of “where is AI” from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users’ awareness, and without a mutual understanding of AI-enabled systems between designers and users, informal theories of users about how a system works (“folk theories”) become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of users’ current perception of AI is still missing. In this study, we introduce the term “Perceived AI” (PAI) as “AI defined from the perspective of its users”. We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users’ folk theories.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users, only this time they have to prove their very humanity and there is no court or jury to judge, but an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We challenge this thesis by offering a juxtaposition with the science fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention, “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to inquire about why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users’ needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogs that account for VA’s unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
In the fermentation process, sugars are transformed into lactic acid. pH meters have traditionally been used for fermentation process monitoring based on acidity. More recently, near-infrared (NIR) spectroscopy has proven to provide an accurate and non-invasive method to detect when the transformation of sugars into lactic acid is finished. This research proposes the use of simplified NIR spectroscopy with multispectral optical sensors as a simpler and less expensive means of detecting the end of the fermentation process. The NIR spectra of milk and yogurt are compared to find and extract features that can be used to design a simple sensor to monitor the yogurt fermentation process. Multispectral images in four selected wavebands within the NIR spectrum are captured and show different spectral remission characteristics for milk, yogurt and water, which support the selection of these wavebands for milk and yogurt classification.
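A sensor of the kind proposed could classify samples from a handful of band remission values. The decision thresholds below are illustrative assumptions for the sake of the sketch, not values measured in the study:

```python
def classify_sample(remission):
    """Classify a sample as milk, yogurt, or water from mean remission values
    in four NIR wavebands (normalized to 0..1).

    Thresholds are hypothetical: water is assumed to absorb strongly across
    the bands, and fermented samples to show stronger band-to-band contrast.
    """
    mean_r = sum(remission) / len(remission)
    spread = max(remission) - min(remission)
    if mean_r < 0.15:
        return "water"
    if spread > 0.25:
        return "yogurt"
    return "milk"

print(classify_sample([0.62, 0.58, 0.60, 0.61]))  # prints "milk"
```

A real sensor would learn such thresholds from the captured multispectral images rather than hard-code them.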
Introduction: After cellulose, lignin is the most abundant biopolymer on earth, accounting for 18–35% by weight of lignocellulosic biomass. Today, it is a by-product of the paper and pulping industry. Although lignin is available in huge amounts, mainly in the form of so-called black liquor produced via Kraft pulping, processes for the valorization of lignin are still limited [1]. Due to its hyperbranched, polyphenol-like structure, lignin has gained increasing interest as a biobased building block for polymer synthesis [2]. The present work focuses on the extraction and purification of lignin from industrial black liquor and the synthesis of lignin-based polyurethanes.
In today’s business, culture plays a vital role in, and to a high degree influences, the attitude, perception and decision-making process of an individual. Culture is an unavoidable set of rules and norms that defines people’s daily life in a particular environment or society. There are plenty of examples of business failures, stagnation, or failed joint ventures on account of management's inability to recognize cross-cultural challenges and tackle them appropriately.
LiDAR-based Indoor Localization with Optimal Particle Filters using Surface Normal Constraints
(2023)
The transport of carbon dioxide through pipelines is one of the important components of Carbon dioxide Capture and Storage (CCS) systems that are currently being developed. If high flow rates are desired, transport in the liquid or supercritical phase is preferred. For technical reasons, the transport must remain in that phase, without transitioning to the gaseous state. In this paper, a numerical simulation of the stationary process of carbon dioxide transport with impurities and phase transitions is considered. We use the Homogeneous Equilibrium Model (HEM) and the GERG-2008 thermodynamic equation of state to describe the transport parameters. The algorithms used make it possible to solve scenarios of carbon dioxide transport in the liquid or supercritical phase, with detection of approach to the phase transition region. Convergence of the solution algorithms is analyzed in connection with fast and abrupt changes of the equation of state and the enthalpy function in the region of phase transitions.
Nowadays, we input text not only on stationary devices, but also on handheld devices while walking, driving, or commuting. Text entry on the move, which we term nomadic text entry, is generally slower. This is partially due to the need for users to move their visual focus from the device to their surroundings for navigational purposes and back. To investigate if better feedback about users' surroundings on the device can improve performance, we present a number of new and existing feedback systems: textual, visual, textual & visual, and textual & visual via translucent keyboard. Experimental comparisons between the conventional technique and these techniques established that increased ambient awareness for mobile users enhances nomadic text entry performance. Results showed that the textual and the textual & visual via translucent keyboard conditions increased text entry speed by 14% and 11%, respectively, and reduced the error rate by 13% compared to the regular technique. The two methods also significantly reduced the number of collisions with obstacles.
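Text entry speed and error rate of the kind reported above are conventionally computed as follows; this is a generic sketch of the standard metrics, not the study's analysis code:

```python
def entry_metrics(transcribed, target, seconds):
    """Compute words-per-minute and character error rate for a text entry trial.

    WPM uses the standard convention of 5 characters per word; the error rate
    here is the Levenshtein distance normalized by the target length.
    """
    wpm = (len(transcribed) / 5) / (seconds / 60)
    # Levenshtein distance via dynamic programming over two rows
    prev = list(range(len(target) + 1))
    for i, c1 in enumerate(transcribed, 1):
        cur = [i]
        for j, c2 in enumerate(target, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (c1 != c2)))  # substitution
        prev = cur
    return wpm, prev[-1] / len(target)

wpm, err = entry_metrics("the quick brown fox", "the quick brown fox", 10.0)
```

A 19-character phrase transcribed perfectly in 10 seconds thus scores 22.8 WPM with an error rate of 0.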
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose, we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
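The parameter savings behind such modules can be seen by comparing a standard convolution with a depthwise-separable one, the building block commonly used for this kind of reduction (the paper's exact modules may differ):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution layer (biases omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise-separable variant: one k x k filter per input channel,
    followed by a 1 x 1 pointwise convolution that mixes the channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)            # 73728 parameters
sep = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
print(f"reduction: {std / sep:.1f}x")
```

For a 3x3 layer with 64 input and 128 output channels, the separable form needs roughly an eighth of the parameters, which is what makes real-time inference on constrained platforms feasible.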
The goal of this work is to develop an integration framework for a robotic software system which enables robotic learning by experimentation within a distributed and heterogeneous setting. To meet this challenge, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Additionally, the successful integration of the XPERSim simulator into the system has enabled simultaneous quasi-realtime observation of the simulation by numerous, distributed users.
Adapting plans to changes in the environment by finding alternatives and taking advantage of opportunities is a common human behavior. The need for such behavior is often rooted in the uncertainty produced by our incomplete knowledge of the environment. While several existing planning approaches deal with such issues, artificial agents still lack the robustness that humans display in accomplishing their tasks. In this work, we address this brittleness by combining Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. The approach allows a domestic service robot to find ways to get a job done by making substitutions. We show how knowledge is modeled, how the reasoning process is used to create a constrained planning problem, and how the system handles cases where plan generation fails due to missing/unavailable objects. The results of the evaluation for two tasks in a domestic service domain show the viability of the approach in finding and making the appropriate goal transformations.
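The substitution step can be approximated with a simple attribute-overlap similarity; the attribute sets below are hypothetical, and the Jaccard measure stands in for the Description Logic reasoning described in the paper:

```python
def best_substitute(missing, candidates, attributes):
    """Pick the available object most conceptually similar to a missing one,
    using Jaccard similarity over shared attributes (a simplified stand-in
    for similarity derived from a Description Logic ontology)."""
    target = attributes[missing]
    def sim(obj):
        other = attributes[obj]
        return len(target & other) / len(target | other)
    return max(candidates, key=sim)

attributes = {  # illustrative attribute sets, not from the paper
    "mug":   {"container", "graspable", "holds-liquid", "ceramic"},
    "glass": {"container", "graspable", "holds-liquid", "fragile"},
    "plate": {"graspable", "ceramic", "flat"},
}
print(best_substitute("mug", ["glass", "plate"], attributes))  # prints "glass"
```

When a "serve a drink" plan fails because no mug is available, a transformation of the goal to use the most similar container lets plan generation proceed.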
In this paper, we present XPERSim, a 3D simulator built on top of open source components that enables users to quickly and easily construct an accurate and photo-realistic simulation for robots of arbitrary morphology and their environments. While many existing robot simulators provide a good dynamics simulation, they often lack the high quality visualization that is now possible with general-purpose hardware. XPERSim achieves such visualization by using the Object-Oriented Graphics Rendering Engine (Ogre) to render the simulation whose dynamics are calculated using the Open Dynamics Engine (ODE). Through XPERSim’s integration into XPERSIF, a component-based software integration framework used for robotic learning by experimentation, and the use of the scene-oriented nature of the Ogre engine, the simulation is distributed to numerous users, including researchers and robotic components, thus enabling simultaneous, quasi-realtime observation of the multiple-camera simulations.
The non-farm sector is critical for the socio-economic development of Ghana, especially for the rural poor. The literature suggests that people engage in non-farm enterprises as a way out of poverty or as a survival strategy, perhaps as a substitute for the landless. This paper analyses the determinants of individual participation in non-farm enterprises and the intensity of participation, using EGC/ISSER Socio-Economic Panel Survey data collected in 2009. The determinants of participation were estimated using a probit model, and the intensity of participation using a truncated regression model. The results indicate that a majority of women (about 73%) are engaged in non-farm enterprises in rural Ghana. The study found that females tended to participate more in non-farm self-employment and were less likely to participate in non-farm wage employment. The results further showed that individual characteristics such as gender, being head of a household, being the spouse of a household head, having formal education, age, having access to credit, possessing a mobile phone, per capita land holding and ownership of livestock influenced the participation of individuals in self- and wage-employment. Results from the truncated regression model for self-employed enterprises showed that access to mobile phones, owning more livestock and electricity are important in determining the intensity of participation in self-employed enterprises. For wage-employment, being a household head, being the spouse of a household head, having access to a mobile phone and owning more livestock increased the number of days worked in wage employment. Education is relevant for employment in the non-farm sector, especially wage-employment. Government should play a lead role in making formal education accessible to rural people. Deliberate policies should focus on addressing critical factors such as access to credit, mobile phones, electricity and education, which are relevant for increasing participation intensity in rural enterprises.
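The probit participation model mentioned above maximizes a log-likelihood of the following form; the data and coefficients here are toy values, not the survey estimates:

```python
import math

def probit_loglik(beta, X, y):
    """Log-likelihood of a probit participation model: P(y=1|x) = Phi(x'beta).

    Illustrative only; the paper estimates this on household survey data.
    """
    def phi(z):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ll = 0.0
    for xi, yi in zip(X, y):
        p = phi(sum(b * x for b, x in zip(beta, xi)))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        ll += math.log(p) if yi else math.log(1 - p)
    return ll

# Toy data: intercept plus one covariate (e.g., owns a mobile phone)
X = [(1, 0), (1, 1), (1, 1), (1, 0)]
y = [0, 1, 1, 0]
ll = probit_loglik((-1.0, 2.0), X, y)
```

An estimation routine would maximize this function over beta; marginal effects of covariates such as mobile phone ownership follow from the fitted coefficients.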
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of both, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
In this paper, modeling of piston and generic-type gas compressors for a globally convergent algorithm for solving stationary gas transport problems is carried out. A theoretical analysis of the simulation stability, its practical implementation and a verification of convergence on a realistic gas network have been carried out. The relevance of the paper to the topics of the conference is defined by the significance of gas transport networks as an advanced application of simulation and modeling, including the development of novel mathematical and numerical algorithms and methods.
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations, composed of a piecewise linear ‘free’ model for the control logic and a non-linear ‘advanced’ model for calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of a system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a nearly 100% convergence rate of the solver is achieved with this approach.
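The resistivity (fold-free) condition on an element characteristic can be checked numerically. This sketch merely samples the characteristic on its working range to illustrate the criterion; it is not the warping algorithm itself:

```python
def is_resistive(char, q_min, q_max, n=200):
    """Check the resistivity (monotonicity) condition for an element
    characteristic dp = f(q): f must be non-decreasing over the working
    range, otherwise the system mapping can fold and the solver may fail.
    """
    qs = [q_min + (q_max - q_min) * i / n for i in range(n + 1)]
    vals = [char(q) for q in qs]
    return all(b >= a for a, b in zip(vals, vals[1:]))

# The quadratic friction law dp = c * q * |q| is monotone ...
print(is_resistive(lambda q: 0.5 * q * abs(q), -10, 10))   # prints True
# ... while a folded characteristic violates the condition
print(is_resistive(lambda q: q - 0.1 * q ** 3, -10, 10))   # prints False
```

The warping described in the paper transforms the calibrated characteristics so that every element passes a check of this kind by construction.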
In this paper, an analysis of the error ellipsoid in the space of solutions of stationary gas transport problems is carried out. For this purpose, a Principal Component Analysis of the solution set has been performed. Unstable directions are shown to be associated with the marginal fulfillment of the resistivity conditions for the equations of compressors and other control elements in gas networks. In practice, the instabilities occur when multiple compressors or regulators try to control pressures or flows in the same part of the network; such problems can arise, in particular, when the compressors or regulators reach their working limits. Possible ways of resolving the instabilities are considered.
The paper presents the topological reduction method applied to gas transport networks, using contraction of series, parallel and tree-like subgraphs. The contraction operations are implemented for pipe elements described by a quadratic friction law. This allows a significant reduction of the graphs and an acceleration of the solution procedure for stationary network problems. The algorithm has been tested on several realistic network examples. Possible extensions of the method to different friction laws and other elements are discussed.
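For the quadratic friction law, the series and parallel contraction rules take a closed form. The derivation sketched in the comments is standard for this law, though the paper's implementation details may differ:

```python
def contract_series(c1, c2):
    """Equivalent coefficient for two pipes in series under the quadratic
    friction law p1^2 - p2^2 = c * q * |q|: same flow, drops add, so the
    coefficients simply add."""
    return c1 + c2

def contract_parallel(c1, c2):
    """Equivalent coefficient for two pipes in parallel: same drop D, flows
    add, so sqrt(D/c1) + sqrt(D/c2) = sqrt(D/c_eq)."""
    return 1.0 / (c1 ** -0.5 + c2 ** -0.5) ** 2

c_eq = contract_parallel(4.0, 4.0)  # two identical pipes in parallel
```

Repeatedly applying these two rules (plus tree pruning) is what shrinks the graph before the stationary solver runs.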
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
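The grammar-guided decoding idea reduces to masking inadmissible tokens at each step. In the toy sketch below, a positional lookup replaces the LR parser purely for illustration; the scores stand in for decoder logits:

```python
def constrained_decode(scores, valid_next):
    """Greedy decoding where, at each step, tokens not admissible under the
    grammar are masked out before the argmax (a simplified stand-in for the
    LR-parser admissibility check described in the paper)."""
    out = []
    for step_scores in scores:
        allowed = valid_next(out)
        tok = max(allowed, key=lambda t: step_scores.get(t, float("-inf")))
        out.append(tok)
    return out

# Toy grammar: a query must have the shape SELECT <field> FROM <table>
FIELDS, TABLES = {"name", "age"}, {"users"}
def valid_next(prefix):
    return [{"SELECT"}, FIELDS, {"FROM"}, TABLES][len(prefix)]

scores = [{"FROM": 2.0, "SELECT": 1.0},  # model prefers an invalid token here
          {"age": 0.5, "users": 3.0},
          {"FROM": 1.0},
          {"users": 0.1}]
decoded = constrained_decode(scores, valid_next)
```

Even though the raw scores favour ungrammatical tokens at two steps, the mask forces a syntactically valid sequence, which is the effect the parser integration achieves inside a SEQ2SEQ decoder.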
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
Incoming solar radiation is an important driver of our climate and weather. Several studies (see for instance Frank et al. 2018) have revealed discrepancies between ground-based irradiance measurements and the predictions of regional weather models. In the realm of electricity generation, accurate forecasts of solar photovoltaic (PV) energy yield are becoming indispensable for cost-effective grid operation: in Germany there are 1.6 million PV systems installed, with a nominal power of 46 GW (Bundesverband Solarwirtschaft 2019). The proliferation of PV systems provides a unique opportunity to characterise global irradiance with unprecedented spatio-temporal resolution, which in turn will allow for highly resolved PV power forecasts.
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular so that self-sufficiency and battery lifetime are maximised. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models are employed. The latter are however often inaccurate, both due to deficiencies in model physics as well as an insufficient description of irradiance variability.
The rapid increase in solar photovoltaic (PV) installations worldwide has resulted in the electricity grid becoming increasingly dependent on atmospheric conditions, thus requiring more accurate forecasts of incoming solar irradiance. In this context, measured data from PV systems are a valuable source of information about the optical properties of the atmosphere, in particular the cloud optical depth (COD). This work reports first results from an inversion algorithm developed to infer global, direct and diffuse irradiance as well as atmospheric optical properties from PV power measurements, with the goal of assimilating this information into numerical weather prediction (NWP) models.
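As a first intuition for such an inversion (a deliberately simplified linear PV model, not the paper's algorithm; the efficiency, area and performance-ratio values are assumed for illustration):

```python
def infer_irradiance(p_ac, eta=0.17, area=8.0, pr=0.85):
    # Simplified linear PV model: P = PR * eta * A * G, where G is the
    # plane-of-array irradiance in W/m^2, eta the module efficiency,
    # A the module area in m^2 and PR a lumped performance ratio.
    # Inverting for G gives a crude irradiance estimate from AC power.
    return p_ac / (pr * eta * area)
```

A real inversion additionally has to account for temperature, inverter behaviour, module orientation and the split into direct and diffuse components, which is where the atmospheric optical properties enter.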
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
In this contribution a machine vision inspection system is presented which is designed as a length-measuring sensor. It is developed to be applied to a range of heat shrink tubes varying in length, diameter and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it is to be deployed in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed, which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for a contact-free and fully automatic control of 100% of produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of over 99.7%.
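A multi-measurement strategy of this kind can be sketched as fusing the repeated length readings of one tube segment (an illustrative sketch; the median-based outlier rejection and the tolerance value are assumptions, not the paper's method):

```python
import statistics

def fused_length(measurements, tol=0.3):
    # Fuse several length readings (in mm) of the same tube segment,
    # taken while it crosses the camera's field of view: reject gross
    # outliers against the median, then average the remaining readings
    # to push the precision below that of a single measurement.
    med = statistics.median(measurements)
    kept = [m for m in measurements if abs(m - med) <= tol]
    return statistics.fmean(kept)
```

Averaging n independent readings reduces the random error roughly by a factor of 1/√n, which is one way a sub-pixel, 0.1 mm-class precision can be reached from individually noisier frames.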
Kinetic Inductance Detectors with Integrated Antennas for Ground and Space-Based Sub-mm Astronomy
(2009)
Very large arrays of Microwave Kinetic Inductance Detectors (MKIDs) have the potential to revolutionize ground- and space-based astronomy. They can offer in excess of 10,000 pixels with large dynamic range and very high sensitivity, in combination with very efficient frequency-division multiplexing at GHz frequencies. In this paper we present the development of a 400-pixel MKID demonstration array, including optical coupling, sensitivity measurements, beam pattern measurements and readout. The design presented can be scaled to any frequency between 80 GHz and >5 THz because there is no need for superconducting structures that become lossy at frequencies above the gap frequency of the materials used; the latter would limit the frequency coverage to below 1 THz for relatively high-gap materials such as NbTiN. An individual pixel of the array consists of a distributed aluminium CPW MKID with an integrated twin-slot antenna at its end. The antenna is placed in the second focus of an elliptical high-purity Si lens. The lens-antenna coupling design allows room for the MKID resonator outside of the focal point of the lens. The best dark noise equivalent power of these devices is measured to be NEP = 7×10⁻¹⁹ W/√Hz, and the optical coupling efficiency is around 30%, where no anti-reflection coating was used on the Si lens. For the readout we use a commercial arbitrary waveform generator and a 1.5 GHz FFTS. We show that using this concept it is possible to read out in excess of 400 pixels with one board and one pair of coaxial cables.
It is not yet known to what extent foreign or interfering odours can impair the general performance of an explosives detection dog or even prevent the detection of an explosive device. The aim is to investigate how far the explosives detection capability of sniffer dogs can be influenced by the targeted use of interfering substances. Detection capability here refers both to the probability of a correct detection of explosives in the presence of strong foreign odours, and to the likewise expected reduction in deployment time (premature exhaustion).
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
From 2012 onwards, the development of safety-related control functions must follow the standards EN ISO 13849-1 or EN 62061, which specify requirements for both hardware and software. Until a few years ago, the software requirements played hardly any role, since safety functions were preferably implemented in hardware. Today, however, it is very common to implement safety functions with a suitable programmable logic controller (PLC). In addition to quantifying the hardware failure rates of safety functions, the new standards on the safe control of machinery also require a management of safety functions. This includes a management of the software development for safety functions, in order to minimise systematic errors. This software development management is essentially represented by the V-model. For the machine building industry, this management process must not be too elaborate, otherwise it will be difficult to establish in practice. One possible way of working through the V-model is presented; this approach is probably still too elaborate for industry.
Over the past two decades, many low- and middle-income countries worldwide have started to extend the coverage and improve the functioning of public social protection systems. The research program on international policy diffusion provides empirical evidence that, apart from domestic factors, international interdependencies also matter for national policy change in social protection. However, little is known about the governance structures mediating international policy diffusion in social protection.
We present herein a new class of resin formulations for stereolithography, named FlexSL, with a broad bandwidth of tunable mechanical properties. The novel polyether(meth)acrylate-based material class has outstanding material characteristics combined with the advantages of being a biocompatible (meth)acrylate-based processing material. FlexSL shows very promising results in several initial biocompatibility tests. This emphasizes its non-toxic behavior in a biomedical environment, caused mainly by the (meth)acrylate-based core components. A short overview of mechanical and processing properties is given at the end. The novel FlexSL materials presented herein show a significantly lower cytotoxicity than commercially applied acrylic stereolithography resins. Further biocompatibility tests according to ISO 10993 protocols are planned. On the one hand, there are technical applications for this material (e.g. flaps, tubes, hoses, cables, sealing parts, connectors and other technical rubber-like applications); on the other hand, broad fields of potential biomedical applications in which the FlexSL materials can be beneficial are obvious. These could especially be small-series production of medical products with special flexible material requirements. In addition, the use for individual soft hearing aid shells, intra-operative planning services and tools like intra-op cutting templates and sawing guides is very attractive. The possibility to modify the FlexSL resins also for high-resolution applications makes it possible to manufacture very flexible micro-prototypes with outstanding material characteristics and very fine structures, with a minimum feature resolution of 20 µm and a minimum layer thickness of 5 µm. These resin formulations are applicable and adjustable to other stereolithographic equipment available on the market.
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two co-design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalizing features leads to a reflection on the individual taste experience. The self-reflection through our design enables consumers to develop their taste competencies and thus strengthen their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a sensory phenomenon.
Updating a shared data structure in a parallel program is usually done with some sort of high-level synchronization operation to ensure correctness and consistency. However, the underlying synchronization instructions of a processor architecture are costly and rather limited in their scalability on larger multi-core/multi-processor systems. In this paper, we examine work queue operations where such costly atomic update operations are replaced with non-atomic modifiers (simple read+write). In this approach, we trade the exact amount of work with atomic operations against doing more and redundant work, but without atomic operations and without violating the correctness of the algorithm. We show results for the application of this idea to the concrete scenario of parallel Breadth-First Search (BFS) algorithms for undirected graphs on two large NUMA shared memory systems with up to 64 cores.
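The trade-off can be illustrated with a BFS frontier update in which the visited test and the insertion are not one atomic step (a sequential sketch of the idea, not the paper's implementation):

```python
def bfs_with_duplicates(adj, source):
    # Sketch: because "test visited" and "append to next frontier" are
    # not one atomic step, a vertex may enter the next frontier more
    # than once. The result stays correct -- the write is idempotent
    # (same level value) and duplicates only cause redundant work.
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        nxt = []
        for u in set(frontier):              # stale duplicates collapse here
            for v in adj[u]:
                if v not in dist or dist[v] == level + 1:
                    dist[v] = level + 1      # idempotent non-atomic write
                    nxt.append(v)            # v may be appended several times
        frontier = nxt
        level += 1
    return dist
```

The exact-work variant would guard the test-and-append with an atomic operation; here that cost is traded for tolerating (and later filtering) duplicate frontier entries.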
In this paper, a set of micro-benchmarks is proposed to determine basic performance parameters of single-node mainstream hardware architectures for High Performance Computing. Performance parameters of recent processors, including those of accelerators, are determined. The investigated systems are Intel server processor architectures as well as the two accelerator lines Intel Xeon Phi and Nvidia graphic processors. Results show similarities for some parameters between all architectures, but significant differences for others.
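As a flavour of such micro-benchmarks, a STREAM-style copy test estimates sustainable memory bandwidth by timing a large buffer copy (a toy sketch, not one of the paper's benchmarks; buffer size and repetition count are arbitrary):

```python
import time

def copy_bandwidth(nbytes=64 * 1024 * 1024, reps=5):
    # Time a large buffer copy (one read stream + one write stream)
    # and report the best of several repetitions in GB/s, as a crude
    # estimate of achievable memory bandwidth on one node.
    src = bytearray(nbytes)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        _ = bytes(src)                       # copies the whole buffer
        best = min(best, time.perf_counter() - t0)
    return 2 * nbytes / best / 1e9           # read + write traffic
```

Real micro-benchmark suites vary access pattern, stride and thread count to separate cache, memory and interconnect effects; this sketch only shows the basic timing methodology.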
Level-Synchronous Parallel Breadth-First Search Algorithms For Multicore and Multiprocessor Systems
(2014)
Breadth-First Search (BFS) is a graph traversal technique used in many applications as a building block, e.g., to systematically explore a search space. For modern multicore processors and as application graphs get larger, well-performing parallel algorithms are favourable. In this paper, we systematically evaluate an important class of parallel BFS algorithms and discuss programming optimization techniques for their implementation. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. In our results, we show that for small core counts many of these algorithms show rather similar behaviour. But for large core counts and large graphs, there are considerable differences in performance and scalability, influenced by several factors. This paper gives advice on which algorithm should be used under which circumstances.
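A level-synchronous BFS expands the whole current frontier before any vertex of the next level is touched; the end of each level acts as a barrier. A minimal sketch (illustrative only, not one of the evaluated implementations):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_bfs(adj, source, workers=4):
    # Level-synchronous BFS: the current frontier is expanded in
    # parallel chunks, then the results are merged sequentially --
    # that merge is the per-level barrier before the next level starts.
    dist = {source: 0}
    frontier = [source]
    level = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            def expand(chunk):
                out = []
                for u in chunk:
                    for v in adj[u]:
                        if v not in dist:    # read-only check; dups possible
                            out.append(v)
                return out
            chunks = [frontier[i::workers] for i in range(workers)]
            candidates = [v for out in pool.map(expand, chunks) for v in out]
            frontier = []
            for v in candidates:             # sequential merge = barrier
                if v not in dist:
                    dist[v] = level + 1
                    frontier.append(v)
            level += 1
    return dist
```

The variants evaluated in the paper differ mainly in how the frontier is represented and how this merge step is parallelized and synchronized.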
The SpMV operation -- the multiplication of a sparse matrix with a dense vector -- is used as a computational kernel in many simulations in the natural and engineering sciences. This kernel is quite performance-critical, as it is executed many times in a simulation run, e.g., in a linear solver. Such performance-critical kernels may be optimized on several levels, ranging from a rather coarse-grained and comfortable single compiler optimization switch down to exploiting architectural features through special instructions at the assembler level. This paper discusses a selection of such program optimization techniques in this spectrum applied to the SpMV operation. The achievable performance gain as well as the additional programming effort are discussed. It is shown that low-effort optimizations can improve the performance of the SpMV operation compared to a basic implementation. Beyond that, more complex low-level optimizations have a higher impact on performance, although they change the original program and reduce its readability and maintainability significantly.
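The basic implementation that such optimizations start from is the textbook kernel over the CSR (compressed sparse row) format (a minimal reference version, not the paper's optimized code):

```python
def spmv_csr(vals, col_idx, row_ptr, x):
    # y = A @ x for a sparse matrix A in CSR format:
    # vals    -- nonzero values, row by row
    # col_idx -- column index of each nonzero
    # row_ptr -- row i's nonzeros are vals[row_ptr[i]:row_ptr[i+1]]
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += vals[k] * x[col_idx[k]]
        y[i] = s
    return y
```

The low-level optimizations discussed in the paper attack exactly this loop nest, e.g. through vectorization, blocking or prefetching, without changing its mathematical result.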
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled in order to detect faults in the sensors when the modeled dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method on simulated signals with the Pearson correlation, and show that the method elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but it is also prone to detecting false positives. This suggests that the method can be used as a weak detector of faults, but that other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
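The underlying test can be sketched as comparing two autoregressive fits: does the past of signal x improve the prediction of signal y beyond y's own past? (A lag-1, intercept-free sketch on demeaned series; not the paper's implementation.)

```python
import statistics

def granger_gain(x, y):
    # Lag-1 Granger-style sketch: fit y_t ~ y_{t-1} (restricted) and
    # y_t ~ y_{t-1} + x_{t-1} (full) by least squares on demeaned
    # series, and return the ratio of residual sums of squares.
    # A ratio well above 1 suggests x Granger-causes y.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    x = [v - mx for v in x]
    y = [v - my for v in y]
    Y, yp, xp = y[1:], y[:-1], x[:-1]
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    # restricted model: single regressor yp
    b = dot(yp, Y) / dot(yp, yp)
    rss_r = sum((t - b * p) ** 2 for t, p in zip(Y, yp))
    # full model: solve the 2x2 normal equations for (b1, b2)
    a11, a12, a22 = dot(yp, yp), dot(yp, xp), dot(xp, xp)
    r1, r2 = dot(yp, Y), dot(xp, Y)
    det = a11 * a22 - a12 * a12
    b1 = (r1 * a22 - r2 * a12) / det
    b2 = (r2 * a11 - r1 * a12) / det
    rss_f = sum((t - b1 * p - b2 * q) ** 2 for t, p, q in zip(Y, yp, xp))
    return rss_r / rss_f
```

For fault detection, such a gain would be tracked over sliding windows of two sensor streams: a dependency that is normally present and suddenly disappears (or vice versa) flags a possible fault.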