H-BRS Bibliography
Conference Objects, 2021 (87 entries)
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process for generating synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated for the problem of handle grasping, such that we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can contribute towards improving the execution success of a robot.
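The diagnosis-and-correction idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names, the success-probability interface, and the one-parameter-at-a-time search are all assumptions.

```python
def diagnose(params, nominal, success_model, threshold=0.5):
    """Search for action parameters that violate the learned precondition
    model: a parameter is implicated if resetting it to its nominal value
    raises the predicted success probability above the threshold.
    (Illustrative single-parameter search, not the paper's procedure.)"""
    if success_model(params) >= threshold:
        return []  # execution predicted to succeed; nothing to diagnose
    violating = []
    for name in params:
        trial = dict(params)
        trial[name] = nominal[name]
        if success_model(trial) >= threshold:
            violating.append(name)
    return violating

def correct_experience(params, nominal, violating):
    """Generate a synthetic 'corrected' experience by replacing the
    implicated parameters with values satisfying the precondition."""
    return {k: (nominal[k] if k in violating else v) for k, v in params.items()}
```

The corrected parameter sets can then be added to the training data, mirroring the paper's idea of enlarging the set of experiences to learn from.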
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects and in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows using ontological knowledge as prior information that is then refined by the robot’s own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – such that we show that the robot can deduce cases in which an existing policy can generalise to other objects and when additional execution knowledge has to be acquired.
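The ontology-based transfer strategy can be illustrated with a toy class hierarchy. The traversal below is a simplifying sketch (the paper's transfer and suitability criteria are richer); the parent map and model names are invented for illustration.

```python
def transfer_model(obj_class, known_models, parent, contrary_evidence=frozenset()):
    """Walk up the ontology from obj_class and reuse the first execution
    model found for an ancestor class, unless the robot's own experience
    marks that transfer as unsuitable (contrary_evidence holds
    (object class, model class) pairs known not to generalise)."""
    cls = obj_class
    while cls is not None:
        if cls in known_models and (obj_class, cls) not in contrary_evidence:
            return known_models[cls]
        cls = parent.get(cls)
    return None  # no transferable model: additional learning is needed
```

A `None` result corresponds to the case where the robot has to acquire additional execution knowledge instead of reusing a prior model.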
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
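Grammar-guided greedy decoding of the kind described can be sketched by masking the decoder's logits to the tokens the parser accepts next. The `step_logits` and `valid_next` interfaces below are hypothetical stand-ins for the seq2seq decoder and the LR parser, not the paper's actual API.

```python
import numpy as np

def constrained_decode(step_logits, valid_next, bos=0, eos=1, max_len=10):
    """Greedy decoding where, at each step, logits are masked so that only
    tokens the syntactic parser would accept next can be emitted.
    step_logits(prefix) -> logits over the vocabulary (the decoder);
    valid_next(prefix)  -> set of token ids the LR parser accepts next."""
    prefix = [bos]
    while prefix[-1] != eos and len(prefix) < max_len:
        logits = np.asarray(step_logits(prefix), dtype=float)
        mask = np.full_like(logits, -np.inf)
        allowed = list(valid_next(prefix))
        mask[allowed] = logits[allowed]  # everything else stays -inf
        prefix.append(int(np.argmax(mask)))
    return prefix
```

Because only the output distribution is masked, the underlying network architecture is untouched, which matches the paper's point about seamless integration with existing seq2seq frameworks.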
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot’s correct behavior when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim towards evaluating the correctness of robot behaviors in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
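A property-based test of the kind described can be sketched without any robotics stack: encode a property of a "place" action, generate randomised scenarios, and collect counterexamples. The geometry and interfaces below are toy assumptions, not the framework's ontology or the HSR/Gazebo setup.

```python
import random

def on_table(pose, table):
    """Property for the 'place' action: the object ends up within the
    table bounds and resting on the surface (toy geometry in metres)."""
    x, y, z = pose
    (xmin, xmax), (ymin, ymax), height = table
    return xmin <= x <= xmax and ymin <= y <= ymax and abs(z - height) < 0.01

def check_place(simulate_place, trials=200, seed=0):
    """Randomised scenario generation: sample goal positions on the table,
    run the simulated action, and verify the property for every outcome.
    Returns the list of counterexamples (empty means the property held)."""
    rng = random.Random(seed)
    table = ((0.0, 1.0), (0.0, 0.5), 0.75)
    counterexamples = []
    for _ in range(trials):
        goal = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.5))
        pose = simulate_place(goal)
        if not on_table(pose, table):
            counterexamples.append((goal, pose))
    return counterexamples
```

Returning the concrete counterexamples, rather than a single pass/fail verdict, mirrors the paper's point about gaining deeper insight into the type and location of failures.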
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
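The anomaly score described above can be sketched as the mean error between observed and expected motion. Treating the expected motion as the sum of learned nominal flow and modelled ego-motion flow is a simplifying assumption for illustration; flows are HxWx2 arrays of pixel displacements.

```python
import numpy as np

def anomaly_score(observed_flow, predicted_flow, ego_flow):
    """Frame-level anomaly score: mean endpoint error between the observed
    optical flow and the motion the robot expected, i.e. the learned
    nominal scene flow plus the ego-motion flow derived from the camera
    and body kinematics."""
    residual = observed_flow - (predicted_flow + ego_flow)
    endpoint_error = np.linalg.norm(residual, axis=-1)  # per-pixel error
    return float(endpoint_error.mean())
```

Thresholding this score over time would flag anomalies such as a falling book, whose motion is absent from the nominal prediction.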
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. In a literature review, we analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and documentation from six industry practitioners. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on these findings, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.
Augmented/Virtual Reality (AR/VR) is still a fragmented space to design for due to the rapidly evolving hardware, the interdisciplinarity of teams, and a lack of standards and best practices. We interviewed 26 professional AR/VR designers and developers to shed light on their tasks, approaches, tools, and challenges. Based on their work and the artifacts they generated, we found that AR/VR application creators fulfill four roles: concept developers, interaction designers, content authors, and technical developers. One person often incorporates multiple roles and faces a variety of challenges during the design process, from the initial contextual analysis to deployment. From an analysis of their tool sets, methods, and artifacts, we describe critical key challenges. Finally, we discuss the importance of prototyping for communication in AR/VR development teams and highlight design implications for future tools to create a more usable AR/VR tool chain.
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of the perceived risks themselves. Taking the perspective of consumers, we argue that these risks can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
While stationary self-checkout has been widely introduced and is well understood, previous research has barely examined newer generations of smartphone-based Scan&Go. Especially from a design perspective, we know little about the factors contributing to the adoption of Scan&Go solutions and how design enables consumers to take full advantage of this development rather than being burdened with complex and unenjoyable systems. To understand the influencing factors and the design from a consumer perspective, we conducted a mixed-methods study in which we triangulated data from an online survey with 103 participants and a qualitative study with 20 participants. Based on the results, our study presents a refined and nuanced understanding of the technology- and infrastructure-related factors that influence adoption. Moreover, we present several implications for designing and implementing Scan&Go in retail environments.
Atomic oxygen in the mesosphere and lower thermosphere measured by terahertz heterodyne spectroscopy
(2021)
Atomic oxygen is a main component of the mesosphere and lower thermosphere (MLT). The photochemistry and the energy balance of the MLT are governed by atomic oxygen. In addition, it is a tracer for dynamical motions in the MLT. Atomic oxygen is, however, difficult to measure with remote sensing techniques. Concentrations can be inferred indirectly from the oxygen airglow or from observations of OH, which is involved in photochemical processes related to atomic oxygen. Such measurements have been performed with several satellite instruments such as SCIAMACHY, SABER, WINDII and OSIRIS. However, the methods are indirect and rely on photochemical models and assumptions such as quenching rates, radiative lifetimes, and reaction coefficients. The results are not always in agreement, particularly when obtained with different instruments.
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
(2021)
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is only useful if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions – grasping handles and pulling an object on a table – such that the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
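The output-diversity comparison above needs a concrete diversity measure. A simple stand-in, computed identically for artifact sets found in the parametric space and in the VAE latent space, is the mean pairwise distance between artifacts (this particular measure is an illustrative assumption, not necessarily the one used in the study):

```python
import numpy as np

def mean_pairwise_distance(artifacts):
    """Diversity of an artifact set as the mean pairwise Euclidean
    distance between artifacts (rows of a 2D array)."""
    a = np.asarray(artifacts, dtype=float)
    dists = [np.linalg.norm(a[i] - a[j])
             for i in range(len(a)) for j in range(i + 1, len(a))]
    return float(np.mean(dists))
```

Under the paper's conclusion, the set found by searching the parametric encoding would score higher on such a measure than the set found by searching the latent space.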
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While a lot of current research is focusing on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year's ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
Frequently, the main purpose of domestic artifacts equipped with smart sensors is to hide technology, as previous examples of Smart Mirrors show. However, current Smart Homes often fail to provide meaningful IoT applications for all residents' needs. To design beyond efficiency and productivity, we propose to realize the potential of the traditional artifact for calm and engaging experiences. We therefore followed a design case study approach with 22 participants in total. After an initial focus group, we conducted a diary study to examine home routines and developed a conceptual design. The evaluation of our mid-fidelity prototype shows that we need to carefully study the practices of residents in order to leverage the physical material of the artifact so that it fits their routines. Our Smart Mirror, enhanced by digital qualities, supports meaningful activities and makes the bathroom more appealing. On this basis, we discuss domestic technology design beyond automation.
On Thursday, 23 September 2021, the first Verbraucherforum for consumer informatics took place at Hochschule Bonn-Rhein-Sieg. During the one-day online event, more than 30 participants discussed topics and ideas around consumer data protection. Contributions came from computer science, the consumer and social sciences, and a regulatory perspective. The following article presents the background of the event and reports on the contents of the talks as well as starting points for the further establishment of consumer informatics. The event was organised by the Institut für Verbraucherinformatik at H-BRS in cooperation with the Chair of IT Security at the University of Siegen and the Kompetenzzentrum Verbraucherforschung NRW of the Verbraucherzentrale NRW e. V., with funding from the Federal Ministry of Justice and Consumer Protection.
Recent publications propose concepts for systems that integrate the various services and data sources of everyday food practices. However, this research does not go beyond the conceptualization of such systems. There is therefore a deficit in understanding how to combine different services and data sources and which design challenges arise when building Integrated Household Information Systems. In this paper, we probed the design of an Integrated Household Information System with 13 participants. The results point towards more personalization, automation of storage administration, and enabling flexible artifact ecologies. Our paper contributes to understanding the design and usage of Integrated Household Information Systems as a new class of information systems for HCI research.
Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users in making better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with data, for instance allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer ways to reduce such complexity, current designs are not appropriate for niche practices and rely on non-personalized, opaque ethical criteria. To support critical consumption, we conducted a design case study on a personalized food recommender system. We first conducted an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
New communication technologies are changing the way we work and communicate with people around the world. Given this reality, students in Higher Education (HE) worldwide need to develop knowledge in their area of study as well as attitudes and values that will enable them to be responsible and ethical global citizens in the workforce they will soon enter, regardless of their degree. Different institutional and country-specific requirements are important factors when developing an international Virtual Exchange (VE) program. Digital learning environments such as ProGlobe – Promoting the Global Exchange of Ideas on Sustainable Goals, Practices, and Cultural Diversity – offer a platform for collaborating with diverse students around the world to share and reflect on ideas on sustainable practices. Students work together virtually on a joint interdisciplinary project that aims to create knowledge and foster cultural diversity. This project was successfully integrated into each country's course syllabus through a common global theme: sustainability. The focus of this paper is to present multi-disciplinary perspectives on the opportunities and challenges of implementing a VE project in HE. Furthermore, it presents the challenges that country coordinators dealt with when planning and implementing their project. Given the disparity found in each course syllabus, project coordinators handled the project goal, approach, and assessment in ways specific to their course and program. Not only did the students and faculty gain valuable insight into different aspects of collaboration when working in interdisciplinary HE projects, but they also reflected on their own impact on the environment and learned to listen to how people in different countries deal with environmental issues. This approach provided students with meaningful intercultural experiences that helped them link ideas and concepts about a global issue through the lens of their own discipline as well as other disciplines worldwide.
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While desktop systems are feasible for supporting the designer, it is challenging for a non-domain expert to get the right impression of the appearance of such surfaces due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important, as advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts: the complex interaction of light sources and camera position must be steered through software controls, only small parts of the data set can be properly inspected at a time, and real-world lighting is not considered. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects such as real-world reflections and live image-based lighting, and for interacting with the design using virtual light sources through natural interaction techniques, enabling a more accurate inspection even for non-domain experts.
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant for the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants for this process from the experimentally measured polarization curve is presented. The approach combines statistical and principal component analysis with the determination of a trust region for a linearized model. It is shown that this experiment does not allow one to accurately determine the reaction constants, but only some of their linear combinations. The possibilities of extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentration of the main reagents, are discussed.
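The statement that only linear combinations of the reaction constants are determined can be illustrated with a generic identifiability check: an SVD of the sensitivity (Jacobian) matrix of the model outputs with respect to the constants. This is a textbook sketch, not the paper's trust-region procedure.

```python
import numpy as np

def identifiable_combinations(jacobian, rel_tol=1e-6):
    """Principal-component view of practical identifiability. Right-singular
    vectors whose singular values are negligible relative to the largest
    describe parameter combinations the experiment cannot determine; the
    remaining vectors are the determinable linear combinations."""
    _, s, vt = np.linalg.svd(np.asarray(jacobian, dtype=float),
                             full_matrices=False)
    keep = s >= rel_tol * s[0]
    return vt[keep], vt[~keep]  # determined vs undetermined combinations
```

For a polarization-curve measurement in which two constants only ever enter through their sum, the sensitivity matrix is rank-deficient and exactly one combination is determined.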
Digital transformation is massively changing international cooperation among universities. Beyond the possibilities of virtual mobility, new topic areas are emerging that change, complement, or newly enable international learning and teaching experiences with digital support. In the area of internationalisation funding (DAAD, Erasmus+, BMBF, among others), projects and funding formats have emerged that combine digitalisation and internationalisation and address these new topics, e.g. didactic formats, administrative processes (also in the context of the OZG and the GDPR), virtual and hybrid mobility, international project and team formats, and finally also content that combines international, intercultural, and interdisciplinary competencies with digital competencies. The proposed workshop is intended to bring together relevant projects and structure the topics in order to create an overview of current developments and thus contribute to defining the field of "digitalisation and internationalisation".
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations composed of a piecewise linear 'free' model for the control logic and a non-linear 'advanced' model for the calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of the system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a nearly 100% convergence rate of the solver is achieved with this approach.
Konzept zum Umgang mit Prüfungsstress und Lernblockaden bei Studierenden in der Studieneingangsphase
(2021)
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular so that self-sufficiency and battery lifetime are maximised. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models is employed. The latter are, however, often inaccurate, both because of deficiencies in model physics and because of an insufficient description of irradiance variability.
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
The rapid increase in solar photovoltaic (PV) installations worldwide has resulted in the electricity grid becoming increasingly dependent on atmospheric conditions, thus requiring more accurate forecasts of incoming solar irradiance. In this context, measured data from PV systems are a valuable source of information about the optical properties of the atmosphere, in particular the cloud optical depth (COD). This work reports first results from an inversion algorithm developed to infer global, direct and diffuse irradiance as well as atmospheric optical properties from PV power measurements, with the goal of assimilating this information into numerical weather prediction (NWP) models.
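The core inversion step described in these abstracts can be sketched with a deliberately simplified linear PV model. All numbers and the linear relationship are illustrative assumptions; the actual algorithm relies on a full PV system and radiative-transfer model.

```python
def invert_irradiance(p_dc, efficiency=0.18, area_m2=16.0):
    """Toy inversion of global irradiance G [W/m^2] from measured PV
    power [W] under the linear model P = efficiency * area * G."""
    return p_dc / (efficiency * area_m2)

def clear_sky_index(g_inferred, g_clear_sky):
    """Ratio of inferred to modelled clear-sky irradiance; values well
    below 1 indicate cloud attenuation and could be mapped to a cloud
    optical depth via a radiative-transfer lookup."""
    return g_inferred / g_clear_sky
```

In the studies above, the inferred irradiance and optical properties are intended for assimilation into NWP models, closing the loop back to better PV power forecasts.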
West Africa has great potential for the application of solar energy systems, as it combines high levels of solar irradiance with a lack of energy production. Southern West Africa is a region with a very high aerosol load. Urbanization, uncontrolled fires, and traffic, as well as power plants and oil rigs, lead to increasing anthropogenic emissions. The naturally circulating north winds bring mineral dust from the Sahel and Sahara, while monsoons bring sea salt and other oceanic compounds from the south. The EU-funded Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa (DACCIWA) project (2014–2018) delivered the most complete dataset of the atmosphere over the region to date. In our study, we use in-situ measured optical properties of aerosols from the airborne campaign over the Gulf of Guinea and inland, and from ground measurements in coastal cities.
In the atmospheric sciences, the Earth's radiation budget plays an important role in our understanding of the climate system. Mature satellite products deliver decadal climate time series with such high accuracy that, for example, changes related to climate change can be detected. This applies in particular to the solar radiative fluxes at the Earth's surface. When comparing these satellite products with instantaneous ground-based observations of radiation, however, considerable deviations are often found, caused mainly by small-scale variability in the spatial structure of clouds and their radiative effect. It should also be kept in mind that ground-based observations correspond almost to a point measurement, while satellite pixels sample an area on the order of square kilometres.