H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (64)
- Fachbereich Ingenieurwissenschaften und Kommunikation (47)
- Fachbereich Sozialpolitik und Soziale Sicherung (41)
- Fachbereich Wirtschaftswissenschaften (40)
- Fachbereich Angewandte Naturwissenschaften (34)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (28)
- Institut für funktionale Gen-Analytik (IFGA) (16)
- Institut für Verbraucherinformatik (IVI) (13)
- Institut für Cyber Security & Privacy (ICSP) (12)
- Institute of Visual Computing (IVC) (9)
Document Type
- Article (68)
- Conference Object (62)
- Part of a Book (29)
- Book (monograph, edited volume) (25)
- Contribution to a Periodical (10)
- Preprint (10)
- Video (8)
- Report (8)
- Research Data (6)
- Doctoral Thesis (5)
- Book review (3)
- Conference Proceedings (1)
- Lecture (1)
- Patent (1)
- Study Thesis (1)
- Working Paper (1)
Year of publication
- 2021 (239)
Has Fulltext
- no (239)
Keywords
- Lehrbuch (7)
- DGQ (6)
- Melcher (6)
- Grundwerkzeug des Qualitätsmanagements (3)
- Machine Learning (3)
- Augmented Reality (2)
- Automatic Differentiation (2)
- Cognitive robot control (2)
- Digitalisierung (2)
- Dimensionality reduction (2)
Jahresbericht 2020
(2021)
It is well established that deep networks are efficient at extracting features from a given (source) labeled dataset. However, they cannot always generalize well to other (target) datasets, which very often have a different underlying distribution. In this report, we evaluate four different domain adaptation techniques for image classification tasks: DeepCORAL, Deep Domain Confusion, CDAN and CDAN+E. These techniques are unsupervised, given that the target dataset does not carry any labels during the training phase. We evaluate model performance on the Office-31 dataset. The GitHub repository accompanying this report can be found at https://github.com/agrija9/Deep-Unsupervised-Domain-Adaptation.
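To make the domain-alignment idea concrete, the following minimal sketch (our own illustration, not code from the linked repository) shows the CORAL loss used by DeepCORAL, which penalizes the distance between the covariance matrices of source and target feature batches:

    import torch

    def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Deep CORAL loss between two feature batches of shape
        # (n_samples, n_features): align their second-order statistics.
        d = source.size(1)

        def cov(x: torch.Tensor) -> torch.Tensor:
            x = x - x.mean(dim=0, keepdim=True)
            return (x.t() @ x) / (x.size(0) - 1)

        # Squared Frobenius distance between covariances, scaled by 4d^2.
        return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)

During training, such a term would be added to the classification loss on the labeled source batch, since the target batch contributes no labels.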
Customer loyalty, as a metric with long-term effect, is a desirable measure of success for many companies. Using structural equation modelling, the relationships and effects of perceived customer centricity, brand trust (cognitive and affective), and price perception on customer loyalty (repurchase intention and willingness to recommend) were examined for physical high-involvement products. (Publisher's description)
Ice accumulation on the blades of wind turbines can cause them to rotate anomalously or not at all, thus affecting the generation of electricity and power output. In this work, we investigate the problem of ice accumulation in wind turbines by framing it as anomaly detection of multivariate time series. Our approach focuses on two main parts: first, learning low-dimensional representations of time series using a Variational Recurrent Autoencoder (VRAE), and second, using unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated). We have evaluated our approach on a custom wind turbine time series dataset. For the two-class problem (one normal versus one abnormal class), we obtained a classification accuracy of up to 96% on test data. For the multiple-class problem (one normal versus multiple abnormal classes), we present a qualitative analysis of the low-dimensional learned latent space, providing insights into the capacity of our approach to tackle such problems. The code to reproduce this work can be found at https://github.com/agrija9/Wind-Turbines-VRAE-Paper.
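The second, clustering stage of such a pipeline can be sketched in a few lines (illustrative only; the file name, the known-normal indices, and the VRAE encoder producing the latent vectors are assumptions, with the actual code in the linked repository):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical input: (n_series, latent_dim) array of VRAE-encoded
    # wind turbine time series.
    latents = np.load("vrae_latents.npy")
    labels = KMeans(n_clusters=2, random_state=0).fit_predict(latents)

    # A few known-normal trials identify which cluster is "normal";
    # series in the other cluster are flagged as ice accumulation.
    known_normal = [0, 1, 2]
    normal_cluster = np.bincount(labels[known_normal]).argmax()
    anomalous = np.where(labels != normal_cluster)[0]
    print(f"{len(anomalous)} series flagged as abnormal")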
Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
(2021)
When faced with an execution failure, an intelligent robot should be able to identify the likely reasons for the failure and adapt its execution policy accordingly. This paper addresses the question of how to utilise knowledge about the execution process, expressed in terms of learned constraints, in order to direct the diagnosis and experience acquisition process. In particular, we present two methods for creating a synergy between failure diagnosis and execution model learning. We first propose a method for diagnosing execution failures of parameterised action execution models, which searches for action parameters that violate a learned precondition model. We then develop a strategy that uses the results of the diagnosis process for generating synthetic data that are more likely to lead to successful execution, thereby increasing the set of available experiences to learn from. The diagnosis and experience correction methods are evaluated for the problem of handle grasping, such that we experimentally demonstrate the effectiveness of the diagnosis algorithm and show that corrected failed experiences can contribute towards improving the execution success of a robot.
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects and in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows using ontological knowledge as prior information that is then refined by the robot’s own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – such that we show that the robot can deduce cases in which an existing policy can generalise to other objects and when additional execution knowledge has to be acquired.
At the latest since the introduction of mandatory receipt issuing in Germany, the digital receipt has been on everyone's lips. Besides reducing environmentally harmful thermal paper, this technology also creates new interfaces between customers and retailers, which can be used for further digitalization and an improved customer experience.
Against this background, this whitepaper examines the perspectives of the various stakeholders, possible architectures, and potential value-added services for improving the customer experience, but also for optimizing retail processes.
Simultaneous determination of selected catechins and pyrogallol in deer intoxications by HPLC-MS/MS
(2021)
Do you work in quality management and have been given the task of systematically investigating a problem and solving it methodically? Do you have too many tasks and do not know how to prioritize them? Are your resources too limited to process all complaints at the same time? Or do you not know how to improve a particular process, within its given boundaries, in a targeted way?
Applications of underwater robots are on the rise; most of them depend on sonar for underwater vision, but the lack of strong perception capabilities limits them in this task. An important issue in sonar perception is matching image patches, which can enable other techniques like localization, change detection, and mapping. There is a rich literature on this problem for color images, but for acoustic images it is lacking, due to the physics that produce these images. In this paper we improve on our previous results for this problem (Valdenegro-Toro et al., 2017): instead of modeling features manually, a convolutional neural network (CNN) learns a similarity function and predicts whether two input sonar images are similar or not. With the objective of further improving sonar image matching, state-of-the-art CNN architectures are evaluated on the Marine Debris dataset, namely DenseNet and VGG, in siamese and two-channel configurations with a contrastive loss. To ensure a fair evaluation of each network, thorough hyper-parameter optimization is executed. We find that the best performing models are the DenseNet two-channel network with 0.955 AUC, VGG-Siamese with contrastive loss at 0.949 AUC, and DenseNet-Siamese with 0.921 AUC. Ensembling the top-performing DenseNet two-channel and DenseNet-Siamese models yields the overall highest prediction accuracy of 0.978 AUC, a large improvement over the 0.91 AUC of the state of the art.
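The contrastive loss mentioned above is the standard formulation of Hadsell et al.; a minimal PyTorch sketch (ours, not the paper's code) for a siamese network producing embeddings of two sonar patches looks as follows:

    import torch
    import torch.nn.functional as F

    def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                         same: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
        # Pull embeddings of matching sonar patches together and push
        # non-matching pairs at least `margin` apart; `same` is 1.0 for
        # matching pairs and 0.0 otherwise.
        dist = F.pairwise_distance(emb_a, emb_b)
        loss_match = same * dist.pow(2)
        loss_nonmatch = (1.0 - same) * F.relu(margin - dist).pow(2)
        return (loss_match + loss_nonmatch).mean()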
The dataset contains the following data from successful and failed executions of the Toyota HSR robot placing a book on a shelf:
- RGB images from the robot's head camera
- Depth images from the robot's head camera
- Rendered images of the robot's 3D model from the point of view of the robot's head camera
- Force-torque readings from a wrist-mounted force-torque sensor
- Joint efforts, velocities and positions
- Extrinsic and intrinsic camera calibration parameters
- Frame-level anomaly annotations
The anomalies that occur during execution include:
- the manipulated book falling down
- books on the shelf being disturbed significantly
- camera occlusions
- the robot being disturbed by an external collision
The dataset is split into a train, validation and test set with the following numbers of trials:
- Train: 48 successful trials
- Validation: 6 successful trials
- Test: 60 anomalous trials and 7 successful trials
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot’s correct behavior when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim towards evaluating the correctness of robot behaviors in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
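In spirit, this resembles property-based testing as known from software engineering. A minimal sketch with the Python hypothesis library (purely illustrative; the tabletop_sim module, its functions, and the table height are hypothetical stand-ins for the simulation interface):

    from hypothesis import given, settings, strategies as st

    # Hypothetical simulation interface: spawn_object places an object on
    # the table in simulation, run_pick_action returns its pose afterwards.
    from tabletop_sim import spawn_object, run_pick_action

    @settings(max_examples=50)
    @given(x=st.floats(min_value=-0.3, max_value=0.3),
           y=st.floats(min_value=-0.2, max_value=0.2))
    def test_object_lifted_after_pick(x: float, y: float) -> None:
        spawn_object(x, y)
        pose = run_pick_action()
        # Property: after a successful pick, the object is above the table.
        assert pose.z > 0.75, f"pick failed for object at ({x:.2f}, {y:.2f})"

The library generates randomised scenarios and shrinks failing ones, mirroring the automatic generation of simulated test scenarios described above.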
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies during runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
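Given per-frame motion-prediction errors and ground-truth annotations, the two reported metrics can be computed as in this sketch (the file names and array layouts are assumptions for illustration):

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    # Hypothetical inputs: predicted and observed optical flow of shape
    # (n_frames, H, W, 2) and binary per-frame anomaly labels.
    pred_flow = np.load("predicted_flow.npy")
    obs_flow = np.load("observed_flow.npy")
    is_anomaly = np.load("frame_labels.npy")

    # Anomaly score: mean end-point error between observed and predicted motion.
    scores = np.linalg.norm(obs_flow - pred_flow, axis=-1).mean(axis=(1, 2))
    print("Area under ROC curve:", roc_auc_score(is_anomaly, scores))
    print("Area under PR curve:", average_precision_score(is_anomaly, scores))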
Short summary
Accompanying dataset for our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021.
Contents
The dataset includes a single zip archive, containing data from the experiment described in the paper (conducted with a Toyota HSR). The zip archive contains three subdirectories:
- handle_grasping_failure_database: A dump of a MongoDB database containing data from the handle grasping experiment, including ground-truth grasping failure annotations
- pre_arm_motion_images: Images collected from the robot's hand camera before moving the robot's hand towards the handle
- pregrasp_images: Images collected from the robot's hand camera just before closing the gripper for grasping
The image names include the time stamp at which the images were taken; this allows matching each image with the execution data in the database.
Database usage
After unzipping the archive, the database can be restored with the command
mongorestore handle_grasping_failure_database
This will create a MongoDB database with the name drawer_handle_grasping_failures.
Code for processing the data and failure analysis can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
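Once restored, the database can also be inspected from Python, for example with pymongo (the collection name in the commented query is hypothetical; the actual names can be listed as shown):

    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    db = client["drawer_handle_grasping_failures"]
    print("Collections:", db.list_collection_names())

    # Example query against an assumed collection of annotated attempts:
    # for doc in db["grasp_attempts"].find({"success": False}):
    #     print(doc["_id"], doc.get("failure_annotation"))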
Contents
There are two zip archives included (grasping.zip and throwing.zip), corresponding to two experiments (grasping objects and throwing them in a drawer), both performed with a Toyota HSR. Each archive contains two directories - learning and generalisation - with object-specific learning and generalisation data. For each object, we provide a dump of a MongoDB database, which contains data sufficient for learning the models used in our experiments.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [data_directory_name]
This will create a MongoDB database with the name of the directory. Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
In tree-based adaptive mesh refinement (AMR), we store refinement trees in the cells of an unstructured coarse mesh. This lets us combine the speed and simpler management of structured refinement trees with the more flexible mesh generation of the unstructured coarse mesh, but it also creates a conflict between performance and geometrical accuracy. If we favor speed, we reduce the number of cells in our coarse mesh and hence the accuracy of our geometrical representation. If we want more accurate results, we generate a finer coarse mesh and lose performance by managing more cells in our unstructured coarse mesh. To mitigate this conflict, we present the prototype of a geometry description, which we implement in an existing library. With this description we build geometry-adapted hexahedral refinement trees, which also support high-order curved boundary cells. We also present examples of how to use this description. Moreover, we test the speedup of this new algorithm compared with coarse meshes of different geometrical error.
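The core idea of geometry-adapted refinement can be illustrated with a small sketch (purely illustrative; the hypothetical distance_to_boundary callback stands in for the geometry description): cells are refined only while they are larger than their distance to the curved boundary, so the tree deepens near the geometry and stays coarse elsewhere.

    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class Cell:
        center: Tuple[float, float, float]  # cell midpoint
        size: float                         # edge length
        children: List["Cell"] = field(default_factory=list)

        def split(self) -> List["Cell"]:
            # Split a hexahedral cell into 8 children (octree refinement).
            cx, cy, cz = self.center
            q = self.size / 4
            self.children = [Cell((cx + dx * q, cy + dy * q, cz + dz * q), self.size / 2)
                             for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
            return self.children

    def refine(cell: Cell, level: int, max_level: int,
               distance_to_boundary: Callable[[Tuple[float, float, float]], float]) -> None:
        # Refine cells whose size exceeds their distance to the curved
        # boundary, so the tree adapts to the geometry, not the coarse mesh.
        if level < max_level and distance_to_boundary(cell.center) < cell.size:
            for child in cell.split():
                refine(child, level + 1, max_level, distance_to_boundary)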
Germany's national policy and research strategy for the bioeconomy envisages a transformation of the economy in which the use of fossil raw materials is increasingly replaced by renewable raw materials, and the use of bio-based plastics is to be promoted in the process. Initial analyses of media coverage of bioplastics in a pilot study showed that the basic idea of biodegradable plastics meets with broad approval in public discourse. Beyond the socio-political level of the discourse, however, a discussion is developing in the media about considerable problems with these materials in waste management. There is now a danger that this attitude, disseminated by the mass media, will rub off on public opinion. A lack of public acceptance could jeopardize the success of innovative bioplastic products.
In this textbook, the authors give an introduction to the methods of traditional project management following the German Association for Project Management (GPM/IPMA) and the Project Management Institute (PMI). The typical characteristics and steps of a project, important roles, and the helpful division into phases are presented, and readers learn to set up a realistic project plan and to steer a project towards its goals. Many practical examples and embedded video sequences illustrate what has been learned. Working templates as well as review questions and model solutions for each chapter facilitate use in teaching and self-study. (Publisher's description)
Off-lattice Boltzmann methods increase the flexibility and applicability of lattice Boltzmann methods by decoupling the discretizations of time, space, and particle velocities. However, the velocity sets that are mostly used in off-lattice Boltzmann simulations were originally tailored to on-lattice Boltzmann methods. In this contribution, we show how the accuracy and efficiency of weakly and fully compressible semi-Lagrangian off-lattice Boltzmann simulations are increased by velocity sets derived from cubature rules, i.e. multivariate quadratures, which have not been produced by the Gauss-product rule. In particular, simulations of 2D shock-vortex interactions indicate that the cubature-derived degree-nine D2Q19 velocity set is capable of replacing the Gauss-product rule-derived D2Q25. Likewise, the degree-five velocity sets D3Q13 and D3Q21, as well as a degree-seven D3V27 velocity set, were successfully tested for 3D Taylor-Green vortex flows, where they challenge and surpass the quality of the customary D3Q27 velocity set. In compressible 3D Taylor-Green vortex flows with Mach numbers Ma = {0.5; 1.0; 1.5; 2.0}, on-lattice simulations with the velocity sets D3Q103 and D3V107 showed only limited stability, while the off-lattice degree-nine D3Q45 velocity set accurately reproduced the kinetic energy reported in the literature.
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VEs and REs). All feedback modalities have been designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen, and that perceptual aspects of virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied; they are used to create and evaluate the interfaces, for which seven experiments have been performed. All five prototypes use a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VEs and REs target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface, for which a psychophysical study highlights the potential of the approach. The back of a smartphone is used for haptic feedback in the third prototype; besides a psychophysical study, it is found that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. The sides of the feet are stimulated in the first prototype and used to convey proximity information from remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a foot platform that provides several feedback modalities shows that self-motion perception can be increased.
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
The Covid-19 pandemic has challenged educators across the world to move their teaching and mentoring from in-person to remote. During nonpandemic semesters at their institutes (e.g. universities), educators can directly provide students the software environment needed to support their learning - either in specialized computer laboratories (e.g. computational chemistry labs) or shared computer spaces. These labs are often supported by staff that maintains the operating systems (OS) and software. But how does one provide a specialized software environment for remote teaching? One solution is to provide students a customized operating system (e.g., Linux) that includes open-source software for supporting your teaching goals. However, such a solution should not require students to install the OS alongside their existing one (i.e. dual/multi-booting) or be used as a complete replacement. Such approaches are risky because of a) the students' possible lack of software expertise, b) the possible disruption of an existing software workflow that is needed in other classes or by other family members, and c) the importance of maintaining a working computer when isolated (e.g. societal restrictions). To illustrate possible solutions, we discuss our approach that used a customized Linux OS and a Docker container in a course that teaches computational chemistry and Python3.
Steuerlehre für Dummies
(2021)
BWL für Dummies
(2021)
This established collection of formulas contains and explains the statistical formulas that are fundamentally necessary in economics and business studies as well as in business practice. Useful aids and comprehensible examples support the understanding of the formulas and their practical application, so that the context of business-statistics formulas is explained clearly and in generally understandable terms. This collection of formulas is an indispensable tool for students of economics and business, but also a useful reference work for decision-makers in business, politics, and teaching. The contents were revised and supplemented for the 4th edition. (Publisher's description)
This textbook contains and explains essential mathematical formulas within an economic context. A broad range of aids and supportive examples will help readers to understand the formulas and their practical applications. This mathematical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice.
Here we provide the electrophysiology data for the manuscript "Two functional epithelial sodium channel isoforms are present in rodents despite pronounced evolutionary pseudogenization and exon fusion", published in Molecular Biology and Evolution (2021): msab271 (doi: 10.1093/molbev/msab271). Data are reported as current values in Excel format, sorted according to their appearance in the figures and supplemented by explanatory text on the procedures and data presentation.
New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
(2021)
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is only useful if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions – grasping handles and pulling an object on a table – such that the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.
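The continuous part of such a model can be illustrated with a small sketch (ours, with made-up data; the actual models are learned from robot experience as described above): a Gaussian process maps action parameters to expected execution success, so that candidate parameterisations can be evaluated before execution.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Made-up training data: action parameters (e.g. grasp pose offsets)
    # and observed execution outcomes (1 = success, 0 = failure).
    X = np.array([[0.00, 0.02], [0.05, -0.01], [-0.03, 0.04], [0.10, 0.00]])
    y = np.array([1.0, 1.0, 0.0, 0.0])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05)).fit(X, y)
    mean, std = gp.predict(np.array([[0.02, 0.01]]), return_std=True)
    print(f"expected success: {mean[0]:.2f} +/- {std[0]:.2f}")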
We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: 1) a predefined parameterized space and 2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation as it produces a higher diversity of solutions.
Sharing economies enabled by technical platforms have been studied regarding their economic, legal, and social effects, as well as with regard to their possible influences on CSCW topics such as work, collaboration, and trust. While a lot of current research focuses on the sharing economy and related communities, there is little work addressing the phenomenon from a socio-technical point of view. Our workshop is meant to address this gap. Building on research themes and discussions from last year's ECSCW, we seek to engage more deeply with topics such as novel socio-technical approaches for enabling sharing communities, issues around digital consumer and worker protection, and emerging challenges and opportunities of existing platforms and approaches.
Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users in making better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with data, for instance allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.
Critical consumerism is complex: ethical values are difficult to negotiate, appropriate products are hard to find, and product information is overwhelming. Although recommender systems offer solutions to reduce such complexity, current designs are not appropriate for niche practices and rely on non-personalized, intransparent ethics. To support critical consumption, we conducted a design case study on a personalized food recommender system. We first conducted an empirical pre-study with 24 consumers to understand value negotiations and current practices, then co-designed the recommender system, and finally evaluated it in a real-world trial with ten consumers. Our findings show how recommender systems can support the negotiation of ethical values within the context of consumption practices, reduce the complexity of finding products and stores, and empower consumers. In addition to providing implications for design to support critical consumption practices, we critically reflect on the scope of such recommender systems and their appropriation.
«Ich notiere, also bin ich»
(2021)
Was tun gegen Infektionsjournalismus? Die mediale Allgegenwart des Virus und die vergessenen Themen
(2021)
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While the use of desktop systems is feasible to support the designer, it is challenging for a non-domain expert to get the right impression of the appearance of surfaces due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important, as advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts: the complex interaction of light sources and the camera position must be controlled using software controls, so only small parts of the data set can be properly inspected at a time, and real-world lighting is not considered. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects such as real-world reflections and live image-based lighting, and for interacting with the design using virtual light sources and natural interaction techniques, enabling a more accurate inspection even for non-domain experts.
Orešković and Porsdam Mann draw a distinction between ‘fast’ and ‘slow’ science. Whereas the latter involves rigorous and laborious adherence to the scientific method, the former represents the reality that much scientific work faces time pressures which at times force shortcuts. The distinction can be seen to operate in contemporary research into the coronavirus pandemic: whereas the development of vaccines and treatments usually requires years of meticulous laboratory work and several more years of clinical testing, the many millions suffering from the disease need a treatment now. However, by taking too many safeguards off the treatment discovery and testing pipelines, or by refusing to act in accordance with scientific advice, governments risk sacrificing the public’s trust not only in the government’s scientific bona fides but in the scientific process itself. This is a heavy price to pay, argue Orešković and Porsdam Mann, and point to evidence indicating that the success of Germany and Japan in combating COVID-19 can be traced to public trust in science and government, as well as scientifically-informed and respectful national leadership.
Landesozialgericht München, Urteil vom 12.05.2021 – L 3 U 373/18 – juris
"An occupational accident [in the home office] does not exist, for lack of accident causality, if an accident occurs in the purely private living area of the insured person whose essential cause is a hazard specific to the insured person's own home."
In contrast to physical, chemical, and biological risks, psychological stress still often leads a shadowy existence in the assessment of possible risk factors for occupational safety and health. However, the evidence of its connections with safety and health is slowly leading to a rethink.
In this paper, the electrochemical alkaline methanol oxidation process, which is relevant for the design of efficient fuel cells, is considered. An algorithm for reconstructing the reaction constants of this process from the experimentally measured polarization curve is presented. The approach combines statistical analysis, principal component analysis, and the determination of a trust region for a linearized model. It is shown that this experiment does not allow the reaction constants themselves to be determined accurately, but only some of their linear combinations. The possibilities of extending the method to additional experiments, including dynamic cyclic voltammetry and variations in the concentration of the main reagents, are discussed.
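The statement that only linear combinations of constants are identifiable can be made concrete with a small sketch (our own illustration, under the assumption that a residual Jacobian of the fitted model is available): the singular value decomposition of the Jacobian separates well-determined parameter combinations from poorly constrained ones.

    import numpy as np

    def identifiable_combinations(jacobian: np.ndarray, tol: float = 1e-3):
        # jacobian[i, j] = d(residual_i) / d(log k_j) at the fitted point.
        U, s, Vt = np.linalg.svd(jacobian, full_matrices=False)
        well_determined = s / s[0] > tol
        # Rows of Vt with large singular values are identifiable linear
        # combinations of the constants; the rest are poorly constrained.
        return Vt[well_determined], Vt[~well_determined]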
Solving transport network problems can be complicated by non-linear effects. In the particular case of gas transport networks, the most complex non-linear elements are compressors and their drives. They are described by a system of equations, composed of a piecewise linear 'free' model for the control logic and a non-linear 'advanced' model for the calibrated characteristics of the compressor. For all element equations, certain stability criteria must be fulfilled, ensuring the absence of folds in the associated system mapping. In this paper, we consider a transformation (warping) of the system from the space of calibration parameters to the space of transport variables that satisfies these criteria. The algorithm drastically improves the stability of the network solver. Numerous tests on realistic networks show that a nearly 100% convergence rate of the solver is achieved with this approach.
Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head-Mounted Displays, and Hash-based Hierarchical Caching and Layered Filtering.
Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process.
When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user's gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights.
For previewing global illumination of a scene interactively while allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
Insofar as sports clubs that are recognized as charitable in principle maintain, in addition to a division with an idealistic and charitable purpose (children's and youth football), a commercial business operation (first football team in the Regionalliga, bistro), the exemption from the levies for burden equalization and burden distribution does not apply to that extent.
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond. It is based on a simple stream-and-collide algorithm on Cartesian grids, which is easily compatible with modern machine learning architectures. While it is becoming increasingly clear that deep learning can provide a decisive stimulus for classical simulation techniques, recent studies have not addressed possible connections between machine learning and LBM. Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim. Lettuce enables GPU accelerated calculations with minimal source code, facilitates rapid prototyping of LBM models, and enables integrating LBM simulations with PyTorch's deep learning and automatic differentiation facility. As a proof of concept for combining machine learning with the LBM, a neural collision model is developed, trained on a doubly periodic shear layer and then transferred to a different flow, a decaying turbulence. We also exemplify the added benefit of PyTorch's automatic differentiation framework in flow control and optimization. To this end, the spectrum of a forced isotropic turbulence is maintained without further constraining the velocity field.
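The natural fit between LBM and tensor frameworks can be seen in a minimal D2Q9 stream-and-collide step (our own sketch, not the Lettuce API):

    import torch

    # D2Q9 lattice velocities and weights.
    e = torch.tensor([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=torch.float64)
    w = torch.tensor([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4, dtype=torch.float64)

    def equilibrium(rho: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        eu = torch.einsum("qd,dxy->qxy", e, u)          # e_i . u
        uu = (u * u).sum(dim=0)                         # u . u
        return w[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * uu)

    def step(f: torch.Tensor, tau: float = 0.6) -> torch.Tensor:
        rho = f.sum(dim=0)
        u = torch.einsum("qd,qxy->dxy", e, f) / rho
        f = f + (equilibrium(rho, u) - f) / tau         # BGK collision
        # Streaming: shift each population along its lattice velocity.
        return torch.stack([torch.roll(f[q], shifts=(int(e[q, 0]), int(e[q, 1])),
                                       dims=(0, 1)) for q in range(9)])

Starting from f = equilibrium(rho0, u0) on a periodic grid, repeatedly applying step advances the flow; since every operation is differentiable, gradients can flow through the whole simulation, which is what enables the flow control and optimization examples mentioned above.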