Departments, institutes and facilities
- Fachbereich Informatik (76)
- Fachbereich Angewandte Naturwissenschaften (49)
- Fachbereich Ingenieurwissenschaften und Kommunikation (37)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (36)
- Fachbereich Wirtschaftswissenschaften (23)
- Institut für funktionale Gen-Analytik (IFGA) (19)
- Institut für Verbraucherinformatik (IVI) (18)
- Institute of Visual Computing (IVC) (18)
- Graduierteninstitut (11)
- Institut für Cyber Security & Privacy (ICSP) (11)
Document Type
- Article (94)
- Conference Object (65)
- Preprint (18)
- Doctoral Thesis (12)
- Part of a Book (10)
- Research Data (5)
- Report (4)
- Book (monograph, edited volume) (3)
- Contribution to a Periodical (2)
- Master's Thesis (2)
Year of publication
- 2020 (217)
Language
- English (217)
Keywords
- Inborn error of metabolism (3)
- Organic aciduria (3)
- Quality diversity (3)
- Shared autonomous vehicles (3)
- post-buckling (3)
- ARIMA (2)
- Artificial Intelligence (2)
- Autoencoder (2)
- Automatic Short Answer Grading (2)
- Bayesian optimization (2)
Striated muscle contraction is regulated by the translocation of troponin-tropomyosin strands over the thin filament surface. Relaxation relies partly on highly favorable, conformation-dependent electrostatic contacts between actin and tropomyosin, which position tropomyosin such that it impedes actomyosin associations. Impaired relaxation and hypercontractile properties are hallmarks of various muscle disorders. The α-cardiac actin M305L hypertrophic cardiomyopathy-causing mutation lies near residues that help confine tropomyosin to an inhibitory position along thin filaments. Here, we investigate M305L actin in vivo, in vitro, and in silico to resolve emergent pathological properties and disease mechanisms. Our data suggest the mutation reduces actin flexibility and distorts the actin-tropomyosin electrostatic energy landscape, which, in muscle, results in aberrant contractile inhibition and excessive force. Thus, actin flexibility may be required to establish and maintain interfacial contacts with tropomyosin, as well as to facilitate its movement over distinct actin surface features, and is therefore likely necessary for proper regulation of contraction.
Mendelian diseases of dysregulated canonical NF-κB signaling: From immunodeficiency to inflammation
(2020)
This dissertation presents a probabilistic state estimation framework that integrates data-driven machine learning models and a deformable facial shape model in order to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AUs), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied for modelling the dynamics of facial muscle movements. Both facial shape and appearance information are used for estimating AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework deal with the multiple challenges involved in automatic AU detection. The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively using three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produced temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performed reasonably well, even though it simultaneously estimates the intensities of 22 AUs, some of which are subtle in expression or resemble each other closely. The estimated AU intensities tended toward the lower range of values and were often accompanied by a small delay in onset, showing that the proposed method is conservative. In order to further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
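The Gaussian state estimation described above can be sketched as a Kalman filter whose transition matrix comes from an Euler-discretised driven mass-spring-damper. This is a minimal illustration of the idea for a single AU, with invented mass, stiffness, damping and noise values rather than the dissertation's actual parameters:

```python
import numpy as np

def kalman_msd_track(measurements, dt=1 / 30, m=1.0, k=4.0, c=2.0,
                     q=1e-3, r=1e-2):
    """Track one AU intensity with a mass-spring-damper state model.

    State: [intensity, velocity]; measurements: noisy intensity scores.
    All parameter values here are illustrative, not from the dissertation.
    """
    # Euler-discretised mass-spring-damper transition matrix
    A = np.array([[1.0, dt],
                  [-k * dt / m, 1.0 - c * dt / m]])
    H = np.array([[1.0, 0.0]])   # we observe the intensity only
    Q = q * np.eye(2)            # process noise
    R = np.array([[r]])          # measurement noise

    x = np.zeros(2)              # initial state
    P = np.eye(2)                # initial covariance
    estimates = []
    for z in measurements:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the scalar measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

Feeding noisy per-frame intensity scores through such a filter yields the temporally smoother estimates the abstract describes.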
It is only a matter of time until autonomous vehicles become ubiquitous; however, human driving supervision will remain a necessity for decades. To assess the driver's ability to take control of the vehicle in critical scenarios, driver distractions can be monitored using wearable sensors or sensors that are embedded in the vehicle, such as video cameras. Which types of driving distractions can be sensed with which sensors is an open research question that this study attempts to answer. This study compared data from physiological sensors (palm electrodermal activity (pEDA), heart rate and breathing rate) and visual sensors (eye tracking, pupil diameter, nasal EDA (nEDA), emotional activation and facial action units (AUs)) for the detection of four types of distractions. The dataset was collected in a previous driving simulation study. The statistical tests showed that the most informative feature/modality for detecting driver distraction depends on the type of distraction, with emotional activation and AUs being the most promising. The experimental comparison of seven classical machine learning (ML) and seven end-to-end deep learning (DL) methods, which were evaluated on a separate test set of 10 subjects, showed that when classifying windows into distracted or not distracted, the highest F1-score of 79% was realized by the extreme gradient boosting (XGB) classifier using 60-second windows of AUs as input. When classifying complete driving sessions, XGB's F1-score was 94%. The best-performing DL model was a spectro-temporal ResNet, which realized an F1-score of 75% when classifying segments and an F1-score of 87% when classifying complete driving sessions. Finally, this study identified and discussed problems, such as label jitter, scenario overfitting and unsatisfactory generalization performance, that may adversely affect related ML approaches.
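The window- and session-level classification steps can be illustrated as follows. This sketch substitutes scikit-learn's GradientBoostingClassifier for the XGBoost classifier used in the study and synthetic AU statistics for the real features, so it shows only the shape of the pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_windows(n, distracted):
    # Each "window" is summarised by 10 synthetic AU statistics;
    # distracted windows have a shifted mean (purely illustrative).
    shift = 1.0 if distracted else 0.0
    return rng.normal(shift, 1.0, size=(n, 10))

# Train a boosted-tree classifier on labelled windows
X = np.vstack([make_windows(200, False), make_windows(200, True)])
y = np.array([0] * 200 + [1] * 200)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

def classify_session(windows):
    # Session-level decision: majority vote over window predictions
    return int(clf.predict(windows).mean() >= 0.5)
```

Aggregating window predictions over a whole drive is one simple way session-level scores can exceed window-level scores, as observed in the study.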
Towards an Interaction-Centered and Dynamically Constructed Episodic Memory for Social Robots
(2020)
This study advances the research and methodological approach to measuring and understanding national-level destination competitiveness, sustainability and governance, by creating a model that could be of use for both developing and developed destinations. The study gives a detailed overview of the research field of measuring destination competitiveness and sustainability. It also identifies major predictors of destination competitiveness and sustainability and thereby presents destination researchers and practitioners with a useful list of priority areas, both from a global perspective and from the perspective of other similar destinations. Finally, the study identifies two major types of destination governance with implications for research, policy and practice across the destination life-cycle. The research deals with the analysis of the secondary data from the World Economic Forum Travel and Tourism Index (WEF T&T). Major types of destination governance and predictors of belonging to either one of the types, as well as inside cluster predictors have been extracted through a two-step cluster analysis. The results support the notion that a meaningful model of national-level destination governance needs to take into account different development levels of different destinations. The main limitation of the study is its typology creation approach, as it inevitably leads to simplifications.
Towards a conceptual framework for sustainable business models in the food and beverage industry
(2020)
Alkaline methanol oxidation is an important electrochemical process in the design of efficient fuel cells. Typically, a system of ordinary differential equations is used to model the kinetics of this process. The parameters of the underlying mathematical model are fitted on the basis of different types of experiments characterizing the fuel cell. In this paper, we describe generic methods for creating a mathematical model of electrochemical kinetics from a given reaction network, as well as for identifying the parameters of this model. We also describe methods for model reduction based on a combination of steady-state and dynamical descriptions of the process. The methods are tested on a range of experiments, including different concentrations of the reagents and different voltage ranges.
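The parameter identification described above amounts to nesting an ODE solver inside a least-squares fit. A toy version for a single first-order reaction A → B (the reaction, rate constant and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, c, k):
    # Kinetics of A -> B with rate constant k
    a, b = c
    return [-k * a, k * a]

# Synthetic "experimental" data: simulate with a known k, add noise
t_obs = np.linspace(0.0, 5.0, 20)
k_true = 0.8
sol = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=t_obs, args=(k_true,))
a_obs = sol.y[0] + np.random.default_rng(1).normal(0, 0.01, t_obs.size)

def residuals(params):
    # Re-simulate the ODE system for a candidate parameter value
    (k,) = params
    sim = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=t_obs, args=(k,))
    return sim.y[0] - a_obs

fit = least_squares(residuals, x0=[0.1])
k_est = fit.x[0]
```

Real reaction networks contribute more species and parameters, but the structure — simulate, compare to measurements, adjust parameters — stays the same.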
A general method of topological reduction for network problems is presented using the example of gas transport networks. The method is based on the contraction of series, parallel and tree-like subgraphs for element equations with quadratic, power-law and general monotone dependencies. It significantly reduces the complexity of the graph and accelerates the solution procedure for stationary network problems, and it has been tested on a large set of realistic network scenarios. Possible extensions of the method are described, including triangulated element equations, continuation of the equations at infinity to ensure uniqueness of the solution, and the choice of a Newtonian stabilizer for nearly degenerate systems. The method is applicable to various sectors of the energy field, including gas networks, water networks and electric networks, as well as to the coupling of different sectors.
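For the quadratic pressure-drop law Δp = k·q|q| commonly used for gas pipes, the series and parallel contractions mentioned above reduce to simple closed forms. These are standard derivations shown for illustration, not formulas quoted from the paper:

```python
def series(*ks):
    # Pipes in series carry the same flow q; pressure drops add:
    # dp = (k1 + k2 + ...) * q|q|
    return sum(ks)

def parallel(*ks):
    # Pipes in parallel see the same dp; flows add:
    # q = sqrt(dp) * sum(1/sqrt(ki))  =>  k_eq = (sum ki**-0.5)**-2
    return sum(k ** -0.5 for k in ks) ** -2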
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model, as they are by nature vague, depend on the user’s background and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer-aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user’s intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many diverse solutions, with variational autoencoders to model both that diversity and the user’s preferences, discovering the preference hypervolume within large search spaces.
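The quality-diversity ingredient can be sketched with a minimal MAP-Elites loop, one common quality-diversity algorithm: an archive keeps the best solution found in each behaviour bin, so the result is many diverse solutions rather than one optimum. The VAE-based preference modelling of Hyper-Pref is omitted, and all parameters below are illustrative:

```python
import random

def map_elites(fitness, behaviour, n_bins=10, iters=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # behaviour-bin index -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # mutate an existing elite
            _, parent = archive[rng.choice(list(archive))]
            x = [v + rng.gauss(0, 0.1) for v in parent]
        else:
            # occasionally sample a fresh random solution
            x = [rng.uniform(-1, 1) for _ in range(2)]
        # map behaviour in [-1, 1] to a bin, clamped to the archive range
        b = min(n_bins - 1, max(0, int((behaviour(x) + 1) / 2 * n_bins)))
        f = fitness(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)  # keep the best solution per bin
    return archive

# Toy problem: maximise -||x||^2 while covering the range of x[0]
archive = map_elites(fitness=lambda x: -(x[0] ** 2 + x[1] ** 2),
                     behaviour=lambda x: x[0])
```

The filled archive is exactly the kind of diverse solution set a user could then browse and steer in a co-creative loop.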
Short summary
This dataset accompanies our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Representation and Experience-Based Learning of Explainable Models for Robot Action Execution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Contents
There are three zip archives included, each of them a dump of a MongoDB database corresponding to one of the three experiments in the paper:
Grasping a drawer handle (handle_drawer_logs.zip)
Grasping a fridge handle (handle_fridge_logs.zip)
Pulling an object (pull_logs.zip)
All three experiments were performed with a Toyota HSR. Only the data necessary for learning the models used in our experiments are included here.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [directory_name]
This will create a MongoDB database with the name of the directory (handle_drawer_logs, handle_fridge_logs, and pull_logs).
Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
This dataset contains data from two measurement campaigns in autumn 2018 and summer 2019 that were part of the BMWi project "MetPVNet". It serves as a supplement to the paper "Dynamic model of photovoltaic module temperature as a function of atmospheric conditions", published in the special issue of "Advances in Science and Research" containing the proceedings of the 19th EMS Annual Meeting: European Conference for Applied Meteorology and Climatology 2019.
Data are resampled to one minute, and include:
PV module temperature
Ambient temperature
Plane-of-array irradiance
Wind speed
Atmospheric thermal emission
The data were used for the dynamic temperature model, as presented in the paper
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
Who do you trust: Peers or Technology? A conjoint analysis about computational reputation mechanisms
(2020)
Peer-to-peer sharing platforms are taking on an increasingly important role in the platform economy due to their sustainable business model. When sharing private goods and services, the challenge arises of building trust between peers online, mostly without any kind of physical presence. Peer rating has proven to be an important mechanism. In this paper, we explore a concept called Trust Score, a computational rating mechanism adopted from car telematics, which can play a similar role in carsharing. For this purpose, we conducted a conjoint analysis in which 77 car owners chose between fictitious user profiles. Our results show that in our experiment the telemetric-based score slightly outperforms the peer rating in the decision process, while the participants perceived the peer rating as more helpful in retrospect. Further, we discuss potential benefits with regard to existing shortcomings of user ratings, but also various concerns that should be considered in concepts like a telemetric-based reputation mechanism that supplements existing trust factors such as user ratings.
The ongoing digitisation of everyday working life means that ever larger amounts of employees' personal data are processed by their employers. This development is particularly problematic with regard to employee data protection and the right to informational self-determination. We advocate the use of company Privacy Dashboards as a means to compensate for missing transparency and control. For the conceptual design we use, among other things, the method of mental models. We present the methodology and first results of our research and highlight the opportunities that such an approach offers for the user-centred development of Privacy Dashboards.
Listen to Developers! A Participatory Design Study on Security Warnings for Cryptographic APIs
(2020)
Carbon capture and storage
(2020)
This Special Report explores the most recent regulatory, political and economic trends and themes arising from CCS technologies and projects to help the reader succeed in this rapidly changing market.
This article explores the opportunities and challenges, as well as the activities of Chinese governmental and commercial stakeholders, in promoting cross-border e-commerce trade between China and Africa, based on a classification and correlation analysis of the literature from 2011 to 2019. The results show that the biggest driver for the development of China-Africa cross-border e-commerce trade is the gap between the rapid growth of the African population, especially the middle class, and the limited local capability to satisfy their demand. The rapid development of the internet and mobile internet is another driving factor. The biggest challenges are the last-mile delivery of logistics and online payment issues in Africa. At the macro level, the Chinese government has promoted measures such as infrastructure investment, e-commerce test zones and the establishment of pilot projects. At the firm level, Chinese companies have focused on solving practical micro-level local operational problems such as logistics, online payment, and talent training. The results also show that the referred literature is still in its infancy, mostly theoretical and less practical, and requires more in-depth domain-specific analysis in the future.
Eco-InfoVis at Work
(2020)
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our life, software systems are increasingly utilized to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic must be highly scalable and are therefore denoted Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems. In his doctoral thesis, Fielding introduced the architectural style Representational State Transfer (REST), which builds the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is another crucial quality factor besides high scalability.
Due to the use of fossil fuel resources, many environmental problems have been growing. Recent research therefore focuses on the use of environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that double-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity is discussed with regard to the biomass source, pulping process, and degree of purification. Lignin obtained from industrial black liquor is compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). The DPPH inhibition is also affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to discuss the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both the scavenging activity and the antimicrobial activity depend on the biomass source: Organosolv of softwood > Kraft of softwood > Organosolv of grass.
Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. The lignin leaching in the produced films affected the activity positively, and the chitosan addition enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food spoilage bacteria.
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are and whether they can be consolidated into a model or framework that can align previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined but is refined enough to do justice to the complexities of human behaviour. This also raises some issues of how subjective well-being is and should be measured. Further avenues of research and ways to continue building this model and framework are proposed.
Autonomous driving enables new mobility concepts such as shared autonomous services. Although significant research has been done on passenger-car interaction, work on passenger interaction with robo-taxis is still rare. In this paper, we tackle the question of how passengers experience robo-taxis as a service in real-life settings in order to inform the interaction design. We conducted a Wizard of Oz study with an electric vehicle in which the driver was hidden from the passenger to simulate the service experience of a robo-taxi. 10 participants had the opportunity to use the simulated shared autonomous service in real-life situations for one week. By the week's end, 33 rides had been completed and recorded on video. We also conducted interviews with all participants before and after the study. The findings provided insights into four design themes that could inform the service design of robo-taxis along the different stages of a ride, including hailing, pick-up, travel, and drop-off.
For a sustainable development, the electricity sector needs to be decarbonized. In 2017, only 54% of West African households had access to the electrical grid. Thus, renewable sources should play a major role in the development of the power sector in West Africa. Above all, solar power shows the highest potential of the renewable energy sources. However, it is highly variable, depending on the atmospheric conditions. This study addresses the challenges for a solar-based power system in West Africa by analyzing the atmospheric variability of solar power. For this purpose, two aspects are investigated. In the first part, the daily power reduction due to atmospheric aerosols is quantified for different solar power technologies. Meteorological data at six ground-based stations are used to model photovoltaic and parabolic trough power during all mostly clear-sky days in 2006, combining a radiative transfer model with a solar power model. The results show that the reduction due to aerosols can be up to 79% for photovoltaic and up to 100% for parabolic trough power plants during a major dust outbreak. The frequent dust outbreaks occurring in West Africa would cause frequent blackouts if sufficient storage capacities are not available. On average, aerosols reduce the daily power yields by 13% to 22% for photovoltaics and by 22% to 37% for parabolic troughs. For the second part, the long-term atmospheric variability and trends of solar irradiance are analyzed and their impact on photovoltaic yields is examined for West Africa. Based on a 35-year satellite data record (1983-2017), the temporal and spatial variability and general trend are depicted for global and direct horizontal irradiances. Furthermore, photovoltaic yields are calculated on a daily basis. They show a strong meridional gradient, with highest values of 5 kWh/kWp in the Sahara and Sahel zone and lowest values (around 4 kWh/kWp) in southern West Africa.
The temporal variability is highest in southern West Africa (up to around 18%) and lowest in the Sahara (around 4.5%). This implies the need for a North-South grid development to feed the increasing demand on the highly populated coast with solar power from the northern parts of West Africa. Additionally, global irradiances show a long-term positive trend (up to +5 W/m²/decade) in the Sahara and a negative trend (up to -5 W/m²/decade) in southern West Africa. If this trend continues, the spatial differences in solar power potential will increase in the future. This thesis provides a better understanding of the impact of atmospheric variability on solar power in a challenging environment like West Africa, characterized by the strong influence of the African monsoon. In particular, the importance of aerosols is pointed out, and long-term changes of irradiance are characterized concerning their implications for photovoltaic power.
Turbulent compressible flows are traditionally simulated using explicit Eulerian time integration applied to the Navier-Stokes equations. However, the associated Courant-Friedrichs-Lewy condition severely restricts the maximum time step size. Exploiting the Lagrangian nature of the Boltzmann equation's material derivative, we now introduce a feasible three-dimensional semi-Lagrangian lattice Boltzmann method (SLLBM), which elegantly circumvents this restriction. Previous lattice Boltzmann methods for compressible flows were mostly restricted to two dimensions due to the enormous number of discrete velocities needed in three dimensions. In contrast, this Rapid Communication demonstrates how cubature rules enhance the SLLBM to yield a three-dimensional velocity set with only 45 discrete velocities. Based on simulations of a compressible Taylor-Green vortex we show that the new method accurately captures shocks or shocklets as well as turbulence in 3D without utilizing additional filtering or stabilizing techniques, even when the time step sizes are up to two orders of magnitude larger compared to simulations in the literature. Our new method therefore enables researchers for the first time to study compressible turbulent flows by a fully explicit scheme, whose range of admissible time step sizes is only dictated by physics, while being decoupled from the spatial discretization.
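The core semi-Lagrangian idea — tracing each grid node back along its characteristic and interpolating there, which removes the CFL restriction on the time step — can be shown on 1-D linear advection. This toy omits the collision step and the discrete velocity set of the actual SLLBM:

```python
import numpy as np

def semi_lagrangian_step(f, u, dt, dx):
    """One semi-Lagrangian step for 1-D advection on a periodic grid."""
    n = f.size
    x = np.arange(n) * dx
    # trace each node back along its characteristic to the departure point
    x_dep = (x - u * dt) % (n * dx)
    # linear interpolation of f at the departure points
    i0 = np.floor(x_dep / dx).astype(int) % n
    w = x_dep / dx - np.floor(x_dep / dx)
    return (1 - w) * f[i0] + w * f[(i0 + 1) % n]

# A Gaussian pulse advected with CFL number u*dt/dx = 5, far above the
# explicit-Eulerian limit of 1, in a single step
f = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
f_new = semi_lagrangian_step(f, u=1.0, dt=5.0, dx=1.0)
```

In the SLLBM the same back-tracing is applied to each discrete-velocity distribution function, which is why the admissible time step is dictated by physics rather than by the grid.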
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied owing to its low cost, good volumetric hydrogen density, and easy tailoring of hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale applications of solid-state hydrogen storage. On the other hand, activation and kinetic issues should be amended and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by the variation of Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by the substitution of Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2 or β-Ti-type and Ti4Fe2O-type at the Ti-poor and Ti-rich sides of the TiFe phase domain, respectively. Thermodynamic data agree with the available literature but offer here a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities, as long as only a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter by possibly substituting Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of hydrogen storage properties on the composition is discussed, together with some observed correlations.
Among the celestial bodies in the Solar System, Mars currently represents the main target for the search for life beyond Earth. However, its surface is constantly exposed to high doses of cosmic rays (CRs) that may pose a threat to any biological system. For this reason, investigations into the limits of the resistance of life to space-relevant radiation are fundamental for speculating on the chance of finding extraterrestrial organisms on Mars. In the present work, as part of the STARLIFE project, the responses of dried colonies of the black fungus Cryomyces antarcticus (Culture Collection of Fungi from Extreme Environments (CCFEE) 515) to accelerated iron ions (LET: 200 keV/μm), which mimic part of the CR spectrum, were investigated. Samples were exposed to the iron ions up to 1000 Gy in the presence of Martian regolith analogues. Our results showed an extraordinary resistance of the fungus in terms of survival, recovery of metabolic activity and DNA integrity. These experiments give new insights into the survival probability of possible terrestrial-like life forms on the present or past Martian surface and shallow subsurface environments.
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
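A speed control method of this kind can be sketched as a distance-dependent cap on the operator's commanded velocity. The radii and speed limit below are invented for illustration; the paper does not specify these values here:

```python
def limited_speed(commanded, distance,
                  v_max=1.2, slow_radius=1.5, stop_radius=0.4):
    """Cap the commanded speed (m/s) by the nearest-obstacle distance (m).

    Illustrative parameters: full speed beyond slow_radius, a linear
    ramp down between the radii, and a hard stop inside stop_radius.
    """
    if distance <= stop_radius:
        return 0.0
    if distance >= slow_radius:
        return min(commanded, v_max)
    # linear ramp between the stop and slow radii
    scale = (distance - stop_radius) / (slow_radius - stop_radius)
    return min(commanded, v_max) * scale
```

Because the cap only attenuates the operator's input rather than overriding it, the interface stays simple while collisions in narrow sections become less likely — though, as the study notes, some users prefer to disable such automatic influence.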
This paper aspires to develop a deeper understanding of the sharing/collaborative/platform economy, and in particular of the technical mechanisms upon which the digital platforms supporting it are built. In surveying the research literature, the paper identifies a gap between studies from economic, social or socio-technical angles and presentations of detailed technical solutions. Most studies examine larger, ‘monotechnological’ platforms rather than local platforms that borrow components from several technologies, and almost none takes a design perspective. Rooted in Sharing & Caring, an EU COST Action (network), the paper presents work to systematically map out functionalities across domains of the sharing economy. The 145 technical mechanisms we collected illustrate how most platforms depend on a limited number of functionalities that fall short when it comes to holding communities together. The paper points to the necessity of a better terminology and concludes by discussing challenges and opportunities for the design of future, more inclusive platforms.
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that can produce high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternative regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of boundary-adhering object instances is warranted. However, current object detection techniques appear to ignore this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering to generate the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space whose two coordinate axes represent the natural-log transformations of the width and height of the ground-truth bounding boxes.
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by a substantial ≈ 6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lacklustre performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈ 0.77 mAP. In addition, the forward-pass latencies of both SqueezeDet and SqueezeDetOcta are close to ≈ 19 ms. Accounting for boundary adhesion during training improved the baseline SqueezeDet network by ≈ 2.62 mAP. A SqueezeDet network paired with logarithmically extracted anchors improved on the baseline SqueezeDet network by ≈ 1.85 mAP.
In summary, this work demonstrates that, given sufficient fine instance-annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations while retaining the forward-pass latency of the bounding-box-predicting network. The results justify the merits of logarithmically extracted anchors for boosting the performance of any anchor-based object detection network. They also show that special handling of image-boundary-adhering object instances produces more performant object detectors.
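The log-space anchor clustering described above can be illustrated with a short sketch. This is not the thesis code: the function name is hypothetical, and standard k-means stands in for whatever clustering variant was actually used. Ground-truth box sizes are mapped to (log w, log h), clustered there, and the centroids are exponentiated back into anchor shapes:

```python
import math
import random

def kmeans_log_anchors(boxes, k, iters=50, seed=0):
    """Cluster ground-truth (w, h) pairs in (log w, log h) space and
    return k anchor shapes, obtained by exponentiating the centroids."""
    random.seed(seed)
    pts = [(math.log(w), math.log(h)) for w, h in boxes]
    centroids = random.sample(pts, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid in log-space.
        clusters = [[] for _ in range(k)]
        for p in pts:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(x for x, _ in cl) / len(cl),
                                sum(y for _, y in cl) / len(cl))
    return [(math.exp(x), math.exp(y)) for x, y in centroids]
```

Because the clustering happens in log-space, the anchors correspond to geometric rather than arithmetic means of the box dimensions, matching the multiplicative width/height encoding common to anchor-based detectors.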
The simultaneous operation of multiple different semiconducting metal oxide (MOX) gas sensors is demanding for the readout circuitry. The challenge results from the strongly varying signal intensities of the various sensor types to the target gas. While some sensors change their resistance only slightly, other types can react with a resistive change over a range of several decades. A suitable readout circuit therefore has to capture all these resistive variations, requiring a very large dynamic range. This work presents a compact embedded system that provides a full, high-range input interface (readout and heater management) for MOX sensor operation. The system is modular and consists of a central mainboard that holds up to eight sensor modules, each capable of supporting up to two MOX sensors, for a total maximum of 16 different sensors. Its wide input range is achieved using the resistance-to-time measurement method. The system is built solely with commercial off-the-shelf components and tested over a range spanning from 100 Ω to 5 GΩ (9.7 decades) with an average measurement error of 0.27% and a maximum error of 2.11%. The heater management uses a well-tested power circuit and supports multiple modes of operation, enabling the system to be used in highly automated measurement applications. The experimental part of this work presents the results of an exemplary screening of 16 sensors, performed to evaluate the system’s performance.
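The resistance-to-time principle mentioned above can be sketched in a few lines. The sketch assumes a common variant of the method (not necessarily the paper's circuit): the unknown sensor resistance R discharges a known capacitance C from a start voltage V0 down to a comparator threshold Vth, so the measured time t maps back to resistance via R = t / (C · ln(V0 / Vth)). The function name is illustrative:

```python
import math

def resistance_from_time(t_s, c_f, v0, v_th):
    """Recover an unknown resistance from an RC discharge time.

    Solving V_th = V0 * exp(-t / (R*C)) for R gives
    R = t / (C * ln(V0 / V_th)).
    t_s: measured discharge time in seconds
    c_f: known capacitance in farads
    v0, v_th: start and threshold voltages in volts
    """
    return t_s / (c_f * math.log(v0 / v_th))
```

The wide dynamic range follows naturally: time is easy to measure over many decades with a single counter, and the capacitance can be switched to keep discharge times in a convenient window.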
This paper addresses long-term historical changes in solar irradiance in West Africa (3 to 20° N and 20° W to 16° E) and the implications for photovoltaic systems. Here, we use satellite irradiance (Surface Solar Radiation Data Set – Heliosat, Edition 2.1 – SARAH-2.1) and temperature data from a reanalysis (ERA5) to derive photovoltaic yields. Based on 35 years of data (1983–2017), the temporal and regional variability as well as long-term trends in global and direct horizontal irradiance are analyzed. Furthermore, a detailed time series analysis is undertaken at four locations. According to the high spatial resolution SARAH-2.1 data record (0.05°×0.05°), solar irradiance is largest (up to a 300 W m−2 daily average) in the Sahara and the Sahel zone with a positive trend (up to 5 W m−2 per decade) and a lower temporal variability (<75 W m−2 between 1983 and 2017 for daily averages). In contrast, the solar irradiance is lower in southern West Africa (between 200 W m−2 and 250 W m−2) with a negative trend (up to −5 W m−2 per decade) and a higher temporal variability (up to 150 W m−2). The positive trend in the north is mostly connected to the dry season, whereas the negative trend in the south occurs during the wet season. Both trends are significant at the 95 % level. Photovoltaic (PV) yields show a strong meridional gradient with the lowest values of around 4 kWh kWp−1 in southern West Africa and values of more than 5.5 kWh kWp−1 in the Sahara and Sahel zone.
Object detectors have improved considerably in recent years through advanced CNN architectures. However, many detector hyper-parameters are generally tuned manually or used with the values set by the detector authors. Automatic hyper-parameter optimization has not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose using black-box optimization methods, namely Bayesian Optimization, SMAC, and CMA-ES, to tune the prior/default box scales in Faster R-CNN and SSD. We show that by tuning the input image size and prior box anchor scale, mAP increases by 2% on PASCAL VOC 2007 with Faster R-CNN and by 3% with SSD. On the COCO dataset with SSD, there are mAP improvements for medium and large objects, but mAP decreases by 1% for small objects. We also perform a regression analysis to find the significant hyper-parameters to tune.
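The paper's tuners (Bayesian Optimization, SMAC, CMA-ES) all share the same black-box interface: propose a configuration, evaluate it, keep the best. As a minimal stand-in that shows only that interface, the sketch below uses random search over two hypothetical hyper-parameters against a mock objective; in the real setting the objective would be a full detector training-and-evaluation run returning mAP, and `mock_map` is purely illustrative:

```python
import random

def random_search(objective, space, n_trials=50, seed=1):
    """Minimal black-box tuner: sample configurations uniformly from
    `space` (name -> (low, high)) and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        val = objective(cfg)  # higher is better, e.g. mAP
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Hypothetical stand-in for "train detector, return mAP": peaks at
# anchor_scale = 4.0 and image_size = 512.
def mock_map(cfg):
    return (-(cfg["anchor_scale"] - 4.0) ** 2
            - ((cfg["image_size"] - 512) / 256) ** 2)

space = {"anchor_scale": (1.0, 8.0), "image_size": (300, 800)}
cfg, val = random_search(mock_map, space, n_trials=500)
```

A Bayesian optimizer would replace the uniform sampling with a surrogate-model-guided proposal, but the objective and search-space definitions stay exactly the same.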
In this paper we introduce the Perception for Autonomous Systems (PAZ) software library. PAZ is a hierarchical perception library that allows users to manipulate multiple levels of abstraction in accordance with their requirements or skill level. More specifically, PAZ is divided into three hierarchical levels, which we refer to as pipelines, processors, and backends. These abstractions allow users to compose functions in a hierarchical modular scheme that can be applied for preprocessing, data augmentation, prediction, and postprocessing of the inputs and outputs of machine learning (ML) models. PAZ uses these abstractions to build reusable training and prediction pipelines for multiple robot perception tasks such as 2D keypoint estimation, 2D object detection, 3D keypoint discovery, 6D pose estimation, emotion classification, face recognition, instance segmentation, and attention mechanisms.
Extremophiles are optimal models for experimentally addressing questions about the effects of cosmic radiation on biological systems. The resistance to high charge and energy (HZE) particles, namely helium (He) and iron (Fe) ions (LET of 2.2 and 200 keV/µm, respectively, up to 1000 Gy), of spores from two thermophiles, Bacillus horneckiae SBP3 and Bacillus licheniformis T14, and two psychrotolerants, Bacillus sp. A34 and A43, was investigated. Spores survived He irradiation better, whereas they were more sensitive to Fe irradiation (up to 500 Gy), with spores from the thermophiles being more resistant to irradiation than those from the psychrotolerants. The surviving spores showed different germination kinetics, depending on the type/dose of irradiation and the germinant used. After exposure to 1000 Gy of He, D-glucose increased the lag time of thermophilic spores and induced germination of psychrotolerants, whereas L-alanine and L-valine increased the germination efficiency, except alanine for A43. FTIR spectra showed important modifications to the structural components of spores after Fe irradiation at 250 Gy, which could explain the block in spore germination, whereas minor changes were observed after He irradiation that could be related to the increased permeability of the inner membranes and alterations of receptor complex structures. Our results give new insights into the HZE resistance of extremophiles that are useful in different contexts, including astrobiology.
Background: Coniferous woods (Abies nordmanniana (Stev.) Spach, Abies procera Rehd., Picea abies (L.) H.Karst, and Picea pungens Engelm.) could contain useful secondary metabolites for producing sustainable packaging materials, e.g., by substituting harmful petroleum-based additives in plastic packaging. This study aims to characterise the antioxidant and light-absorbing properties and ingredients of different coniferous wood extracts with regard to different plant fragments and drying conditions. Furthermore, the valorisation of used Christmas trees is evaluated. Methods: Different drying and extraction techniques were applied, with the extracts being characterised by determining the total phenolic content (TPC), total antioxidant capacity (TAC), and absorbance in the ultraviolet range (UV). Gas chromatography coupled with mass spectrometry (GC-MS) and an acid–butanol assay (ABA) were used to characterise the extract constituents. Results: All the extracts show a considerably high UV absorbance, although interspecies differences did occur. All the fresh and some of the dried biomass extracts reached utilisable TAC and TPC values. A simplified extraction setup for industrial application is evaluated; comparable TAC results could be reached with modifications. Conclusion: Coniferous woods are a promising renewable resource for the preparation of sustainable antioxidants and photostabilisers. This applies in particular to Christmas trees used for up to 12 days. After extraction, the biomass can be fully valorised by incorporation into paper packaging.
Reinforcement learning (RL) algorithms should learn as much as possible about the environment, but not about the properties of the physics engines that generate it. Multiple algorithms solve tasks in physics-engine-based environments, but no work has been done so far to understand whether RL algorithms can generalize across physics engines. In this work, we compare the generalization performance of various deep reinforcement learning algorithms on a variety of control tasks. Our results show that MuJoCo is the best engine from which to transfer the learning to other engines. On the other hand, none of the algorithms generalize when trained on PyBullet. We also found that various algorithms show promising generalizability if the effect of random seeds on their performance can be minimized.
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
OSC data
(2020)
Optimization plays an essential role in industrial design, but is not limited to the minimization of a simple function such as cost or strength. These tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high-performing solutions. These techniques often require the evaluation of millions of solutions, making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method for modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model, but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency several-fold. In our second contribution we combine model-based optimization with MAP-Elites, a QD algorithm. A model of performance is created from known designs, and MAP-Elites creates a new set of designs using this approximation. A subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain-specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common “recipe” for high performance. This learned encoding can then be reused by other algorithms for rapid optimization, including MAP-Elites itself. Throughout this thesis, though the focus of our vision is design, we examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
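The MAP-Elites loop referenced above is compact enough to sketch. This is an illustrative toy (one-dimensional solutions and behaviour space, hypothetical helper names), not the thesis implementation: an archive keeps the best solution found in each behaviour-space bin, and new candidates are produced by mutating randomly chosen elites.

```python
import random

def map_elites(evaluate, behavior, mutate, init, bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites: maintain one elite per behaviour bin."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution)
    for _ in range(20):  # seed the archive with random solutions
        x = init(rng)
        _insert(archive, x, evaluate(x), behavior(x), bins)
    for _ in range(iters):  # mutate random elites, keep improvements
        _, parent = rng.choice(list(archive.values()))
        child = mutate(parent, rng)
        _insert(archive, child, evaluate(child), behavior(child), bins)
    return archive

def _insert(archive, x, fit, beh, bins):
    key = min(int(beh * bins), bins - 1)  # behaviour assumed in [0, 1)
    if key not in archive or fit > archive[key][0]:
        archive[key] = (fit, x)

# Toy domain: solutions are floats in [0, 1); behaviour is the value
# itself, and fitness rewards proximity to 0.5 within each niche.
archive = map_elites(
    evaluate=lambda x: -abs(x - 0.5),
    behavior=lambda x: x,
    mutate=lambda x, rng: min(max(x + rng.gauss(0, 0.1), 0.0), 0.999),
    init=lambda rng: rng.random(),
)
```

The model-based variants described in the thesis would replace `evaluate` with a cheap surrogate prediction and only occasionally run the true (expensive) evaluation to refresh the model.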
Do socio-economic factors impede the engagement in online banking transactions? Evidence from Ghana
(2020)
Researchers have long pondered the adoption of online banking transactions. Some studies focus primarily on the motivating factors that affect customers’ intention to adopt/accept these services (technologies). However, research into the constraining factors, in particular socio-economic factors, barely exists in the literature, especially in the context of sub-Saharan Africa. Against this background, the paper seeks to fill this gap by: (1) assessing the socio-economic factors impeding engagement in e-banking transactions among retail bank customers in Ghana, and (2) examining the moderating effect of ‘customer experience of the Internet’ on the identified factors that inhibit engagement in online banking in Ghana. The paper used a quantitative research approach to obtain data from two leading Ghanaian banks. Of the 450 questionnaires distributed, 393 were valid for analysis. Data were analyzed with the aid of PLS-SEM (partial least squares structural equation modeling). Findings revealed that the perceived knowledge gap and the price of digital devices were directly important to the intention to disembark from e-banking transactions among Ghanaian bank customers, while customer experience (frequent use of the Internet), as a moderator variable, has a significant effect on the interaction between the perceived knowledge gap and the intent to disembark from e-banking transactions, and between finance charges and that intent. Study implications and directions for future research are discussed in the paper.
Until recently, studies of e-banking transactions have focused more on the motivational factors that trigger the intention to accept and use e-banking than on the de-motivational factors that deter it. In developing countries such as the sub-Saharan economies, however, the latter factors have not been explored and remain rudimentary in the literature. Drawing on the Technology Threat Avoidance Theory (TTAT), the study examines the impact of online identity theft on customers’ willingness to engage in e-banking transactions in Ghana. A quantitative survey yielded 393 valid responses from retail bank customers of two leading commercial banks in Ghana. Results from PLS-SEM showed that perceived online identity theft positively and significantly predicts “fear of financial loss”, “fear of reputational damage”, and “security and privacy concern”, which in turn mediate a negative relationship between perceived online identity theft and the intention to engage in e-banking transactions. This study is the first of its kind to extend the application of the TTAT framework to the study of e-banking transactions. The study serves as a practical tool that will enable banks to assess customers’ restriction/aversion towards the use of fintech while ensuring sustainable growth of e-banking transactions in an emerging-economy context. The study is limited to banking institutions in Ghana and does not consider other players in the financial sub-sector. Future research directions are suggested in the concluding part of the paper.
Multiwalled carbon nanotubes (MWCNTs) were easily and efficiently functionalised with highly cross-linked polyamines. The radical polymerisation of two bis-vinylimidazolium salts in the presence of pristine MWCNTs and azobisisobutyronitrile (AIBN) as a radical initiator led to the formation of materials with a high functionalisation degree. The subsequent treatment with sodium borohydride gave rise to the reduction of imidazolium moieties with the concomitant formation of secondary and tertiary amino groups. The obtained materials were characterised by thermogravimetric analysis (TGA), elemental analysis, solid state 13C-NMR, Fourier-transform infrared spectroscopy (FT-IR), transmission electron microscopy (TEM), potentiometric titration, and temperature programmed desorption of carbon dioxide (CO2-TPD). One of the prepared materials was tested as a heterogeneous base catalyst in C–C bond forming reactions such as the Knoevenagel condensation and Henry reaction. Furthermore, two examples concerning a sequential one-pot approach involving two consecutive reactions, namely Knoevenagel and Michael reactions, were reported.
The motor protein myosin drives a wide range of cellular and muscular functions by generating directed movement and force, fueled through adenosine triphosphate (ATP) hydrolysis. Release of the hydrolysis product adenosine diphosphate (ADP) is a fundamental and regulatory process during force production. However, details about the molecular mechanism accompanying ADP release are scarce due to the lack of representative structures. Here we solved a novel blebbistatin-bound myosin conformation with critical structural elements in positions between the myosin pre-power stroke and rigor states. ADP in this structure is repositioned towards the surface by the phosphate-sensing P-loop, and stabilized in a partially unbound conformation via a salt-bridge between Arg131 and Glu187. A 5 Å rotation separates the mechanical converter in this conformation from the rigor position. The crystallized myosin structure thus resembles a conformation towards the end of the two-step power stroke, associated with ADP release. Computationally reconstructing ADP release from myosin by means of molecular dynamics simulations further supported the existence of an equivalent conformation along the power stroke that shows the same major characteristics in the myosin motor domain as the resolved blebbistatin-bound myosin-II·ADP crystal structure, and identified a communication hub centered on Arg232 that mediates chemomechanical energy transduction.
Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and security perceptions of RBA have not been well studied.
We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users' perception of RBA and helps to improve RBA implementations for a broader user acceptance.
When a robotic agent experiences a failure while acting in the world, it should be possible to discover why that failure has occurred, namely to diagnose the failure. In this paper, we argue that the diagnosability of robot actions, at least in a classical sense, is a feature that cannot be taken for granted since it strongly depends on the underlying action representation. We specifically define criteria that determine the diagnosability of robot actions. The diagnosability question is then analysed in the context of a handle manipulation action, for which we discuss two different representations: a composite policy with a learned success model for the action parameters, and a neural network-based monolithic policy, which lie at opposite ends of the diagnosability spectrum. Through this comparison, we conclude that composite actions are more suited to explicit diagnosis, but representations with less prior knowledge are more flexible. This suggests that model learning may provide a balance between flexibility and diagnosability; however, data-driven diagnosis methods also need to be enhanced in order to deal with the complexity of modern robots.
Toshiyuki Fukao
(2020)
The ongoing coronavirus disease 2019 (COVID-19) pandemic threatens global health, thereby causing unprecedented social, economic, and political disruptions. One way to prevent such a pandemic is through interventions at the human-animal-environment interface by using an integrated One Health (OH) approach. This systematic literature review documented the three coronavirus outbreaks, i.e. SARS, MERS, and COVID-19, to evaluate the evolution of the OH approach, including the identification of key OH actions taken for prevention, response, and control.
The OH understandings identified were categorized into three distinct patterns: institutional coordination and collaboration, OH in action/implementation, and extended OH (i.e. a clear involvement of the environmental domain). Across all studies, OH was most often framed as OH in action/implementation and least often in its extended meaning. Utilizing OH as institutional coordination and collaboration and the extended OH both increased over time. OH actions were classified into twelve sub-groups and further categorized as classical OH actions (i.e. at the human-animal interface), classical OH actions with outcomes to the environment, and extended OH actions.
The majority of studies focused on human-animal interaction, giving less attention to the natural and built environment. Different understandings of the OH approach in practice and several practical limitations might hinder current efforts to achieve the operationalization of OH by combining institutional coordination and collaboration with specific OH actions. The actions identified here are a valuable starting point for evaluating the stage of OH development in different settings. This study showed that by moving beyond the classical OH approach and its actions towards a more extended understanding, OH can unfold its entire capacity thereby improving preparedness and mitigating the impacts of the next outbreak.
Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays
(2020)
Gone But Not Forgotten: Evaluating Performance and Scalability of Real-Time Mesoscopic Agents
(2020)
Telepresence robots allow people to participate in remote spaces, yet they can be difficult to manoeuvre with people and obstacles around. We designed a haptic-feedback system called “FeetBack”, which users place their feet in while driving a telepresence robot. When the robot approaches people or obstacles, haptic proximity and collision feedback are provided on the respective sides of the feet, helping inform users about events that are hard to notice through the robot’s camera views. We conducted two studies: one to explore the usage of FeetBack in virtual environments, another focused on real environments. We found that FeetBack can increase spatial presence in simple virtual environments. Users valued the feedback for adjusting their behaviour in both types of environments, though it was sometimes too frequent or unneeded for certain situations after a period of time. These results point to the value of foot-based haptic feedback for telepresence robot systems, while also highlighting the need to design context-sensitive haptic feedback.
A Comparative Study of Uncertainty Estimation Methods in Deep Learning Based Classification Models
(2020)
Deep learning models produce overconfident predictions even for misclassified data. This work aims to improve the safety guarantees of software-intensive systems that use deep learning based classification models for decision making, by performing a comparative evaluation of different uncertainty estimation methods to identify possible misclassifications.
In this work, uncertainty estimation methods applicable to deep learning models are reviewed and those which can be seamlessly integrated to existing deployed deep learning architectures are selected for evaluation. The different uncertainty estimation methods, deep ensembles, test-time data augmentation and Monte Carlo dropout with its variants, are empirically evaluated on two standard datasets (CIFAR-10 and CIFAR-100) and two custom classification datasets (optical inspection and RoboCup@Work dataset). A relative ranking between the methods is provided by evaluating the deep learning classifiers on various aspects such as uncertainty quality, classifier performance and calibration. Standard metrics like entropy, cross-entropy, mutual information, and variance, combined with a rank histogram based method to identify uncertain predictions by thresholding on these metrics, are used to evaluate uncertainty quality.
The results indicate that Monte Carlo dropout combined with test-time data augmentation outperforms all other methods, identifying more than 95% of the misclassifications and representing uncertainty in the highest number of samples in the test set. It also yields better classifier performance and calibration in terms of higher accuracy and lower Expected Calibration Error (ECE), respectively. A Python-based uncertainty estimation library for training and real-time uncertainty estimation of deep learning based classification models is also developed.
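The entropy and mutual-information metrics used in this evaluation can be computed directly from the stochastic forward passes. The sketch below (illustrative, not the thesis library; the function name is hypothetical) takes T Monte Carlo samples of class probabilities, as produced by MC dropout or test-time augmentation, and returns the two standard quantities:

```python
import math

def predictive_uncertainty(mc_probs):
    """Given T stochastic forward passes (each a probability vector
    over the classes), return (predictive entropy, mutual information).

    Mutual information = predictive entropy - mean per-pass entropy;
    it isolates the epistemic (model) part of the uncertainty, being
    zero when all passes agree exactly.
    """
    t = len(mc_probs)
    n = len(mc_probs[0])
    # Mean predictive distribution over all passes.
    mean_p = [sum(p[c] for p in mc_probs) / t for c in range(n)]

    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    pred_ent = entropy(mean_p)
    mean_ent = sum(entropy(p) for p in mc_probs) / t
    return pred_ent, pred_ent - mean_ent
```

Thresholding on these values, as the thesis describes, then flags predictions whose uncertainty exceeds what is typical for correctly classified samples.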
Human and robot tasks in household environments include actions such as carrying an object, cleaning a surface, etc. These tasks are performed by means of dexterous manipulation, and for humans, they are straightforward to accomplish. Moreover, humans perform these actions with reasonable accuracy and precision but with much less energy and stress on the actuators (muscles) than the robots do. The high agility in controlling their forces and motions is actually due to "laziness", i.e. humans exploit the existing natural forces and constraints to execute the tasks.
The above-mentioned properties of the human lazy strategy motivate us to relax the problem of controlling robot motions and forces, and solve it with the help of the environment. Therefore, in this work, we developed a lazy control strategy, i.e. task specification models and control architectures that relax several aspects of robot control by exploiting prior knowledge about the task and environment. The developed control strategy is realized in four different robotics use cases. In this work, the Popov-Vereshchagin hybrid dynamics solver is used as one of the building blocks in the proposed control architectures. An extension of the solver’s interface with the artificial Cartesian force and feed-forward joint torque task-drivers is proposed in this thesis.
To validate the proposed lazy control approach, an experimental evaluation was performed in a simulation environment and on a real robot platform.
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading
(2020)
Automatic Short Answer Grading (ASAG) is the process of grading student answers by computational approaches, given a question and the desired answer. Previous works implemented methods of concept mapping and facet mapping, and some used conventional word embeddings to extract semantic features; they extracted multiple features manually to train on the corresponding datasets. We use pretrained embeddings of the transfer learning models ELMo, BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a single feature, cosine similarity, extracted from the embeddings of these models. We compare the RMSE scores and correlation measurements of the four models with previous works on the Mohler dataset. Our work demonstrates that ELMo outperformed the other three models. We also briefly describe the four transfer learning models and conclude with the possible causes of the poor results of the transfer learning models.
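The single feature described above is plain cosine similarity between the embedding of the reference answer and the embedding of the student answer. A minimal sketch (the embeddings themselves would come from ELMo, BERT, GPT, or GPT-2; the vectors here are placeholders):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: the single
    grading feature, comparing the reference-answer embedding with the
    student-answer embedding."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

A value near 1 indicates the two answers point in nearly the same direction in embedding space; the regressor then maps this scalar to a grade.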
Graph drawing with spring embedders employs an all-pairs (|V| × |V|) computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices in a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
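The cutoff-radius structure this abstract exploits can be illustrated with a naive all-pairs loop — a Python sketch under our own naming, using a Fruchterman-Reingold-style repulsion term; the actual work replaces the neighbourhood query with RT-core tree traversal:

```python
import numpy as np

def repulsive_forces(pos, k=1.0, radius=2.0):
    """Pairwise repulsive forces with a cutoff radius.

    pos: (n, 2) array of vertex positions. Vertices farther apart than
    `radius` exert no force, which is exactly what makes spatial search
    structures (or ray tracing hardware) applicable to this phase.
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        delta = pos[i] - pos                    # vectors from every vertex to i
        dist = np.linalg.norm(delta, axis=1)
        mask = (dist > 0) & (dist < radius)     # cutoff: ignore distant vertices
        # Repulsion of magnitude k^2 / d along the separating direction.
        forces[i] = np.sum(delta[mask] * (k**2 / dist[mask]**2)[:, None], axis=0)
    return forces
```

The masked inner loop is the |V| × |V| bottleneck; replacing it with a hierarchical range query is what yields the reported 4x to 13x speedups.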
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness, and of a mutual understanding of AI-enabled systems between designers and users, informal user theories about how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users' folk theories.
Discrimination and classification of eight strains related to meat spoilage microorganisms commonly found in poultry meat were successfully carried out using two dispersive Raman spectrometers (Microscope and Portable Fiber-Optic systems) in combination with chemometric methods. Principal Components Analysis (PCA) and Multi-Class Support Vector Machines (MC-SVM) were applied to develop discrimination and classification models. These models were verified using validation data sets, which were successfully assigned to the correct bacterial genera and even to the correct strain. The discrimination of bacteria down to the strain level was performed on the pre-processed spectral data using a 3-stage model based on PCA. The spectral features and differences among the species on which the discrimination was based were clarified through PCA loadings. In MC-SVM, the pre-processed spectral data were subjected to PCA and utilized to build a classification model. When using the first two components, the accuracy of the MC-SVM model was 97.64% and 93.23% for the validation data collected by the Raman Microscope and the Portable Fiber-Optic Raman system, respectively. The accuracy reached 100% for the validation data when using the first eight and first ten PCs from the data collected by the Raman Microscope and the Portable Fiber-Optic Raman system, respectively. The results reflect the strong discriminative power and high performance of the developed models and the suitability of the pre-processing method used in this study, and show that the lower accuracy of the Portable Fiber-Optic Raman system does not adversely affect the discriminative power of the developed models.
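The dimensionality-reduction step that feeds the MC-SVM can be sketched with plain NumPy — an illustrative SVD-based PCA on synthetic data standing in for pre-processed Raman spectra, not the authors' pipeline:

```python
import numpy as np

def pca_scores(X, n_components):
    """Project spectra onto their first principal components via SVD."""
    Xc = X - X.mean(axis=0)          # centre each wavenumber channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # scores; explained variance decreases per column

# Toy stand-in for pre-processed spectra (rows = spectra, columns =
# intensities per wavenumber); real Raman data would replace this.
spectra = np.random.default_rng(0).normal(size=(40, 300))
scores = pca_scores(spectra, 8)      # e.g. the first eight PCs
```

The resulting score matrix (here 40 spectra × 8 PCs) would then serve as the low-dimensional input to a multi-class SVM, which is the structure the abstract's accuracy figures refer to.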
Business Management
(2020)
Background: Human mesenchymal stem cells (hMSCs) have demonstrated multipotency, including differentiation towards endothelial and smooth muscle cell lineages, which has triggered new interest in using hMSCs as a putative source for cardiovascular regenerative medicine. Our recent publication showed for the first time that purinergic 2 receptors are key players during hMSC differentiation towards adipocytes and osteoblasts. Purinergic 2 receptors play an important role in cardiovascular function when they bind extracellular nucleotides. In this study, the possible functional role of purinergic 2 receptors during MSC endothelial and smooth muscle differentiation was investigated. Methods and Results: Human MSCs were isolated from liposuction material. Endothelial and smooth muscle-like cells were then differentiated and characterized by specific markers via reverse transcriptase PCR (RT-PCR), Western blot and immunochemical staining. Interestingly, some purinergic 2 receptor subtypes were found to be differentially regulated during these specific lineage commitments: P2Y4 and P2Y14 were involved in early-stage commitment, while P2Y1 was the key player in controlling MSC differentiation towards either endothelial or smooth muscle cells. The administration of natural and artificial purinergic 2 receptor agonists and antagonists had a direct influence on these differentiations. Moreover, a feedback loop via exogenous extracellular nucleotides on these particular differentiations was shown by apyrase digestion. Conclusions: Purinergic 2 receptors play a crucial role during differentiation towards endothelial and smooth muscle cell lineages. Some highly selective and potent artificial purinergic 2 ligands can control hMSC differentiation, which might improve the use of adult stem cells in cardiovascular tissue engineering in the future.
Background: While work-related rumination increases the risk of acute stressors developing into chronic load reactions and adverse health, mental detachment has been suggested as a way to interrupt this chain. Despite the importance of mentally detaching from work during leisure time, workers seem to struggle to disengage and instead experience a constant mental representation of work-related stressors, even in their absence. Those who struggle with work-related rumination could benefit from an easy-access intervention that fosters mental detachment by promoting recreational activities. During vacations in particular, workers appear to naturally engage in sufficient recovery activities; however, this beneficial behaviour is not sustained. The smartphone app-based intervention "Holidaily" promotes recovery behaviour and, thus, mental detachment from work, with the intention of extending the beneficial effects of workers' vacations into their daily working life.
Methods: This randomised-controlled trial (RCT) evaluates the efficacy of "Holidaily". The Holidaily app is a German stand-alone program for mobile devices with either Android or iOS operating systems. The sample includes workers who are about to go on vacation and are randomly assigned to either the intervention group (IG) or a waitlist-control group (CG). The IG receives access to Holidaily two weeks pre-vacation, while the CG receives access two weeks post-vacation. Each day, participants in the IG are provided with three options promoting recreational activities and beneficial recovery experiences. Online questionnaires are distributed to all participants at several time points. The primary outcome measure assesses participants' work-related rumination (Irritation Scale). A significant difference two weeks post-vacation is expected, favouring the IG. Secondary outcomes include symptoms of depression, insomnia severity, emotional exhaustion, thinking about work, recovery experiences, vacation specifics, and work and personal characteristics. To help explain the intervention's effect, explorative analyses will investigate the mediation properties of the frequency of engaging in recreational activities and the moderation properties of Holidaily users' experiences.
Discussion: If successful, workers will maintain their recovery behaviour beyond their vacation into daily working life. Findings would, therefore, provide evidence for low-intensity interventions that can be highly valuable from a public-health perspective. App-based interventions have greater reach; hence, more workers might access preventative tools to protect themselves from developing the adverse health effects linked to work-related rumination. Further studies will still be needed to investigate whether the vacation phenomenon of "lots of fun, quickly gone" can be defied and long-term benefits attained.
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper we show that Bayesian neural networks, as approximated using MC-Dropout, MC-DropConnect, or an ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics behave unexpectedly for this task, because multiple classes can be considered correct, which motivates future work. We believe our work will motivate other researchers to move away from classical and towards Bayesian neural networks.
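The MC-Dropout approximation mentioned above can be sketched in a few lines of NumPy — a toy single-layer classifier with our own naming, not the paper's model; the key idea is that dropout stays active at prediction time and the softmax outputs of many stochastic passes are averaged:

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, W, b, p=0.5, T=100):
    """Monte Carlo dropout for a toy single-layer classifier.

    Each of the T forward passes samples a fresh Bernoulli dropout mask
    over the input features; the averaged softmax approximates the
    Bayesian predictive distribution, and the spread across passes
    serves as an uncertainty estimate.
    """
    probs = np.empty((T, W.shape[1]))
    for t in range(T):
        mask = rng.random(x.shape) >= p                # drop features with prob p
        probs[t] = softmax(((x * mask) / (1 - p)) @ W + b)
    return probs.mean(axis=0), probs.std(axis=0)       # prediction and uncertainty
```

For an ambiguous face image, the per-class standard deviations returned here would stay large even when the mean probabilities favour one class, which is the behaviour the abstract argues a point-estimate network cannot express.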