H-BRS Bibliography
TSEM: Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
(2022)
Deep learning has become a one-size-fits-all solution for technical and business domains thanks to its flexibility and adaptability. It is implemented using opaque models, which unfortunately undermines the trustworthiness of the outcomes. To better understand the behavior of a system, particularly one driven by time series, a look inside a deep learning model via so-called post-hoc eXplainable Artificial Intelligence (XAI) approaches is important. There are two major types of XAI for time series data, namely model-agnostic and model-specific; a model-specific approach is considered in this work. While other approaches employ either Class Activation Mapping (CAM) or an attention mechanism, we merge the two strategies into a single system, simply called the Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series (TSEM). TSEM combines the capabilities of RNN and CNN models in such a way that RNN hidden units are employed as attention weights for the temporal axis of the CNN feature maps. The results show that TSEM outperforms XCM and is similar to STAM in terms of accuracy, while also satisfying a number of interpretability criteria, including causality, fidelity, and spatiotemporality.
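The core idea of re-weighting CNN feature maps with RNN-derived temporal attention can be sketched in a few lines of NumPy; the shapes and the softmax normalization here are illustrative assumptions rather than TSEM's exact architecture:

```python
import numpy as np

# Hypothetical shapes: T time steps, D variables, F feature maps.
T, D, F = 16, 4, 8
rng = np.random.default_rng(0)

cnn_features = rng.normal(size=(F, D, T))  # spatiotemporal CNN feature maps
rnn_hidden = rng.normal(size=(T,))         # one RNN hidden unit per time step

# Softmax over time turns the hidden states into attention weights.
attn = np.exp(rnn_hidden) / np.exp(rnn_hidden).sum()

# Re-weight the temporal axis of every feature map with the RNN attention.
weighted = cnn_features * attn[None, None, :]
```

The weighted maps can then be pooled and classified as usual, while `attn` itself serves as a temporal explanation.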
Work-related thoughts in off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of research on overcommitment (a component within the effort-reward imbalance model) and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we apply exploratory factor analysis to self-report survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we apply confirmatory factor analysis to self-report survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to quantify the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study assists researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on effort-reward imbalance and work-related rumination.
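As an illustration of the relative weight analysis used in the third step, here is a minimal NumPy sketch of Johnson's method on synthetic data (not the authors' exact procedure): predictors are replaced by their closest orthogonal counterpart, and each predictor's weight is the variance it routes to the criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = X @ np.array([0.5, 0.3, 0.0, -0.2]) + rng.normal(size=n)

# Standardize and scale so that Xs.T @ Xs is the predictor correlation matrix.
Xs = (X - X.mean(0)) / X.std(0) / np.sqrt(n)
ys = (y - y.mean()) / y.std() / np.sqrt(n)

# Johnson's relative weights: best orthogonal approximation Z of Xs via SVD.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
Z = U @ Vt                     # orthonormal counterpart of the predictors
Lam = Vt.T @ np.diag(S) @ Vt   # so that Xs = Z @ Lam
beta = Z.T @ ys                # regression of the criterion on Z

eps = (Lam ** 2) @ (beta ** 2)  # relative weight of each predictor
r2 = beta @ beta                # the weights partition the model R^2
```

Because the predictors are standardized, the relative weights `eps` sum exactly to the model R².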
The ability to discriminate between different ionic species, termed ion selectivity, is a key feature of ion channels and forms the basis for their physiological function. Members of the degenerin/epithelial sodium channel (DEG/ENaC) superfamily of trimeric ion channels are typically sodium selective, but to a surprisingly variable degree. While acid-sensing ion channels (ASICs) are weakly sodium selective (sodium:potassium ratio of around 10:1), ENaCs show a remarkably high preference for sodium over potassium (>500:1). The most obvious explanation for this discrepancy might be expected to originate from differences in the pore-lining second transmembrane segment (M2). However, these segments show a relatively high degree of sequence conservation between ASICs and ENaCs, and previous functional and structural studies could not unequivocally establish that differences in M2 alone account for the disparate degrees of ion selectivity. By contrast, surprisingly little is known about the contributions of the first transmembrane segment (M1) and the preceding pre-M1 region. In this study, we use conventional and non-canonical amino acid-based mutagenesis in combination with a variety of electrophysiological approaches to show that the pre-M1 and M1 regions of mASIC1a channels are major determinants of ion selectivity. Mutational investigations of the corresponding regions in hENaC show that they contribute less to ion selectivity, despite affecting ion conductance. In conclusion, our work supports the notion that the remarkably different degrees of sodium selectivity in ASICs and ENaCs are achieved through different mechanisms. The results further highlight how M1 and pre-M1 are likely to differentially affect pore structure in these related channels.
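Selectivity ratios like the ones quoted above are commonly estimated from reversal potentials. As a hedged illustration (not this paper's exact protocol), the bi-ionic Goldman-Hodgkin-Katz relation recovers P_Na/P_K from a measured reversal potential:

```python
import math

def na_over_k(erev_mV, na_out_mM, k_in_mM, temp_C=22.0):
    """Bi-ionic GHK estimate of P_Na/P_K from the reversal potential:
    Erev = (RT/F) * ln(P_Na*[Na]_out / (P_K*[K]_in))."""
    R, F = 8.314, 96485.0          # gas constant, Faraday constant
    T = 273.15 + temp_C
    return math.exp(erev_mV * 1e-3 * F / (R * T)) * k_in_mM / na_out_mM

# Illustrative numbers only: equal concentrations, Erev = +58 mV
# gives a weakly selective channel of roughly the ASIC order (~10:1).
ratio = na_over_k(58.0, 140.0, 140.0)
```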
A prototype of a workflow system for the submission of content to a digital object repository is presented here. It is based entirely on open-source standard components and features a service-oriented architecture. The front-end consists of Java Business Process Management (jBPM), Java Server Faces (JSF), and Java Server Pages (JSP). A Fedora repository and a MySQL database management system serve as the back-end. The communication between front-end and back-end uses a SOAP minimal binding stub. We describe the design principles and the construction of the prototype and discuss the possibilities and limitations of workflow creation by administrators. The code of the prototype is open-source and can be retrieved from the escipub project at http://sourceforge.net/ .
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.
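The combined input described above can be sketched as follows; the special tokens and the ordering of the text and triple segments follow common BERT conventions and are assumptions, not necessarily STonKGs' exact input format:

```python
def build_input(text_tokens, triple):
    """Concatenate a text segment and a KG triple into one Transformer
    input sequence (BERT-style special tokens, illustrative only)."""
    subj, rel, obj = triple
    return ["[CLS]"] + list(text_tokens) + ["[SEP]"] + [subj, rel, obj] + ["[SEP]"]

# Hypothetical text-triple pair of the kind INDRA extracts from literature.
seq = build_input(["IL6", "activates", "STAT3", "signaling"],
                  ("IL6", "increases_activity", "STAT3"))
```

Both modalities then attend to each other through the shared self-attention layers, which is what makes the joint representation multimodal.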
This work introduces a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows (with or without discontinuities). It makes use of a cell-wise representation of the simulation domain and utilizes interpolation polynomials up to fourth order to conduct the streaming step. The SLLBM solver allows for an independent time step size due to the absence of a time integrator and for the use of unusual velocity sets, like a D2Q25, which is constructed by the roots of the fifth-order Hermite polynomial. The properties of the proposed model are shown in diverse example simulations of a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
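The D2Q25 construction from Hermite roots can be reproduced with NumPy: take the five roots of the fifth-order (probabilists') Hermite polynomial in 1D and tensorize them. Whether SLLBM normalizes the lattice weights exactly this way is an assumption of this sketch:

```python
import numpy as np
from itertools import product

# 1D abscissae and weights: roots of the fifth-order Hermite polynomial.
xi, w = np.polynomial.hermite_e.hermegauss(5)

# D2Q25 velocity set as the tensor product of the 1D roots;
# lattice weights as normalized products of the 1D Gauss-Hermite weights.
velocities = np.array(list(product(xi, xi)))
weights = np.array([wa * wb for wa, wb in product(w, w)])
weights /= weights.sum()
```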
Self-supervised learning has proved to be a powerful approach to learn image representations without the need for large labeled datasets. For underwater robotics, it is of great interest to design computer vision algorithms to improve perception capabilities such as sonar image classification. Due to the confidential nature of sonar imaging and the difficulty of interpreting sonar images, it is challenging to create public large labeled sonar datasets to train supervised learning algorithms. In this work, we investigate the potential of three self-supervised learning methods (RotNet, Denoising Autoencoders, and Jigsaw) to learn high-quality sonar image representations without the need for human labels. We present pre-training and transfer learning results on real-life sonar image datasets. Our results indicate that self-supervised pre-training yields classification performance comparable to supervised pre-training in a few-shot transfer learning setup across all three methods. Code and self-supervised pre-trained models are available at https://github.com/agrija9/ssl-sonar-images
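Of the three pretext tasks, RotNet is the simplest to sketch: rotating unlabeled images yields labels for free. A minimal version, with arrays standing in for sonar patches:

```python
import numpy as np

def rotnet_batch(images):
    """RotNet pretext task: rotate each image by 0/90/180/270 degrees
    and use the rotation index as a free classification label."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

imgs = np.arange(2 * 8 * 8, dtype=float).reshape(2, 8, 8)  # stand-in patches
x, y = rotnet_batch(imgs)
```

A classifier trained to predict `y` from `x` learns orientation-sensitive features without any human annotation.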
Vietnam requires sustainable urbanization, for which city sensing is used in planning and decision-making. Large cities need portable, scalable, and inexpensive digital technology for this purpose. End-to-end air quality monitoring companies such as AirVisual and Plume Air have shown their reliability with portable devices outfitted with superior air sensors. These devices are pricey, yet homeowners use them to get local air data without evaluating causal effects. Our air quality inspection system is scalable, reasonably priced, and flexible. A minicomputer remotely monitors PMS7003 and BME280 sensor data through a microcontroller. A 5-megapixel camera module enables researchers to infer the causal relationship between traffic intensity and dust concentration. The design relies on inexpensive, commercial-grade hardware, with Azure Blob storage holding air pollution data and surrounding-area imagery and preventing the system from physically expanding. In addition, by including an air channel that replenishes and distributes temperature, the design improves ventilation and safeguards electrical components. The device allows for the analysis of the correlation between traffic and air quality data, which might aid in the establishment of sustainable urban development plans and policies.
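Reading the PMS7003 amounts to parsing its 32-byte binary frames. A hedged sketch of such a parser, with a synthetic frame for illustration (field layout per the public PMS7003 datasheet; the deployed firmware may differ):

```python
import struct

def parse_pms7003(frame: bytes):
    """Parse one 32-byte PMS7003 frame (big-endian) into atmospheric
    PM1.0/PM2.5/PM10 readings in ug/m3."""
    if len(frame) != 32 or frame[0:2] != b"\x42\x4d":
        raise ValueError("not a PMS7003 frame")
    words = struct.unpack(">15H", frame[2:])
    if sum(frame[:30]) != words[14]:
        raise ValueError("checksum mismatch")
    # words: length, 3x PM (CF=1), 3x PM (atmospheric), particle counts, ...
    return {"pm1": words[4], "pm2_5": words[5], "pm10": words[6]}

# Synthetic frame for illustration: atmospheric PM2.5 of 12 ug/m3.
head = b"\x42\x4d" + struct.pack(">14H", 28, 10, 12, 15, 11, 12, 16,
                                 0, 0, 0, 0, 0, 0, 0)
frame = head + struct.pack(">H", sum(head))
```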
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models shows that certain explanation methods fail the model and data randomization tests. However, by extending these tests to various state-of-the-art object detectors, we illustrate that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, motivating sanity checks for each class. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, and it passes the sanity checks with few problems.
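The model randomization test can be illustrated with a toy model: randomize the weights and check whether the saliency map changes. A purely illustrative sketch using gradient saliency on a linear scorer (for w·x the input gradient is w itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_saliency(weights, x):
    # For a linear scorer w.x the input gradient is w; x is unused here
    # but kept to mirror the signature of a real saliency method.
    return np.abs(weights)

x = rng.normal(size=200)
w_trained = rng.normal(size=200)
w_random = rng.normal(size=200)  # "model randomization": fresh weights

s_trained = gradient_saliency(w_trained, x)
s_random = gradient_saliency(w_random, x)

# A saliency method passes the test if the explanation changes
# substantially once the weights are randomized, i.e. low correlation.
corr = np.corrcoef(s_trained, s_random)[0, 1]
```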
Fatigue strength estimation is a costly manual material characterization process in which state-of-the-art approaches follow a standardized experiment and analysis procedure. In this paper, we examine a modular, Machine Learning-based approach for fatigue strength estimation that is likely to reduce the number of experiments and, thus, the overall experimental costs. Despite its high potential, deployment of a new approach in a real-life lab requires more than the theoretical definition and simulation. Therefore, we study the robustness of the approach against misspecification of the prior and discretization of the specified loads. We identify its applicability and its advantageous behavior over the state-of-the-art methods, potentially reducing the number of costly experiments.
In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification, and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure, we evaluate on standard benchmark sets. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset. Along with this, we also introduce the recent real-time enabled guided back-propagation visualization technique. Guided back-propagation uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of current regularization methods, and the visualization of previously hidden features are necessary in order to reduce the gap between slow-performing and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos, and pre-trained architectures have been released under an open-source license in our public repository.
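Guided back-propagation changes only the backward rule of ReLU units: a gradient passes only where both the forward activation and the incoming gradient are positive. A minimal NumPy sketch of that rule (not our full implementation):

```python
import numpy as np

def relu_forward(x):
    return np.maximum(x, 0.0)

def guided_relu_backward(grad_out, x):
    """Guided backprop: propagate a gradient only where BOTH the forward
    input and the incoming gradient are positive."""
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0, -0.5])
g = np.array([0.7, -0.2, 0.5, 0.9])
guided = guided_relu_backward(g, x)
```

Applying this rule at every ReLU during the backward pass yields the sharp, noise-free visualizations referred to above.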
Transdermal therapeutic systems (TTS) represent an up-to-date form of medication applied to the human skin, consisting of a drug-containing pressure-sensitive adhesive (PSA) and a flexible backing layer. The development of a reliable TTS requires precise knowledge of the viscoelastic tack behavior of the PSA in terms of adhesion and detachment. Tailoring of a PSA can be achieved by altering the resin content or modifying the chemical properties of the macromolecules. In this study, three different resin contents of two silicone-based PSAs (one non-amine-compatible, one amine-compatible with lower tack) were investigated with the help of the recently developed RheoTack method to characterize the retraction-speed-dependent tack behavior for various geometries of the testing rods. The obtained force-retraction displacement curves clearly depict the effect of the chemical structure as well as the resin content. Decreasing the resin content shifts the start of fibril fracture to larger deformation states and significantly enhances the stretchability of the fibrils. To compare various rod geometries precisely, the force-retraction displacement curves were normalized to account for effective contact areas. The flat and spherical rods led to completely different failure and tack behaviors. Furthermore, the adhesion formation between TTS with flexible backing layers and rods during the dwell phase happens in a different manner compared to rigid plates, in particular for flat rods, where maximum compression stresses occur at the edges and not uniformly over the cross-section. Thus, the approach following ASTM D2949 has to be reconsidered for tests of these materials.
We derive rates of convergence for limit theorems that reveal the intricate structure of the phase transitions in a mean-field version of the Blume-Emery-Griffiths model. The theorems consist of scaling limits for the total spin. The model depends on the inverse temperature β and the interaction strength K. The rates of convergence results are obtained as (β,K) converges along appropriate sequences (βn,Kn) to points belonging to various subsets of the phase diagram, which include a curve of second-order points and a tricritical point. We apply Stein's method for normal and non-normal approximation, avoiding the use of transforms and supplying bounds, such as those of Berry-Esseen quality, on the approximation error. We observe an additional phase transition phenomenon in the sense that, depending on how fast Kn and βn converge to points in various subsets of the phase diagram, different rates of convergence to one and the same limiting distribution occur.
In robot-assisted therapy for individuals with Autism Spectrum Disorder, the workload of therapists during a therapeutic session is increased if they have to control the robot manually. To allow therapists to focus on the interaction with the person instead, the robot should be more autonomous, namely it should be able to interpret the person's state and continuously adapt its actions according to their behaviour. In this paper, we develop a personalised robot behaviour model that can be used in the robot decision-making process during an activity; this behaviour model is trained with the help of a user model that has been learned from real interaction data. We use Q-learning for this task; the results demonstrate that the policy requires about 10,000 iterations to converge. We thus investigate policy transfer for improving the convergence speed and show that this is a feasible solution, although an inappropriate initial policy can lead to a suboptimal final return.
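The tabular Q-learning update at the heart of such a behaviour model can be sketched as follows; the state and action names are hypothetical stand-ins for the interaction scenario, not the paper's actual state space:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Tiny illustrative space: user state x robot action.
states, actions = ["engaged", "distracted"], ["encourage", "wait"]
Q = {s: {a: 0.0 for a in actions} for s in states}
q_update(Q, "distracted", "encourage", 1.0, "engaged")
```

Policy transfer, as investigated above, simply means initializing `Q` from a previously learned table instead of zeros.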
Grasp verification is advantageous for autonomous manipulation robots, as it provides the feedback required by higher-level planning components about successful task completion. However, a major obstacle to grasp verification is sensor selection. In this paper, we propose a vision-based grasp verification system using machine vision cameras, with the verification problem formulated as an image classification task. Machine vision cameras consist of a camera and a processing unit capable of on-board deep learning inference. Inference on this low-power hardware is done near the data source, reducing the robot's dependence on a centralized server, leading to reduced latency and improved reliability. Machine vision cameras provide deep learning inference capabilities using different neural accelerators. However, it is not clear from the documentation of these cameras what effect these neural accelerators have on performance metrics such as latency and throughput. To systematically benchmark these machine vision cameras, we propose a parameterized model generator that generates end-to-end models of Convolutional Neural Networks (CNNs). Using these generated models, we benchmark the latency and throughput of two machine vision cameras, the JeVois A33 and the Sipeed Maix Bit. Our experiments demonstrate that the selected machine vision cameras and deep learning models can robustly verify grasps with 97% per-frame accuracy.
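A parameterized model generator of the kind described can be sketched as a plain Python enumerator; the layer types and parameter ranges here are illustrative assumptions, not the generator's actual search space:

```python
from itertools import product

def generate_cnn_configs(depths=(2, 4, 8), widths=(16, 32, 64), kernel=3):
    """Hypothetical parameterized generator: enumerate end-to-end CNN
    configurations (layer count x filter width) for camera benchmarking."""
    for depth, width in product(depths, widths):
        layers = [{"type": "conv", "filters": width, "kernel": kernel}
                  for _ in range(depth)]
        layers += [{"type": "global_pool"},
                   {"type": "dense", "units": 2}]  # grasp / no-grasp
        yield {"layers": layers}

configs = list(generate_cnn_configs())
```

Each generated configuration is then compiled for a target accelerator and timed to obtain latency/throughput curves.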
In this paper we introduce the Perception for Autonomous Systems (PAZ) software library. PAZ is a hierarchical perception library that allows users to manipulate multiple levels of abstraction in accordance with their requirements or skill level. More specifically, PAZ is divided into three hierarchical levels which we refer to as pipelines, processors, and backends. These abstractions allow users to compose functions in a hierarchical modular scheme that can be applied for preprocessing, data augmentation, prediction, and postprocessing of inputs and outputs of machine learning (ML) models. PAZ uses these abstractions to build reusable training and prediction pipelines for multiple robot perception tasks such as 2D keypoint estimation, 2D object detection, 3D keypoint discovery, 6D pose estimation, emotion classification, face recognition, instance segmentation, and attention mechanisms.
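The pipeline/processor composition can be illustrated with a minimal stand-in; the real PAZ API may differ in names and details:

```python
class SequentialProcessor:
    """Minimal stand-in for a hierarchical processor pipeline in the
    spirit of PAZ: each processor is a callable applied in sequence."""

    def __init__(self, processors=None):
        self.processors = processors or []

    def add(self, processor):
        self.processors.append(processor)

    def __call__(self, x):
        for processor in self.processors:
            x = processor(x)
        return x

# Compose two toy preprocessing steps: scale to [0, 1], then center.
preprocess = SequentialProcessor([lambda img: [p / 255.0 for p in img],
                                  lambda img: [p - 0.5 for p in img]])
out = preprocess([255, 0])
```

Higher-level pipelines are then just `SequentialProcessor`s whose elements are themselves pipelines, which is what the three-level hierarchy buys.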
Urban LoRa networks promise to provide a cost-efficient and scalable communication backbone for smart cities. One core challenge in rolling out and operating these networks is radio network planning, i.e., precise predictions about possible new locations and their impact on network coverage. Path loss models aid in this task, but evaluating and comparing different models requires a sufficiently large set of high-quality received packet power samples. In this paper, we report on a corresponding large-scale measurement study covering an urban area of 200 km² over a period of 230 days using sensors deployed on garbage trucks, resulting in more than 112 thousand high-quality samples of received packet power. Using this data, we compare eleven previously proposed path loss models and additionally provide new coefficients for the Log-distance model. Our results reveal that the Log-distance model and other well-known empirical models such as Okumura or Winner+ provide reasonable estimations in an urban environment, while terrain-based models such as ITM or ITWOM have no advantage. In addition, we derive estimations of the sample size needed in similar measurement campaigns. To stimulate further research in this direction, we make all our data publicly available.
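Fitting the Log-distance model PL(d) = PL_0 + 10 n log10(d/d_0) to received-power samples is a linear least-squares problem in PL_0 and the path loss exponent n. A sketch with synthetic, noise-free samples (the coefficients are illustrative, not the paper's fitted values):

```python
import numpy as np

def fit_log_distance(d, pl, d0=1.0):
    """Least-squares fit of PL(d) = PL0 + 10*n*log10(d/d0)."""
    A = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
    (pl0, n), *_ = np.linalg.lstsq(A, pl, rcond=None)
    return pl0, n

# Synthetic samples generated from PL0 = 40 dB, exponent n = 2.7.
d = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
pl = 40.0 + 10.0 * 2.7 * np.log10(d)
pl0, n = fit_log_distance(d, pl)
```

With real measurements, `pl` would be the negated received packet power plus the known transmit power, and the residuals give the shadowing spread.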
Suppose we have n keys, n access probabilities for the keys, and n+1 access probabilities for the gaps between the keys. Let h_min(n) be the minimal height of a binary search tree for n keys. We consider the problem of constructing an optimal binary search tree with near-minimal height, i.e.\ with height h <= h_min(n) + Delta for some fixed Delta. It is shown that for any fixed Delta, optimal binary search trees with near-minimal height can be constructed in time O(n^2). This is as fast as in the unrestricted case. So far, the best known algorithms for the construction of height-restricted optimal binary search trees have running time O(L n^2), where L is the maximal permitted height. Compared to these algorithms, our algorithm is faster by a factor of at least log n, because L is lower bounded by log n.
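For reference, the classic unrestricted optimal-BST dynamic program over key probabilities p and gap probabilities q looks as follows (a plain O(n^3) textbook version; the paper's height-restricted O(n^2) algorithm is not reproduced here):

```python
def optimal_bst_cost(p, q):
    """Classic expected-cost DP for an optimal BST (textbook formulation).

    p[i] is the access probability of key i+1, q[i] that of gap i.
    Knuth's monotonicity of the optimal root restricts the inner minimum
    and yields O(n^2); this plain version searches all roots in O(n^3).
    """
    n = len(p)
    # e[i][j]: expected cost of an optimal subtree over keys i..j (1-indexed).
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 2):
        e[i][i - 1] = w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j - 1] + q[j]
            e[i][j] = min(e[i][r - 1] + e[r + 1][j] + w[i][j]
                          for r in range(i, j + 1))
    return e[1][n]

# Textbook example instance; expected search cost 2.75.
cost = optimal_bst_cost([0.15, 0.10, 0.05, 0.10, 0.20],
                        [0.05, 0.10, 0.05, 0.05, 0.05, 0.10])
```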