Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness and offer lower power consumption, lower response latencies, and higher dynamic range than frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
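As a rough illustration of the plan-adaptation idea described above, the sketch below adds a reactive coupling term to a one-dimensional dynamic motion primitive; the gains and the potential-field form of the coupling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dmp_step(y, dy, goal, tau, dt, coupling):
    """One Euler step of a 1-D dynamic motion primitive (DMP)
    transformation system with an additive avoidance coupling term.
    alpha/beta follow the common critically damped choice."""
    alpha, beta = 25.0, 25.0 / 4.0
    ddy = (alpha * (beta * (goal - y) - tau * dy) + coupling) / tau**2
    dy = dy + ddy * dt
    y = y + dy * dt
    return y, dy

def avoidance_coupling(y, obstacle, gamma=100.0):
    """Repulsive coupling that grows as the end effector nears the
    obstacle (illustrative potential-field form; in the paper this
    signal would come from decoded SNN activations)."""
    d = y - obstacle
    return gamma * np.sign(d) / (1.0 + d**2)

# Reach toward the goal while an obstacle sits near the direct path.
y, dy, goal, tau, dt = 0.0, 0.0, 1.0, 1.0, 0.001
for _ in range(2000):
    c = avoidance_coupling(y, obstacle=0.5)
    y, dy = dmp_step(y, dy, goal, tau, dt, c)
```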
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
An essential measure of autonomy in service robots designed to assist humans is adaptivity to the various contexts of human-oriented tasks. These robots may have to frequently execute the same action, but subject to subtle variations in task parameters that determine optimal behaviour. Such actions are traditionally executed by robots using pre-determined, generic motions, but a better approach could utilize robot arm maneuverability to learn and execute different trajectories that work best in each context.
In this project, we explore a robot skill acquisition procedure that allows incorporating contextual knowledge, adjusting executions according to context, and improvement through experience, as a step towards more adaptive service robots. We propose an apprenticeship learning approach to achieving context-aware action generalisation on the task of robot-to-human object hand-over. The procedure combines learning from demonstration, with which a robot learns to imitate a demonstrator’s execution of the task, and a reinforcement learning strategy, which enables subsequent experiential learning of contextualized policies, guided by information about context that is integrated into the learning process. By extending the initial, static hand-over policy to a contextually adaptive one, the robot derives and executes variants of the demonstrated action that most appropriately suit the current context. We use dynamic movement primitives (DMPs) as compact motion representations, and a model-based Contextual Relative Entropy Policy Search (C-REPS) algorithm for learning policies that can specify hand-over position, trajectory shape, and execution speed, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours.
We demonstrate the algorithm's ability to learn context-dependent hand-over positions and new trajectories, guided by suitable reward functions, and show that the current DMP implementation limits learning context-dependent execution speeds. We additionally conduct a user study in which participants assume different postures and receive an object from the robot, which executes hand-overs by either exclusively imitating a demonstrated motion or selecting hand-over positions based on learned contextual policies and adapting its motion accordingly. The results confirm the hypothesized improvements in the robot's perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
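To make the contextual-policy idea above concrete, here is a minimal sketch of a linear-Gaussian policy conditioned on a context vector, with the weighted maximum-likelihood refit that REPS-style methods perform after converting rewards into sample weights; dimensions, names, and the update details are illustrative assumptions rather than the exact C-REPS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContextualGaussianPolicy:
    """Linear-Gaussian policy pi(w | s) = N(K s + b, Sigma): the mean of
    the DMP parameter vector w is a linear function of the context s
    (e.g. receiver posture features)."""
    def __init__(self, ctx_dim, param_dim):
        self.K = np.zeros((param_dim, ctx_dim))
        self.b = np.zeros(param_dim)
        self.Sigma = np.eye(param_dim)

    def sample(self, s):
        return rng.multivariate_normal(self.K @ s + self.b, self.Sigma)

    def weighted_ml_update(self, S, W, weights):
        """Weighted maximum-likelihood refit of (K, b) from sampled
        contexts S (N, ctx_dim) and parameters W (N, param_dim), after
        the REPS step has turned rewards into sample weights; the
        covariance update is omitted for brevity."""
        X = np.hstack([S, np.ones((len(S), 1))])   # append bias column
        D = np.diag(weights)
        beta = np.linalg.solve(X.T @ D @ X, X.T @ D @ W)
        self.K, self.b = beta[:-1].T, beta[-1]
```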
YAWL (Yet Another Workflow Language) is an open source Business Process Management System, first released in 2003. YAWL grew out of a university research environment to become a unique system that has been deployed worldwide as a laboratory environment for research in Business Process Management and as a productive system in other scientific domains.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also reduced the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants rated the HeadJoystick as potentially more physically fatiguing after prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially for telepresence drones.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques for the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information which can be derived from the different sensors, including artificial neural networks and tree-based machine learning methods. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores of up to 79 %, using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances. We found that this dataset attribute negatively affects the models' performance for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
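As a hedged sketch of the kind of multi-label setup described above (assuming PyTorch/torchvision; the backbone depth, optimizer, and hyperparameters are illustrative, not the benchmark's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Multi-label setup: 20 tree species, independent sigmoid outputs
# instead of a single softmax over classes.
NUM_CLASSES = 20
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.BCEWithLogitsLoss()   # one binary task per species
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(rgb_batch, label_batch):
    """rgb_batch: (B, 3, H, W) aerial patches; label_batch: (B, 20) multi-hot."""
    optimizer.zero_grad()
    logits = model(rgb_batch)
    loss = criterion(logits, label_batch.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```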
A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on the actions of manipulators that involve releasing an object. The proposed approach, which is triggered after detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e. a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object.
Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when it is released in the state corresponding to the example. The description vocabulary is also used to label the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest a releasing state for the object that avoids the occurrence of unknown external faults.
The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, our experiments test its prediction ability in addition to analysing the results of the proposed approach. The results show that, under the circumstances peculiar to our approach, the N-Bins algorithm achieves reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of an object in order to avoid unknown external faults. In this work we use the OpenRAVE simulation environment, which uses the ODE physics engine to simulate the dynamics of rigid bodies.
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities into them. In this work we use naive physics to enable robots to reason about external faults. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on how physics novices reason about everyday physical phenomena. We also formalize some definitions of object properties in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular. Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept of the usefulness of naive physics for external fault reasoning in robotics.
A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem based on it. Finding the globally optimal solution is an NP-complete problem; in order to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive graph partitions by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).
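A minimal sketch of the spectral relaxation step described above, assuming an affinity matrix has already been built from the point-cloud graph; the unnormalized Laplacian and the sign-based split are one common variant, not necessarily the thesis's exact algorithm.

```python
import numpy as np

def spectral_bipartition(W):
    """Split a graph into two clusters using the Fiedler vector of the
    unnormalized graph Laplacian L = D - W; W is a symmetric (n, n)
    affinity matrix built from point-cloud adjacency."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
    fiedler = eigvecs[:, 1]                # 2nd-smallest eigenvalue's vector
    return fiedler >= 0                    # boolean cluster assignment

# Two triangles joined by one weak edge split cleanly into two clusters.
W = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
print(spectral_bipartition(W))
```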
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs), such as profit and loss, and the financial quantities linked to them. The quantity linked to a KPI in a table might not match the quantity of a similarly described KPI in the text. Auditors spend substantial time manually auditing such discrepancies; this process is called consistency checking. In contrast to existing work, this paper attempts to automate this task with transformer-based models. Furthermore, for consistency checking it is essential that the table KPIs' embeddings encode both the semantic knowledge of the KPIs and the structural knowledge of the table. This paper therefore proposes a pipeline that uses a tabular model to obtain the table KPIs' embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether the KPIs are identical. The pipeline is evaluated on financial documents in German, and a comparative analysis of the quality of the cell embeddings from three tabular models is also presented. In the evaluation, the experiment that used English-translated text and table KPIs and the Tabbie model to generate table KPI embeddings achieved an accuracy of 72.81% on the consistency checking task, outperforming the benchmark and the other tabular models.
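As a hedged illustration of such a pipeline's final matching step (the threshold and the tuple layout are assumptions, not the paper's implementation):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def check_consistency(table_kpis, text_kpis, sim_threshold=0.8):
    """Pair each table KPI with the most similar text KPI by embedding
    cosine similarity, then compare the linked quantities.
    table_kpis / text_kpis: lists of (embedding, quantity) tuples."""
    findings = []
    for t_emb, t_qty in table_kpis:
        best_emb, best_qty = max(text_kpis, key=lambda x: cosine(t_emb, x[0]))
        if cosine(t_emb, best_emb) >= sim_threshold:
            verdict = "consistent" if np.isclose(t_qty, best_qty) else "inconsistent"
            findings.append((verdict, t_qty, best_qty))
    return findings
```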
Recent work in image captioning and scene segmentation has shown significant results in the context of scene understanding. However, most of these developments have not been extrapolated to research areas such as robotics. In this work we review the current state-of-the-art models, datasets and metrics in image captioning and scene segmentation. We introduce an anomaly detection dataset for the purpose of robotic applications, and we present a deep learning architecture that describes and classifies anomalous situations. We report a METEOR score of 16.2 and a classification accuracy of 97 %.
AErOmAt Abschlussbericht
(2020)
The AErOmAt project aimed to develop new methods for avoiding a substantial share of the aerodynamic simulations in computationally expensive optimization domains. In doing so, Hochschule Bonn-Rhein-Sieg (H-BRS) has made a contribution to energy-efficiency research that is both societally relevant and commercially exploitable. The project also led to a faster integration of the newly appointed applicants into the existing research structures.
XPERSIF: a software integration framework & architecture for robotic learning by experimentation
(2008)
The integration of independently-developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independent of the integration process. The integration of individual applications into a fully-functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts. One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, thus allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder that uses the attention mechanism on a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
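A minimal sketch of the single-stream input construction described above, concatenating text tokens with KG-triple tokens into one cross-encoder sequence; the special tokens and segment encoding are illustrative assumptions, not the actual STonKGs vocabulary.

```python
def build_text_triple_sequence(text_tokens, triple, max_len=128):
    """Concatenate a text token sequence with a (subject, relation,
    object) KG triple into a single cross-encoder input, mirroring the
    single-stream multimodal idea described above."""
    subj, rel, obj = triple
    seq = (["[CLS]"] + text_tokens + ["[SEP]"]
           + [subj, rel, obj] + ["[SEP]"])
    # 0 = text segment, 1 = KG segment, so the model can tell modalities apart
    seg = [0] * (len(text_tokens) + 2) + [1] * 4
    return seq[:max_len], seg[:max_len]

seq, seg = build_text_triple_sequence(
    ["il6", "activates", "stat3", "signaling"],
    ("IL6", "increases_activity", "STAT3"))
```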
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms all baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of such approaches, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs presents an extension to our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, resulting in improved F1 scores by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
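A minimal sketch of the decoding loop described above; `parser.valid_next` stands in for the LR parser's set of grammar-legal continuations and is an assumed helper, as is the mapping from tokens to model scores.

```python
def constrained_decode(step_logits_fn, parser, max_steps=100):
    """Greedy SEQ2SEQ decoding in which an LR parser masks out tokens
    that would make the partial output syntactically invalid.
    step_logits_fn(prefix) -> mapping from token to model score."""
    prefix = []
    for _ in range(max_steps):
        logits = step_logits_fn(prefix)          # model scores per token
        allowed = parser.valid_next(prefix)      # grammar constraint
        best = max(allowed, key=lambda tok: logits[tok])
        prefix.append(best)
        if best == "<EOS>":
            break
    return prefix

class ToyParser:
    """Stands in for an LR parser over a tiny illustrative grammar."""
    grammar = {(): {"SELECT"},
               ("SELECT",): {"city", "state"},
               ("SELECT", "city"): {"<EOS>"},
               ("SELECT", "state"): {"<EOS>"}}
    def valid_next(self, prefix):
        return self.grammar[tuple(prefix)]

scores = {"SELECT": 0.9, "city": 0.7, "state": 0.6, "FROM": 0.8, "<EOS>": 0.5}
print(constrained_decode(lambda prefix: scores, ToyParser()))
# -> ['SELECT', 'city', '<EOS>'] although 'FROM' scores higher than 'city'
```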
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) that allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modelling traces and allows several indicators to be calculated at the same time. Also, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can enable better sharing of trace databases between the various research institutions in the field of CEHL.
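A minimal sketch of the object-model idea, under the assumption (stated above) that calculated indicators are themselves traces; all class and field names are illustrative, not the paper's actual model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Trace:
    """A collected learning trace (actor, action, timestamp)."""
    actor: str
    action: str
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class Indicator(Trace):
    """A calculated indicator modelled as a subclass of Trace, so it can
    itself feed the calculation of further indicators (the inheritance-
    based reuse described above)."""
    value: float = 0.0
    sources: list = field(default_factory=list)   # traces it was derived from
```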
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
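A minimal sketch of such a distance-based speed control method; the thresholds and the linear ramp are illustrative assumptions, not the exact mechanism evaluated in the studies.

```python
def speed_limit(distance_m, d_stop=0.4, d_free=2.0, v_max=1.2):
    """Scale the telepresence robot's maximum speed with the distance to
    the closest obstacle: full speed beyond d_free, standstill at d_stop,
    linear in between (all thresholds illustrative)."""
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_free:
        return v_max
    return v_max * (distance_m - d_stop) / (d_free - d_stop)
```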
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
RoCKIn@Work was focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a bigger user story painting the picture of a scaled-down real-world factory scenario. Elements used to build the testbed were chosen from common materials in modern manufacturing environments. Networked devices, machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure the gathered data could be automatically evaluated. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility in their own laboratory research. Tasks and functionality benchmarks are explained in detail, starting with their use case in industry, further detailing their execution and providing information on scoring and ranking mechanisms for the specific benchmark.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Choosing the best view from multi-camera streams poses the problem of which objective metrics should be considered. Existing works on view selection lack consensus about which metrics should be considered to select the best view. The literature on view selection describes diverse possible metrics, and strategies motivated by information theory, instructional design, or aesthetics each fail to incorporate all approaches. In this work, we postulate a strategy incorporating information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, such as in 3D rendering. We adapted a similar measure, known as viewpoint entropy, to real-world 2D images. Additionally, we incorporated similarity penalization to get a more accurate measure of the entropy of a view, which is one of the metrics for best view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. The limitation of our chosen scenarios is that they do not include collaborative training and solely feature a single trainer. To incorporate instructional design considerations, we included the trainer's body pose, face, face when instructing, and hand visibility as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are taken into account to produce a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate these metrics. Furthermore, the responses from the online study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) value of 0.912, which shows good performance with respect to matching user choices.
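A minimal sketch of viewpoint entropy adapted to 2-D frames, computed here over relative region areas; the region segmentation itself and the similarity penalization are omitted, so this is an illustration rather than the paper's full measure.

```python
import numpy as np

def viewpoint_entropy(region_areas):
    """Shannon entropy over the relative areas of visible regions in a
    2-D frame, adapted from 3-D viewpoint entropy: views showing many
    regions at balanced sizes score higher."""
    a = np.asarray(region_areas, dtype=float)
    p = a / a.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A view with four equally sized regions beats one dominated by a single region.
print(viewpoint_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
print(viewpoint_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```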
This thesis work presents the implementation and validation of image processing problems in hardware to estimate the performance and precision gain. It compares the implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the development costs are an important aspect of the validation. The analysis of the flexibility and extensibility that can be achieved by a modular implementation of the FPGA design was another major aspect. This work is based upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group, which is located at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the created light dots on the projection surface of the immersive environment. Having the center points of those light dots allows the estimation of the user's position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real-time. Therefore, one major target for the implementation was to reach a processing speed of more than 30 frames per second. This allows the system to give feedback to the user with a response time faster than human visual perception. One problem that comes with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a pixel might cover an area of several square centimeters. A precision error of only a few pixels might therefore lead to an offset of several centimeters in the estimated user's position. In this research work, a detection and tracking system for BLOBs has been developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding, and for sub-pixel precision of the computed BLOB center points, have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation has been realized. The system can transmit the detection or tracking results on two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs.
The implementation of the tracking approach in hardware required much more effort than the software solution. In this case, the design of high-level problems in hardware is more expensive than the software implementation. The search and match steps of the tracking approach could be realized more efficiently and reliably in software. The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.
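For illustration, a minimal software sketch of the run-length encoding idea used for compressing the pixel stream; the (value, count) layout is an assumption, and the thesis implements this as an FPGA module rather than in software.

```python
def run_length_encode(row):
    """Run-length encode one binary image row as [value, count] pairs,
    the kind of compression used to cut the pixel-stream bandwidth."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

print(run_length_encode([0, 0, 1, 1, 1, 0]))  # [[0, 2], [1, 3], [0, 1]]
```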
Bond graph modelling was devised by Professor Paynter at the Massachusetts Institute of Technology in 1959 and subsequently developed into a methodology for modelling multidisciplinary systems at a time when nobody was speaking of object-oriented modelling. On the other hand, so-called object-oriented modelling has become increasingly popular during the last few years. By relating the characteristics of both approaches, it is shown that bond graph modelling, although much older, may be viewed as a special form of object-oriented modelling. For that purpose the new object-oriented modelling language Modelica is used as a working language which aims at supporting multiple formalisms. Although it turns out that bond graph models can be described rather easily, it is obvious that Modelica started from generalized networks and was not designed to support bond graphs. The description of bond graph models in Modelica is illustrated by means of a hydraulic drive. Since VHDL-AMS, an important language standardized and supported by the IEEE, has been extended to support the modelling of non-electrical systems as well, it is briefly investigated whether it can be used for the description of bond graphs. It turns out that currently it does not seem to be suitable.
Multidisciplinary systems are described most suitably by bond graphs. In order to determine unnormalized frequency domain sensitivities in symbolic form, this paper proposes to construct in a systematic manner a bond graph from another bond graph, which is called the associated incremental bond graph in this paper. Contrary to other approaches reported in the literature the variables at the bonds of the incremental bond graph are not sensitivities but variations (incremental changes) in the power variables from their nominal values due to parameter changes. Thus their product is power. For linear elements their corresponding model in the incremental bond graph also has a linear characteristic. By deriving the system equations in symbolic state space form from the incremental bond graph in the same way as they are derived from the initial bond graph, the sensitivity matrix of the system can be set up in symbolic form. Its entries are transfer functions depending on the nominal parameter values and on the nominal states and the inputs of the original model. The sensitivities can be determined automatically by the bond graph preprocessor CAMP-G and the widely used program MATLAB together with the Symbolic Toolbox for symbolic mathematical calculation. No particular program is needed for the approach proposed. The initial bond graph model may be non-linear and may contain controlled sources and multiport elements. In that case the sensitivity model is linear time variant and must be solved in the time domain. The rationale and the generality of the proposed approach are presented. For illustration purposes a mechatronic example system, a load positioned by a constant-excitation d.c. motor, is presented and sensitivities are determined in symbolic form by means of CAMP-G/MATLAB.
In this paper, residual sinks are used in bond graph model-based quantitative fault detection for the coupling of a model of a faultless process engineering system to a bond graph model of the faulty system. In this way, integral causality can be used as the preferred computational causality in both models. There is no need for numerical differentiation. Furthermore, unknown variables do not need to be eliminated from power continuity equations in order to obtain analytical redundancy relations (ARRs) in symbolic form. Residuals indicating faults are computed numerically as components of a descriptor vector of a differential algebraic equation system derived from the coupled bond graphs. The presented bond graph approach especially aims at models with non-linearities that make it cumbersome or even impossible to derive ARRs from model equations by elimination of unknown variables. For illustration, the approach is applied to a non-controlled as well as to a controlled hydraulic two-tank system. Finally, it is shown that not only the numerical computation of residuals but also the simultaneous numerical computation of their sensitivities with respect to a parameter can be supported by bond graph modelling.
For the case when the abstraction of instantaneous state transitions is adopted, this paper proposes to start fault detection and isolation in an engineering system from a single time-invariant causality bond graph representation of a hybrid model. To that end, the paper picks up on a long-known proposal to model switching devices by a transformer modulated by a Boolean variable and a resistor in fixed conductance causality accounting for its ON resistance. Bond graph representations of hybrid system models developed in this way have been used so far mainly for the purpose of simulation. The paper shows that they can well constitute an approach to the bond-graph-based quantitative fault detection and isolation of hybrid models. Advantages are that the standard sequential causality assignment procedure can be used without modification. A single set of analytical redundancy relations valid for all physically feasible system modes can be (automatically) derived from the bond graph. Stiff model equations due to small values of the ON resistance in the switch model may be avoided by symbolic reformulation of equations and letting the ON resistance of some switches tend to zero, turning them into ideal switches.
First, for two examples considered in the literature, it is shown that the approach proposed in this paper can produce the same analytical redundancy relations as were obtained from a hybrid bond graph with controlled junctions and the use of a sequential causality assignment procedure designed especially for fault detection and isolation purposes. Moreover, the usefulness of the proposed approach is illustrated in two case studies by its application to standard switching circuits extensively used in power electronic systems and by simulation of some fault scenarios. The approach, however, is not confined to the fault detection and isolation of such systems. Analytically validated simulation results obtained by means of the program Scilab give confidence in the approach.
A bond graph representation of switching devices known for a long time has been a modulated transformer with a modulus b(t) ∈ {0, 1} ∀ t ≥ 0 in conjunction with a resistor R:Ron accounting for the ON-resistance of a switch considered non-ideal. Besides other representations, this simple model has been used in bond graphs for simulation of the dynamic behaviour of hybrid systems. A previous article of the author has proposed to use the transformer–resistor pair in bond graphs for fault diagnosis in hybrid systems. Advantages are a unique bond graph for all system modes, the application of the unmodified standard Sequential Causality Assignment Procedure, fixed computational causalities and the derivation of analytical redundancy relations incorporating 'Boolean' transformer moduli so that they hold for all system modes. Switches temporarily connect and disconnect model parts. As a result, some independent storage elements may temporarily become dependent, so that the number of state variables is not time-invariant. This article addresses this problem in the context of modelling and simulation of fault scenarios in hybrid systems. In order to keep time-invariant preferred integral causality at storage ports, residual sinks previously introduced by the author are used. When two storage elements become dependent at a switching time instance ts, a residual sink is activated. It enforces that the outputs of two dependent storage elements become immediately equal by imposing the conjugate power variable of appropriate value on their inputs. The approach is illustrated by the bond graph modelling and simulation of some fault scenarios in a standard three-phase switched power inverter supplying power into an RL-load in a delta configuration. A well-developed approach to model-based fault detection and isolation is to evaluate the residuals of analytical redundancy relations. In this article, analytical redundancy relation residuals have been computed numerically by coupling a bond graph of the faulty system to one of the non-faulty system by means of residual sinks. The presented approach is not confined to power electronic systems but can be used for hybrid systems in other domains as well. In further work, the RL-load may be replaced by a bond graph model of an alternating current motor in order to study the effect of switch failures in the power inverter on the dynamic behaviour of the motor.
Failure prognosis builds on continuous data acquisition and processing and on fault diagnosis; it is an essential part of predictive maintenance of smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by the evaluation of equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently with the constant monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeated projection of the parameter degradation into the future based on the values in a sliding time window. Existing software can be used to determine the best-fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognosis themselves.
Analytical redundancy relations are fundamental in model-based fault detection and isolation. Their numerical evaluation yields a residual that may serve as a fault indicator. Considering switching linear time-invariant system models that use ideal switches, it is shown that analytical redundancy relations can be systematically deduced from a diagnostic bond graph with fixed causalities that hold for all modes of operation. Moreover, for a faultless system, the presented bond graph-based approach makes it possible to deduce a unique implicit state equation with coefficients that are functions of the discrete switch states. Devices or phenomena with fast state transitions, for example, electronic diodes and transistors, clutches, or hard mechanical stops, are often represented by ideal switches, which give rise to variable causalities. However, in the presented approach, fixed causalities are assigned only once to a diagnostic bond graph. That is, causal strokes at switch ports in the diagnostic bond graph reflect only the switch-state configuration in a specific system mode. The actual discrete switch states are implicitly taken into account by the discrete values of the switch moduli. The presented approach starts from a diagnostic bond graph with fixed causalities and from a partitioning of the bond graph junction structure and systematically deduces a set of equations that determines the wanted residuals. Elimination steps result in analytical redundancy relations in which the states of the storage elements and the outputs of the ideal switches are unknowns. For the latter two kinds of unknowns, the approach produces an implicit differential-algebraic equation system. For illustration of the general matrix-based approach, an electromechanical system and two small electronic circuits are considered. Their equations are directly derived from a diagnostic bond graph by following causal paths and are reformulated so that they conform with the matrix equations obtained by the formal approach based on a partitioning of the bond graph junction structure. For one of the three mode-switching examples, a fault scenario has been simulated.
Bond graph software can simulate bond graph models without the user needing to manually derive equations. This offers the power to model larger and more complex systems than in the past. Multibond graphs (those with vector bonds) offer a compact model which further eases handling multibody systems. Although multibond graphs can be simulated successfully, the use of vector bonds can present difficulties. In addition, most qualitative, bond graph–based exploitation relies on the use of scalar bonds. This article discusses the main methods for simulating bond graphs of multibody systems, using a graphical software platform. The transformation between models with vector and scalar bonds is presented. The methods are then compared with respect to both time and accuracy, through simulation of two benchmark models. This article is a tutorial on the existing methods for simulating three-dimensional rigid and holonomic multibody systems using bond graphs and discusses the difficulties encountered. It then proposes and adapts methods for simulating this type of system directly from its bond graph within a software package. The value of this study is in giving practical guidance to modellers, so that they can implement the adapted method in software.
Business process infrastructures like BPMS (Business Process Management Systems) and WfMS (Workflow Management Systems) traditionally focus on the automation of processes predefined at design time. This approach is well suited for routine tasks which are processed repeatedly and which are described by a predefined control flow. In contrast, knowledge-intensive work is more goal- and data-driven and less control-flow oriented. Knowledge workers need the flexibility to decide dynamically at run-time, based on current context information, on the best next process step to achieve a given goal. Obviously, in most practical scenarios, these decisions are complex and cannot be anticipated and modeled completely in a predefined process model. Therefore, adaptive and dynamic process management techniques are necessary to augment the control-flow oriented part of process management (which is still a need also for knowledge workers) with flexible, context-dependent, goal-oriented support.
Students' actual courses of study frequently deviate from the officially planned curriculum. When planning and further developing degree programmes and curricula with the aim of improving study success, those responsible often lack insight into actual study-path patterns and into which patterns are typically successful or less successful. Process mining techniques can help create more transparency in the analysis of study paths and thus support the detection of typical study-path patterns, the conformance checking of concrete study paths against the prescribed curriculum, and a targeted improvement of the curriculum.
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
The optimal goal for a logistics warehouse is high utilization of its transport system. This raises the question of how to select the orders that are processed concurrently within the warehouse without causing congestion, blockages, or overload. This selection process is known as path packing. This master's thesis studies path packing at the graph-theoretic level and compares several greedy heuristics, an optimal solution based on linear programming, and a combined approach. The approaches are evaluated using measured runtimes and utilization figures on variously randomized test data.
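A minimal sketch of one greedy strategy of the kind the thesis compares, assuming networkx and treating orders as source-destination pairs whose routes must stay edge-disjoint; the acceptance rule is illustrative, not the thesis's exact heuristic.

```python
import networkx as nx

def greedy_path_packing(graph, orders):
    """Accept transport orders one by one if the shortest path of an
    order shares no edge with the paths accepted so far (an illustrative
    greedy heuristic for selecting concurrently executable orders)."""
    used_edges = set()
    accepted = []
    for src, dst in orders:
        path = nx.shortest_path(graph, src, dst)
        edges = {frozenset(e) for e in zip(path, path[1:])}
        if edges.isdisjoint(used_edges):
            used_edges |= edges
            accepted.append((src, dst, path))
    return accepted
```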
Das Cutting sticks-Problem ist in seiner allgemeinen Formulierung ein NP-vollständiges Problem mit Anwendungspotenzialen im Bereich der Logistik. Unter der Annahme, dass P ungleich NP (P != NP) ist, existieren keine effizienten, d.h. polynomiellen Algorithmen zur Lösung des allgemeinen Problems.
This paper presents approaches with which certain instances of the problem can be computed efficiently. Parameters that are important for the computation are characterized and their interrelations are analyzed.
The cutting sticks problem is an NP-complete problem with potential applications in logistics. Basic definitions for treating the problem as well as previous approaches to solving it are reviewed and complemented by several new results. The focus is on ideas for an algorithmic solution of the problem and of its variants.
In its general formulation, the cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P != NP, no efficient, i.e., polynomial-time, algorithms exist for solving the general problem.
This paper gives efficient solutions for a number of instances.
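For concreteness, the following Python sketch checks one common formulation of the problem (an assumption here, since the abstracts above do not restate the definition): given stick lengths and target piece lengths with equal sums, decide whether the sticks can be cut so that exactly the target pieces result; equivalently, whether the pieces can be partitioned into groups whose sums equal the stick lengths. The check is exponential-time backtracking, consistent with NP-completeness.

```python
# Brute-force feasibility check for an assumed formulation of the
# cutting sticks problem: partition `pieces` into groups whose sums
# equal the entries of `sticks`.
def cuttable(sticks, pieces):
    if sum(sticks) != sum(pieces):
        return False
    pieces = sorted(pieces, reverse=True)
    remaining = list(sticks)

    def place(i):
        if i == len(pieces):
            return True
        seen = set()
        for j, cap in enumerate(remaining):
            if pieces[i] <= cap and cap not in seen:
                seen.add(cap)          # skip symmetric choices
                remaining[j] -= pieces[i]
                if place(i + 1):
                    return True
                remaining[j] = cap
        return False

    return place(0)

print(cuttable([7, 5], [4, 3, 3, 2]))  # True: 7 = 4 + 3, 5 = 3 + 2
print(cuttable([7, 5], [6, 6]))        # False
```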
Less is Often More: Header Whitelisting as Semantic Gap Mitigation in HTTP-Based Software Systems
(2021)
The web is the most widespread digital system in the world and is used for many crucial applications. This makes web application security extremely important and, although there are already many security measures, new vulnerabilities are constantly being discovered. One reason for some of the recent discoveries lies in the presence of intermediate systems (e.g., caches, message routers, and load balancers) on the way between a client and a web application server. The implementations of such intermediaries may interpret HTTP messages differently, which leads to a semantically different understanding of the same message. This so-called semantic gap can cause weaknesses in the entire HTTP message processing chain.
In this paper we introduce the header whitelisting (HWL) approach to address the semantic gap in HTTP message processing pipelines. The basic idea is to normalize and reduce an HTTP request header to the minimum required fields using a whitelist before processing it in an intermediary or on the server, and to then restore the original request for the next hop. Our results show that HWL can avoid misinterpretations of HTTP messages in the different components and thus prevent many attacks rooted in a semantic gap, including request smuggling, cache poisoning, and authentication bypass.
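A minimal Python sketch of the HWL idea follows; the whitelist entries are illustrative, not the paper's normative field set.

```python
# Minimal sketch of header whitelisting: normalize header names, drop
# everything outside an allowed minimum before the message is interpreted,
# and keep the original request so it can be restored for the next hop.
ALLOWED = {"host", "content-length", "content-type", "authorization"}

def whitelist_headers(headers):
    """Return (reduced, original) with names normalized to lower case."""
    normalized = {k.strip().lower(): v.strip() for k, v in headers.items()}
    reduced = {k: v for k, v in normalized.items() if k in ALLOWED}
    return reduced, headers

request = {
    "Host": "example.org",
    "Content-Length": "13",
    "X-Evil-Duplicate": "smuggled",   # dropped before interpretation
}
reduced, original = whitelist_headers(request)
print(reduced)   # what intermediary/server logic gets to see
print(original)  # restored verbatim for the next hop
```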
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open-access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance, as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m⁻², respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces forecast performance.
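The two-stage structure can be sketched as follows; the layer sizes and the recurrent forecasting head below are invented for illustration (the paper builds on an ImageNet pre-trained network), so this is a shape sketch rather than the published architecture.

```python
# Hedged sketch of the two-stage idea: a CNN backbone regresses irradiance
# from a single all-sky image; for forecasting, backbone features of an
# image sequence feed a recurrent head.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)   # stage 1: current GHI estimate

    def forward(self, x):
        return self.head(self.features(x))

class Forecaster(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.rnn = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, 1)    # stage 2: GHI at the forecast horizon

    def forward(self, seq):            # seq: (batch, time, 3, H, W)
        b, t = seq.shape[:2]
        f = self.backbone.features(seq.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(f)
        return self.out(h[:, -1])

pred = Forecaster(Backbone())(torch.randn(2, 4, 3, 64, 64))
print(pred.shape)  # torch.Size([2, 1])
```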
The accurate forecasting of solar radiation plays an important role in predictive control applications for energy systems with a high share of photovoltaic (PV) energy. Especially off-grid microgrid applications using predictive control can benefit from forecasts with a high temporal resolution to address sudden fluctuations of PV power. However, cloud formation processes and movements are subject to ongoing research. For nowcasting applications, all-sky imagers (ASI) are used to provide appropriate forecasts for the aforementioned applications. Recent research aims to achieve these forecasts via deep learning approaches, either as an image segmentation task that generates a DNI forecast through a cloud-vectoring approach and translates the DNI to a GHI with ground-based measurements (Fabel et al., 2022; Nouri et al., 2021), or as an end-to-end regression task that generates a GHI forecast directly from the images (Paletta et al., 2021; Yang et al., 2021). While end-to-end regression might be the more attractive approach for off-grid scenarios, the literature reports increased performance compared to smart persistence but does not show satisfactory forecasting patterns (Paletta et al., 2021). This work takes a step back and investigates the possibility of translating ASI images to current GHI in order to deploy the neural network as a feature extractor. An ImageNet pre-trained deep learning model is used to achieve this translation on an openly available dataset from the University of California San Diego (Pedro et al., 2019). The images and measurements were collected in Folsom, California. Results show that the neural network can successfully translate ASI images to GHI for a variety of cloud situations without the need for any external variables. Extending the neural network to a forecasting task also shows promising forecasting patterns, which indicates that the neural network extracts both temporal and instantaneous features from the images to generate GHI forecasts.
PURPOSE
Cervical cancer (CC) is caused by a persistent high-risk human papillomavirus (hrHPV) infection. The cervico-vaginal microbiome may influence the development of (pre)cancer lesions. The aims of the study were (i) to evaluate the new CC screening program in Germany with respect to the detection of high-grade CC precursor lesions, and (ii) to elucidate the role of the cervico-vaginal microbiome and its potential impact on cervical dysplasia.
METHODS
The microbiome of 310 patients referred to colposcopy was determined by amplicon sequencing and correlated with clinicopathological parameters.
RESULTS
Most patients were referred for colposcopy due to a positive hrHPV result in two consecutive years combined with a normal PAP smear. In 2.1% of these cases, a CIN III lesion was detected. There was a significant positive association between the PAP stage and Lactobacillus vaginalis colonization, and between the severity of CC precursor lesions and Ureaplasma parvum.
CONCLUSION
In our cohort, the new cervical cancer screening program resulted in a low rate of additionally detected CIN III. It remains questionable whether these cases were merely identified earlier through the additional HPV testing, before the appearance of cytological abnormalities, or whether the new screening program will truly increase the detection rate of CIN III in the long run. Colonization with U. parvum was associated with histological dysplastic lesions. Whether targeted therapy of this pathogen or optimization of the microbiome prevents dysplasia remains speculative.
Contextual information is widely considered for NLP and knowledge discovery in the life sciences, since it strongly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store this data for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multi-step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data and is enriched with text mining data and domain-specific language data using the Biological Expression Language. Here, context is a more general concept than annotations. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. Here, we demonstrate how our data model is able to support the understanding and interpretation of biomedical data. We present several real-world use cases that utilize our massive, generated knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in the context of biologically relevant information using SCAIView.
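To give a flavor of what such context queries against a labeled property graph look like, here is a hedged example using the official neo4j Python driver; the node labels, relationship types, property names, and credentials are hypothetical stand-ins, not the paper's data model.

```python
# Hedged sketch of a context query on a labeled property graph.
# Schema (Document, Gene, Context, MENTIONS, HAS_CONTEXT) is illustrative.
from neo4j import GraphDatabase

query = """
MATCH (d:Document)-[:MENTIONS]->(g:Gene {symbol: $symbol}),
      (d)-[:HAS_CONTEXT]->(c:Context {type: 'disease'})
RETURN c.name AS context, count(d) AS documents
ORDER BY documents DESC LIMIT 10
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))
with driver.session() as session:
    # Top disease contexts in which a given gene is mentioned.
    for record in session.run(query, symbol="MAPT"):
        print(record["context"], record["documents"])
driver.close()
```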
In the past decade, computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies will be introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. In order to reach the goal of individualising the model, a chain of tools including medical imaging, image acquisition and processing, mesh generation, material modelling and finite element simulation (possibly on parallel computer architectures) becomes necessary. The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but only a few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply-cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include offline and semi-online allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
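To make the auction-based family concrete, the following toy Python sketch allocates tasks via repeated single-item auctions in the spirit of SSI: each round, every robot bids its marginal cost for every open task and the cheapest (robot, task) pair wins. The cost model here is plain one-dimensional distance; real TeSSI/TeSSIduo bids additionally encode temporal constraints, which this sketch omits.

```python
# Toy sequential single-item auction for task allocation.
def allocate(robots, tasks, cost):
    """robots/tasks: ids; cost(robot, task, assigned) -> marginal cost."""
    assignment = {r: [] for r in robots}
    open_tasks = set(tasks)
    while open_tasks:
        r, t = min(
            ((r, t) for r in robots for t in open_tasks),
            key=lambda rt: cost(rt[0], rt[1], assignment[rt[0]]),
        )
        assignment[r].append(t)
        open_tasks.discard(t)
    return assignment

# Hypothetical 1-D positions of robots and task locations.
positions = {"r1": 0.0, "r2": 10.0, "t1": 2.0, "t2": 9.0, "t3": 4.0}

def travel_cost(robot, task, assigned):
    start = positions[assigned[-1]] if assigned else positions[robot]
    return abs(positions[task] - start)

print(allocate(["r1", "r2"], ["t1", "t2", "t3"], travel_cost))
```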
This paper introduces a random number generator (RNG) based on the avalanche noise of two diodes. The true random number generator (TRNG) derives true random numbers from the electronic noise produced by the two avalanche diodes. The amplified outputs of the diodes are sampled and digitized. The difference between the two concurrently sampled and digitized outputs is calculated and used to select a seed and to drive a pseudo-random number generator (PRNG). The PRNG is an xorshift generator that generates 1024 bits in each cycle. Each sequence of 1024 bits is moderately modified and then output. The TRNG delivers the next seed and the next cycle begins. The statistical behavior of the generator is analyzed and presented.
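A hedged sketch of the hybrid scheme follows: the diode noise source is stubbed with os.urandom, the xorshift variant is Marsaglia's classic 64-bit shifts, and the paper's exact post-processing of each 1024-bit block is not reproduced.

```python
# Sketch: a noise-derived difference seeds a 64-bit xorshift PRNG that
# then emits one 1024-bit block per cycle, after which a fresh seed is drawn.
import os

MASK64 = (1 << 64) - 1

def xorshift64(state):
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

def sample_noise_difference():
    # Stand-in for |diode_a - diode_b| from two digitized noise channels.
    a = int.from_bytes(os.urandom(8), "big")
    b = int.from_bytes(os.urandom(8), "big")
    return (a - b) & MASK64 or 1       # xorshift state must be non-zero

def block_1024():
    state = sample_noise_difference()
    bits = 0
    for _ in range(16):                # 16 x 64 bits = 1024 bits
        state = xorshift64(state)
        bits = (bits << 64) | state
    return bits

print(f"{block_1024():0256x}")         # one 1024-bit block as hex
```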
The perception of perceptual upright (PU) varies between contexts and across individuals, depending on how various gravity-related and body-based cues are weighted. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity level of about 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically considered in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and to confirm the hypothesis above. It could be shown that the centripetal force acting along the body's long axis on a rotating centrifuge is just as effective as standing under normal gravity in evoking the sense of perceptual upright. The data obtained further indicate that a gravity field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which comprises many low-power devices, often powered by a battery. These devices mainly communicate with their manufacturer's back end and transmit personal data and secrets such as passwords.
Mebendazole Mediates Proteasomal Degradation of GLI Transcription Factors in Acute Myeloid Leukemia
(2021)
The prognosis of elderly AML patients is still poor due to chemotherapy resistance. The Hedgehog (HH) pathway is important for leukemic transformation because of aberrant activation of GLI transcription factors. MBZ is a well-tolerated anthelmintic that exhibits strong antitumor effects. Herein, we show that MBZ induced strong, dose-dependent anti-leukemic effects on AML cells, including the sensitization of AML cells to chemotherapy with cytarabine. MBZ strongly reduced intracellular protein levels of GLI1/GLI2 transcription factors. Consequently, MBZ reduced the GLI promoter activity as observed in luciferase-based reporter assays in AML cell lines. Further analysis revealed that MBZ mediates its anti-leukemic effects by promoting the proteasomal degradation of GLI transcription factors via inhibition of HSP70/90 chaperone activity. Extensive molecular dynamics simulations were performed on the MBZ-HSP90 complex, showing a stable binding interaction at the ATP binding site. Importantly, two patients with refractory AML were treated with MBZ in an off-label setting and MBZ effectively reduced the GLI signaling activity in a modified plasma inhibitory assay, resulting in a decrease in peripheral blood blast counts in one patient. Our data prove that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.
Various smart home-automation devices such as lamps, locks, and thermostats are spreading rapidly in private households. A typical communication protocol for this class of devices is Bluetooth Low Energy (BLE). This thesis presents a structured security analysis for BLE. The described methodology categorizes known attack vectors and outlines a possible setup for such an analysis. In the course of this work, several security-relevant problems were uncovered that allow attackers to take over the devices completely. It turned out that security features provided for in the standard, such as encryption and integrity checks, are often not implemented at all or are implemented incorrectly.