H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (54)
- Fachbereich Angewandte Naturwissenschaften (25)
- Institute of Visual Computing (IVC) (18)
- Institut für funktionale Gen-Analytik (IFGA) (10)
- Fachbereich Ingenieurwissenschaften und Kommunikation (9)
- Fachbereich Wirtschaftswissenschaften (8)
- Institut für Sicherheitsforschung (ISF) (4)
- Institut für Cyber Security & Privacy (ICSP) (2)
- Institut für Detektionstechnologien (IDT) (2)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (2)
Document Type
- Article (42)
- Conference Object (34)
- Part of a Book (11)
- Report (4)
- Book (monograph, edited volume) (3)
- Master's Thesis (2)
- Doctoral Thesis (1)
- Other (1)
- Part of Periodical (1)
- Study Thesis (1)
Year of publication
- 2012 (100)
Language
- English (100)
Keywords
- ISM: molecules (3)
- 3D-Scanner (2)
- ARRs (2)
- Bag of Features (2)
- CD21 (2)
- FDI (2)
- Hybrid systems (2)
- ISM: kinematics (2)
- Polymers (2)
- classifier combination (2)
People with type 2 diabetes have an elevated risk of developing cardiovascular disease (CVD), for which dyslipidemia is the major contributor. Diabetic patients have a characteristic pattern of dyslipidemia with a decreased level of high density lipoprotein cholesterol (HDL-C) and an elevated triglyceride (TG) level. However, in diabetes mellitus, low density lipoprotein cholesterol (LDL-C), which is used as one of the markers for the risk of CVD, is underestimated, so in such cases the level of non-high density lipoprotein cholesterol (non-HDL-C) can be a stronger predictor of CVD, as it strongly correlates with atherogenic lipoproteins. Therefore, an attempt has been made to evaluate the level of non-HDL-C as a newer marker for the risk of cardiovascular disease and to find out the pattern of dyslipidemia in diabetes mellitus. The present study comprised 82 type 2 diabetic cases and 81 non-diabetic controls. Among the diabetics, the majority of the subjects (61.0%) were HDL-C dyslipidemic. However, among the controls, the maximum number of individuals (40.7%) were TG dyslipidemic. Diabetics have a significantly elevated ratio of total cholesterol to high density lipoprotein cholesterol (TC/HDL-C) and significantly increased levels of non-HDL-C compared to controls, which can be used as markers of dyslipidemia and to predict the risk of cardiovascular disease in type 2 diabetes mellitus.
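The marker evaluated above is a simple arithmetic quantity: non-HDL-C is total cholesterol minus HDL-C. A minimal sketch with hypothetical patient values (mg/dL):

```python
def non_hdl_c(total_cholesterol: float, hdl_c: float) -> float:
    """Non-HDL cholesterol: total cholesterol minus HDL-C (same units, e.g. mg/dL)."""
    return total_cholesterol - hdl_c

def tc_hdl_ratio(total_cholesterol: float, hdl_c: float) -> float:
    """Atherogenic index: ratio of total cholesterol to HDL-C."""
    return total_cholesterol / hdl_c

# Hypothetical patient values in mg/dL
tc, hdl = 220.0, 38.0
print(non_hdl_c(tc, hdl))               # 182.0
print(round(tc_hdl_ratio(tc, hdl), 2))  # 5.79
```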
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in the 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
Development and Validation of a Rapid and Reliable Method for TPMT Genotyping using real-time PCR
(2012)
Topics
Dialogue University President Hartmut Ihne and Jakob Rhyner, Vice Rector of the United Nations University (UNU), talk about common goals and the concept of regional internationality ...
Studies and Teaching University scores high with the Teaching Quality Pact (Pro-MINT-us), career training and Bachelor studies all in one, three attractive Master’s programmes set up, central e-Learning platform online, International Centre for Sustainable Development already hard at work ...
Research Graduate Institute establishes new Ph.D. culture, research focus on visual computing secures third-party funding, energy harvesting project wins university competition, research on the impact of zero gravity on arteries, security systems protect against car thieves ...
Campus Centre for Science and Technology Transfer, International Welcome Centre - a first stop for foreign students, alumni coordinator keeps in close contact with former students, hackathon brings students from around the world together, H-BRS prepared for the double Abitur year ...
What if ... ... the Bonn-Rhein-Sieg University of Applied Sciences did not exist? Personal answers to an unusual question ...
Region H-BRS - a strong engine for the region, research centre for region's SMEs looks for investors, companies invest in scholarships, students advise the Alexander-Koenig-Gesellschaft, BusinessCampus opens a third location, concept for medical tourism along the Rhine corridor ...
International Mechanical engineering students in Ethiopia, businesses and universities collaborate in Ghana, university partnership with Namibia, Study Buddies for foreign students, student initiates German-Argentine Master’s degree, Spanish teacher conference, intercultural training for all university staff ...
Facts and Figures Programmes of study, statistics, organisational structure, international partnerships, awards ...
YAWL User Group
(2012)
The documentation requirements of data published in long-term archives have grown significantly over the last decade. At the WDCC, the data publishing process is assisted by "Atarrabi", a web-based workflow system for the review and editing of metadata by the data authors and the publication agent. The system ensures high metadata quality for the long-term use of the data with persistent identifiers (DOI/URN). Through these well-defined references (DOIs), credit can properly be given to the data producers in any publication.
This paper describes adaptive time frequency analysis of EEG signals, both in theory as well as in practice. A momentary frequency estimation algorithm is discussed and applied to EEG time series of test persons performing a concentration experiment. The motivation for deriving and implementing a time frequency estimator is the assumption that an emotional change implies a transient in the measured EEG time series, which again are superimposed by biological white noise as well as artifacts. It will be shown how accurately and robustly the estimator detects the transient even under such complicated conditions.
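The abstract does not spell out the momentary frequency estimator it derives; one standard way to estimate instantaneous frequency is via the analytic signal (Hilbert transform), sketched here purely as an illustration of the idea:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Estimate the momentary frequency of a signal via the analytic signal.

    x  : 1-D array of samples (e.g. one EEG channel)
    fs : sampling rate in Hz
    Returns an array of length len(x) - 1 with frequency estimates in Hz.
    """
    analytic = hilbert(x)                  # x + j * HilbertTransform(x)
    phase = np.unwrap(np.angle(analytic))  # continuous instantaneous phase
    return np.diff(phase) * fs / (2.0 * np.pi)

# Sanity check on a pure 10 Hz tone sampled at 250 Hz
fs = 250.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10.0 * t)
f = instantaneous_frequency(x, fs)
print(round(float(np.median(f)), 1))  # 10.0 (edge effects aside, the median is robust)
```

On real EEG, noise and artifacts make the raw phase derivative jumpy, which is why a robust, adaptive estimator such as the one discussed in the paper is needed.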
The criteria for assessing the quality of rubber materials are the polymer or copolymer composition and the additives. These additives include plasticizers, extender oils, carbon black, inorganic fillers, antioxidants, heat and light stabilizers, processing aids, cross-linking agents, accelerators, retarders, adhesives, pigments, smoke and flame retardants, and others. Determination of additives in polymers or copolymers generally requires the extraction of these substances from the matrix as a first step, which can be challenging, and the subsequent analysis of the extracted additives by gas chromatography (GC), GC–mass spectrometry (MS), high performance liquid chromatography (HPLC), HPLC–MS, capillary electrophoresis, thin-layer chromatography, and other analytical techniques. In the present work, nitrile rubber materials were studied using direct analytical flash pyrolysis hyphenated to GC and electrospray ionization MS in both scan and selected ion monitoring modes to demonstrate that this technique is a good tool to identify the organic additives in nitrile rubber.
In a research project funded by the German Research Foundation, meteorologists, data publication experts, and computer scientists optimised the publication process of meteorological data and developed software that supports metadata review. The project group placed particular emphasis on scientific and technical quality assurance of primary data and metadata. At the end, the software automatically registers a Digital Object Identifier at DataCite. The software has been successfully integrated into the infrastructure of the World Data Center for Climate, but a key goal was to make the results applicable to data publication processes in other sciences as well.
Along with the success of the digitally revived stereoscopic cinema, other events beyond 3D movies become attractive for movie theater operators, e.g. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
Gas chromatography with simultaneous flame-ionization detection (FID) and a nitrogen-phosphorus detection (NPD) as well as gas chromatography-mass spectrometry (GC/MS) has been used to characterize some long-chain primary alkyl amines and alkyl diamines after derivatization with trifluoroacetic anhydride (TFAA).
A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite the perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on the actions of manipulators which involve releasing an object. The proposed approach, which is triggered after the detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e. a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object.
Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when the object is released in the state corresponding to the example. The description vocabulary is also used to label the examples autonomously. In the third step, an algorithm (i.e. N-Bins) uses the labelled examples to suggest a state for the object in which releasing it avoids the occurrence of unknown external faults.
The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach, we also test its prediction ability along with the analysis of the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of an object in order to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.
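As a hypothetical 1-D illustration of the bin-based suggestion step (the actual N-Bins algorithm is defined in the thesis and is more general than this), the labelled release-state examples can be binned and the bin dominated by desired outcomes selected:

```python
import numpy as np

def suggest_release_state(samples, labels, n_bins=10):
    """Pick a release-state parameter value from the bin dominated by 'desired' outcomes.

    samples : 1-D array of a single release-state parameter
    labels  : array of 1 (desired) / 0 (undesired) simulation outcomes
    Returns the center of the bin with the highest fraction of desired
    examples. Hypothetical reconstruction of the binning idea only.
    """
    edges = np.linspace(samples.min(), samples.max(), n_bins + 1)
    idx = np.clip(np.digitize(samples, edges) - 1, 0, n_bins - 1)
    frac = np.array([
        labels[idx == b].mean() if np.any(idx == b) else 0.0
        for b in range(n_bins)
    ])
    best = int(np.argmax(frac))
    return 0.5 * (edges[best] + edges[best + 1])

# Toy data: desired outcomes cluster around a release height of ~0.2 m.
rng = np.random.default_rng(0)
heights = rng.uniform(0.0, 1.0, 500)
labels = (np.abs(heights - 0.2) < 0.1).astype(int)
print(0.1 < suggest_release_state(heights, labels) < 0.3)  # True
```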
The relative contributions of radial and laminar optic flow to the perception of linear self-motion
(2012)
When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40° (h) × 24° (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.
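The leaky spatial integrator used to model these data can be sketched in its generic form dD/dt = gain · v(t) − leak · D; the parameter names and the Euler integration below are illustrative, not the cited paper's exact formulation:

```python
import numpy as np

def leaky_integrator(v, dt, gain, leak):
    """Leaky spatial integration of self-motion speed.

    Generic form dD/dt = gain * v(t) - leak * D, integrated with Euler steps.
    v    : array of simulated self-motion speeds (m/s)
    dt   : time step (s)
    gain : sensory gain on the optic-flow signal
    leak : leak constant (1/s); leak = 0 gives perfect path integration
    """
    D = 0.0
    out = []
    for vi in v:
        D += dt * (gain * vi - leak * D)
        out.append(D)
    return np.array(out)

# Constant 1 m/s travel for 10 s: with no leak and unit gain the
# integrator recovers the true distance; with a leak it saturates
# below the gain/leak asymptote, producing systematic errors.
v = np.ones(1000)
dt = 0.01
perfect = leaky_integrator(v, dt, gain=1.0, leak=0.0)
leaky = leaky_integrator(v, dt, gain=1.0, leak=0.3)
print(round(float(perfect[-1]), 2))  # 10.0
print(float(leaky[-1]) < float(perfect[-1]))  # True
```

In the study's terms, the gain varying with viewing eccentricity while the leak stays constant means the two flow components scale the input to this integrator without changing its decay dynamics.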
Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
We present our approach to extend a Virtual Reality software framework towards the use for Augmented Reality applications. Although VR and AR applications have very similar requirements in terms of abstract components (like 6DOF input, stereoscopic output, simulation engines), the requirements in terms of hardware and software vary considerably. In this article we would like to share the experience gained from adapting our VR software framework for AR applications. We will address design issues for this task. The result is a VR/AR basic software that allows us to implement interactive applications without fixing their type (VR or AR) beforehand. Switching from VR to AR is a matter of changing the configuration file of the application. We also give an example of the use of the extended framework: Augmenting the magnetic field of bar magnets in physics classes. We describe the setup of the system and the real-time calculation of the magnetic field, using a GPU.
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the JamaicaVM (JJVM). The basic difference between the two architectures is that the JJVM uses fixed-size blocks for allocating objects on the heap. This means that objects have to be split into several connected blocks if they are bigger than the specified block size. On the other hand, for small objects a full block must be allocated. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines. The experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
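The overhead of fixed-size block allocation can be illustrated with simple arithmetic; the 32-byte block size below is hypothetical and ignores the per-block link fields a real splitting scheme would add:

```python
import math

def blocks_needed(object_size: int, block_size: int) -> int:
    """Number of fixed-size blocks required to store an object."""
    return max(1, math.ceil(object_size / block_size))

def overhead_bytes(object_size: int, block_size: int) -> int:
    """Internal fragmentation: allocated bytes minus object size."""
    return blocks_needed(object_size, block_size) * block_size - object_size

# With a hypothetical 32-byte block size:
print(blocks_needed(100, 32))   # 4 blocks
print(overhead_bytes(100, 32))  # 28 bytes wasted
print(overhead_bytes(8, 32))    # 24 bytes wasted for a small object
```

This shows both effects the paper analyzes: large objects pay rounding-up waste in their last block, and small objects pay for a full block regardless of size.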
Traffic simulations for virtual environments are concerned with the behavior of individual traffic participants. The behavior in these simulations is often kept rather simple to abide by the constraints of processing resources. In sophisticated traffic simulations, the behavior of individual traffic participants is also modeled, but the focus lies on the overall behavior of the entire system, e.g. to identify possible bottlenecks of traffic flow [8].
Distributed computing environments allow collaborative problem solving across teams and organisations. A fundamental precondition for collaboration is the ability to find available participants and to exchange information. One way to approach this problem is central directories or registry services. A major disadvantage of centralized components is that they limit the flexibility to form ad hoc networks that are targeted at solving a specific problem. To facilitate flexible and dynamic collaborations, ideas from decentralized and self-organising networks can be combined with concepts of service-oriented computing. This project aims to investigate potential solutions for the dynamic discovery of network participants and outlines how to manage challenges associated with the development of a discovery protocol for distributed systems. During the course of this project a prototypical implementation was created that integrates into the open source distributed, collaborative problem solving environment RCE [9]. It is currently developed at the German Aerospace Center (DLR), but it is planned to make the framework available to the broader community.
For the case when the abstraction of instantaneous state transitions is adopted, this paper proposes to start fault detection and isolation in an engineering system from a single time-invariant causality bond graph representation of a hybrid model. To that end, the paper picks up on a long-known proposal to model switching devices by a transformer modulated by a Boolean variable and a resistor in fixed conductance causality accounting for its ON resistance. Bond graph representations of hybrid system models developed in this way have been used so far mainly for the purpose of simulation. The paper shows that they can well constitute an approach to the bond-graph-based quantitative fault detection and isolation of hybrid models. Advantages are that the standard sequential causality assignment procedure can be used without modification. A single set of analytical redundancy relations valid for all physically feasible system modes can be (automatically) derived from the bond graph. Stiff model equations due to small values of the ON resistance in the switch model may be avoided by symbolic reformulation of equations and letting the ON resistance of some switches tend to zero, turning them into ideal switches.
First, for two examples considered in the literature, it is shown that the approach proposed in this paper can produce the same analytical redundancy relations as were obtained from a hybrid bond graph with controlled junctions and the use of a sequential causality assignment procedure designed especially for fault detection and isolation purposes. Moreover, the usefulness of the proposed approach is illustrated in two case studies by its application to standard switching circuits extensively used in power electronic systems and by simulation of some fault scenarios. The approach, however, is not confined to the fault detection and isolation of such systems. Analytically validated simulation results obtained by means of the program Scilab give confidence in the approach.
A bond graph representation of switching devices known for a long time has been a modulated transformer with a modulus b(t) ∈ {0,1} for all t ≥ 0 in conjunction with a resistor R:Ron accounting for the ON-resistance of a switch considered non-ideal. Besides other representations, this simple model has been used in bond graphs for simulation of the dynamic behaviour of hybrid systems. A previous article of the author has proposed to use the transformer-resistor pair in bond graphs for fault diagnosis in hybrid systems. Advantages are a unique bond graph for all system modes, the application of the unmodified standard Sequential Causality Assignment Procedure, fixed computational causalities and the derivation of analytical redundancy relations incorporating 'Boolean' transformer moduli so that they hold for all system modes. Switches temporarily connect and disconnect model parts. As a result, some independent storage elements may temporarily become dependent, so that the number of state variables is not time-invariant. This article addresses this problem in the context of modelling and simulation of fault scenarios in hybrid systems. In order to keep time-invariant preferred integral causality at storage ports, residual sinks previously introduced by the author are used. When two storage elements become dependent at a switching time instance ts, a residual sink is activated. It enforces that the outputs of two dependent storage elements become immediately equal by imposing the conjugate power variable of appropriate value on their inputs. The approach is illustrated by the bond graph modelling and simulation of some fault scenarios in a standard three-phase switched power inverter supplying power into an RL-load in a delta configuration. A well-developed approach to model-based fault detection and isolation is to evaluate the residuals of analytical redundancy relations.
In this article, analytical redundancy relation residuals have been computed numerically by coupling a bond graph of the faulty system to one of the non-faulty systems by means of residual sinks. The presented approach is not confined to power electronic systems but can be used for hybrid systems in other domains as well. In further work, the RL-load may be replaced by a bond graph model of an alternating current motor in order to study the effect of switch failures in the power inverter on to the dynamic behaviour of the motor.
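Written out, the switch model used in these articles combines the standard transformer relations with the resistor's conductance law (effort e and flow f are the bond graph power variables, e.g. voltage and current):

```latex
\begin{align}
e_2 &= b(t)\, e_1, \qquad f_1 = b(t)\, f_2, \qquad b(t) \in \{0,1\},\\
f_2 &= \frac{e_2}{R_{\mathrm{on}}}
  \quad\Longrightarrow\quad
  f_1 = \frac{b(t)^2\, e_1}{R_{\mathrm{on}}} = \frac{b(t)\, e_1}{R_{\mathrm{on}}},
\end{align}
```

since b² = b for a Boolean modulus. When b = 1 the switch conducts through Ron; when b = 0 the flow f₁ vanishes and the branch is open. Letting Ron tend to zero recovers the ideal switch mentioned above.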
Using virtual environment systems for road safety education requires a realistic simulation of road traffic. Current traffic simulations are either too restricted in their complexity of agent behavior or focus on aspects not important in virtual environments. More importantly, none of them are concerned with modeling misbehavior of traffic participants, which is part of everyday traffic and should therefore not be neglected in this context. We present a concept for a traffic simulation that addresses the need for more realistic agent behavior with regard to road safety education. The two major components of this concept are a simulation of persistent agents which minimizes computational overhead and a model of cognitive processes of human drivers combined with psychological personality profiles to allow for individual behavior and misbehavior.
Traffic simulations are typically concerned with modeling human behavior as closely as possible to create realistic results. In conventional traffic simulations used for road planning or traffic jam prediction only the overall behavior of an entire system is of interest. In virtual environments, like digital games, simulated traffic participants are merely a backdrop to the player’s experience and only need to be “sufficiently realistic”. Additionally, restricted computational resources, typical for virtual environment applications, usually limit the complexity of simulated behavior in this field. More importantly, two integral aspects of real-world traffic are not considered in current traffic simulations from both fields: misbehavior and risk taking of traffic participants. However, for certain applications like the FIVIS bicycle simulator, these aspects are essential.
Traditionally traffic simulations are used to predict traffic jams, plan new roads or highways, and estimate road safety. They are also used in computer games and virtual environments. There are two general concepts of modeling traffic: macroscopic and microscopic modeling. Macroscopic traffic models take vehicle collectives into account and do not consider individual vehicles. Parameters like average velocity and density are used to model the flow of traffic. In contrast, microscopic traffic models consider each vehicle individually. Therefore, vehicle specific parameters are of importance, e.g. current velocity, desired velocity, velocity difference to the lead vehicle, individual time gap.
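The vehicle-specific parameters listed above (current velocity, desired velocity, velocity difference to the lead vehicle, time gap) are exactly the inputs of the widely used Intelligent Driver Model, shown here as one concrete microscopic model; the text does not commit to a specific model, and the parameter values below are typical defaults, not the authors':

```python
import math

def idm_acceleration(v, v_desired, gap, dv,
                     a_max=1.0, b_comf=1.5, s0=2.0, T=1.5, delta=4):
    """Acceleration of one vehicle under the Intelligent Driver Model (IDM).

    v         : current velocity (m/s)
    v_desired : desired velocity (m/s)
    gap       : bumper-to-bumper distance to the lead vehicle (m)
    dv        : velocity difference to the lead vehicle, v - v_lead (m/s)
    Remaining arguments are typical IDM parameters: maximum acceleration,
    comfortable deceleration, minimum gap, and desired time gap.
    """
    # Desired dynamic gap: minimum gap + time-gap term + braking-interaction term
    s_star = s0 + v * T + (v * dv) / (2.0 * math.sqrt(a_max * b_comf))
    s_star = max(s0, s_star)
    return a_max * (1.0 - (v / v_desired) ** delta - (s_star / gap) ** 2)

# Free road (huge gap): the vehicle accelerates toward its desired speed.
print(idm_acceleration(v=10.0, v_desired=30.0, gap=1e6, dv=0.0) > 0)   # True
# Closing fast on a slow leader: the model brakes (negative acceleration).
print(idm_acceleration(v=20.0, v_desired=30.0, gap=15.0, dv=10.0) < 0)  # True
```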
The objective of this thesis is to implement a computer game based motivation system for maximal strength testing on the Biodex System 3 isokinetic dynamometer. The prototype game has been designed to improve the peak torque produced in an isometric knee extensor strength test. An extensive analysis is performed on a torque data set from a previous study. The torque responses for five-second-long maximal voluntary contractions of the knee extensor are analyzed to understand the torque response characteristics of different subjects. The parameters identified in the data analysis are used in the implementation of the 'Shark and School of Fish' game. The behavior of the game for different torque responses is analyzed on a different torque data set from the previous study. The evaluation shows that the game rewards and motivates continuously over a repetition to reach the peak torque value. The evaluation also shows that the game rewards the user more if he overcomes a baseline torque value within the first second and then gradually increases the torque to reach peak torque.
In this work, preceramic papers containing 85 wt% Al2O3 were heat-treated at 1600 °C to obtain paper-derived ceramics. In order to increase the preceramic paper density prior to sintering, the papers were calendered at different roll temperatures and pressures. The influences of the calendering parameters on the microstructure and mechanical properties of the preceramic papers and the paper-derived ceramics were investigated. It was expected that especially the mechanical properties of the papers and derived ceramics would be improved by calendering.
Software testing in a web services environment faces different challenges compared with testing in traditional software environments. Regression testing activities are triggered by software changes or evolutions. In web services, evolution is not a choice for service clients: they always have to use the current, updated version of the software. In addition, test execution or invocation is expensive in web services, and hence providing algorithms to optimize test case generation and execution is vital. In this environment, we proposed several approaches for test case selection in web services regression testing. Testing in this new environment should evolve to become part of the service contract. Service providers should provide data or usage sessions that can help service clients reduce testing expenses by optimizing the selected and executed test cases.
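A minimal sketch of change-based test case selection, the core idea behind such regression approaches (the selection rule, test names, and operation names here are hypothetical; the proposed approaches are more elaborate):

```python
def select_regression_tests(test_coverage, changed_ops):
    """Select test cases that exercise at least one changed service operation.

    test_coverage : dict mapping test-case id -> set of invoked operations
    changed_ops   : set of operations changed in the new service version
    Returns the sorted ids of affected test cases; unaffected tests are
    skipped, saving expensive service invocations.
    """
    return sorted(t for t, ops in test_coverage.items() if ops & changed_ops)

# Hypothetical coverage data for three test cases:
coverage = {
    "t1": {"getQuote", "login"},
    "t2": {"placeOrder"},
    "t3": {"login", "logout"},
}
print(select_regression_tests(coverage, {"login"}))  # ['t1', 't3']
```

Usage sessions published by the provider would supply the `test_coverage` mapping without the client having to invoke the service to discover it.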
In service robotics, hardly any tasks are conceivable without the involvement of objects, as in searching, fetching or delivering tasks. Service robots are supposed to capture object-related information efficiently in real-world scenes, for instance in the presence of clutter and noise, while also being flexible and scalable enough to memorize a large set of objects. Besides object perception tasks like object recognition, where the object's identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based on e.g. their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene over their description to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on a Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to a more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories.
Experimental results are shown for different shapes related to domestically appearing object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the reasonableness of the proposed system design. Additional results are shown on object tracking and false positive handling to enhance the robustness of the categorization. Also, an initial approach towards incremental shape category learning is proposed that learns a new category based on the set of previously learned shape categories.
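One classic shape-distribution descriptor of the kind mentioned above histograms distances between random point pairs (the D2 distribution of Osada et al.); a sketch assuming the object candidate is given as an N×3 point array, e.g. GNG node positions (the parameter values are illustrative, not the thesis's):

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=32, rng=None):
    """D2 shape distribution: normalized histogram of pairwise distances.

    points : (N, 3) array of surface points (e.g. GNG node positions)
    Distances are scaled by their maximum so the descriptor is
    invariant to uniform scaling of the object.
    """
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d = d / (d.max() + 1e-12)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Descriptors of two independent samplings of the same unit sphere are close,
# which is what makes such distributions usable as category features.
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 3)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(500, 3)); b /= np.linalg.norm(b, axis=1, keepdims=True)
da, db = d2_descriptor(a, rng=1), d2_descriptor(b, rng=2)
print(float(np.abs(da - db).sum()) < 0.2)  # True
```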
Human mesenchymal stem cells (hMSCs) are considered a promising cell source for regenerative medicine, because they have the potential to differentiate into a variety of lineages, among which the mesoderm-derived lineages such as adipo- or osteogenesis are investigated best. Human MSCs can be harvested in reasonable to large amounts from several parts of the patient's body, and due to this possible autologous origin, allorecognition can be avoided. In addition, even in donor cells of allogenic origin, hMSCs generate a local immunosuppressive microenvironment, causing only a weak immune reaction. There is an increasing need for bone replacement in patients of all ages, due to a variety of reasons such as new recreational behavior in young adults or age-related diseases. Adipogenic differentiation is another interesting lineage, because fat tissue is considered to be a major factor triggering atherosclerosis, which ultimately leads to cardiovascular diseases, the main cause of death in industrialized countries. However, understanding the differentiation process in detail is obligatory to achieve a tight control of the process for future clinical applications and to avoid undesired side effects. In this review, the current findings for adipo- and osteo-differentiation are summarized together with a brief statement on first clinical trials.
The biological effects of bilirubin, still poorly understood, are concentration-dependent ranging from cell protection to toxicity. Here we present data that at high nontoxic physiological concentrations, bilirubin inhibits growth of proliferating human coronary artery smooth muscle cells by three events. It impairs the activation of Raf/ERK/MAPK pathway and the cellular Raf and cyclin D1 content that results in retinoblastoma protein hypophosphorylation on amino acids S608 and S780. These events impede the release of YY1 to the nuclei and its availability to regulate the expression of genes and to support cellular proliferation. Moreover, altered calcium influx and calpain II protease activation leads to proteolytical degradation of transcription factor YY1. We conclude that in the serum-stimulated human vascular smooth muscle primary cell cultures, bilirubin favors growth arrest, and we propose that this activity is regulated by its interaction with the Raf/ERK/MAPK pathway, effect on cyclin D1 and Raf content, altered retinoblastoma protein profile of hypophosphorylation, calcium influx, and YY1 proteolysis. We propose that these activities together culminate in diminished 5 S and 45 S ribosomal RNA synthesis and cell growth arrest. The observations provide important mechanistic insight into the molecular mechanisms underlying the transition of human vascular smooth muscle cells from proliferative to contractile phenotype and the role of bilirubin in this transition.
One of the most common problems in regenerative medicine is the regeneration of damaged bone, with the aim of repairing or replacing lost or damaged bone tissue by stimulating the natural regenerative process. Particularly in the fields of orthopedic, plastic, reconstructive, maxillofacial and craniofacial surgery there is a need for successful methods to restore bone. From a regenerative point of view, two different bone replacement problems can be distinguished: large bone defects and small bone defects. Currently, no perfect system exists for the treatment of large bone defects.
This article addresses the accessibility of business process modelling tools (BPMo tools) and business process modelling languages (BPMo languages). First, the reader is introduced to business process management and the authors' motivation behind this inquiry. Afterwards, the paper reflects on problems that arise when applying inaccessible BPMo tools. To illustrate these problems, the authors distinguish between two different categories of issues and provide practical examples. Finally, the article presents three approaches to improve the accessibility of BPMo tools and BPMo languages.
Transient up-regulation of P2 receptors influence differentiation of human mesenchymal stem cells
(2012)
In the realm of service robots, recovery from faults is indispensable to foster user acceptance. Here, fault is to be understood not in the sense of robot-internal faults, but rather as interaction faults that occur while the robot is situated in and interacting with an environment (aka external faults). We reason along the most frequent failures in typical scenarios which we observed during real-world demonstrations and competitions using our Care-O-bot III robot. They take place in an apartment-like environment which is known as a closed world. We suggest four different, for now ad hoc, fault categories caused by disturbances, imperfect perception, inadequate planning, or the chaining of action sequences. The faults are categorized and then mapped to a handful of partly known, partly extended fault-handling techniques. Among them, we applied qualitative reasoning, the use of simulation as an oracle, learning for planning (aka enhancement of plan operators), or, in the future, case-based reasoning. Having laid out this frame, we mainly ask open questions related to the applicability of the presented approach, amongst them: how to find new categories, how to extend them, how to assure disjointness, and how to identify old and label new faults on the fly.
The work presented in this paper focuses on the comparison of well-known and new techniques for designing robust fault diagnosis schemes in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but with external disturbances and errors from dynamic and complex environments as well.
At previous SIAS conferences, we presented a novel opto-electronic safety sensor system for skin detection at circular saws, jointly developed with the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA). This work now presents the development results of our consecutive research on a prototype of a sensor system for more general production machine applications, including robot workplaces. The system uses off-the-shelf LEDs and photodiodes in combination with dedicated optics and a microcontroller system to implement a so-called spectral light curtain.
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. It explored the usability of the Kinect in a variety of environments as well as its ability to detect different classes of materials and objects. This was facilitated through the implementation of Random Sample Consensus (RANSAC) based algorithms and highly parallelized workflows in order to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for the use of the Kinect as a scientific research tool.
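The RANSAC approach mentioned in this abstract is commonly used to extract dominant planes (floors, walls, tabletops) from Kinect point clouds. The following is a minimal numpy sketch of such a RANSAC plane fit; the function name and parameters are invented for illustration and are not taken from the cited project.

```python
import numpy as np

def ransac_plane(points, n_iters=100, threshold=0.01, seed=None):
    """Fit a plane n·x + d = 0 to an (N, 3) point array with a basic
    RANSAC loop. Returns (normal, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Points closer to the plane than the threshold count as inliers.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

In an RGBD pipeline this loop would run on the depth-derived point cloud; the per-iteration distance test is embarrassingly parallel, which matches the abstract's note on highly parallelized workflows.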
After more than twenty years of research, the molecular events of apoptotic cell death can be succinctly stated: different pathways, activated by diverse signals, increase the activity of proteases called caspases that rapidly and irreversibly dismantle the condemned cell by cleaving specific substrates. During this time, the ideas that apoptosis protects us from tumourigenesis and that cancer chemotherapy works by inducing apoptosis also emerged. Currently, apoptosis research is shifting away from the intracellular events within the dying cell to focus on the effect of apoptotic cells on surrounding tissues. This is producing counterintuitive data showing that our understanding of the role of apoptosis in tumourigenesis and cancer therapy is too simple, with some interesting and provocative implications. Here, we will consider evidence supporting the idea that dying cells signal their presence to the surrounding tissue and, in doing so, elicit repair and regeneration that compensates for any loss of function caused by cell death. We will discuss evidence suggesting that cancer cell proliferation may be driven by inappropriate or corrupted tissue-repair programmes that are initiated by signals from apoptotic cells, and show how this may dramatically modify how we view the role of apoptosis in both tumourigenesis and cancer therapy.
The work presented in this paper focuses on the comparison of well-known and new fault-diagnosis algorithms in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but also with external disturbances and errors from dynamic and complex environments. Based on a study of the literature covering fault-diagnosis algorithms, I selected four of these methods, based on both linear and non-linear models, analysed them, and implemented them in a mathematical robot model representing a four-wheeled omnidirectional (OMNI) robot. In experiments, I tested the ability of the algorithms to detect and identify abnormal behaviour and to optimize the model parameters for the given training data. The final goal was to point out the strengths of each algorithm and to figure out which method would best suit the demands of fault diagnosis for a particular robot.
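The model-based methods compared in work like this typically share one core mechanism: a residual between the model's predicted output and the measured output is thresholded to flag abnormal behaviour. The sketch below illustrates only that generic residual test; the function name and threshold are invented for illustration and do not come from the thesis.

```python
import numpy as np

def residual_fault_flags(model_output, sensor_output, threshold=0.1):
    """Generic model-based fault detection via residual thresholding.

    The residual r(t) = |y_model(t) - y_sensor(t)| is compared against a
    fixed threshold; samples above it are flagged as potential faults.
    """
    r = np.abs(np.asarray(model_output) - np.asarray(sensor_output))
    return r > threshold
```

The four compared algorithms would differ mainly in how `model_output` is produced (linear vs. non-linear models) and in how the threshold is tuned on training data; fault identification then maps flagged residual patterns to specific fault causes.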
This article concerns the design and development of information and communication technology, in particular computer systems, in regard to the demographic transition which will influence user capabilities. It is questionable whether currently applied computer systems are able to meet the requirements of altered user groups with diversified capabilities. Such an enquiry is necessary based on current forecasts, which lead to the assumption that the average age of employees in enterprises will increase significantly within the next 50-60 years, while the percentage of computer-aided business tasks operated by human individuals rises from year to year. This development will have specific consequences for enterprises regarding the design and application of computer systems. If computer systems are not adapted to altered user requirements, their efficient and productive utilisation could be negatively affected. These consequences constitute the motivation to extend traditional design methodologies and thereby ensure the application of computer systems that are usable independent of user capabilities.
The ability to detect people has become a crucial subtask, especially for robotic systems which aim at application in public or domestic environments. Robots already provide their services, e.g., in real home improvement markets, guiding people to a desired product. In such a scenario, many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people in a reasonable range in front of the robot. Challenges of such a real-world task are, e.g., changing lighting conditions, a dynamic environment, and different people shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected with a rate of 87.29% and sitting people with 74.94% using a Random Forest classifier. Certain objects caused several false detections. To eliminate these, a verification step is proposed which further evaluates the person's shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz).
Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people, and approaches each detected person.
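A detection pipeline of the kind described above typically reduces each segmented point-cloud cluster to a small geometric feature vector before classification. The sketch below shows a few plausible cluster features (height, horizontal extent, point count); these particular features and the function name are illustrative assumptions, not the thesis' actual feature set.

```python
import numpy as np

def cluster_features(points):
    """Compute simple geometric features of a candidate point-cloud
    cluster given as an (N, 3) array with columns x, y, z (z is up).

    Hypothetical feature set for illustration: vertical height,
    horizontal extent, and number of points in the cluster.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    height = maxs[2] - mins[2]                      # person ~1.5-2.0 m
    extent = np.linalg.norm((maxs - mins)[:2])      # footprint diagonal
    return np.array([height, extent, float(len(points))])
```

Feature vectors of this kind, computed for many labeled clusters, would then be fed to a classifier such as the Random Forest mentioned in the abstract, which votes "person" or "non-person" per cluster.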
People have dreamed of machines that would free them from unpleasant, dull, dirty and dangerous tasks and work for them as servants, for centuries if not millennia. Service robots seem to finally make these dreams come true. But where are all these robots that serve us all day long, day after day? A few service robots have entered the market: domestic and professional cleaning robots, lawnmowers, milking robots, and entertainment robots. Some of these robots look more like toys or gadgets than real robots. But where is the rest? This question is asked not only by customers, but also by service providers, care organizations, politicians, and funding agencies. The answer is not very satisfying. Today’s service robots have problems operating in everyday environments. This is by far more challenging than operating an industrial robot behind a fence. There is a comprehensive list of technical and scientific problems which still need to be solved. To advance the state of the art in service robotics towards robots capable of operating in an everyday environment was the major objective of the DESIRE project (Deutsche Service Robotik Initiative – German Service Robotics Initiative), funded by the German Ministry of Education and Research (BMBF) under grant no. 01IME01A. This book offers a sample of the results achieved in DESIRE.