Refine
H-BRS Bibliography
- yes (122)
Departments, institutes and facilities
- Fachbereich Informatik (62)
- Fachbereich Ingenieurwissenschaften und Kommunikation (24)
- Fachbereich Angewandte Naturwissenschaften (23)
- Institute of Visual Computing (IVC) (21)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (14)
- Fachbereich Wirtschaftswissenschaften (11)
- Institut für funktionale Gen-Analytik (IFGA) (10)
- Institut für Cyber Security & Privacy (ICSP) (9)
- Institut für Verbraucherinformatik (IVI) (3)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (3)
Document Type
- Conference Object (56)
- Article (49)
- Report (5)
- Part of a Book (3)
- Preprint (2)
- Working Paper (2)
- Book (monograph, edited volume) (1)
- Conference Proceedings (1)
- Contribution to a Periodical (1)
- Doctoral Thesis (1)
Year of publication
- 2014 (122)
Language
- English (122)
Keywords
- FPGA (3)
- education (3)
- parallel breadth-first search (3)
- BFS (2)
- Exchangeable pairs (2)
- Garbage collection (2)
- Java virtual machine (2)
- NUMA (2)
- Stein’s method (2)
- Sustainability (2)
The title of the annual report 2013, "Shaping change: The University Addresses Society's Probing Challenges", reveals the great importance placed on social, economic and technological change at the university.
This key aspect thus runs through the contents of the 90-page annual report like a common thread, without losing track of the enormous variety of research and teaching at Bonn-Rhein-Sieg University. Whether the exploration of gaps in robot safety during a European Intensive Programme, a report from the Philippines crisis region by a graduate who has worked as an organizer for Care International, or the chapter "What does change look like?" – the annual report presents the full spectrum of opportunities, activities and findings of university members.
A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem from it. Finding the globally optimal solution is an NP-complete problem; to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive a graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).
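As an aside, the spectral relaxation step sketched in the abstract above can be illustrated in a few lines. The following is a minimal two-way spectral partitioning via the Fiedler vector of the graph Laplacian; the example graph, weights, and sign threshold are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def spectral_bisect(W):
    """Split a graph into two clusters using the Fiedler vector.

    W: symmetric (n, n) similarity matrix; W[i, j] > 0 for linked points.
    Returns a boolean array marking membership in one of the two clusters.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # unnormalized graph Laplacian
    # eigh returns eigenvalues in ascending order; the eigenvector of the
    # second-smallest eigenvalue (the Fiedler vector) encodes the relaxed
    # two-way partition, and its sign yields the discrete split.
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return fiedler >= 0

# Two dense triangles joined by a single weak "bridge" edge
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.1                # weak concave-boundary-like link
labels = spectral_bisect(W)
```

The weak bridge edge plays the role of a cannot-link-style boundary: the relaxation places the two triangles on opposite sides of the cut.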
Business process infrastructures like BPMS (Business Process Management Systems) and WfMS (Workflow Management Systems) traditionally focus on the automation of processes predefined at design time. This approach is well suited for routine tasks which are processed repeatedly and which are described by a predefined control flow. In contrast, knowledge-intensive work is more goal- and data-driven and less control-flow oriented. Knowledge workers need the flexibility to decide dynamically at run-time, based on current context information, on the best next process step to achieve a given goal. Obviously, in most practical scenarios these decisions are complex and cannot be anticipated and modeled completely in a predefined process model. Therefore, adaptive and dynamic process management techniques are necessary to augment the control-flow oriented part of process management (which is still a need also for knowledge workers) with flexible, context-dependent, goal-oriented support.
The contribution of the most common reciprocal translocation in childhood B-cell precursor leukemia, t(12;21)(p13;q22), to leukemia development is still under debate. Direct as well as secondary indirect effects of the TEL-AML1 fusion protein are commonly recorded using cell lines and patient samples, often bearing the TEL-AML1 fusion protein for decades. To identify direct targets of the fusion protein, a short-term induction of TEL-AML1 is needed. Here we describe in detail the experimental procedure, quality controls and contents of the ChIP, mRNA expression and SILAC datasets associated with the study published by Linka and colleagues in the Blood Cancer Journal [1], utilizing a short-term induction of TEL-AML1 in an inducible precursor B-cell line model.
We investigated graphene structures grafted with fullerenes. The size of the graphene sheets ranges from 6400 to 640,000 atoms. The fullerenes (C60 and C240) are placed on top of the graphene sheets; using different impact velocities, we could distinguish three types of impact. Furthermore, we investigated the changes in the vibrational properties. The modified graphene planes show additional features in the vibronic density of states.
We are happy to present the special issue on Best Practice in Robot Software Development of the Journal of Software Engineering for Robotics! The spark for this special issue came during the eighth workshop on Software Development and Integration in Robotics (SDIR) at the 2013 IEEE International Conference on Robotics and Automation. The workshop focused on Robot Software Architectures, and the fruitful discussions made it clear that the design, development, and deployment of robot software is always an interplay between competing aspects. These are often couched in antagonistic pairs, such as dependability versus performance, and prominently include quality attributes as well as functional, nonfunctional, and application requirements.
The Fifth International Workshop on Domain-Specific Languages and Models for Robotic Systems (DSLRob'14) was held in conjunction with the 2014 International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2014) in October 2014 in Bergamo, Italy. The main topics of the workshop were Domain-Specific Languages (DSLs) and Model-driven Software Development (MDSD) for robotics. A domain-specific language is a programming language dedicated to a particular problem domain that offers specific notations and abstractions that increase programmer productivity within that domain. Model-driven software development offers a high-level way for domain users to specify the functionality of their system at the right level of abstraction. DSLs and models have historically been used for programming complex systems. However, they have recently garnered interest as a separate field of study. Robotic systems blend hardware and software in a holistic way that intrinsically raises many crosscutting concerns (concurrency, uncertainty, time constraints, ...), which is why traditional general-purpose languages often lead to a poor fit between the language features and the implementation requirements. DSLs and models offer a powerful, systematic way to overcome this problem, enabling the programmer to quickly and precisely implement novel software solutions to complex problems within the robotics domain.
Over the past two decades social protection has gained importance at the international and the national level of many low- and middle-income countries. Despite reforms in this sector being a global phenomenon, they differ from country to country. Traditional efforts to explain these differences focus on domestic factors. Yet it remains unclear how international influences and interdependencies contribute to policy change. The study ‘International Policy Learning and Policy Change’ aims at providing an answer to this question by focusing on ‘soft governance’ via horizontal processes, meaning processes between equal actors. The study was carried out in two parts. While in Part I the current state of the art in relevant research fields was assessed, in Part II the findings from Part I were used to conduct a survey which analyses the role of policy networks.
The latest advances in the field of smart card technologies allow modern cards to be more than just simple security tokens. Recent developments facilitate the use of interactive components like buttons, displays or even touch sensors within the card's body, thus conquering whole new areas of application. With interactive functionalities, the usability aspect becomes the most important one for designing secure and widely accepted products. Unfortunately, usability can only be tested fully with completely integrated, hence expensive, smart card prototypes. This restricts application-specific research, case studies of new smart card user interfaces and applications, and the performance of usability tests in smart card development. Rapid development and simulation of smart card interfaces and applications can help to avoid this restriction. This paper presents SCUIDSim, a tool for rapid user-centric development of new smart card interfaces and applications based on common smartphone technology.
The work described in this paper is the result of a cooperation project between the Institute of Visual Computing at the Bonn-Rhein-Sieg University of Applied Sciences, Germany, and the Laboratory of Biomedical Engineering at the Federal University of Uberlândia, Brazil. The aim of the project is the development of a virtual-environment-based training simulator which enables better and faster learning of the control of upper limb prostheses. The focus of the paper is the description of the technical setup, since learning tutorials still need to be developed and a comprehensive evaluation still needs to be carried out.
Gas chromatography with flame-ionization detection (FID) and gas chromatography-mass spectrometry (GC/MS) with electron impact ionization (EI) and chemical ionization (PCI and NCI) were successfully used for the separation and identification of commercially available long-chain primary alkyl amines. The investigated compounds were used as corrosion-inhibiting and antifouling agents in the water-steam circuit of energy systems in the power industry. Solid-phase extraction (SPE) with octadecyl-bonded silica (C18) sorbents followed by gas chromatography was used for quantification of the investigated Primene JM-T™ alkyl amines in boiler water, condensate and superheated steam samples from the power plant. Amine formulations from the Kotamina group favor the formation of protective layers on internal surfaces and keep them free from corrosion and scale. Alkyl amines contained in those formulations both render the environment alkaline and limit the corrosion impact of ionic and gaseous impurities by forming protective layers. Moreover, alkyl amines limit scaling on the heating surfaces of boilers and in turbines, ensuring failure-free operation. Application of alkyl amine formulations enhances heat exchange during boiling and condensation processes. Alkyl amines with a branched structure are more thermally stable than linear alkyl amines and exhibit better adsorption and more effective surface shielding. As a result, application of thermostable long-chain branched alkyl amines increases the efficiency of anti-corrosive protection. Moreover, the ammonia content in water and in steam was also considerably decreased.
Purpose – The aim of the study is to investigate the implementation of corporate sustainability (CS) in the German real estate sector.
Design/methodology/approach – The authors begin by outlining the framework set by the European Union and the German Federal Government for companies wanting to be classified as sustainable. After this, the relevance of sustainability for German real estate companies is discussed. Their empirical section contains an international comparison. Finally, they present an analysis checking the implementation of CS for the main 135 German real estate companies.
Findings – The present analysis shows that German real estate companies compare well with their international counterparts, in 2012 representing 15 per cent of all real estate firms reporting on the basis of the Global Reporting Initiative. However, of the 135 companies in Germany surveyed, only a small proportion classify themselves as CS and CSR (corporate social responsibility) enterprises. This number could be rapidly increased by better documentation of companies’ commitment to sustainability.
Practical implications – The study’s importance lies in the overview it provides of CS activities in the German real estate industry. In addition, it provides hints on how companies can improve their documentation to classify as CSR enterprises. Although the analysis concentrates on Germany, the results are also relevant for companies in other European countries.
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique people is needed to accurately count people in the real world. To accomplish the people counting task, a robust system for people detection, tracking and identification is needed.
Adapting plans to changes in the environment by finding alternatives and taking advantage of opportunities is a common human behavior. The need for such behavior is often rooted in the uncertainty produced by our incomplete knowledge of the environment. While several existing planning approaches deal with such issues, artificial agents still lack the robustness that humans display in accomplishing their tasks. In this work, we address this brittleness by combining Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. The approach allows a domestic service robot to find ways to get a job done by making substitutions. We show how knowledge is modeled, how the reasoning process is used to create a constrained planning problem, and how the system handles cases where plan generation fails due to missing/unavailable objects. The results of the evaluation for two tasks in a domestic service domain show the viability of the approach in finding and making the appropriate goal transformations.
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
Might the gravity levels found on other planets and on the moon be sufficient to provide an adequate perception of upright for astronauts? Can the amount of gravity required be predicted from the physiological threshold for linear acceleration? The perception of upright is determined not only by gravity but also visual information when available and assumptions about the orientation of the body. Here, we used a human centrifuge to simulate gravity levels from zero to earth gravity along the long-axis of the body and measured observers' perception of upright using the Oriented Character Recognition Test (OCHART) with and without visual cues arranged to indicate a direction of gravity that differed from the body's long axis. This procedure allowed us to assess the relative contribution of the added gravity in determining the perceptual upright. Control experiments off the centrifuge allowed us to measure the relative contributions of normal gravity, vision, and body orientation for each participant. We found that the influence of 1 g in determining the perceptual upright did not depend on whether the acceleration was created by lying on the centrifuge or by normal gravity. The 50% threshold for centrifuge-simulated gravity's ability to influence the perceptual upright was at around 0.15 g, close to the level of moon gravity but much higher than the threshold for detecting linear acceleration along the long axis of the body. This observation may partially explain the instability of moonwalkers but is good news for future missions to Mars.
In contrast to projection-based systems, large, high resolution multi-display systems offer a high pixel density on a large visualization area. This enables users to step up to the displays and see a small but highly detailed area. If the users move back a few steps they don't perceive details at pixel level but will instead get an overview of the whole visualization. Rendering techniques for design evaluation and review or for visualizing large volume data (e.g. Big Data applications) often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates.
A view direction based (VDB) rendering technique renders the user's central field of view in high quality whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view. This approach mimics the physiology of the human eye and conserves the advantage of highly detailed information when standing close to the multi-display system as well as the general overview of the whole scene. In this paper we propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
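The core idea above, degrading quality with angular distance from the user's central field of view, can be sketched as a simple LOD selector. This is a hypothetical illustration: the angle thresholds, level count, and linear falloff are assumptions, not values or code from the paper.

```python
import math

def lod_for_direction(view_dir, gaze_dir, lod_levels=4, foveal_deg=10.0):
    """Pick a level of detail from the angular distance to the gaze.

    LOD 0 is full quality inside the central (foveal) field of view;
    quality then drops in equal angular steps up to lod_levels - 1.
    All numeric defaults are illustrative.
    """
    dot = sum(a * b for a, b in zip(view_dir, gaze_dir))
    norm = math.sqrt(sum(a * a for a in view_dir)) * \
           math.sqrt(sum(b * b for b in gaze_dir))
    # Clamp to avoid math domain errors from floating-point rounding
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle <= foveal_deg:
        return 0
    step = (180.0 - foveal_deg) / (lod_levels - 1)
    return min(lod_levels - 1, 1 + int((angle - foveal_deg) / step))
```

A renderer would evaluate something like this per tile or per ray bundle, spending full ray-tracing cost only near the gaze direction.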
‘An apple a day keeps the doctor away’. While it may be true that a balanced diet is a prerequisite for good health, how good is what we eat and drink every day? And is it actually possible to fulfil every customer desire with the vast array of foodstuffs on offer? BSE, dioxin in eggs, EHEC sprouts: in the light of repeated food safety crises, the issue of quality assurance as well as customer-oriented quality management has become of prime importance for the agri-food industry.
Current computer architectures are multi-threaded and make use of multiple CPU cores. Most garbage collection policies for the Java Virtual Machine include a stop-the-world phase, during which all threads are suspended. A considerable portion of the execution time of Java programs is spent in these stop-the-world garbage collections. To improve this behavior, thread-local allocation and garbage collection, which affects only single threads, has been proposed. Unfortunately, only objects that are not accessible by other threads ("do not escape") are eligible for this kind of allocation. It is therefore necessary to reliably predict whether objects escape. The work presented in this paper analyzes the escaping of objects based on the line of code (program counter, PC) at which the object was allocated. The results show that on average 60-80% of objects do not escape and can therefore be allocated locally.
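The per-allocation-site prediction described above amounts to keeping escape statistics keyed by allocation site. A minimal sketch of that bookkeeping follows; the trace format, site names, and threshold are hypothetical, standing in for what a JVM instrumentation agent might record.

```python
from collections import defaultdict

# Hypothetical trace of (allocation_site, escaped?) pairs; the site
# string stands in for the program counter (PC) of the allocation.
trace = [
    ("Loop.next:42", False), ("Loop.next:42", False),
    ("Loop.next:42", False), ("Loop.next:42", True),
    ("Pool.get:17",  True),  ("Pool.get:17",  True),
]

def site_escape_rates(trace):
    """Fraction of objects escaping their thread, per allocation site."""
    total = defaultdict(int)
    escaped = defaultdict(int)
    for site, esc in trace:
        total[site] += 1
        escaped[site] += esc          # bool counts as 0 or 1
    return {s: escaped[s] / total[s] for s in total}

rates = site_escape_rates(trace)
# Sites whose objects rarely escape are candidates for thread-local
# allocation (the 0.5 cut-off is purely illustrative).
local_sites = {s for s, r in rates.items() if r < 0.5}
```

In a real JVM the decision would feed the allocator, directing low-escape sites to thread-local heaps so their garbage can be collected without a global stop-the-world pause.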
Automated parameterization of intermolecular pair potentials using global optimization techniques
(2014)
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters’ influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
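The simplest of the baselines compared above, pure random search, can be sketched generically. This is an illustration only: a cheap test function stands in for the expensive molecular simulations, and nothing here is CoSMoS itself.

```python
import random

def pure_random_search(objective, bounds, n_evals, seed=0):
    """Baseline global optimizer: sample uniformly, keep the best.

    bounds: list of (lo, hi) ranges, one per force-field parameter.
    objective: maps a parameter vector to a loss, e.g. the deviation
    of simulated observables from experiment (here a test function).
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_evals):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Cheap stand-in for an expensive simulation-based loss
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = pure_random_search(sphere, [(-5, 5)] * 2, n_evals=2000)
```

The point of comparison in the paper is evaluation count: every call to `objective` is a full simulation, so surrogate-modelling approaches like CoSMoS win by needing far fewer of these calls than undirected sampling.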
Nitrile-type inhibitors are known to interact with cysteine proteases in a covalent-reversible manner. The chemotype of 3-cyano-3-aza-β-amino acid derivatives was designed in which the N-cyano group is centrally arranged in the molecule to allow for interactions with the nonprimed and primed binding regions of the target enzymes. These compounds were evaluated as inhibitors of the human cysteine cathepsins K, S, B, and L. They exhibited slow-binding behavior and were found to be exceptionally potent, in particular toward cathepsin K, with second-order rate constants up to 52 900 × 10³ M⁻¹ s⁻¹.
Sustainability is a key issue in current research activities and programs. In this context, three major functions of research have been identified: basic research, knowledge reservoirs, and knowledge transfer. With regard to transmission to the private sector, knowledge transfer is the most important factor. In this process, universities of applied sciences can play an important part, as they typically have long-standing experience in linking science and business in their teaching and research. Other important agents in the process of knowledge transfer are networks and clusters. Their strength lies in integrating the different competencies of their partners and using them for mutual benefit.
The International Centre for Sustainable Development (IZNE) – with a major focus on responsible business and sustainable food – takes advantage of being part of a University of Applied Sciences (Bonn-Rhein-Sieg, BRSU) and of being a member of several regional and international clusters and networks. These co-operations aim to establish and strengthen linkages between science and business, in particular by investigating research needs for business and business-relevant research activities. Moreover, IZNE has established and expanded regional and international co-operations of its own to gain more transparency about regional and international value-added chains in the food sector and the issue of responsible business.
Hybrid system models exploit the modelling abstraction that fast state transitions take place instantaneously, so that they encompass discrete events as well as the continuous-time behaviour for the duration of a system mode. If a system is in a certain mode, e.g. two rigid bodies stick together, then residuals of analytical redundancy relations (ARRs) within certain small bounds indicate that the system is healthy. An unobserved mode change, however, invalidates the current model for the dynamic behaviour. As a result, ARR residuals may exceed current thresholds, indicating faults in system components that have not happened. The paper shows that ARR residuals derived from a bond graph can not only serve as fault indicators but may also be used for bond graph model-based system mode identification. ARR residuals are numerically computed in an off-line simulation by coupling a bond graph of the faulty system to a non-faulty system bond graph through residual sinks. In real-time simulation, the faulty system model is to be replaced by measurements from the real system. As parameter values are uncertain, it is important to determine adaptive ARR thresholds that, given uncertain parameters, allow one to decide whether the dynamic behaviour in a current system mode is that of the healthy system, so that false alarms or the overlooking of true faults can be avoided. The paper shows how incremental bond graphs can be used to determine adaptive mode-dependent ARR thresholds for switched linear time-invariant systems with uncertain parameters in order to support robust fault detection. Bond graph-based hybrid system mode identification, as well as the determination of adaptive fault thresholds, is illustrated by application to an easily surveyed power electronic system. Some simulation results have been analytically validated.
Level-Synchronous Parallel Breadth-First Search Algorithms For Multicore and Multiprocessor Systems
(2014)
Breadth-First Search (BFS) is a graph traversal technique used in many applications as a building block, e.g., to systematically explore a search space. For modern multicore processors and as application graphs get larger, well-performing parallel algorithms are favourable. In this paper, we systematically evaluate an important class of parallel BFS algorithms and discuss programming optimization techniques for their implementation. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. In our results, we show that for small core counts many of these algorithms show rather similar behaviour. However, for large core counts and large graphs, there are considerable differences in performance and scalability, influenced by several factors. This paper gives advice on which algorithm should be used under which circumstances.
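For readers unfamiliar with the term, the level-synchronous structure these algorithms share can be shown in a few lines. The sketch below is serial; the comments mark where the parallel variants divide work (the example graph is illustrative).

```python
def level_sync_bfs(adj, source):
    """Level-synchronous BFS: finish the whole frontier, then advance.

    In the parallel algorithms evaluated above, the loop over the
    current frontier is what gets split among threads; the barrier
    between levels is what makes the algorithm "level-synchronous".
    adj maps each vertex to its list of neighbours.
    Returns a dict mapping each reached vertex to its BFS level.
    """
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # divided among threads in parallel
            for v in adj[u]:
                if v not in level:      # atomic test-and-set in parallel
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier        # implicit barrier between levels
    return level

# Small undirected example graph as adjacency lists
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
levels = level_sync_bfs(adj, 0)
```

The performance differences the paper discusses come largely from how the frontier is represented and how the marking step is synchronized, not from changes to this overall level-by-level structure.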
This work describes extensions to the well-known Distributed Coordination Function (DCF) model to account for IEEE 802.11n point-to-point links. The developed extensions cover adaptations to the throughput and delay estimation for this type of link as well as peculiarities of hardware and implementations within the Linux kernel. Instead of using simulations, the approach was extensively verified on real-world deployments at various link distances. Additionally, trials were conducted to optimize the CWmin values and the number of retries to maximize throughput and minimize delay. The results of this work can be used to estimate the properties of long-distance 802.11 links beforehand, allowing the network to be planned more accurately.
Design of a declarative language for task-oriented grasping and tool-use with dextrous robotic hands
(2014)
Apparently simple manipulation tasks for a human, such as transportation or tool use, are challenging to replicate in an autonomous service robot. Nevertheless, dextrous manipulation is an important aspect for a robot in many daily tasks. While it is possible to manufacture special-purpose hands for one specific task in industrial settings, a general-purpose service robot in households must have flexible hands which can adapt to many tasks. Intelligently using tools enables the robot to perform tasks more efficiently and even beyond the designed capabilities. In this work a declarative domain-specific language, called Grasp Domain Definition Language (GDDL), is presented that allows the specification of grasp planning problems independently of a specific grasp planner. This design goal resembles the idea of the Planning Domain Definition Language (PDDL). The specification of GDDL requires a detailed analysis of the research in grasping in order to identify best practices in the different domains that contribute to a grasp. These domains describe, for instance, physical as well as semantic properties of objects and hands. Grasping always has a purpose, which is captured in the task domain definition. It enables the robot to grasp an object in a task-dependent manner. Suitable representations in these domains have to be identified and formalized, for which a domain-driven software engineering approach is applied. This kind of modeling allows the specification of constraints which guide the composition of domain entity specifications. The domain-driven approach fosters reuse of domain concepts, while the constraints enable the validation of models already at design time. A proof-of-concept implementation of GDDL in the GraspIt! grasp planner has been developed. Preliminary results of this thesis have been published and presented at the IEEE International Conference on Robotics and Automation (ICRA).
A cost-efficient alternative to outside-in tracking systems for pointing interaction with large displays is to equip the pointing device with a camera, whose images are matched to display content. This work presents the Dynamic Marker Camera Tracking (DMCT) framework for display-based camera tracking. It accounts for typical display characteristics and uses dynamic on-screen markers overlaid on the display content that follow the camera. An example marker implementation and a tracking recovery method are presented. DMCT can measure pointing locations with sub-millimeter precision in large tracking volumes and computes 6-DoF camera poses for 3D interaction. A 60 Hz update rate and 24 ms latency were achieved. DMCT's main limitation is the visible marker interfering with display content. In pointing efficiency, the prototype is comparable to an OptiTrack system.
Structure-activity relationships of thiostrepton derivatives: implications for rational drug design
(2014)
The case for basic human needs in coaching: A neuroscientific perspective - The SCOAP Coach Theory
(2014)
Realism and plausibility of computer-controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model. Adaptive behavior patterns are enabled by introducing dynamic event-based emotions. The advantages of this model have been validated using a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. The results show that adding a dynamic factor to agents improves perceivable plausibility and realism.
Perception is one of the most important cognitive capabilities of an entity, since it determines how an entity perceives its environment. The presented work focuses on providing cost-efficient but realistic perceptual processes for intelligent virtual agents (IVAs) or NPCs, with the goal of providing a sound information basis for the entities' decision-making processes. In addition, an agent-central perception process should provide a common interface for developers to retrieve data from the IVAs' environment. The overall process is evaluated by applying it to a scenario demonstrating its benefits. The evaluation indicates that such a realistically simulated perception process provides a powerful instrument to enhance the (perceived) realism of an IVA's simulated behavior.
Rendering techniques for design evaluation and review or for visualizing large volume data often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates. A view direction based rendering technique renders the user's central field of view in high quality whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view, thus giving the opportunity to increase rendering efficiency. We propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
The perceived direction of “up” is determined by gravity, visual information, and an internal estimate of body orientation (Mittelstaedt, 1983; Dyde et al., 2006). Is the gravity level found on other worlds sufficient to maintain gravity’s contribution to this perception? Difficulties with stability reported anecdotally by astronauts on the lunar surface (NASA, 1972) suggest that the Moon’s gravity may not be, despite this value being far above the threshold for detecting linear acceleration. Knowing how much gravity is needed to provide a reliable orientation cue is required for training and preparing astronauts for future missions to the Moon, Mars and beyond.
Humans exhibit flexible and robust behavior in achieving their goals. We make suitable substitutions for objects, actions, or tools to get the job done. When opportunities that would allow us to reach our goals with less effort arise, we often take advantage of them. Robots are not nearly as robust in handling such situations. Enabling a domestic service robot to find ways to get a job done by making substitutions is the goal of our work. In this paper, we highlight the challenges faced in our approach to combine Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. We present open questions in modeling the necessary knowledge, creating planning problems, and enabling the system to handle cases where plan generation fails due to missing/unavailable objects.
This paper gives an overview of the development of Fair Trade in six European countries: Austria, France, Germany, the Netherlands, Switzerland and the United Kingdom. After a description of the food retail industry and its market structures in these countries, the main European Fair Trade organizations are analyzed regarding their role within the Fair Trade system. The following part deals with the development of Fair Trade sales in general and with respect to the products coffee, tea, bananas, fruit juice and sugar. An overview of the main activities of national Fair Trade organizations, e.g. public relations activities, completes the analysis. This study shows the enormous upswing of Fair Trade during the last decade and the reasons for this development. Nevertheless, it comes to the conclusion that Fair Trade is still far from being an essential part of the food retail industry in Europe.
Application systems are often advertised with features, and features are used heavily for requirements management. However, software manufacturers often have only incomplete information about the features of their software. The information is distributed over different sources, such as requirements documents, issue trackers, user manuals, and code. In this paper, we research the occurrence of feature information in open source software engineering data. We report on a case study with three open source systems. We analyze what information about features can be found in issue trackers and user documentation. Furthermore, we study the abstraction levels on which the features are described and how feature information is related, and we discuss the possibility of discovering such information semi-automatically. To mirror the diversity of software development contexts, we chose open source systems that are quite different, e.g., in the rigor of issue tracker usage. The results differ accordingly. One main result is that, compared against a provided feature list, the user documentation did not provide more accurate information than the issue tracker. The results also give hints on how the management of feature-relevant information can be supported.
Breadth-First Search is a graph traversal technique used in many applications as a building block, e.g., to systematically explore a search space or to determine single-source shortest paths in unweighted graphs. For modern multicore processors and as application graphs get larger, well-performing parallel algorithms are favorable. In this paper, we systematically evaluate an important class of parallel algorithms for this problem and discuss programming optimization techniques for their implementation on parallel systems with shared memory. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. In our results, we show that for small core counts many of these algorithms show rather similar performance behavior. But for large core counts and large graphs, there are considerable differences in performance and scalability, influenced by several factors including graph topology. This paper gives advice on which algorithm should be used under which circumstances.
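The level-synchronous scheme discussed in the abstract can be sketched as follows — a minimal serial version in which each frontier is processed completely before the next level begins (the paper's parallel variants split each frontier across threads; the function name and dictionary-based graph representation are illustrative, not taken from the paper):

```python
def bfs_level_sync(adj, source):
    """Level-synchronous BFS: expand the whole current frontier,
    then swap in the next one. Returns distances from the source."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        next_frontier = []
        for u in frontier:              # in a parallel version, this loop is split across threads
            for v in adj[u]:
                if v not in dist:       # first visit determines the level
                    dist[v] = level + 1
                    next_frontier.append(v)
        frontier = next_frontier        # barrier between levels in the parallel case
        level += 1
    return dist

# Example: a small undirected graph as an adjacency dict.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs_level_sync(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The per-level barrier is what makes the algorithm "level-synchronous": all vertices at distance d are settled before any vertex at distance d+1 is touched.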
Dysregulation of IL12 Signaling As a Novel Cause of an Autoimmune Lymphoproliferative like Syndrome
(2014)
Software repository data, for example in issue tracking systems, include natural language text and technical information, which includes anything from log files via code snippets to stack traces. However, data mining is often only interested in one of the two types, e.g., in natural language text when applying text mining. Regardless of which type is being investigated, any techniques used have to deal with noise caused by fragments of the other type, i.e., methods interested in natural language have to deal with technical fragments and vice versa. This paper proposes an approach to classify unstructured data, e.g. development documents, into natural language text and technical information using a mixture of text heuristics and agglomerative hierarchical clustering. The approach was evaluated using 225 manually annotated text passages from developer emails and issue tracker data. Using white space tokenization as a basis, the overall precision of the approach is 0.84 and the recall is 0.85.
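To illustrate the kind of text heuristics such a classifier can build on, here is a toy sketch that flags lines resembling stack traces, code, or log output; the patterns and the line-level granularity are illustrative assumptions, not the rules or the clustering step from the paper:

```python
import re

# Illustrative indicator patterns for "technical" fragments.
TECHNICAL_PATTERNS = [
    r'^\s+at\s+[\w.$]+\(',                        # Java stack-trace frame
    r'[;{}]\s*$',                                 # statement/block terminators
    r'\b(Exception|ERROR|WARN)\b',                # exception / log-level markers
    r'^\s*(def|class|import|public|private)\b',   # common code keywords
]

def looks_technical(line):
    """Heuristic check: does this line resemble code, a log, or a stack trace?"""
    return any(re.search(p, line) for p in TECHNICAL_PATTERNS)

def split_passage(text):
    """Partition a mixed passage into natural-language and technical lines."""
    natural, technical = [], []
    for line in text.splitlines():
        (technical if looks_technical(line) else natural).append(line)
    return natural, technical

mixed = "The build fails with:\n    at com.example.Foo.bar(Foo.java:12)"
nat, tech = split_passage(mixed)
print(nat)   # ['The build fails with:']
print(tech)  # ['    at com.example.Foo.bar(Foo.java:12)']
```

In the paper's setting, heuristic signals like these are combined with agglomerative hierarchical clustering rather than applied as hard rules.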
Updating a shared data structure in a parallel program is usually done with some sort of high-level synchronization operation to ensure correctness and consistency. Such high-level synchronization operations are realized with appropriate low-level atomic synchronization instructions that the target processor architecture provides. These instructions are costly and often limited in their scalability on larger multi-core / multi-processor systems. In this paper, a technique is discussed that replaces atomic updates of a shared data structure with ordinary and cheaper read/write operations. The necessary conditions are specified that must be fulfilled to ensure overall correctness of the program despite the missing synchronization. The advantage of this technique is the reduction of access costs as well as better scalability due to elided atomic operations. On the other hand, possibly more work has to be done because of the missing synchronization. Therefore, additional work is traded against costly atomic operations. A practical application is shown with level-synchronous parallel Breadth-First Search on an undirected graph where two vertex frontiers are accessed in parallel. This application scenario is also used for an evaluation of the technique. Tests were done on four different large parallel systems with up to 64-way parallelism. It is shown that, for the graph application examined, the amount of additional work caused by the missing synchronization is negligible and the performance is almost always better than the approach with atomic operations.
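The trade-off described above can be sketched for one BFS level: the per-vertex "visited" test uses a plain read/write instead of an atomic test-and-set, so under true parallelism two threads may both pass the check and a vertex can enter the next frontier twice. This race is benign because both writers store the same distance — duplicate work replaces synchronization cost. The function below is an illustrative serial sketch, not the paper's implementation:

```python
def expand_frontier_relaxed(adj, frontier, dist, level):
    """Expand one BFS level with unsynchronized visited checks.
    adj   : adjacency lists (list of lists)
    dist  : distance array, -1 means unvisited (shared in the parallel case)
    Returns the next frontier, which may contain duplicates when run in parallel."""
    next_frontier = []
    for u in frontier:                   # parallel loop in the real implementation
        for v in adj[u]:
            if dist[v] == -1:            # plain read: two threads may both see -1
                dist[v] = level + 1      # benign race: racing writers store the same value
                next_frontier.append(v)  # duplicates cost extra work, not correctness
    return next_frontier

# Diamond graph: vertex 3 is reachable from both 1 and 2 at the same level.
adj = [[1, 2], [3], [3], []]
dist = [0, -1, -1, -1]
f1 = expand_frontier_relaxed(adj, [0], dist, 0)   # -> [1, 2]
f2 = expand_frontier_relaxed(adj, f1, dist, 1)    # -> [3]
print(dist)  # [0, 1, 1, 2]
```

The correctness condition is that all racing writes at a level store the same value, so the final distances are identical to those of the fully synchronized version.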
This review is divided into two interconnected parts, namely a biological and a chemical one. The focus of the first part is on the biological background for constructing tissue-engineered vascular grafts to promote vascular healing. Various cell types, such as embryonic, mesenchymal and induced pluripotent stem cells, progenitor cells and endothelial- and smooth muscle cells will be discussed with respect to their specific markers. The in vitro and in vivo models and their potential to treat vascular diseases are also introduced. The chemical part focuses on strategies using either artificial or natural polymers for scaffold fabrication, including decellularized cardiovascular tissue. An overview will be given on scaffold fabrication including conventional methods and nanotechnologies. Special attention is given to 3D network formation via different chemical and physical cross-linking methods. In particular, electron beam treatment is introduced as a method to combine 3D network formation and surface modification. The review includes recently published scientific data and patents which have been registered within the last decade.
During space missions astronauts suffer from cardiovascular deconditioning when they are exposed to microgravity conditions. To date, no specific drugs are available for effective countermeasures, since the underlying mechanism is not completely understood. Endothelial cells (ECs) and smooth muscle cells (SMCs) play crucial roles in a variety of cardiovascular functions, many of which are regulated via P2 receptors. However, their function in ECs and SMCs under microgravity conditions is still unknown. In this study, ECs and SMCs were isolated from bovine aorta and differentiated from human mesenchymal stem cells (hMSCs), respectively. Subsequently, the cells were verified based on specific markers. An altered P2 receptor expression pattern was detected during the commitment of hMSCs towards ECs and SMCs. The administration of natural and artificial P2 receptor agonists and antagonists directly affected the differentiation process. By using EC growth medium as conditioned medium, a vessel cell model was created to culture SMCs, and vice versa. Within this study, we were able to show for the first time that the expression of some P2 receptors was altered in ECs and SMCs grown for 24 h under simulated microgravity conditions. For some P2 receptors, such as P2X7, the conditioned medium compensated for this change.
In conclusion, our data show that P2 receptors play an important functional role in hMSC differentiation towards ECs and SMCs. Since some P2 receptor artificial ligands are already used as drugs for patients with cardiovascular diseases, it is reasonable to assume that in the future they might be promising candidates for treating cardiovascular deconditioning.
Improving the study entry phase supports students in a decisive period of their university education. Implementing improvements is a change process and can only be successful if the relevant stakeholders are addressed and convinced. In the described Teaching Quality Pact project, evaluation data is used as a means to discuss the situation of the study programs within the university. As these discussions were based on empirical data rather than on opinions, it was possible to achieve an open discussion about the measures to be implemented. This open discussion is maintained throughout the project as the results of the measures taken are analyzed.
This article describes an approach to rapidly prototype the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them and proposes the best alternative to the user. The user can, without any prior knowledge of the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
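The core of such a tuning loop — benchmark every candidate parameter set and propose the fastest — can be sketched as below. The selection logic is generic; the candidate flags shown are common J9 options (`-Xmx`, `-Xgcpolicy:…`), but the actual rule base and pattern matching of the toolkit are not reproduced here, and the benchmark function is supplied by the caller:

```python
def pick_best(candidates, benchmark):
    """Benchmark every proposed VM parameter set and return the fastest.
    candidates : list of argument lists, e.g. [["-Xmx1g", "-Xgcpolicy:gencon"], ...]
    benchmark  : callable mapping an argument list to a runtime measurement
                 (in the real toolkit: launch the application, time it)."""
    timed = [(benchmark(args), args) for args in candidates]
    timed.sort(key=lambda pair: pair[0])   # smallest measured time first
    return timed[0][1]

# Candidate sets a rule could propose after matching a pattern in VM output
# (flags are real J9 options; the pairing with patterns is illustrative).
candidates = [
    ["-Xmx512m"],
    ["-Xmx1g", "-Xgcpolicy:gencon"],
    ["-Xmx1g", "-Xgcpolicy:optthruput"],
]
```

In practice `benchmark` would launch the JVM with the given flags and measure wall-clock time over several runs; separating it out keeps the selection loop testable.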
In general, mathematics plays a central role in our lives because today mathematics regulates our everyday life with techniques, technologies and procedures, for example coding techniques for credit cards or the drafting of curves and surfaces for construction procedures [5]. Obviously, mathematics continues to be an important element of engineering education, and it still represents a major obstacle for students. Gaps in the knowledge of several topics, changing learning behavior and inadequate conditions at universities for the revision of school mathematics have been cited as causes of the constantly increasing gap between the initial level of mathematics at university and the prior knowledge of first-semester students [2].
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem domain decomposition, memory management and rendering have to be used. The volume renderer Volt is used to show how CUDA is efficiently utilised to manage the volume data and a GPU's memory, with the aim of producing low-opacity renderings of large volumes at interactive frame rates.
We consider the Hopfield model with n neurons and an increasing number p=p(n) of randomly chosen patterns and use Stein's method to obtain rates of convergence for the central limit theorem of overlap parameters, which holds for every fixed choice of the overlap parameter for almost all realisations of the random patterns.
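For reference, the overlap parameters mentioned above are commonly defined as follows (standard Hopfield-model notation, stated here as background rather than quoted from the paper): for a spin configuration $\sigma \in \{-1,+1\}^n$ and random patterns $\xi^{1},\dots,\xi^{p(n)} \in \{-1,+1\}^n$,

```latex
m_n^{\mu}(\sigma) \;=\; \frac{1}{n}\sum_{i=1}^{n} \xi_i^{\mu}\,\sigma_i,
\qquad \mu = 1,\dots,p(n).
```

The central limit theorem then concerns the fluctuations of a fixed overlap $m_n^{\mu}$ around its mean on the scale $\sqrt{n}$, and Stein's method (here via exchangeable pairs) yields explicit rates of convergence to the Gaussian limit.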
Low power dissipation is a current topic in digital design, and therefore, it should be covered in a state-of-the-art electrical engineering curriculum. This paper describes how low-power design can be addressed within a digital design course. Doing so would be beneficial for both topics because low-power design is not detached from the systems perspective, and the digital design course would be enriched by references to current challenges and applications. Thus, the presented course should serve as an example of how a course can be developed to also teach students about sustainable engineering.
Classical ballet requires dancers to exercise significant muscle control and strength, both while stationary and when moving. Following the Royal Academy of Dance (RAD) syllabus, 8 male and 27 female dancers (aged 20.2 ± 1.9 yr) in a full-time university undergraduate dance training program were asked to stand in first position for 10 seconds and then perform 10 repeats of a demi-plié exercise to a counted rhythm. Accelerometer records from the wrist, sacrum, knee and ankle were compared with the numerical scores from a professional dance instructor. The sacrum-mounted sensor detected lateral tilts of the torso in dancers with lower scores (Spearman’s rank correlation coefficient r = -0.64, p < 0.005). The RMS acceleration amplitude of the wrist-mounted sensor was linearly correlated with the movement scores (Spearman’s rank correlation coefficient r = 0.63, p < 0.005). The application of sacrum- and wrist-mounted sensors for biofeedback during dance training is a realistic, low-cost option.
The analytical pyrolysis technique hyphenated to gas chromatography–mass spectrometry (GC–MS) has extended the range of possible tools for the characterization of synthetic polymers and copolymers. Pyrolysis involves thermal fragmentation of the analytical sample at temperatures of 500–1400 °C. In the presence of an inert gas, reproducible decomposition products characteristic for the original polymer or copolymer sample are formed. The pyrolysis products are chromatographically separated using a fused-silica capillary column and are subsequently identified by interpretation of the obtained mass spectra or by using mass spectra libraries. The analytical technique eliminates the need for pretreatment by performing analyses directly on the solid or liquid polymer sample. In this article, application examples of analytical pyrolysis hyphenated to GC–MS for the identification of different polymeric materials in the plastic and automotive industry, dentistry, and occupational safety are demonstrated. For the first time, results of identification of commercial light-curing dental filling material and a car wrapping foil by pyrolysis–GC–MS are presented.
It has become increasingly clear that caspases, far from being merely cell death effectors, have a much wider range of functions within the cell. These functions are as diverse as signal transduction and cytoskeletal remodeling, and caspases are now known to have an essential role in cell proliferation, migration, and differentiation. There is also evidence that apoptotic cells themselves can direct the behavior of nearby cells through the caspase-dependent secretion of paracrine signaling factors. In some processes, including the differentiation of skeletal muscle myoblasts, both caspase activation in differentiating cells as well as signaling from apoptotic cells has been reported. Here, we review the non-apoptotic outcomes of caspase activity in a range of different model systems and attempt to integrate this knowledge.
Robots that are able to carry out their tasks robustly in real-world environments are not only desirable but necessary if we want them to be welcomed by a wider audience. But very often they fail to execute their actions successfully because of insufficient information about the behaviour of the objects used in those actions.