Refine
Departments, institutes and facilities
- Fachbereich Informatik (68)
- Fachbereich Wirtschaftswissenschaften (62)
- Fachbereich Angewandte Naturwissenschaften (33)
- Fachbereich Ingenieurwissenschaften und Kommunikation (33)
- Institute of Visual Computing (IVC) (25)
- Institut für funktionale Gen-Analytik (IFGA) (22)
- Institut für Verbraucherinformatik (IVI) (20)
- Institut für Cyber Security & Privacy (ICSP) (18)
- Fachbereich Sozialpolitik und Soziale Sicherung (16)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (16)
Document Type
- Conference Object (111)
- Article (102)
- Part of a Book (62)
- Book (monograph, edited volume) (19)
- Report (10)
- Contribution to a Periodical (6)
- Conference Proceedings (5)
- Patent (4)
- Working Paper (4)
- Lecture (3)
Year of publication
- 2014 (337)
Keywords
- Nachhaltigkeit (8)
- Lehrbuch (7)
- Corporate Social Responsibility (3)
- FPGA (3)
- Unternehmen (3)
- education (3)
- parallel breadth-first search (3)
- BFS (2)
- Betriebswirtschaftslehre (2)
- Controlling (2)
Matrix metalloproteinases (MMPs) are matrix-degrading enzymes that are over-expressed in the joints of rheumatoid arthritis (RA) patients. However, the contribution of specific MMPs to the development of arthritic joints is unknown. This study examines the role of matrix metalloproteinase-9 (MMP-9) in mice, using the K/BxN serum-transfer model of RA. Arthritis was induced in Balb/c mice by injecting K/BxN serum. The development of arthritis was followed in these mice by measuring ankle thickness and a clinical index score. MMP-9 expression in the joints of mice killed at various time points during disease progression was determined by gelatin zymography using ankle lysates. We found that MMP-9 expression increased with the severity of arthritis. Importantly, MMP-9-deficient mice injected with K/BxN serum showed a milder form of arthritis than control C57BL/6 mice injected with K/BxN serum. We therefore conclude that MMP-9 promotes arthritis in mice.
This work describes extensions to the well-known Distributed Coordination Function (DCF) model to account for IEEE 802.11n point-to-point links. The developed extensions cover adaptations of the throughput and delay estimation for this type of link as well as peculiarities of hardware and implementations within the Linux kernel. Instead of using simulations, the approach was extensively verified on real-world deployments at various link distances. Additionally, trials were conducted to optimize the CWmin values and the number of retries to maximize throughput and minimize delay. The results of this work can be used to estimate the properties of long-distance 802.11 links beforehand, allowing the network to be planned more accurately.
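The starting point for such estimates is the classical saturation-throughput analysis of DCF. The sketch below uses purely illustrative timing constants and window parameters, not the values or the 802.11n extensions of this work, and shows the basic fixed-point computation for a two-station point-to-point link.

```python
# Minimal sketch of the classical DCF saturation model, the starting point the
# described extensions build on. All constants are illustrative placeholders.

def tau_p(cw_min=16, m=6, n=2, iters=200):
    """Fixed-point iteration for transmission probability tau and
    conditional collision probability p with n contending stations."""
    w = cw_min
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w + 1) + p * w * (1.0 - (2.0 * p) ** m)
        )
    return tau, p

def saturation_throughput(payload_bits=12000, slot=9e-6, t_s=300e-6,
                          t_c=300e-6, n=2):
    """Expected payload bits per second; t_s/t_c are assumed durations of a
    successful transmission and of a collision, slot is the idle slot time."""
    tau, _ = tau_p(n=n)
    p_tr = 1.0 - (1.0 - tau) ** n                   # some station transmits
    p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr   # ... and succeeds
    denom = (1.0 - p_tr) * slot + p_tr * p_s * t_s + p_tr * (1.0 - p_s) * t_c
    return p_s * p_tr * payload_bits / denom

if __name__ == "__main__":
    print(f"estimated saturation throughput: {saturation_throughput() / 1e6:.2f} Mbit/s")
```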
Corporate Design Leitfaden
(2014)
Organizing transport sustainably should aim at cost-effective, environmentally friendly and more user-friendly local transport concepts that invite as many citizens as possible to use public transport (ÖPNV). Against this background, the goal of this contribution is to identify instrumental starting points for sustainability controlling in public transport companies. To this end, the following are outlined: taking sustainability into account in investment decisions, carbon accounting (transparency about CO2 emissions), integrating the ecological, economic and social dimensions of sustainability into reporting, and embedding these instruments in a management system. The ecological, economic and social dimensions of sustainability can be anchored well in the organization with the help of multidimensional, integrated management systems and embedded systematically in internal structures and processes. Integrated management systems can thus be an important prerequisite for efficient sustainability controlling.
Open educational resources (OERs) provide opportunities as enablers of societal development, but they also create new challenges. From the perspective of content providers and educational institutions in particular, cultural and context-related challenges emerge. Even though barriers to large-scale adoption of OERs are widely discussed, empirical evidence for determining challenges in relation to particular contexts is still rare. Such context-specific barriers can jeopardize the acceptance of OERs in general and of social OER environments in particular. We conducted a large-scale (N = 855) cross-European investigation in the school context to determine how teachers and learners perceive cultural distance as a barrier to the use of social OER environments. The findings indicate that nationality and age of the respondents are strong predictors of the cultural distance barrier. The study concludes with the identification of context-sensitive interventions for overcoming the related barriers. These findings are vital for OER initiatives and educational institutions in aligning their efforts on OER.
Case Management
(2014)
‘An apple a day keeps the doctor away.’ While it may be true that a balanced diet is a prerequisite for good health, how good is what we eat and drink every day? And is it actually possible to fulfil every customer desire with the vast array of foodstuffs on offer? BSE, dioxin in eggs, EHEC sprouts: in the light of repeated food safety crises, the issue of quality assurance as well as customer-oriented quality management has become of prime importance for the agri-food industry.
Starting from the requirements of EU case law, the authors examine alternative regulatory models for the gambling and gaming market in Germany in a study prepared by the Forschungsinstitut für Glücksspiel und Wetten (Hochschule Bonn-Rhein-Sieg, St. Augustin). According to their analysis, the current regulation is unable to curb the dynamic growth of the unregulated gambling and gaming market. It causes players to migrate, above all towards the unregulated markets, leads to social losses, since players in the unregulated gambling and gaming markets cannot be effectively reached by player and youth protection schemes, and distorts competition between different forms of gambling and gaming. Bringing the current grey market into the regulatory framework is therefore urgently required. This would create the planning certainty that is economically necessary, generate additional fiscal revenue and enable considerably more efficient player and consumer protection.
This collection of formulas presents the statistical formulas required in business and economics. It is interdisciplinary in its orientation and supports all areas of economics. Understanding the formulas and applying them in practice is supported by useful aids and examples. The book is an indispensable tool both for students and for decision-makers in business, management, administration, politics and teaching.
This collection of formulas presents the mathematical formulas required in business and economics. It is interdisciplinary in its orientation and supports all areas of economics. Understanding the formulas and applying them in practice is supported by useful aids and examples. The book is an indispensable tool both for students and for decision-makers in business, management, administration, politics and teaching.
Facility Management
(2014)
Application systems are often advertised with features, and features are used heavily for requirements management. However, software manufacturers often have only incomplete information about the features of their software. The information is distributed over different sources, such as requirements documents, issue trackers, user manuals, and code. In this paper, we research the occurrence of feature information in open source software engineering data. We report on a case study with three open source systems. We analyze what information about features can be found in issue trackers and user documentation. Furthermore, we study the abstraction levels on which the features are described and how feature information is related, and we discuss the possibility of discovering such information semi-automatically. To mirror the diversity of software development contexts, we chose open source systems that are quite different, e.g., in the rigor of issue tracker usage. The results differ accordingly. One main result is that, compared against a provided feature list, the user documentation did not provide more accurate information than the issue tracker. The results also give hints on how the management of feature-relevant information can be supported.
It is a euphemism to say that humans use tools. Humans possess a vast repertoire of tools they use every day. In fact, like language or bipedal locomotion, tool use is a hallmark of humans. Tool use has also often been viewed as an important step during evolution (van Schaik et al., 1999) or even as a marker of the evolution of human intelligence (Wynn, 1985). So a fundamental issue is: what are the cognitive and neural bases of human tool use? The present series of papers in this special topic represents the newest additions to that research topic.
Chemistry can be quite complicated at university, especially once you go a little deeper. This textbook proves, however, that even complicated matters can be explained in an accessible and at times amusing way. Stefanie Ortanderl and Ulf Ritgen explain the fundamentals of chemistry, covering what you should know about atomic models, bond types and the periodic table.
When developing new ICT systems and applications for domestic environments, rich qualitative approaches improve the understanding of users' integral usage of technology in their daily routines and thereby inform design. This knowledge will often be reached through in-home studies, strong relationships with the users and their involvement in the design and evaluation process. However, whilst this kind of research offers valuable context insights and brings out unexpected findings, it also presents methodological, technical and organizational challenges for the study design and its underlying cooperation processes. In particular, because households contain heterogeneous users in terms of technology affinity, individual needs, age distribution, gender, social constellations, personal role assignment, project expectations, etc., collaborating with users in the design process makes particular demands and exposes a range of practical challenges. This full-day workshop aims to identify these practical challenges, discuss best practice and develop a roadmap for sustainable relationships for design with users.
Rehabilitation wirkt
(2014)
Medical rehabilitation is an important pillar of the German health-care system. It is repeatedly regarded worldwide as exemplary and, by international comparison, is excellently equipped in terms of funding, infrastructure, know-how and quality of treatment. This is good, but is it good enough?
Against the background of scarce resources, growing demand for rehabilitation and the political debate on adjusting rehabilitation budgets to demographic change, demonstrating the outcome quality of medical rehabilitation services continues to gain central importance (e.g. Haaf, 2005; Steiner et al., 2009). Continuous, cross-clinic review of treatment outcomes is moreover a key building block of a functioning quality management system (Schmidt et al., in press). It enables "learning from the best" and leads to organizational learning processes (Toepler et al., 2010).
This article describes an approach to rapidly prototype the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them and proposes the best alternative to the user. The user can, without any prior knowledge about the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
This thesis presents an approach to automatically adjust the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing the logfile the VM generates and searching for specific behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to the given application. Adapting the application is done by adding parameters and changing existing ones, for example to achieve better heap usage. The process is fully automated and carried out by a toolkit developed for this thesis. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them and proposes the best alternative to the user. The user can, without any prior knowledge about the Java application or the VM, improve the performance of the deployed application.
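A minimal sketch of the described benchmark-and-propose cycle might look as follows; the JVM flags, candidate values and the way the application is launched are assumptions for illustration, not the toolkit's actual rules.

```python
# Hypothetical sketch of the "cycle through parameter sets, benchmark, propose
# the best" loop described above. Flags, benchmark command and scoring are
# placeholders, not the toolkit's pattern-matching rules.
import itertools
import subprocess
import time

CANDIDATE_FLAGS = {
    "heap": ["512m", "1g", "2g"],
    "gc": ["gencon", "optthruput"],   # J9-style policy names, assumed
}

def run_benchmark(flags, jar="app.jar", runs=3):
    """Run the application with the given flags and return the mean wall time."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["java", *flags, "-jar", jar], check=True)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def propose_best():
    best_flags, best_time = None, float("inf")
    for heap, gc in itertools.product(CANDIDATE_FLAGS["heap"],
                                      CANDIDATE_FLAGS["gc"]):
        flags = [f"-Xmx{heap}", f"-Xgcpolicy:{gc}"]
        elapsed = run_benchmark(flags)
        if elapsed < best_time:
            best_flags, best_time = flags, elapsed
    return best_flags, best_time
```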
Dysregulation of IL12 Signaling As a Novel Cause of an Autoimmune Lymphoproliferative like Syndrome
(2014)
As soon as data is noisy, knowledge as it is represented in an information system becomes unreliable. Features in databases induce equivalence relations, but knowledge discovery goes the other way round: given a relation, what could be a suitable functional description? The relations we work on, however, are noisy again. If we expect to record data for learning a classification of objects, it may well be that the real data does not form a reflexive, symmetric and transitive relation, although we know it should. The usual approach taken here is to build the closure in order to ensure the desired properties. This, however, leads to overgeneralisation rather quickly.
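The overgeneralisation effect is easy to reproduce: once the symmetric, reflexive and transitive closure is taken, a single noisy pair merges two classes that should stay separate. The following sketch uses hypothetical example data to illustrate this.

```python
# Small sketch of the closure problem described above: the observed relation
# is meant to be an equivalence relation, noise breaks transitivity, and
# repairing it by taking the closure merges classes that should stay apart.

def symmetric_reflexive(pairs, elements):
    return set(pairs) | {(b, a) for a, b in pairs} | {(x, x) for x in elements}

def transitive_closure(rel):
    rel = set(rel)
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            for c, d in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

elements = {"x1", "x2", "x3", "x4"}
# x1~x2 and x3~x4 are genuine; (x2, x3) is a single noisy observation.
observed = {("x1", "x2"), ("x3", "x4"), ("x2", "x3")}
closure = transitive_closure(symmetric_reflexive(observed, elements))
# The closure now also relates x1 and x4 -- the overgeneralisation mentioned
# above: one noisy pair collapses two classes into one.
print(("x1", "x4") in closure)  # True
```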
On nothing
(2014)
Software repository data, for example in issue tracking systems, include natural language text and technical information, which covers anything from log files via code snippets to stack traces. However, data mining is often only interested in one of the two types, e.g., in natural language text when doing text mining. Regardless of which type is being investigated, any techniques used have to deal with noise caused by fragments of the other type, i.e., methods interested in natural language have to deal with technical fragments and vice versa. This paper proposes an approach to classify unstructured data, e.g., development documents, into natural language text and technical information using a mixture of text heuristics and agglomerative hierarchical clustering. The approach was evaluated using 225 manually annotated text passages from developer emails and issue tracker data. Using white space tokenization as a basis, the overall precision of the approach is 0.84 and the recall is 0.85.
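As an illustration of the general idea, not the paper's implementation, the following sketch combines a few simple text heuristics with agglomerative clustering from scikit-learn; the chosen features and example passages are assumptions.

```python
# Illustrative sketch of separating natural language from technical fragments
# with heuristic features plus agglomerative hierarchical clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def heuristic_features(passage: str) -> list[float]:
    tokens = passage.split()        # white-space tokenization, as in the paper
    if not tokens:
        return [0.0, 0.0, 0.0]
    punct = sum(ch in "{}();=<>/\\[]" for ch in passage) / len(passage)
    digits = sum(ch.isdigit() for ch in passage) / len(passage)
    avg_len = sum(len(t) for t in tokens) / len(tokens)
    return [punct, digits, avg_len]

passages = [
    "The build fails after upgrading the compiler on Windows.",
    "java.lang.NullPointerException at org.example.Foo.bar(Foo.java:42)",
    "Could you describe the steps needed to reproduce this behaviour?",
    "for (int i = 0; i < n; i++) { sum += values[i]; }",
]

X = np.array([heuristic_features(p) for p in passages])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
for passage, label in zip(passages, labels):
    print(label, passage[:50])
```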
The so-called "Deutschlandstipendium" was launched in 2010. According to the statutory requirements, the scholarships are to be awarded on the basis of aptitude and achievement. In addition, social engagement or special social, family or personal circumstances are also to be taken into account. To finance the scholarships, universities first depend on raising private sponsorship, which is then topped up by the federal government and the state by the same amount. For the scholarships they co-finance, the private sponsors may specify the degree programmes from which their scholarship holders are to be selected. The universities must, however, ensure that one third of all scholarships to be awarded are granted without such earmarking. Sponsors may not have any direct influence on the selection of individual candidates. Against this background, universities are expected to create incentives for private sponsors and, in parallel, to design application and selection procedures that comply with the statutory requirements mentioned above. This creates a considerable administrative burden for the universities. To reduce it, this article describes a transparent, traceable, time- and cost-saving process implemented as a programmed workflow.
For many years, the transition from school to university has been one of the central topics of didactic theory, empirical research and educational policy debate. One major problem identified for many students is that with the Abitur "a phase of life with mostly clearly defined goals within manageable spatial, family and school structures comes to an end".1) If students decide against a non-academic career and take up university studies, they encounter study structures and conditions that can appear alien and chaotic. The path to university opens up a range of options for the individual, but is unfortunately also always fraught with risks and uncertainties. Decisions now have to be prepared and taken independently, and this in an environment that can be very different from the familiar structures of school.
Breadth-first search is a graph traversal technique used as a building block in many applications, e.g., to systematically explore a search space or to determine single-source shortest paths in unweighted graphs. For modern multicore processors and as application graphs get larger, well-performing parallel algorithms are favorable. In this paper, we systematically evaluate an important class of parallel algorithms for this problem and discuss programming optimization techniques for their implementation on parallel systems with shared memory. We concentrate our discussion on level-synchronous algorithms for larger multicore and multiprocessor systems. In our results, we show that for small core counts many of these algorithms show rather similar performance behavior. But for large core counts and large graphs, there are considerable differences in performance and scalability, influenced by several factors including graph topology. This paper gives advice on which algorithm should be used under which circumstances.
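The level-synchronous pattern discussed here processes one frontier at a time and parallelizes the expansion of the current frontier. A compact illustrative sketch, not tuned for the shared-memory optimizations evaluated in the paper, is shown below.

```python
# Sketch of level-synchronous BFS: vertices are processed one frontier (BFS
# level) at a time, and expanding the current frontier is the part that a
# shared-memory implementation distributes across cores. This illustrative
# version splits the frontier into chunks handled by a thread pool; a real
# implementation would use per-thread next-frontier buffers as discussed above.
from concurrent.futures import ThreadPoolExecutor

def bfs_level_synchronous(adj, source, workers=4):
    """adj: dict mapping vertex -> list of neighbours."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            level += 1
            chunks = [frontier[i::workers] for i in range(workers)]

            def expand(chunk):
                out = []
                for u in chunk:
                    for v in adj[u]:
                        if v not in dist:      # benign race: at worst duplicates
                            out.append(v)
                return out

            candidates = [v for part in pool.map(expand, chunks) for v in part]
            frontier = []
            for v in candidates:               # sequential de-duplication step
                if v not in dist:
                    dist[v] = level
                    frontier.append(v)
    return dist
```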
The usage of the Web has experienced vertiginous growth in the last few years. Watching video online has lately been one major driving force for this growth. Until the appearance of the HTML5 agglomerate of (still draft) specifications, access to and consumption of multimedia content on the Web had not been standardized; hence, the use of proprietary Web browser plugins flourished as an intermediate solution. With the introduction of the HTML5 video element, Web browser plugins are replaced with a standardized alternative. Still, HTML5 video is currently limited in many respects, including access to only file-based media. This paper investigates approaches to developing video live streaming solutions based on available Web standards. Besides a pull-based design based on HTTP, a push-based architecture is introduced, making use of the WebSocket protocol, which is also part of the HTML5 family of standards. The evaluation results of both conceptual principles emphasize that push-based approaches have a higher potential of providing resource- and cost-efficient solutions than their pull-based counterparts. In addition, initial approaches to instrument the proposed push-based architecture with adaptiveness to network conditions have been developed.
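To make the push-based principle concrete, the following sketch shows a server that keeps WebSocket connections open and pushes each media segment to all clients as soon as it is produced; the segment source, the framing and the use of the third-party websockets package (a recent version) are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the push-based idea: the server pushes each encoded media
# segment to all connected clients instead of clients polling over HTTP.
import asyncio
import websockets

CLIENTS = set()

async def stream(websocket):
    CLIENTS.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CLIENTS.discard(websocket)

async def produce_segments():
    """Stand-in for a live encoder producing small media segments."""
    seq = 0
    while True:
        segment = f"segment-{seq}".encode()          # placeholder payload
        await asyncio.gather(*(c.send(segment) for c in CLIENTS),
                             return_exceptions=True)
        seq += 1
        await asyncio.sleep(0.04)                    # ~25 segments per second

async def main():
    async with websockets.serve(stream, "localhost", 8765):
        await produce_segments()

asyncio.run(main())
```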
The contribution of the most common reciprocal translocation in childhood B-cell precursor leukemia t(12;21)(p13;q22) to leukemia development is still under debate. Direct as well as secondary indirect effects of the TEL-AML1 fusion protein are commonly recorded by using cell lines and patient samples, often bearing the TEL-AML1 fusion protein for decades. To identify direct targets of the fusion protein a short-term induction of TEL-AML1 is needed. We here describe in detail the experimental procedure, quality controls and contents of the ChIP, mRNA expression and SILAC datasets associated with the study published by Linka and colleagues in the Blood Cancer Journal [1] utilizing a short term induction of TEL-AML1 in an inducible precursor B-cell line model.
We investigated graphene structures grafted with fullerenes. The size of the graphene sheets ranges from 6,400 to 640,000 atoms. The fullerenes (C60 and C240) are placed on top of the graphene sheets; using different impact velocities, we could distinguish three types of impact. Furthermore, we investigated the changes of the vibrational properties. The modified graphene planes show additional features in the vibronic density of states.
Propionic acidemia in a previously healthy adolescent with acute onset of dilated cardiomyopathy
(2014)
Unexpected Situations in Service Robot Environment: Classification and Reasoning Using Naive Physics
(2014)
Gas chromatography with flame-ionization detection (FID) and gas chromatography-mass spectrometry (GC/MS) with electron impact ionization (EI) and chemical ionization (PCI and NCI) were successfully used for separation and identification of commercially available long-chain primary alkyl amines. The investigated compounds were used as corrosion-inhibiting and antifouling agents in a water-steam circuit of energy systems in the power industry. Solid-phase extraction (SPE) with octadecyl-bonded silica (C18) sorbents followed by gas chromatography was used for quantification of the investigated Primene JM-T™ alkyl amines in boiler water, condensate and superheated steam samples from the power plant. Amine formulations from the Kotamina group favor the formation of protective layers on internal surfaces and keep them free from corrosion and scale. Alkyl amines contained in those formulations both render the environment alkaline and limit the corrosive impact of ionic and gaseous impurities by forming protective layers. Moreover, alkyl amines limit scaling on the heating surfaces of boilers and in turbines, ensuring failure-free operation. Application of alkyl amine formulations enhances heat exchange during boiling and condensation processes. Alkyl amines with a branched structure are more thermally stable than linear alkyl amines and exhibit better adsorption and effectiveness of surface shielding. As a result, application of thermostable long-chain branched alkyl amines increases the efficiency of anti-corrosive protection. Moreover, the ammonia content in water and in steam was also considerably decreased.
The analytical pyrolysis technique hyphenated to gas chromatography–mass spectrometry (GC–MS) has extended the range of possible tools for the characterization of synthetic polymers and copolymers. Pyrolysis involves thermal fragmentation of the analytical sample at temperatures of 500–1400 °C. In the presence of an inert gas, reproducible decomposition products characteristic for the original polymer or copolymer sample are formed. The pyrolysis products are chromatographically separated using a fused-silica capillary column and are subsequently identified by interpretation of the obtained mass spectra or by using mass spectra libraries. The analytical technique eliminates the need for pretreatment by performing analyses directly on the solid or liquid polymer sample. In this article, application examples of analytical pyrolysis hyphenated to GC–MS for the identification of different polymeric materials in the plastic and automotive industry, dentistry, and occupational safety are demonstrated. For the first time, results of identification of commercial light-curing dental filling material and a car wrapping foil by pyrolysis–GC–MS are presented.
The analytical pyrolysis technique hyphenated to gas chromatography/mass spectrometry (Py-GC/MS) has extended the range of possible tools for the characterization of synthetic polymers and copolymers. Pyrolysis involves thermal fragmentation of the analytical sample at elevated temperatures between 500 and 1400 °C. In the presence of an inert gas, reproducible decomposition products characteristic of the original polymer or copolymer sample are formed. The pyrolysis products are chromatographically separated using a fused-silica capillary column and subsequently identified by interpretation of the obtained mass spectra or by using mass spectra libraries. The analytical technique eliminates the need for pretreatment by performing analyses directly on the solid or liquid polymer sample.
In this paper, application examples of analytical pyrolysis hyphenated to gas chromatography/mass spectrometry for the identification of different polymeric materials in the plastic and automotive industry, dentistry and occupational safety are demonstrated. For the first time, results of the identification of a commercial light-curing dental filling material and a car wrapping foil by pyrolysis-GC/MS are presented.
Robots that are able to carry out their tasks robustly in real-world environments are not only desirable but necessary if we want them to be welcomed by a wider audience. Very often, however, they may fail to execute their actions successfully because of insufficient information about the behaviour of the objects used in those actions.
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context we focus in particular on some specific faults, which arise from the interaction of a robot with its real world environment. Even a well-modelled robot may fail to perform its tasks successfully due to unexpected situations, which occur while interacting. These situations occur as deviations of properties of the objects (manipulated by the robot) from their expected values. Hence, they are experienced by the robot as external faults.
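A minimal sketch of this view of external faults, with hypothetical object properties and thresholds, could compare the expected and the perceived state of a manipulated object after an action:

```python
# Hypothetical sketch: deviations of object properties from their expected
# values are reported as external faults, as described above. Object names,
# properties and tolerances are illustrative assumptions.

EXPECTED = {"cup": {"position": (0.50, 0.20), "upright": True}}

def detect_external_faults(obj: str, observed: dict, tolerance: float = 0.05):
    faults = []
    expected = EXPECTED[obj]
    dx = abs(observed["position"][0] - expected["position"][0])
    dy = abs(observed["position"][1] - expected["position"][1])
    if max(dx, dy) > tolerance:
        faults.append("object displaced beyond tolerance")
    if observed["upright"] != expected["upright"]:
        faults.append("object orientation differs from expectation")
    return faults

# e.g. the cup tipped over while being placed:
print(detect_external_faults("cup", {"position": (0.52, 0.21), "upright": False}))
```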
Automated parameterization of intermolecular pair potentials using global optimization techniques
(2014)
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters’ influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
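As a toy illustration of treating parameterization as global minimization, the sketch below runs differential evolution (one of the compared algorithms) against a stand-in objective; in practice the objective would launch a molecular simulation and compare its observables against reference data. All names, targets and bounds are assumptions.

```python
# Toy sketch: force-field parameterization as a noisy global minimization
# problem, solved here with differential evolution. The "simulation" is a
# cheap placeholder for an expensive molecular simulation.
import numpy as np
from scipy.optimize import differential_evolution

REFERENCE = {"density": 1.37, "vap_enthalpy": 24.5}   # made-up target observables

def simulate(params):
    """Placeholder for an expensive, noisy simulation returning observables."""
    epsilon, sigma = params
    density = 0.9 * epsilon + 0.3 * sigma + np.random.normal(0, 0.01)
    vap_enthalpy = 15.0 * epsilon + 2.0 * sigma + np.random.normal(0, 0.1)
    return {"density": density, "vap_enthalpy": vap_enthalpy}

def objective(params):
    obs = simulate(params)
    return sum(((obs[k] - v) / v) ** 2 for k, v in REFERENCE.items())

bounds = [(0.1, 2.0), (0.1, 2.0)]        # epsilon, sigma search ranges (assumed)
result = differential_evolution(objective, bounds, maxiter=50, tol=1e-3, seed=1)
print(result.x, result.fun)
```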
Realism and plausibility of computer controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced that allows findings from real-life personality studies to be transferred to a computational model. Adaptive behavior patterns are enabled by introducing dynamic event-based emotions. The advantages of this model have been validated using a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. The results show that adding a dynamic factor to agents improves perceivable plausibility and realism.
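A generic sketch of the idea, not the authors' model, combines fixed personality traits with an event-driven, decaying emotional state that modulates a driving decision:

```python
# Generic illustration of static personality plus dynamic, event-based emotion.
# Trait names, events and thresholds are illustrative assumptions.

class DrivingAgent:
    def __init__(self, patience: float, aggressiveness: float):
        # static personality traits, fixed for the agent's lifetime
        self.patience = patience
        self.aggressiveness = aggressiveness
        # dynamic emotion, raised by events and decaying back towards neutral
        self.frustration = 0.0

    def on_event(self, event: str):
        if event == "blocked_at_crossroad":
            self.frustration += 1.0 - self.patience
        elif event == "given_way":
            self.frustration = max(0.0, self.frustration - 0.5)

    def decay(self, dt: float):
        self.frustration *= max(0.0, 1.0 - 0.1 * dt)

    def accepts_gap(self, gap_seconds: float) -> bool:
        # frustrated or aggressive drivers accept smaller gaps at the crossroad
        threshold = 4.0 - 2.0 * self.aggressiveness - 1.5 * min(self.frustration, 1.0)
        return gap_seconds >= threshold
```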
CSR-Handbuch : ein Ratgeber
(2014)
From the project "Förderung angehender weiblicher Führungskräfte in kleinen und mittleren Unternehmen als CSR-Maßnahme" (promoting prospective female managers in small and medium-sized enterprises as a CSR measure), a project of Hochschule Bonn-Rhein-Sieg within the programme "CSR-Gesellschaftliche Verantwortung im Mittelstand", funded by the Federal Ministry of Labour and Social Affairs and by the European Social Fund.
With the project Pro-MINT-us, Hochschule Bonn-Rhein-Sieg successfully applied for funding under the "Qualitätspakt Lehre" (Quality Pact for Teaching). Its focus is on better accompanying students in the transition from school to university. Among other things, the project funds made it possible to create two positions intended to support students in the area of academic writing.
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique people is needed to accurately count people in the real world. To accomplish the people counting task, a robust system for people detection, tracking and identification is needed.
PhD Project Management
(2014)
Current computer architectures are multi-threaded and make use of multiple CPU cores. Most garbage collection policies for the Java Virtual Machine include a stop-the-world phase, which means that all threads are suspended. A considerable portion of the execution time of Java programs is spent in these stop-the-world garbage collections. To improve this behavior, thread-local allocation and garbage collection that only affects single threads has been proposed. Unfortunately, only objects that are not accessible by other threads ("do not escape") are eligible for this kind of allocation. It is therefore necessary to reliably predict the escaping of objects. The work presented in this paper analyzes the escaping of objects based on the line of code (program counter – PC) the object was allocated at. The results show that on average 60-80% of the objects do not escape and can therefore be allocated locally.
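The allocation-site view suggests simple bookkeeping: count, per allocation site, how many of its objects were later observed to escape, and predict new allocations as thread-local when that ratio stays low. The sketch below is an illustration with assumed thresholds and data sources, not the paper's analysis pipeline.

```python
# Per-allocation-site escape statistics: record allocations and escapes keyed
# by the allocating line of code ("PC") and predict thread-local eligibility.
from collections import defaultdict

class EscapeProfile:
    def __init__(self, threshold=0.2):
        self.allocated = defaultdict(int)
        self.escaped = defaultdict(int)
        self.threshold = threshold

    def record_allocation(self, site: str):
        self.allocated[site] += 1

    def record_escape(self, site: str):
        self.escaped[site] += 1

    def predict_thread_local(self, site: str) -> bool:
        n = self.allocated[site]
        if n == 0:
            return False                      # no history: be conservative
        return self.escaped[site] / n < self.threshold

profile = EscapeProfile()
for _ in range(100):
    profile.record_allocation("Foo.java:42")
for _ in range(5):
    profile.record_escape("Foo.java:42")
print(profile.predict_thread_local("Foo.java:42"))   # True: ~5% escape rate
```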
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem domain decomposition, memory management and rendering have to be used. The volume renderer Volt is used to show how CUDA is efficiently utilised to manage the volume data and a GPU's memory, with the aim of rendering large, low-opacity volumes at interactive frame rates.
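A generic sketch of the underlying out-of-core strategy, bricking the volume and keeping only recently used bricks within a fixed memory budget, is shown below; it illustrates the general approach, not Volt's actual implementation, and stands in for the host-to-device transfers a CUDA renderer would perform.

```python
# Generic sketch: split a volume that does not fit into GPU memory into bricks
# and keep only the most recently used bricks resident under a fixed budget.
from collections import OrderedDict

import numpy as np

class BrickCache:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()          # brick id -> data "on the GPU"

    def request(self, brick_id, load_fn):
        if brick_id in self.resident:          # cache hit: mark as recently used
            self.resident.move_to_end(brick_id)
            return self.resident[brick_id]
        data = load_fn(brick_id)               # would be a host-to-device copy
        while self.used + data.nbytes > self.budget and self.resident:
            _, evicted = self.resident.popitem(last=False)
            self.used -= evicted.nbytes
        self.resident[brick_id] = data
        self.used += data.nbytes
        return data

def load_brick(brick_id, edge=64):
    """Stand-in for reading a 64^3 brick of the volume from disk."""
    return np.zeros((edge, edge, edge), dtype=np.uint8)

cache = BrickCache(budget_bytes=8 * 64 ** 3)   # room for eight bricks
for bid in [(0, 0, 0), (0, 0, 1), (0, 0, 0)]:
    cache.request(bid, load_brick)
```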