Departments, institutes and facilities
- Fachbereich Informatik (38)
- Institut für funktionale Gen-Analytik (IFGA) (25)
- Fachbereich Angewandte Naturwissenschaften (16)
- Institute of Visual Computing (IVC) (15)
- Institut für Cyber Security & Privacy (ICSP) (10)
- Fachbereich Ingenieurwissenschaften und Kommunikation (9)
- Fachbereich Wirtschaftswissenschaften (7)
- Institut für Verbraucherinformatik (IVI) (5)
- Institut für Sicherheitsforschung (ISF) (2)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (1)
Document Type
- Conference Object (53)
- Article (43)
- Part of a Book (14)
- Report (7)
- Master's Thesis (6)
- Conference Proceedings (3)
- Book (monograph, edited volume) (2)
- Doctoral Thesis (1)
- Lecture (1)
- Part of Periodical (1)
Year of publication
- 2011 (131)
Language
- English (131)
Keywords
- Business Ethnography (2)
- Emergency support system (2)
- Global Software Engineering (2)
- Mobile sensors (2)
- 3D gaming (1)
- 3D nucleus (1)
- AR (1)
- Accretion (1)
- Adaptation (1)
- Adipose tissue (1)
Routing attacks are a serious threat to communication in tactical MANETs. TOGBAD is a centralised approach that uses topology graphs to detect such attacks. In this paper, we present TOGBAD's newly added wormhole detection capability. It is an adaptation of a wormhole detection method developed by Hu et al., which is based on nodes' positions; we adapted it to the specific properties of tactical environments. Furthermore, we present simulation results which show TOGBAD's performance in detecting wormhole attacks.
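The position-based idea borrowed from Hu et al. can be illustrated with a minimal sketch (all names and the 250 m range here are hypothetical, not TOGBAD's actual parameters): a link reported in the topology graph is flagged as a potential wormhole when the two endpoints' claimed positions lie farther apart than the maximum radio range.

```python
import math

MAX_RADIO_RANGE = 250.0  # metres; an assumed maximum transmission range

def link_is_plausible(pos_a, pos_b, max_range=MAX_RADIO_RANGE):
    """A reported link is physically plausible only if the two
    endpoints' positions are within radio range of each other."""
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) <= max_range

def detect_wormholes(links, positions, max_range=MAX_RADIO_RANGE):
    """Return all links in the topology graph whose endpoints are
    implausibly far apart, i.e. wormhole candidates."""
    return [(a, b) for a, b in links
            if not link_is_plausible(positions[a], positions[b], max_range)]
```

For example, given reported positions `{'A': (0, 0), 'B': (100, 0), 'C': (1000, 0)}`, a claimed link A-C would be flagged while A-B would pass.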
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and in FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing structure of the design during the compilation and optimization processes, which can be helpful for researching new FPGA architectures and improving the workflow.
Despite perfect functioning of its internal components, a robot can be unsuccessful in performing its tasks because of unforeseen situations. These situations occur when the behavior of the objects in the robot’s environment deviates from expected values. For robots, such deviations are exhibited in the form of unknown external faults which prohibit them from performing their tasks successfully. In this work we propose to use naive physics knowledge to reason about such faults in the robotics domain. We propose an approach that uses naive physics concepts to find information about the situations which result in a detected unknown fault. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties for reasoning about the detected fault. By interpreting the reasoning results, the robot finds information about the situations which can cause the fault. We apply the proposed approach to scenarios in which a robot performs manipulation tasks of picking and placing objects. Results of this application show that naive physics holds great promise for reasoning about unknown external faults in robotics.
While industrialized countries are becoming service economies, all countries are becoming global. As competition becomes more global, understanding and accommodating the needs of international customers with different cultural backgrounds has become increasingly important. This study highlights cross-cultural perceptions of service problems in the tourist industry.
The usage of link-quality-based routing metrics significantly improves the quality of the chosen paths and thereby the performance of the network. However, attackers may try to exploit link qualities for their purposes. Especially in tactical multi-hop networks, routing may fall prey to an attacker; such routing attacks are a serious threat to communication. TOGBAD is a centralised approach that uses topology graphs to detect routing attacks. In this paper, we enhance TOGBAD with the capability to detect fake link qualities. We use a challenge/response method to estimate the link qualities in the network and, based on this, perform plausibility checks for the link qualities propagated by the nodes. Furthermore, we study the impact of attackers propagating fake link qualities and present simulation results showing TOGBAD's detection rate.
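The plausibility check could look roughly like this (a sketch with hypothetical names and tolerance, not TOGBAD's actual implementation): a node becomes suspicious when the link quality it advertises is noticeably better than the challenge/response estimate allows.

```python
def plausibility_check(advertised_lq, measured_lq, tolerance=0.2):
    """Accept an advertised link quality (in [0, 1], higher is better)
    only if it does not exceed the challenge/response estimate by
    more than a tolerance margin."""
    return advertised_lq <= measured_lq + tolerance

def find_suspicious_nodes(advertised, measured, tolerance=0.2):
    """Return nodes whose propagated link quality fails the check.
    `advertised` and `measured` map node -> link quality."""
    return sorted(node for node, lq in advertised.items()
                  if not plausibility_check(lq, measured.get(node, 0.0), tolerance))
```

With `advertised = {'n1': 0.9, 'n2': 0.5}` and measurements `{'n1': 0.4, 'n2': 0.5}`, node `n1` would be flagged as propagating an inflated link quality.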
Spectral surveys provide the only way to determine the full molecular inventory of an object and hence to build a comprehensive view of the state of the molecular gas and its role in star formation and in the structure and evolution of the ISM. Of course, spectral surveys also provide the most efficient method of identifying new and unexpected species that have to be included in the chemical networks. The most extensive and complete survey of an extragalactic system has been the continuous spectral survey from 129 GHz to 175 GHz carried out by Martín et al. (2006) toward NGC253. This first spectral line survey at 2 mm towards the prototypical starburst galaxy NGC253 has shown an unexpected chemical richness.
Selective screening for inborn errors of metabolism: assessment of metabolites in body fluids
(2011)
The roadmap for quality and innovation through open educational practices has been conceived as a sequence of steps, a conceptual document which can be used by organisations, learners or professionals to improve their open educational practices. After the development of the core concept of the OPAL project, the guidelines for OEP, it became clear that these guidelines would have to play an important part in the roadmap exercise, because they represent the very essence of how to foster and stimulate open educational practices. The roadmap is therefore meant to be an instrument, a tool which helps the different stakeholders to use the guidelines in their own context and for their own purpose.
Streptococcus agalactiae is the leading cause of bacterial sepsis and meningitis in neonates and is also the causative agent of several serious infections in immunocompromised adults. S. agalactiae encounters multiple niches during an infection, suggesting that regulatory mechanisms control the expression of specific virulence factors in this bacterium. The present study describes the functional characterization of a gene from S. agalactiae, designated rga, which encodes a protein with significant similarity to members of the RofA-like protein (RALP) family of transcriptional regulators. After deletion of the rga gene in the genome of S. agalactiae, the mutant strain exhibited significantly reduced expression of the genes srr-1 and pilA, which encode a serine-rich repeat surface glycoprotein and a pilus protein, respectively, and moderately increased expression of the fbsA gene, which encodes a fibrinogen-binding protein. Electrophoretic mobility shift assays demonstrated specific DNA binding of purified Rga to the promoter regions of pilA and fbsA, suggesting that Rga directly controls pilA and fbsA. Adherence assays revealed significantly reduced binding of the Δrga mutant to epithelial HEp-2 cells and to immobilized human keratin 4, respectively. In contrast, the adherence of the Δrga mutant to A549 cells and its binding to human fibrinogen was significantly increased. Immunoblot and immunoelectron microscopy revealed that the quantity of pilus structures was significantly reduced in the Δrga mutant compared with the parental strain. The wild-type phenotype could be restored by plasmid-mediated expression of rga, demonstrating that the mutant phenotypes resulted from a loss of Rga function.
Providing Mobile Phone Access in Rural Areas via Heterogeneous Meshed Wireless Back-Haul Networks
(2011)
Polymerase Chain Reaction
(2011)
In this paper, the performance evaluation of Frequency Modulated Chaotic On-Off Keying (FM-COOK) in AWGN, Rayleigh and Rician fading channels is given. The simulation results show that an improvement in BER can be gained by incorporating FM modulation with COOK for SNR values below 10 dB in the AWGN case and below 6 dB for Rayleigh and Rician fading channels.
Novel Automated Three-Dimensional Genome Scanning Based on the Nuclear Architecture of Telomeres
(2011)
NMR structures of thiostrepton derivatives for characterization of the ribosomal binding site
(2011)
Having multiple talkers on a bus system increases the traffic on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus on the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with different partners, a prosthetic limb device, the UNB hand. Communication in this device is done via two CAN buses which operate at a bit-rate of 1 Mbit/s. The monitoring system has been completely designed in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of different modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC. A sample receiver programmed in C is also given. The system has been evaluated using the Microchip CAN Bus Analyzer Tool, connected to the GPIO port of the development board, to simulate CAN communication.
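The overall-load figure such a monitor computes can be approximated as follows (a Python sketch for illustration only; the actual system is written in Verilog HDL, and the bit counts below exclude dynamic stuff bits, so real frames are somewhat longer):

```python
BIT_RATE = 1_000_000  # 1 Mbit/s, as used on the UNB hand's CAN buses

def frame_bits(payload_bytes, extended_id=False):
    """Approximate length of a CAN data frame in bits: fixed fields
    (SOF, ID, control, CRC, ACK, EOF, interframe space) plus payload.
    Stuff bits are ignored, so this is a lower bound."""
    overhead = 67 if extended_id else 47  # 29-bit vs 11-bit identifier
    return overhead + 8 * payload_bytes

def bus_load(frames_per_second, payload_bytes=8, bit_rate=BIT_RATE):
    """Fraction of bus capacity consumed by a periodic frame stream."""
    return frames_per_second * frame_bits(payload_bytes) / bit_rate
```

For instance, 1000 standard frames per second with 8-byte payloads occupy roughly 11 % of a 1 Mbit/s bus.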
The Anomalous X‐ray Pulsar 4U 0142+61 is the only neutron star where it is believed that one of the long searched‐for ‘fallback’ disks has been detected in the mid‐IR by Wang et al. [1] using Spitzer. Such a disk originates from material falling back to the NS after the supernova. We search for cold circumstellar material in the 90 GHz continuum using the Plateau de Bure Interferometer. No millimeter flux is detected at the position of 4U 0142+61, the upper flux limit is 150 μJy corresponding to the 3σ noise rms level. The re‐processed Spitzer MIPS 24μm data presented previously by Wang et al. [2] show some indication of flux enhancement at the position of the neutron star, albeit below the 3σ statistical significance limit. At far infrared wavelengths the source flux densities are probably below the Herschel confusion limits.
Measuring the Understandability of Business Process Models - Are We Asking the Right Questions?
(2011)
Liquid–liquid equilibria of dipropylene glycol dimethyl ether and water by molecular dynamics
(2011)
The recent explosion of available audio-visual media is a new challenge for information retrieval research. Automatic speech recognition systems translate spoken content into the text domain, and there is a need for searching and indexing this data, which possesses no logical structure. One possible way to structure it on a high level of abstraction is to find topic boundaries. Two unsupervised topic segmentation methods were evaluated with real-world data in the course of this work. The first one, TSF, models topic shifts as fluctuations in the similarity function over the transcript. The second one, LCSeg, treats topic changes as the places with the least overlapping lexical chains. Only LCSeg performed close to results reported for a similar real-world corpus; other reported results could not be outperformed. Topic analysis based on repeated word usage renders topic changes more ambiguous than expected, and this issue has more impact on segmentation quality than the state-of-the-art ASR word error rate. It can be concluded that it is advisable to develop topic segmentation algorithms with real-world data to avoid potential biases towards artificial data. Unlike the evaluated approaches based on word usage analysis, methods operating on local contexts can be expected to perform better through emulation of semantic dependencies.
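A fluctuation-based segmenter in the spirit of TSF can be sketched as follows (a simplified illustration with hypothetical parameters, not the evaluated implementation): compute the similarity between the word histograms to the left and right of each candidate boundary, then place boundaries at low local minima of that curve.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count histograms."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def similarity_curve(sentences, window=2):
    """Similarity across each candidate boundary, comparing the
    `window` sentences before it with the `window` sentences after."""
    curve = []
    for i in range(window, len(sentences) - window + 1):
        left = Counter(w for s in sentences[i - window:i] for w in s)
        right = Counter(w for s in sentences[i:i + window] for w in s)
        curve.append(cosine(left, right))
    return curve

def boundaries(curve, offset, threshold=0.1):
    """Topic boundaries = local minima of the curve below a threshold.
    `offset` maps curve indices back to sentence indices."""
    return [i + offset for i in range(1, len(curve) - 1)
            if curve[i] < threshold
            and curve[i] <= curve[i - 1] and curve[i] <= curve[i + 1]]
```

On a toy transcript whose first three sentences share one vocabulary and last three another, the curve dips to zero exactly at the topic change.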
JNK1, but Not JNK2, Is Required in Two Mechanistically Distinct Models of Inflammatory Arthritis
(2011)
Isolation of DNA and RNA
(2011)
Incremental Bond Graphs
(2011)
Balanites aegyptiaca (Balanitaceae) is a widely grown desert plant with multiuse potential. In the present paper, a crude extract from B. aegyptiaca seeds equivalent to a ratio of 1 : 2000 seeds to the extract was screened for antiplasmodial activity. The determined IC50 value for the chloroquine-susceptible Plasmodium falciparum NF54 strain was 68.26 μg/μL ± 3.5. Analysis of the extract by gas chromatography-mass spectrometry detected 6-phenyl-2(H)-1,2,4-triazin-5-one oxime, an inhibitor of the parasitic M18 Aspartyl Aminopeptidase, as one of the compounds which is responsible for the in vitro antiplasmodial activity. The crude plant extract had a Ki of 2.35 μg/μL and showed a dose-dependent response. After depletion of the compound, a significantly lower inhibition was determined with a Ki of 4.8 μg/μL. Moreover, two phenolic compounds, that is, 2,6-di-tert-butyl-phenol and 2,4-di-tert-butyl-phenol, with determined IC50 values of 50.29 μM ± 3 and 47.82 μM ± 2.5, respectively, were detected. These compounds may contribute to the in vitro antimalarial activity due to their antioxidative properties. In an in vivo experiment, treatment of BALB/c mice with the aqueous Balanite extract did not lead to eradication of the parasites, although a reduced parasitemia at day 12 p.i. was observed.
This master thesis describes a supervised approach to the detection and identification of humans in TV-style video sequences. In still images and video sequences, humans appear in different poses and views, fully visible and partly occluded, with varying distances to the camera, at different places, under different illumination conditions, etc. This diversity in appearance makes human detection and identification a particularly challenging problem. A solution to this problem is interesting for a wide range of applications such as video surveillance and content-based image and video processing. In order to detect humans in views ranging from full to close-up view and in the presence of clutter and occlusion, they are modeled by an assembly of several upper body parts. For each body part, a detector is trained based on a Support Vector Machine and on densely sampled, SIFT-like feature points in a detection window. For a more robust human detection, localized body parts are assembled using a learned model for geometric relations based on Gaussians. For a flexible human identification, the outward appearance of humans is captured and learned using the Bag-of-Features approach and non-linear Support Vector Machines. Probabilistic votes for each body part are combined to improve classification results. The combined votes yield an identification accuracy of about 80% in our experiments on episodes of the TV series "Buffy the Vampire Slayer". The Bag-of-Features approach has been used in previous work mainly for object classification tasks. Our results show that this approach can also be applied to the identification of humans in video sequences. Despite the difficulty of the given problem, the overall results are good and encourage future work in this direction.
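The combination of per-body-part probabilistic votes might be sketched like this (a naive-Bayes-style illustration with hypothetical names; the thesis' exact combination scheme may differ): sum log-probabilities per identity across body parts, then renormalise and pick the best-scoring identity.

```python
import math

def combine_votes(part_votes):
    """Combine per-body-part probability distributions over identities
    by summing log-probabilities, then renormalising the result."""
    scores = {}
    for votes in part_votes:
        for identity, p in votes.items():
            scores[identity] = scores.get(identity, 0.0) + math.log(max(p, 1e-9))
    m = max(scores.values())                       # for numerical stability
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exp.values())
    return {k: v / z for k, v in exp.items()}

def identify(part_votes):
    """Return the identity with the highest combined probability."""
    combined = combine_votes(part_votes)
    return max(combined, key=combined.get)
```

If the head detector votes {buffy: 0.7, willow: 0.3} and the torso detector {buffy: 0.6, willow: 0.4}, the combined decision is "buffy" with higher confidence than either part alone.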
We present the extensible post processing framework GrIP, usable for experimenting with screen space-based graphics algorithms in arbitrary applications. The user can easily implement new ideas as well as add known operators as components to existing ones. Through a well-defined interface, operators are realized as plugins that are loaded at run-time. Operators can be combined by defining a post processing graph (PPG) using a specific XML-format where nodes are the operators and edges define their dependencies. User-modifiable parameters can be manipulated through an automatically generated GUI. In this paper we describe our approach, show some example effects and give performance numbers for some of them.
We present a graph-based framework for post processing filters, called GrIP, providing the possibility of arranging and connecting compatible filters in a directed, acyclic graph for realtime image manipulation. This means that the construction of whole filter graphs is possible through an external interface, avoiding the necessity of a recompilation cycle after changes in post processing. Filter graphs are implemented as XML files containing a collection of filter nodes with their parameters as well as linkage (dependency) information. Implemented methods include (but are not restricted to) depth of field, depth darkening and an implementation of screen space shadows, all applicable in real-time, with manipulable parameterizations.
Governance and Sustainability in Information Systems. Managing the Transfer and Diffusion of IT
(2011)
Superconducting heterodyne receivers have played a vital role in high-resolution spectroscopy for astronomy and atmospheric research up to 2 THz. The NbN hot electron bolometer (HEB) mixer, the most sensitive mixer above 1.5 THz, has been used in the Herschel space telescope for 1.4-1.9 THz and has also shown ultra-high sensitivity up to 5.3 THz. Combining a HEB mixer with a novel THz quantum cascade laser (QCL) as local oscillator (LO) yields an all solid-state heterodyne receiver that can be used for any balloon-, air- and space-borne heterodyne instruments above 2 THz. Here we report the first high-resolution heterodyne spectroscopy measurement using a gas cell and such a HEB-QCL receiver. The receiver employs a 2.9 THz metal-metal waveguide QCL as LO and a NbN HEB as mixer. Using a gas cell filled with methanol (CH3OH) gas in combination with hot/cold blackbody loads as signal source, we successfully recorded the methanol emission line around 2.918 THz. Spectral lines at different pressures and at different frequencies of the QCL are studied.
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities in them. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults which can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on the reasoning of physics novices about everyday physical phenomena. We also formalize some definitions of properties of objects in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. These reviews help in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular.
Since this work was done independent of any particular real robotic system, it can be seen as a theoretical proof of the concept of usefulness of naive physics for external fault reasoning in robotics.
Nowadays, we input text not only on stationary devices, but also on handheld devices while walking, driving, or commuting. Text entry on the move, which we term nomadic text entry, is generally slower. This is partially due to the need for users to move their visual focus from the device to their surroundings for navigational purposes and back. To investigate whether better feedback about users' surroundings on the device can improve performance, we present a number of new and existing feedback systems: textual, visual, textual & visual, and textual & visual via translucent keyboard. Experimental comparisons between the conventional technique and these techniques established that increased ambient awareness for mobile users enhances nomadic text entry performance. Results showed that the textual and the textual & visual via translucent keyboard conditions increased text entry speed by 14% and 11%, respectively, and reduced the error rate by 13% compared to the regular technique. The two methods also significantly reduced the number of collisions with obstacles.
The smart home of the future is typically researched in lab settings or apartments that have been built from scratch. However, comparing the lifecycles of buildings and information technology, it is evident that modernization strategies and technologies are needed to empower residents to modify and extend their homes to make them smarter. In this paper, we describe a case study about the deployment, adaptation to and adoption of tailorable home energy management systems in 7 private households. Based on this experience, we discuss how hardware and software technologies should be designed so that people can build their own smart home with high usability and a good user experience.
The work done in this thesis enhances the MMD algorithm in multi-core environments. The MMD algorithm, a transformation-based algorithm for reversible logic synthesis, is based on the works introduced by Maslov, Miller and Dueck and their original, sequential implementation. It synthesises a formal function specification, provided by a truth table, into a reversible network and is able to perform several optimization steps after the synthesis. This work concentrates on one of these optimization steps, the template matching. This approach is used to reduce the size of the reversible circuit by replacing a number of gates that match a template with an implementation of the same function that uses fewer gates. Smaller circuits have several benefits since they need less area and are not as costly. The template matching approach introduced in the original works is computationally expensive, since it tries to match a library of templates against the given circuit. For each template at each position in the circuit, a number of different combinations have to be calculated during runtime, resulting in high execution times, especially for large circuits. In order to make the template matching approach more efficient and usable, it has been reimplemented to take advantage of modern multi-core architectures such as the Cell Broadband Engine or a Graphics Processing Unit. For this work, two algorithmically different approaches that try to exploit each multi-core architecture’s strengths have been analyzed and improved. For the analysis, these approaches have been cross-implemented on the two target hardware architectures and compared to the original parallel versions. Important metrics for this analysis are the execution time of the algorithm and the result of the minimization with the template matching approach. It could be shown that the algorithmically different approaches produce the same minimization results, independent of the used hardware architecture.
However, both cross-implementations also show a significantly higher execution time which makes them practically irrelevant. The results of the first analysis and comparison lead to the decision to enhance only the original parallel approaches. Using the same metrics for successful enhancements as mentioned above, it could be shown that improving the algorithmic concepts and exploiting the capabilities of the hardware lead to better results for the execution time and the minimization results compared to their original implementations.
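The core of the template matching step, leaving aside the parallelisation discussed above, can be sketched as a rewrite over a gate list (a simplified, sequential illustration; real template matching in the MMD works also matches non-adjacent gates after applying gate-moving rules):

```python
def apply_template(circuit, pattern, replacement):
    """Scan a gate list for contiguous occurrences of `pattern` and
    replace each with the functionally equivalent, shorter
    `replacement`. Gates are represented as tuples, e.g.
    ("CNOT", control, target)."""
    n = len(pattern)
    out, i = [], 0
    while i <= len(circuit) - n:
        if circuit[i:i + n] == pattern:
            out.extend(replacement)  # matched: substitute cheaper gates
            i += n
        else:
            out.append(circuit[i])
            i += 1
    out.extend(circuit[i:])          # tail too short to match
    return out
```

The simplest template exploits that reversible gates are self-inverse: two identical adjacent CNOTs cancel to nothing, shrinking the circuit by two gates.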
This paper addresses the special skills that learners need in Internet-based learning scenarios. In self-directed learning scenarios, as most Internet-based learning scenarios are designed, learners bear the responsibility for their own learning progress. To ease this task, institutions could prepare learners for a situation which may be quite different from their previous learning experiences. Based on a Delphi study conducted with experts from the e-learning sector in Germany, Austria, and Switzerland, the basic requirements have been determined.
DNA Sequencing
(2011)
Small and remote households in northern regions demand thermal energy rather than electricity. A wind turbine in such places can be used to convert wind energy directly into thermal energy using a heat generator based on the principle of the Joule machine. A heat generator driven by a wind turbine can reduce the cost of energy for the heating system. However, the optimal performance of the system depends on the torque-speed characteristics of the wind turbine and the heat generator; to achieve maximum efficiency of operation, both characteristics should be matched. In this article, the condition of optimal performance is derived and an example of the system operating at maximum efficiency is simulated.
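The matching condition can be illustrated numerically (a sketch with assumed rotor parameters, not the article's actual values): if the heat generator presents a quadratic load torque T = k·ω², the standard choice of k keeps the turbine at its optimal tip-speed ratio, and hence at peak power coefficient, for every wind speed.

```python
import math

# Assumed example parameters, not taken from the article:
RHO = 1.225        # air density, kg/m^3
R = 5.0            # rotor radius, m
CP_MAX = 0.42      # peak power coefficient
LAMBDA_OPT = 7.0   # tip-speed ratio at CP_MAX

def optimal_load_coefficient():
    """k such that a load torque T = k * omega**2 intersects the
    turbine torque curve exactly at the optimal tip-speed ratio."""
    return 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def turbine_torque_at_optimum(v):
    """Aerodynamic torque at the optimal operating point for wind speed v."""
    omega = LAMBDA_OPT * v / R                       # optimal rotor speed
    power = 0.5 * RHO * math.pi * R**2 * CP_MAX * v**3
    return power / omega

def load_torque(omega, k=None):
    """Quadratic heat-generator load torque."""
    k = optimal_load_coefficient() if k is None else k
    return k * omega**2
```

At any wind speed, the load torque at the optimal rotor speed equals the turbine torque there, so the system settles at maximum efficiency without active control.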
Based on our reconfigurable FPGA spectrometer technology, we have developed a read-out system, operating in the frequency domain, for arrays of Microwave Kinetic Inductance Detectors (MKIDs). The readout consists of a combination of two digital boards: a programmable DAC/FPGA board (tone generator) to stimulate the MKID detectors and an ADC/FPGA unit to analyze the detectors' response. Laboratory measurements show no deterioration of the noise performance compared to low-noise analog mixing. Thus, this technique allows capturing several hundred detector signals with just one pair of coaxial cables.
Green IT (Green IS, Green ICT) is a concept for reducing energy consumption in order to lower IT costs. A current survey shows that only few companies in German-speaking countries consider this aspect in their daily business. This is important in light of the worldwide cost-saving efforts during the current economic crisis. This paper gives an introduction to Green IT and presents an IT management and controlling concept. The main results of a recently presented survey are then used to refine the concept. Finally, an agenda for future research is given.
Software offshoring has been established as an important business strategy over the last decade. While research on such forms of Global Software Development (GSD) has mainly focused on the situation of large enterprises, small enterprises are increasingly engaging in offshoring, too. Representing the biggest share of the German software industry, small companies are known to be important innovators and market pioneers. They often regard their flexibility and customer-orientation as core competitive advantages. Unlike large corporations, their small size allows them to adopt software development approaches that are characterized by a high agility and flat hierarchies. At the same time, their distinct strategies make it unlikely that they can simply adopt management strategies that were developed for larger companies.
Flexible development approaches like the ones preferred by small corporations have proven to be problematic in the context of offshoring, as their strong dependency on constant communication is heavily affected by the various barriers of international cooperation between companies. Cooperating closely across company borders, in different time zones and in culturally diverse teams, poses complex obstacles for flexible management approaches. It is still a matter of discussion in fields like Software Engineering and Computer Supported Cooperative Work how these obstacles can be tackled and how they affect companies in the long term. Hence, it is agreed that we need a more detailed understanding of distributed software development practices in order to arrive at feasible technological and organizational solutions.
This dissertation presents results from two ethnographically-informed case studies of software offshoring in small German enterprises. By adopting Anselm Strauss’ concept of articulation work, we want to deepen the understanding of managing distributed software development in flexible, customer-oriented organizations. In doing so, we show how practices of coordinating inter-organizational software development are closely related to aspects of organizational learning in small enterprises. By means of interviews with developers and project managers from both parties of the cooperation, we do not only take into account the multiple perspectives of the cooperation, but also include the socio-cultural background of international software development projects into our analysis.