### Refine

#### Department, Institute

- Fachbereich Elektrotechnik, Maschinenbau, Technikjournalismus (31)

#### Document Type

- Conference Object (16)
- Article (13)
- Preprint (1)
- Report (1)

#### Year of publication

- 2013 (31)

#### Is part of the Bibliography

- yes (31)

#### Keywords

- Education (2)
- ionic liquids (2)
- Current measurement (1)
- Engine Map (1)
- Evaluation board (1)
- Fuel Consumption (1)
- GROW (1)
- Hybrid (1)
- ICE (1)
- KMU (1)

This paper presents an efficient design and implementation of a complex wavelet packet modulation (CWPM) multicarrier communication transceiver on an FPGA platform. A fast algorithm already proposed for high data rate WPM systems has been applied to CWPM systems for speed enhancement. The theoretical performance of the computation algorithm is analyzed. The design uses a 16-point Fast Wavelet Packet Transform/Inverse Fast Wavelet Packet Transform (FWPT/IFWPT) of the Haar family as the core processing module. All the proposed fast CWPM (FCWPM) system modules are designed and implemented in VHDL. The software tools used in this work are Altera Quartus II 9.1 and ModelSim Altera 6.5b, and a Cyclone III board is used for the implementation. The hardware simulation results show that using fast Haar wavelet packet transform algorithms to implement complex wavelet packet modulation systems significantly increases their speed compared with direct implementations.
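The Haar analysis and synthesis steps at the heart of such a 16-point FWPT/IFWPT pair can be sketched as follows. This is an illustrative Python model only, with invented function names; the paper's actual implementation is in VHDL on the Cyclone III board:

```python
import math

def haar_step(x):
    """One Haar analysis step: split a band into approximation
    (scaled pairwise sums) and detail (scaled pairwise differences)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) * s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) * s for i in range(len(x) // 2)]
    return approx, detail

def haar_wpt(x):
    """Full Haar wavelet packet decomposition: unlike the ordinary
    wavelet transform, BOTH the approximation and the detail bands
    are split again at every level."""
    bands = [list(x)]
    while len(bands[0]) > 1:
        nxt = []
        for band in bands:
            a, d = haar_step(band)
            nxt.extend([a, d])
        bands = nxt
    return [b[0] for b in bands]  # one coefficient per leaf band

def haar_iwpt(coeffs):
    """Inverse packet transform: merge sibling bands pairwise back up."""
    s = 1.0 / math.sqrt(2.0)
    bands = [[c] for c in coeffs]
    while len(bands) > 1:
        nxt = []
        for i in range(0, len(bands), 2):
            a, d = bands[i], bands[i+1]
            merged = []
            for j in range(len(a)):
                merged.append((a[j] + d[j]) * s)
                merged.append((a[j] - d[j]) * s)
            nxt.append(merged)
        bands = nxt
    return bands[0]
```

The orthonormal scaling by 1/√2 makes the transform energy-preserving, so a 16-point block is perfectly reconstructed by the inverse.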

A high peak-to-average power ratio (PAPR) of the transmitted signal is one of the major drawbacks of complex wavelet packet modulation (CWPM), as in any multicarrier communication system. Exploiting the ability of the discrete wavelet transform to concentrate energy in certain subspaces, several PAPR reduction techniques, such as threshold and clipping methods, have been proposed to address this problem. In this paper, a novel hybrid PAPR reduction method for CWPM, called the Threshold-Clipping (TC) method, is proposed. Simulation results in a Rayleigh multipath fading channel show that the proposed scheme achieves 4.5 dB and 3 dB reductions in PAPR over the traditional threshold and clipping methods, respectively, with less than 0.5 dB degradation in bit error probability.
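As a minimal, hypothetical illustration (not the paper's TC method itself), the PAPR metric and plain amplitude clipping can be expressed in a few lines of Python:

```python
import math

def papr_db(x):
    """PAPR of a complex baseband block: peak power over mean power, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(x, a_max):
    """Hard amplitude clipping: limit the magnitude, keep the phase."""
    return [v if abs(v) <= a_max else a_max * v / abs(v) for v in x]
```

Clipping the rare high-amplitude peaks lowers the PAPR at the cost of in-band distortion; a hybrid scheme such as the proposed TC method trades these effects against each other.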

We report on the setup and initial discoveries of the Northern High Time Resolution Universe survey for pulsars and fast transients, the first major pulsar survey conducted with the 100-m Effelsberg radio telescope and the first in 20 years to observe the whole northern sky at high radio frequencies. Using a newly developed 7-beam receiver system combined with a state-of-the-art polyphase filterbank, we record an effective bandwidth of 240 MHz in 410 channels centred on 1.36 GHz with a time resolution of 54 μs. Such fine time and frequency resolution increases our sensitivity to millisecond pulsars and fast transients, especially deep inside the Galaxy, where previous surveys have been limited due to intrachannel dispersive smearing. To optimize observing time, the survey is split into three integration regimes dependent on Galactic latitude, with 1500-, 180- and 90-s integrations for the latitude ranges |b| < 3.5°, |b| < 15° and |b| > 15°, respectively. The survey has so far resulted in the discovery of 15 radio pulsars, including a pulsar with a characteristic age of ∼18 kyr, PSR J2004+3429, and a highly eccentric, binary millisecond pulsar, PSR J1946+3417. All newly discovered pulsars are timed using the 76-m Lovell radio telescope at the Jodrell Bank Observatory and the Effelsberg radio telescope. We present timing solutions for all newly discovered pulsars and discuss potential supernova remnant associations for PSR J2004+3429.
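The intrachannel dispersive smearing that the survey's fine channelization mitigates follows the standard pulsar-astronomy scaling (a textbook relation, not quoted from this abstract): for a channel of width Δν at centre frequency ν and dispersion measure DM,

```latex
% Dispersive smearing across one frequency channel:
\Delta t_{\mathrm{DM}} \approx 8.3\,\mu\mathrm{s}
  \left(\frac{\Delta\nu}{\mathrm{MHz}}\right)
  \left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)
  \left(\frac{\nu}{\mathrm{GHz}}\right)^{-3}
```

The steep ν⁻³ dependence is why narrow channels (240 MHz split into 410 channels here) matter most for high-DM pulsars deep in the Galaxy.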

Earth’s nearest candidate supermassive black hole lies at the centre of the Milky Way [1]. Its electromagnetic emission is thought to be powered by radiatively inefficient accretion of gas from its environment [2], which is a standard mode of energy supply for most galactic nuclei. X-ray measurements have already resolved a tenuous hot gas component from which the black hole can be fed [3]. The magnetization of the gas, however, which is a crucial parameter determining the structure of the accretion flow, remains unknown. Strong magnetic fields can influence the dynamics of accretion, remove angular momentum from the infalling gas [4], expel matter through relativistic jets [5] and lead to synchrotron emission such as that previously observed [6-8]. Here we report multi-frequency radio measurements of a newly discovered pulsar close to the Galactic Centre [9-12] and show that the pulsar’s unusually large Faraday rotation (the rotation of the plane of polarization of the emission in the presence of an external magnetic field) indicates that there is a dynamically important magnetic field near the black hole. If this field is accreted down to the event horizon it provides enough magnetic flux to explain the observed emission, from radio to X-ray wavelengths, from the black hole.
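The Faraday rotation used here follows the standard textbook relations (not taken from this abstract): the polarization angle rotates with the square of the wavelength, with a rotation measure set by the electron density and line-of-sight magnetic field along the path,

```latex
\Delta\chi = \mathrm{RM}\,\lambda^{2}, \qquad
\mathrm{RM} = 0.81
  \int_{0}^{L}
  \left(\frac{n_{e}}{\mathrm{cm^{-3}}}\right)
  \left(\frac{B_{\parallel}}{\mu\mathrm{G}}\right)
  \frac{\mathrm{d}l}{\mathrm{pc}}
  \;\mathrm{rad\,m^{-2}}
```

so an unusually large RM towards a pulsar near Sgr A* directly constrains the magnetized gas in front of it.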

Radio pulsars in relativistic binary systems are unique tools to study the curved space-time around massive compact objects. The discovery of a pulsar closely orbiting the super-massive black hole at the centre of our Galaxy, Sgr A⋆, would provide a superb test-bed for gravitational physics. To date, the absence of any radio pulsar discoveries within a few arc minutes of Sgr A⋆ has been explained by one principal factor: extreme scattering of radio waves caused by inhomogeneities in the ionized component of the interstellar medium in the central 100 pc around Sgr A⋆. Scattering, which causes temporal broadening of pulses, can only be mitigated by observing at higher frequencies. Here we describe recent searches of the Galactic centre region performed at a frequency of 18.95 GHz with the Effelsberg radio telescope.

We derive rates of convergence for limit theorems that reveal the intricate structure of the phase transitions in a mean-field version of the Blume-Emery-Griffiths model. The theorems consist of scaling limits for the total spin. The model depends on the inverse temperature β and the interaction strength K. The rate-of-convergence results are obtained as (β,K) converges along appropriate sequences (βn,Kn) to points belonging to various subsets of the phase diagram, which include a curve of second-order points and a tricritical point. We apply Stein's method for normal and non-normal approximation, avoiding the use of transforms and supplying bounds of Berry-Esseen quality on the approximation error. We observe an additional phase transition phenomenon: depending on how fast Kn and βn converge to points in various subsets of the phase diagram, different rates of convergence to one and the same limiting distribution occur.

In recent years, different types of millimetre-wave and terahertz scanners have been developed, both radar-based systems and passive radiometers, with body scanners being the main focus of research. Although luggage and parcels are adequately inspected using X-ray techniques, using millimetre-wave technology for this application as well offers several advantages: easy deployment at any location thanks to the compact geometry, possible miniaturization of the sensors, and stand-off operation without any radiation hazard. The better contrast of dielectric materials, including explosives, is a further considerable advantage, as is the fact that scanning is possible while the owner keeps the luggage in his hands. This allows a piece of luggage to be tracked together with its owner without losing their mutual relation. To allow fast scanning, an array solution is investigated using state-of-the-art devices in the 80-GHz band.

This paper reports experimental results for the performance of a free space optical (FSO) communication link employing a binary phase-shift keying (BPSK) subcarrier modulation scheme under the influence of atmospheric scintillation. A dedicated experimental atmospheric simulation chamber has been built in which the effects of weak turbulence regimes on the FSO link can be investigated. The experimental data obtained are compared with the theoretical predictions. The paper also shows how data transmission performance depends on the position of the turbulence source within the chamber.

Power train models are required to simulate, and hence predict, the energy consumption of vehicles. Efficiencies for the different components in the power train are needed. Common procedures use digitalised shell models (or maps) to model the efficiency of internal combustion engines (ICE) and manual gearboxes (MG). Errors are associated with these models and affect the accuracy of the calculation. The accuracy depends on the configuration of the simulation, the digitalisation of the data and the data used. This paper evaluates these sources of error. Understanding the sources of error can improve the modelling results by more than eight percent.

Computer simulations of chemical systems, especially systems of condensed matter, are highly important for both scientific and industrial applications. In such simulations, molecular interactions are modeled on a microscopic level in order to study their impact on macroscopic phenomena. To be capable of predicting physical properties quantitatively, accurate molecular models are indispensable. Molecular interactions are described mathematically by force fields, which have to be parameterized. Recently, an automated gradient-based optimization procedure based on the minimization of a loss function between simulated and experimental physical properties was published by the authors. The applicability of gradient-based procedures is not trivial for two reasons: firstly, simulation data are affected by statistical noise, and secondly, the molecular simulations required for the loss function evaluations are extremely time-consuming. Within the optimization process, gradients and Hessians were approximated by finite differences, so that additional simulations for the respective modified parameter sets were required. Hence, a more efficient approach to computing gradients and Hessians is presented in this work. The method developed here is based on directional instead of partial derivatives, and it is compared with the classical computations with respect to computation time. Firstly, molecular simulations are replaced by fit functions that define a functional dependence between specific physical observables and force field parameters; the goal of these "simulated simulations" is to assess the new methodology without much computational effort. Secondly, the method is applied to real molecular simulations of three chemical substances: phosgene, methanol and ethylene oxide. It is shown that up to 75% of the simulations can be avoided using the new algorithm.
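The cost advantage of directional over partial derivatives can be sketched in Python. This is a toy illustration under assumed names, with a cheap analytic function standing in for the expensive simulation-based loss, not the authors' actual workflow:

```python
import math

def directional_derivative(f, x, v, h=1e-5):
    """Central-difference estimate of the derivative of f at x along
    direction v. Costs only TWO evaluations of f regardless of the
    parameter dimension, whereas a full central-difference gradient
    needs 2 * len(x) evaluations (i.e. that many extra simulations)."""
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2.0 * h)

# Toy stand-in for an expensive simulation-based loss function
# of two force field parameters.
def loss(p):
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2 + p[0] * p[1]
```

When an optimizer only needs the slope along its current search direction (e.g. for a line search), the directional estimate replaces a full gradient, which is where the reported savings in simulation count come from.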

Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications, because computer simulations on a molecular level are a powerful instrument to study the impact of microscopic phenomena on macroscopic ones. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Molecular interactions are described mathematically by force fields, whose parameters cover both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, an empirical procedure based on the minimization of a loss function between simulated and experimental physical properties was published by the authors, using efficient gradient-based numerical optimization algorithms. However, empirical force field optimization is hampered by two central issues in molecular simulations: firstly, they are extremely time-consuming, even on modern high-performance computer clusters, and secondly, simulation data are affected by statistical noise. The latter means that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat there. This raises the question of whether to apply a derivative-free method that approximates the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW) is presented, which accomplishes this task robustly and, at the same time, keeps the number of time-consuming simulations relatively small.
This is achieved by an efficient sampling procedure for the approximation based on sparse grids, which is described in full detail: to counteract the fact that sparse grids are fully occupied on their boundaries, a mathematical transformation is applied to generate homogeneous Dirichlet boundary conditions. As the main drawback of sparse grid methods is the assumption that the function to be modeled exhibits certain smoothness properties, it has to be approximated by smooth functions first; radial basis functions turned out to be very suitable for this task. The smoothing procedure and the subsequent interpolation on sparse grids are performed within sufficiently large compact trust regions of the parameter space. It is shown and explained how the combination of these three ingredients leads to a new efficient derivative-free algorithm, which has the additional advantage of reducing the overall number of simulations by a factor of about two compared with gradient-based optimization methods, while maintaining robustness with respect to statistical noise. This assertion is supported by both theoretical considerations and practical evaluations of molecular simulations for chemical example substances.

The simulation of fluid flows is of importance to many fields of application, especially in industry and infrastructure. The modelling equations applied describe a coupled system of non-linear, hyperbolic partial differential equations given by the one-dimensional shallow water equations, which enable the consistent treatment of free surface flows in open channels as well as pressurised flows in closed pipes. The numerical realisation of these equations remains complicated and challenging due to their characteristic properties, which can give rise to discontinuous solutions.
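The one-dimensional shallow water equations referred to above take the standard conservative form (a textbook formulation, not quoted from the abstract), with water depth h, velocity u and gravitational acceleration g:

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,
\qquad
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(h u^{2} + \tfrac{1}{2}\,g h^{2}\right) = 0
```

The non-linear flux term is what makes the system hyperbolic and allows shocks (e.g. hydraulic jumps), the discontinuous solutions mentioned above.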

The partition coefficient of a substance measures its solubility in octanol compared with water and is widely used to estimate toxicity. If a substance is hardly soluble in octanol, then it can hardly enter (human) cells and is therefore less likely to be toxic. For novel drugs it might be important to penetrate the cell through the membrane or even integrate into it. While for most simple substances the partition coefficient is concentration-independent at low concentrations, this is not true for a few important classes of complex molecules, such as ionic liquids or tensides. We present a simple association–dissociation model for the concentration dependence of the partition coefficient of ionic liquids. Atomistic computer simulations serve to parametrize our model by calculating solvation free energies in water and octanol using thermodynamic integration. We demonstrate the validity of the method by reproducing the concentration-independent partition coefficients of small alcohols and the concentration-dependent partition coefficient of a commonly used ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [C4MIM][NTf2]. The concentration dependence is accurately predicted over a concentration range of several orders of magnitude.
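In terms of the solvation free energies computed by thermodynamic integration, the infinite-dilution partition coefficient follows the standard thermodynamic relation (a textbook identity, not quoted from the abstract):

```latex
\log_{10} P_{\mathrm{ow}}
  = \frac{\Delta G_{\mathrm{solv}}^{\mathrm{water}}
        - \Delta G_{\mathrm{solv}}^{\mathrm{octanol}}}{R T \ln 10}
```

A more negative solvation free energy in octanol than in water gives log P > 0, i.e. preferential partitioning into octanol; the association–dissociation model described above extends this picture to finite concentrations.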

When it comes to university-level mathematics in engineering education, it is becoming harder and harder to bridge the gap between the requirements of the curriculum and the actual maths skills of first-year students. Students often fail to realise that they lack elementary maths skills. Lecturers intend to help them learn what they did not learn at school, but obstacles such as lapses in concentration while working on exercises, or students playing down their problems, can make it difficult to close the existing gaps.
One way to increase concentration while solving problems in elementary mathematics could be to have students communicate in a foreign language: by doing so, they have to understand the subject matter in order to talk about it. The Bonn-Rhein-Sieg University of Applied Sciences plans to launch a project that examines how working on these mathematical problems in a foreign language can support students in acquiring fundamental mathematical skills. For this purpose the university is seeking an international partnership. Via virtual communication, students from both universities will work in teams, in English, on mathematical problems. The research question of whether foreign-language teaching can advance the acquisition of knowledge is the focus of interest.

Introductory courses in mathematics provide students with the foundations intended to ease their understanding of more advanced courses. This article presents measures aimed at increasing students' general ability to study, with respect to mathematics, by improving the quality of teaching in the initial phase of their studies. First results are reflected upon. Among other things, a module "Mathematik I" was restructured by offering the course in several parallel streams, using CATs and introducing a flashcard system.

Preparatory mathematics courses are strongly recommended to all first-year students of engineering mathematics, but unfortunately it is becoming ever harder to close the gap between the expected prior knowledge and the actual skills that first-year students bring with them. This article presents the idea of a project, in cooperation with the international ROLE project, to usefully supplement a preparatory mathematics course with additional elements from the field of Open Educational Resources, in order to enable internal differentiation and make it easier for students to work through the material individually.

For smaller companies with limited resources, designing a quality management (QM) system is a considerable challenge: which methods and measures are necessary and best suited to reduce quality costs sustainably? Through an individual and holistic analysis of the company, together with the use of force-field analysis, a metal-working company succeeded in implementing a tailor-made and permanently effective QM system.

Quality improvement and time savings in the awarding of scholarships through an automated workflow
(2013)

The shift from teaching to active learning can be achieved through student projects. Students apply the knowledge taught so far and thereby experience their own competence to act while working on a task close to professional practice. The learning objective is to increase this competence, comprising subject, social, methodological and personal competence, through working on the task as a team. Particular emphasis is placed on teaching and experiencing skills such as the ability to cooperate, communication behaviour and work organisation.