论“数字化大学”的内涵及发展
(2022)
‘Making Media’
(2021)
The starting point for the semester structure presented below was the conversion of the existing Diplom degree programmes to the Bachelor's degree. The department offers three undergraduate programmes: the technical programmes in electrical engineering and mechanical engineering, as well as the interdisciplinary programme in technology journalism, which belongs to the humanities and social sciences. There are also dual-study programmes modelled on the undergraduate programmes.
DeltaV Neural is a software application within the DeltaV process automation system that allows the user to configure soft sensors in a straightforward way. Soft sensors serve to estimate or predict process output quantities that are difficult to measure, or can only be determined at long intervals, by means of surrogate quantities that are simpler and faster to measure.
The benefit of process management for the efficiency and effectiveness of corporate organization has been confirmed many times. A study by the gfo-Gesellschaft für Organisation nevertheless finds that the degree to which process organization has actually been implemented in companies is inadequate. What is lacking is the support of top management, which itself is still predominantly organized along functional lines.
Improving the study entry phase supports students in a decisive stage of their university education. Implementing improvements is a change process and can only succeed if the relevant stakeholders are addressed and convinced. In the Teaching Quality Pact project described here, evaluation data are used as a means to discuss the situation of the study programs within the university. Because these discussions were based on empirical data rather than on opinion, it was possible to achieve an open discussion about the measures to be implemented. This open discussion is maintained throughout the project as the results of the measures taken are analyzed.
Battery lifespan estimation is essential for effective battery management systems, aiding users and manufacturers in strategic planning. However, accurately estimating battery capacity is complex, owing to diverse capacity fading phenomena tied to factors such as temperature, charge-discharge rate, and rest period duration. In this work, we present an innovative approach that integrates real-world driving behaviors into cyclic testing. Unlike conventional methods that lack rest periods and involve fixed charge-discharge rates, our approach involves 1000 unique test cycles tailored to specific objectives and applications, capturing the nuanced effects of temperature, charge-discharge rate, and rest duration on capacity fading. This yields comprehensive insights into cell-level battery degradation, unveiling growth patterns of the solid electrolyte interface (SEI) layer and lithium plating, influenced by cyclic test parameters. The results yield critical empirical relations for evaluating capacity fading under specific testing conditions.
Twitchen als Kulturtechnik
(2023)
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies
(2018)
In addition to the long-term goal of mitigating climate change, the current geopolitical upheavals heighten the urgency to transform Europe's energy system. This involves expanding renewable energies while managing intermittent electricity generation. Hydrogen is a promising solution to balance generation and demand, simultaneously decarbonizing complex applications. To model the energy system's transformation, the project TransHyDE-Sys, funded by the German Federal Ministry of Education and Research, takes an integrated approach beyond traditional energy system analysis, incorporating a diverse range of more detailed methods and tools. Herein, TransHyDE-Sys is situated within the recent policy discussion. It addresses the requirements for energy system modeling to gain insights into transforming the European hydrogen and energy infrastructure. It identifies knowledge gaps in the existing literature on hydrogen infrastructure-oriented energy system modeling and presents the research approach of TransHyDE-Sys. TransHyDE-Sys analyzes the development of hydrogen and energy infrastructures from “the system” and “the stakeholder” perspectives. The integrated modeling landscape captures temporal and spatial interactions among hydrogen, electricity, and natural gas infrastructure, providing comprehensive insights for systemic infrastructure planning. This allows a more accurate representation of the energy system's dynamics and aids in decision-making for achieving sustainable and efficient hydrogen network development and integration.
A general method of topological reduction for network problems is presented using the example of gas transport networks. The method is based on the contraction of series, parallel and tree-like subgraphs for element equations with quadratic, power-law and general monotone dependencies. It significantly reduces the complexity of the graph and accelerates the solution procedure for stationary network problems, and it has been tested on a large set of realistic network scenarios. Possible extensions of the method are described, including triangulated element equations, continuation of the equations at infinity to ensure uniqueness of the solution, and the choice of a Newtonian stabilizer for nearly degenerate systems. The method is applicable to various sectors of the energy field, including gas, water and electric networks, as well as to the coupling of different sectors.
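The contraction of series and parallel subgraphs can be sketched in a few lines for the quadratic element law. The concrete form p1^2 - p2^2 = r * q * |q| and the numeric values below are illustrative assumptions for gas pipes, not parameters taken from the paper:

```python
import math

def series(resistances):
    # Series pipes with quadratic law p1^2 - p2^2 = r * q * |q|:
    # the same flow q passes through each element, so the p^2-drops
    # add up and the effective resistance is the plain sum.
    return sum(resistances)

def parallel(resistances):
    # Parallel pipes share the same p^2-drop D; each carries
    # q_i = sqrt(D / r_i), so 1/sqrt(r_eff) = sum_i 1/sqrt(r_i).
    return 1.0 / sum(1.0 / math.sqrt(r) for r in resistances) ** 2

# Contract a small series-parallel subgraph: two parallel pipes
# followed by one pipe in series.
r_eff = series([parallel([4.0, 4.0]), 3.0])
```

Repeated application of these two rules (plus a rule for tree-like dead ends, omitted here) collapses large series-parallel regions of the network into single equivalent elements before the stationary solve.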
We present the performance of the upGREAT heterodyne array receivers on the SOFIA telescope after several years of operations. This instrument is a multi-pixel high-resolution (R ≳ 10^7) spectrometer for the Stratospheric Observatory for Infrared Astronomy (SOFIA). The receivers use 7-pixel subarrays configured in a hexagonal layout around a central pixel. The low frequency array (LFA) receiver has 2 × 7 pixels (dual polarization) and presently covers the 1.83–2.07 THz frequency range, which allows observation of the [CII] and [OI] lines at 158 μm and 145 μm wavelengths. The high frequency array (HFA) covers the [OI] line at 63 μm and is currently equipped with a single polarization (7 pixels, which can be upgraded in the near future with a second polarization array). The 4.7 THz array has successfully flown using two separate quantum-cascade laser local oscillators from two different groups. NASA completed the development, integration and testing of a dual-channel closed-cycle cryocooler system, with two independently operable He compressors, aboard SOFIA in early 2017; since then, both arrays can be operated in parallel using a frequency-separating dichroic mirror. This configuration is now the prime GREAT configuration and has been part of SOFIA's instrument suite since observing cycle 6.
We present a new multi-pixel high-resolution (R ≳ 10^7) spectrometer for the Stratospheric Observatory for Infrared Astronomy (SOFIA). The receiver uses 2 × 7-pixel subarrays in orthogonal polarization, each in a hexagonal layout around a central pixel. We present the first results for this new instrument after commissioning campaigns in May and December 2015 and after science observations performed in May 2016. The receiver is designed to ultimately cover the full 1.8−2.5 THz frequency range, but in its first implementation the observing range was limited to observations of the [CII] line at 1.9 THz in 2015 and extended to 1.83−2.07 THz in 2016. The instrument sensitivities are state-of-the-art, and the first scientific observations performed shortly after commissioning confirm that the time efficiency for large-scale imaging is improved by more than an order of magnitude compared to single-pixel receivers. An example of large-scale mapping around the Horsehead Nebula is presented here, illustrating this improvement. The array was added to SOFIA's instrument suite already for the ongoing observing cycle 4.
The Project SupraMetall: Towards Commercial Fabrication of High-Temperature Superconducting Tapes
(2014)
Technology is ubiquitous today and influences the economy and society. As an industrialized country, the Federal Republic of Germany generates a significant share of its gross national product from the development and production of technical goods. People with different training backgrounds are involved in a variety of roles: engineers take care of the technology, business economists of the finances, lawyers of legal questions. At least in theory.
The general connotation of technology with masculinity affects the career choices and the understanding of technology of young women. In 2014, only just over 22 percent of all engineering students in Germany were female (cf. MonitorING). For years, attempts have been made to raise these figures by offering programmes for girls and young women that establish first contacts with technical fields of work. There are also numerous support programmes for women already working as engineers, intended to prevent the drop-out of highly qualified women on the career ladder. Nevertheless, the percentage of women in engineering degree programmes and professions has hardly changed. Recent studies show that culturally shaped expectations and attitudes are primarily responsible for this (cf. Paulitz 2012).
Technikjournalismus
(2005)
This paper investigates the effect of voltage sensors on the measurement of transient voltages for power semiconductors in a Double Pulse Test (DPT) environment. We adapt previously published models that were developed for current sensors and apply them to voltage sensors to evaluate their suitability for DPT applications. Similarities and differences between transient current and voltage sensors are investigated, and the resulting methodology is applied to commercially available and experimental voltage sensors. Finally, a selection aid for given measurement tasks is derived that focuses on the measurement of fast-switching power semiconductors.
Suitability of Current Sensors for the Measurement of Switching Currents in Power Semiconductors
(2021)
This paper investigates the impact of current sensors on the measurement of transient currents in fast-switching power semiconductors in a double pulse test (DPT) environment. We review previous research that assesses the influence of current sensors on a DPT circuit through mathematical modeling. The developed selection aids can be used to identify suitable current sensors for transient current measurements of fast-switching power semiconductors and to estimate the error introduced by their insertion into the DPT circuit. This analysis is then extended by including further elements from real DPT applications to bring the error estimation closer to practical situations and setups. Both methods are compared, and their individual advantages and drawbacks are discussed. Finally, a recommendation on when to use which method is derived.
We report on submillimetre bolometer observations of the isolated neutron star RX J1856.5−3754 using the Large Apex Bolometer Camera (LABOCA) array on the Atacama Pathfinder Experiment telescope. No cold dust continuum emission peak was detected at the position of RX J1856.5−3754. The 3σ flux density upper limit of 5 mJy translates into a cold dust mass limit of a few Earth masses. We use the new submillimetre limit, together with a previously obtained H-band limit, to constrain the presence of a gaseous, circumpulsar disc. Adopting a simple irradiated disc model, we obtain an upper limit on the mass accretion rate and a maximum outer disc radius of ∼10^14 cm. By examining the projected proper motion of RX J1856.5−3754, we speculate about a possible encounter of the neutron star with a dense fragment of the CrA molecular cloud a few thousand years ago.
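The step from a flux density limit to a dust mass limit follows the standard optically thin relation M = S_nu * d^2 / (kappa_nu * B_nu(T)). The sketch below uses illustrative values (345 GHz, d = 120 pc, T = 20 K, kappa = 0.1 m^2/kg); these are assumptions for demonstration, not the parameters adopted in the paper:

```python
import math

H = 6.62607e-34     # Planck constant, J s
K_B = 1.38065e-23   # Boltzmann constant, J/K
C = 2.99792e8       # speed of light, m/s
M_EARTH = 5.972e24  # Earth mass, kg

def planck_nu(nu, temp):
    # Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1.
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (K_B * temp)) - 1.0)

def dust_mass(flux_jy, dist_pc, nu, temp, kappa):
    # Optically thin dust mass M = S_nu * d^2 / (kappa * B_nu(T)).
    s = flux_jy * 1e-26      # Jy -> W m^-2 Hz^-1
    d = dist_pc * 3.0857e16  # pc -> m
    return s * d**2 / (kappa * planck_nu(nu, temp))

# 3-sigma limit of 5 mJy; all other numbers are illustrative.
m_limit = dust_mass(5e-3, 120.0, 345e9, 20.0, 0.1)
m_earth = m_limit / M_EARTH
```

With these assumed values the limit comes out at roughly two to three Earth masses, consistent with the "few Earth masses" quoted above; a different opacity or dust temperature shifts the number substantially.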
Stably stratified Taylor–Green vortex simulations are performed by lattice Boltzmann methods (LBM) and compared to other recent works using Navier–Stokes solvers. The density variation is modeled with a separate distribution function in addition to the particle distribution function modeling the flow physics. Different stencils, forcing schemes, and collision models are tested and assessed. The overall agreement of the lattice Boltzmann solutions with reference solutions from other works is very good, even when no explicit subgrid model is used, but the quality depends on the LBM setup. Although the LBM forcing scheme is not decisive for the quality of the solution, the choice of the collision model and of the stencil are crucial for adequate solutions in underresolved conditions. The LBM simulations confirm the suppression of vertical flow motion for decreasing initial Froude numbers. To gain further insight into buoyancy effects, energy decay, dissipation rates, and flux coefficients are evaluated using the LBM model for various Froude numbers.
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, captured at its places of formation, transferred under high pressure in a liquid or supercritical state and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. The transport equations for different fluids are essentially the same, however, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flap traps on the stability of fluid transport simulations is studied. It is shown that modeling these elements can lead to instabilities, both in stationary and dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality, also for dynamic simulations, is demonstrated in a number of numerical experiments.
Molecular modeling is an important subdomain of computational modeling, for both scientific and industrial applications, because computer simulations on the molecular level are a powerful instrument to study the impact of microscopic effects on macroscopic phenomena. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Molecular interactions are described mathematically by force fields, whose parameters cover both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, the authors published an empirical procedure based on the minimization of a loss function between simulated and experimental physical properties, using efficient gradient-based numerical optimization algorithms. However, empirical force field optimization is hampered by two central issues in molecular simulations: firstly, they are extremely time-consuming, even on modern high-performance computer clusters, and secondly, the simulation data are affected by statistical noise. The latter means that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat there. Therefore, the question arises whether to apply a derivative-free method that approximates the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW) is presented, which accomplishes this task robustly while keeping the number of time-consuming simulations relatively small.
This is achieved by an efficient sampling procedure for the approximation based on sparse grids, which is described in full detail: to counteract the fact that sparse grids are fully occupied on their boundaries, a mathematical transformation is applied to generate homogeneous Dirichlet boundary conditions. As the main drawback of sparse grid methods is the assumption that the function to be modeled exhibits certain smoothness properties, it has to be approximated by smooth functions first; radial basis functions turned out to be very suitable for this task. The smoothing procedure and the subsequent interpolation on sparse grids are performed within sufficiently large compact trust regions of the parameter space. It is shown and explained how the combination of these three ingredients leads to a new efficient derivative-free algorithm, which has the additional advantage of reducing the overall number of simulations by a factor of about two compared to gradient-based optimization methods, while maintaining robustness with respect to statistical noise. This assertion is supported by both theoretical considerations and practical evaluations of molecular simulations on chemical example substances.
The elucidation of conformations and relative potential energies (rPEs) of small molecules has a long history across a diverse range of fields. Periodically, it is helpful to revisit what conformations have been investigated and to provide a consistent theoretical framework for which clear comparisons can be made. In this paper, we compute the minima, first- and second-order saddle points, and torsion-coupled surfaces for methanol, ethanol, propan-2-ol, and propanol using consistent high-level MP2 and CCSD(T) methods. While for certain molecules more rigorous methods were employed, the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pV5Z theory level was used throughout to provide relative energies of all minima and first-order saddle points. The rPE surfaces were uniformly computed at the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pVTZ level. To the best of our knowledge, this represents the most extensive study for alcohols of this kind, revealing some new aspects. Especially for propanol, we report several new conformations that were previously not investigated. Moreover, two metrics are included in our analysis that quantify how the selected surfaces are similar to one another and hence improve our understanding of the relationship between these alcohols.
Simultaneous multifrequency radio observations of the Galactic Centre magnetar SGR J1745-2900
(2015)
The utilization of simulation procedures is gaining increasing attention in the product development of extrusion blow molded parts. However, some simulation steps, like the simulation of shrinkage and warpage, are still associated with uncertainties. The reason for this is, on the one hand, a lack of standardized interfaces for the transfer of simulation data between different simulation tools, and on the other hand the complex time-, temperature- and process-dependent material behavior of the semi-crystalline polymers used. Using a new vendor-neutral interface standard for the data transfer, the shrinkage analysis of a simple blow molded part is investigated and compared to experimental data. A linear viscoelastic material model in combination with an orthotropic process- and temperature-dependent thermal expansion coefficient is used for the shrinkage prediction. A good agreement is observed. Finally, critical parameters in the simulation models that strongly influence the shrinkage analysis are identified by a sensitivity study.
This work thoroughly investigates a semi-Lagrangian lattice Boltzmann (SLLBM) solver for compressible flows. In contrast to other LBMs for compressible flows, the vertices are organized in cells, and interpolation polynomials up to fourth order are used to attain the off-vertex distribution function values. Differing from the recently introduced Particles on Demand (PoD) method, the present method operates in a static, non-moving reference frame. Yet the SLLBM in the present formulation supports supersonic flows and exhibits a high degree of Galilean invariance. The SLLBM solver allows for an independent time step size, due to the integration along characteristics, and for the use of unusual velocity sets, like the D2Q25, which is constructed from the roots of the fifth-order Hermite polynomial. The properties of the present model are shown in diverse example simulations of a two-dimensional Taylor-Green vortex, a Sod shock tube, a two-dimensional Riemann problem and a shock-vortex interaction. It is shown that the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev support points allow for spatially high-order solutions and minimize the mass loss caused by the interpolation. Transformed grids in the shock-vortex interaction show the general applicability to non-uniform grids.
The standard EN ISO 13849-1 places explicit requirements on safety-related PLC software. How can these be implemented in a practical way in mechanical engineering? A project funded by the DGUV and carried out at Hochschule Bonn-Rhein-Sieg addressed this question. The article outlines an approach for implementing the normative requirements. This approach is independent of the safety PLC used and therefore generally applicable. A total of 10 documented examples and a detailed research report are referenced and available for download.
In her recent article, Bender discusses several aspects of research–practice–collaborations (RPCs). In this commentary, we apply Bender's arguments to experiences in engineering research and development (R&D). We investigate the influence of interaction with practice partners on relevance, credibility, and legitimacy in the special engineering field of product development and analyze which methodological approaches are already being pursued for dealing with diverging interests and asymmetries and which steps will be necessary to include interests of civil society beyond traditional customer relations.
A project contract provides the business partners with legal certainty. It is often drawn up by lawyers, but especially in small and medium-sized enterprises it is frequently the project manager who formulates the contract. Prof. Dr. Uwe Braehmer explains under which conditions a contract can be drawn up without legal advice and provides a checklist of the provisions a contract should contain.
Qualitätsverbesserung und Zeitersparnis bei der Stipendienvergabe durch automatisierten Workflow
(2013)
For awarding the Deutschlandstipendium scholarships, the university initially defined a procedure comprising many manual steps: students had to submit their application documents in writing. Besides a letter of motivation and a printout of the current transcript of records, this included all further references needed to assess the application according to the statutory selection criteria. As the basis for assessing the "social criteria", applicants were to obtain a reference from a professor of the university.
Companies operate in a dynamic environment characterized by high complexity and uncertainty. To remain competitive in the long term, continuous optimization of their processes is required, and consistent process orientation has therefore long been an objective. To determine the current state of process organization in German companies, the Gesellschaft für Organisation e. V. (gfo) commissioned a study whose first results are presented here.
Protocol for conducting advanced cyclic tests in lithium-ion batteries to estimate capacity fade
(2024)
Using advanced cyclic testing techniques improves accuracy in estimating capacity fade and incorporates real-world scenarios in battery cycle aging assessment. Here, we present a protocol for conducting cyclic tests in lithium-ion batteries to estimate capacity fade. We describe steps for implementing strategies for accounting for variations in rest periods, charge-discharge rates, and temperatures. We also detail procedures for validating tests experimentally within a climate-controlled chamber and for developing an empirical model to estimate capacity fading under various testing objectives. For complete details on the use and execution of this protocol, please refer to Mulpuri et al. [1].
This work proposes a novel approach for probabilistic end-to-end all-sky imager-based nowcasting with horizons of up to 30 min using an ImageNet pre-trained deep neural network. The method involves a two-stage approach. First, a backbone model is trained to estimate the irradiance from all-sky imager (ASI) images. The model is then extended and retrained on image and parameter sequences for forecasting. An open access data set is used for training and evaluation. We investigated the impact of simultaneously considering global horizontal (GHI), direct normal (DNI), and diffuse horizontal irradiance (DHI) on training time and forecast performance as well as the effect of adding parameters describing the irradiance variability proposed in the literature. The backbone model estimates current GHI with an RMSE and MAE of 58.06 and 29.33 W m⁻², respectively. When extended for forecasting, the model achieves an overall positive skill score reaching 18.6 % compared to a smart persistence forecast. Minor modifications to the deterministic backbone and forecasting models enable the architecture to output an asymmetrical probability distribution and reduce training time while leading to similar errors for the backbone models. Investigating the impact of variability parameters shows that they reduce training time but have no significant impact on the GHI forecasting performance for either deterministic or probabilistic forecasting, while simultaneously forecasting GHI, DNI, and DHI reduces the forecast performance.
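The skill score used to compare a forecast against smart persistence is conventionally defined via the RMSE ratio, so 18.6 % corresponds to ss = 0.186. A minimal sketch with made-up GHI numbers (the values below are illustrative only):

```python
import numpy as np

def rmse(pred, obs):
    # Root mean square error between two equal-length series.
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def skill_score(forecast, reference, obs):
    # Skill relative to a reference forecast (e.g. smart persistence):
    # 1 means perfect, 0 means no better than the reference,
    # negative means worse than the reference.
    return 1.0 - rmse(forecast, obs) / rmse(reference, obs)

# Toy GHI values in W/m^2 (illustrative numbers only).
obs = [500.0, 520.0, 480.0, 510.0]
persistence = [490.0, 500.0, 500.0, 490.0]
model = [498.0, 515.0, 485.0, 505.0]
ss = skill_score(model, persistence, obs)
```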
This paper addresses long-term historical changes in solar irradiance in West Africa (3 to 20° N and 20° W to 16° E) and the implications for photovoltaic systems. Here, we use satellite irradiance (Surface Solar Radiation Data Set – Heliosat, Edition 2.1 – SARAH-2.1) and temperature data from a reanalysis (ERA5) to derive photovoltaic yields. Based on 35 years of data (1983–2017), the temporal and regional variability as well as long-term trends in global and direct horizontal irradiance are analyzed. Furthermore, a detailed time series analysis is undertaken at four locations. According to the high spatial resolution SARAH-2.1 data record (0.05°×0.05°), solar irradiance is largest (up to a 300 W m⁻² daily average) in the Sahara and the Sahel zone with a positive trend (up to 5 W m⁻² per decade) and a lower temporal variability (<75 W m⁻² between 1983 and 2017 for daily averages). In contrast, the solar irradiance is lower in southern West Africa (between 200 W m⁻² and 250 W m⁻²) with a negative trend (up to −5 W m⁻² per decade) and a higher temporal variability (up to 150 W m⁻²). The positive trend in the north is mostly connected to the dry season, whereas the negative trend in the south occurs during the wet season. Both trends show 95 % significance. Photovoltaic (PV) yields show a strong meridional gradient with the lowest values of around 4 kWh kWp⁻¹ in southern West Africa and values of more than 5.5 kWh kWp⁻¹ in the Sahara and Sahel zone.
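The meridional yield gradient follows directly from the irradiance gradient. A back-of-the-envelope sketch, assuming a flat performance ratio of 0.8 and ignoring the temperature dependence the study accounts for via ERA5 data (both simplifications are assumptions, not the paper's method):

```python
def daily_specific_yield(mean_irradiance_w_m2, performance_ratio=0.8):
    # Daily insolation in kWh/m^2 from a 24 h average irradiance;
    # since STC irradiance is 1 kW/m^2, the specific yield in
    # kWh/kWp equals the insolation times a performance ratio.
    insolation_kwh_m2 = mean_irradiance_w_m2 * 24.0 / 1000.0
    return insolation_kwh_m2 * performance_ratio

y_sahel = daily_specific_yield(300.0)  # ~Sahara/Sahel daily mean irradiance
y_south = daily_specific_yield(225.0)  # ~southern West Africa
```

With these rough inputs the sketch reproduces the reported gradient: above 5.5 kWh kWp⁻¹ in the north versus well below that in the south.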
In this paper, a performance evaluation of Frequency Modulated Chaotic On-Off Keying (FM-COOK) in AWGN, Rayleigh and Rician fading channels is given. The simulation results show that an improvement in BER can be gained by combining FM modulation with COOK for SNR values below 10 dB in the AWGN case and below 6 dB for Rayleigh and Rician fading channels.
Alkaline methanol oxidation is an important electrochemical process in the design of efficient fuel cells. Typically, a system of ordinary differential equations is used to model the kinetics of this process, and the parameters of the underlying mathematical model are fitted on the basis of different types of experiments characterizing the fuel cell. In this paper, we describe generic methods for creating a mathematical model of electrochemical kinetics from a given reaction network, as well as for identifying the parameters of this model. We also describe methods for model reduction based on a combination of steady-state and dynamical descriptions of the process. The methods are tested on a range of experiments, including different concentrations of the reagents and different voltage ranges.
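A minimal illustration of the parameter-identification idea, using a single hypothetical first-order step rather than the paper's full methanol oxidation network (the rate law, values and fitting route are assumptions for demonstration):

```python
import math

def simulate_first_order(c0, k, times):
    # Closed-form solution of dc/dt = -k c for a single
    # first-order step of a reaction network.
    return [c0 * math.exp(-k * t) for t in times]

def fit_rate_constant(times, concentrations):
    # Linearize: ln c(t) = ln c0 - k t, then ordinary least
    # squares for the slope gives an estimate of k.
    n = len(times)
    y = [math.log(c) for c in concentrations]
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y)) \
        / sum((t - t_mean) ** 2 for t in times)
    return -slope

times = [0.0, 1.0, 2.0, 3.0, 4.0]
data = simulate_first_order(1.0, 0.5, times)  # noise-free toy data
k_est = fit_rate_constant(times, data)
```

For a real network the same idea scales up: integrate the coupled ODE system numerically and minimize the mismatch to all available experiments over the full parameter vector.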
Preparatory mathematics courses are strongly recommended to all first-year students of engineering mathematics, but it is becoming ever harder to close the gap between the expected prior knowledge and the actual toolkit the students bring along. This article presents the idea of a project that, in cooperation with the international ROLE project, complements a preparatory mathematics course with additional elements from the field of Open Educational Resources, in order to enable internal differentiation and to make it easier for students to work through the material individually.
Using the example of a course with lectures, exercises and laboratory practicals that had been taught face-to-face for many years, it is shown how the exam-relevant competencies could also be conveyed "online". An appropriate "setting" of the teaching and learning process, following recommended practices, remains relevant for the future.
We consider the Hopfield model with n neurons and an increasing number p=p(n) of randomly chosen patterns and use Stein's method to obtain rates of convergence for the central limit theorem of overlap parameters, which holds for every fixed choice of the overlap parameter for almost all realisations of the random patterns.
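The overlap parameters in question are the standard m_mu = (1/n) * sum_i xi_i^mu * sigma_i between the network state and each stored pattern. A small numerical sketch (pattern count, size and seed chosen arbitrarily) showing the CLT-scale fluctuations of the non-condensed overlaps:

```python
import random

def overlaps(patterns, state):
    # Overlap m_mu = (1/n) * sum_i xi_i^mu * sigma_i between the
    # network state sigma and each stored +/-1 pattern xi^mu.
    n = len(state)
    return [sum(x * s for x, s in zip(p, state)) / n for p in patterns]

random.seed(0)
n, p = 1000, 5
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(p)]
# State aligned with the first pattern: overlap exactly 1 with it,
# overlaps of order 1/sqrt(n) with the others (CLT-scale fluctuations).
state = patterns[0][:]
m = overlaps(patterns, state)
```

Rescaled by sqrt(n), the non-condensed overlaps are exactly the quantities whose Gaussian limit, and its rate of convergence via Stein's method, the paper studies as p = p(n) grows.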
At the beginning of January 2007, the Higher Education Freedom Act (Hochschulfreiheitsgesetz, HFG) came into force in North Rhine-Westphalia (NRW). With the HFG, the state government of NRW has largely withdrawn from the detailed steering of its higher education institutions and now only steps in for existential damage events. The higher education institutions in NRW (universities and universities of applied sciences) are thus no longer state institutions but public-law corporations under state sponsorship. The new Higher Education Act thereby follows the paradigm shift in higher education policy from a state-planned, largely uniform system to one shaped by profile building and competition. This article outlines the associated problems and possible solutions and recommends the introduction of opportunity and risk management.
Multi-epoch searches for relativistic binary pulsars and fast transients in the Galactic Centre
(2021)
For smaller companies with limited resources, designing a quality management (QM) system is a considerable challenge: which methods and measures are necessary and best suited to reduce quality costs sustainably? Through an individual and holistic view of the company and the use of force-field analysis, a metal-working company succeeded in implementing a tailor-made and permanently effective QM system.
Present-day communication is predominantly digitally mediated. The use of signs is here bound to media artefacts that exhibit the criterion of the micro-format, both in their objectivations and in their content. Taking this observation as its starting point, the article uses WhatsApp as an example to examine, from a media-aesthetic perspective, a communication practice that in turn strongly shapes the presentness of communication.
The media is considered the fourth pillar of a democratic country. It acts as an effective control mechanism on the other branches of government. But this is only consequential when the media functions in an independent and transparent fashion, with trained and neutral professionals who are aware of the accountability and consequences of their work. Together, these factors strengthen the country as a democratic institution. Traditionally, legacy media was responsible for a one-to-many communication process whose goal was to provide information to citizens. This changed with developments in technology and the use of social media in daily life. The internet brought with it new media formats that are easily accessible but also unstructured. These lowered barriers of entry enabled citizens to become active participants in the communication process. As a result, citizens developed a different relationship with the existing media, in which they were not only receivers of information but also co-producers. Real-time information allows users to communicate with each other and, in turn, to generate public opinion widely on internet platforms. A many-to-many communication style emerged. While, on the one hand, this type of discourse can be an opportunity for citizens to exercise their fundamental freedom of speech and expression, on the other hand it is proving detrimental in two respects: a lack of neutrality, polarized views, and pre-existing misconceptions on the part of citizens, and algorithms and the formation of echo chambers on the part of technology. In this scenario, questions arise about the capability of citizen journalists, the duties they should adhere to alongside the enjoyment of their rights and freedoms, the risks involved in an unchecked mode of communication, and the effect of citizen journalism on the democratic process.
Soldering or writing?
(2005)
The lattice Boltzmann method (LBM) stands apart from conventional macroscopic approaches due to its low numerical dissipation and reduced computational cost, attributed to a simple streaming and local collision step. While this property makes the method particularly attractive for applications such as direct noise computation, it also renders the method highly susceptible to instabilities. A vast body of literature exists on stability-enhancing techniques, which can be categorized into selective filtering, regularized LBM, and multi-relaxation time (MRT) models. Although each technique bolsters stability by adding numerical dissipation, they act on different modes. Consequently, no universal scheme is optimally suited for a wide range of different flows. The reason lies in the static nature of these methods: they cannot adapt to local or global flow features. Adaptive filtering using a shear sensor constitutes an exception to this. For this reason, we developed a novel collision operator that uses space- and time-variant collision rates associated with the bulk viscosity. These rates are optimized by a physically informed neural net. In this study, the training data consists of a time series of different instances of a 2D barotropic vortex solution, obtained from a high-order Navier–Stokes solver that embodies desirable numerical features. For this specific test case our results demonstrate that the relaxation times adapt to the local flow and show a dependence on the velocity field. Furthermore, the novel collision operator demonstrates a better stability-to-precision ratio and outperforms conventional techniques that use an empirical constant for the bulk viscosity.
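To make the idea of a node-local collision rate concrete, here is a minimal D2Q9 BGK collision step where the relaxation rate omega is an array over lattice nodes rather than a scalar — the structural hook that a neural net (as in the abstract) could write into. This is an illustrative sketch, not the paper's operator: the actual method relaxes rates tied to the bulk viscosity, and all names and values below are assumptions.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    # Standard second-order Maxwell-Boltzmann expansion
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def collide(f, omega):
    # omega has shape (nx, ny): a space-variant relaxation rate, e.g.
    # predicted per node from local flow features by a trained model
    rho = f.sum(axis=-1)
    u = np.einsum('qd,xyq->xyd', c, f) / rho[..., None]
    return f - omega[..., None] * (f - equilibrium(rho, u))

# Sanity check: colliding an equilibrium state leaves it unchanged
nx = ny = 8
f0 = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
omega = np.full((nx, ny), 1.2)   # would be spatially varying in practice
f1 = collide(f0, omega)
```

Because omega is indexed per node, replacing the constant array with the output of a regression model changes nothing else in the update loop — which is what makes the adaptive approach drop-in compatible with a standard BGK code.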
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m−2 (7.35 W m−2) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m−2 under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
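The coarse-resolution retrieval described above couples a linear module-temperature model to a power model and inverts it for irradiance. The sketch below shows the shape of such an inversion via fixed-point iteration. It is a generic illustration: the Ross temperature model and all coefficients are typical textbook values, not the ones fitted in the campaigns, and the function names are assumptions.

```python
# Assumed module parameters (illustrative, not from the study)
P_STC = 300.0    # rated power at standard test conditions, W
G_STC = 1000.0   # STC irradiance, W/m^2
GAMMA = -0.004   # power temperature coefficient, 1/K (typical c-Si)
K_ROSS = 0.025   # Ross coefficient, K m^2/W (typical free-standing mount)

def module_temperature(g_poa, t_ambient):
    # Linear (Ross) model: cell temperature rises linearly with irradiance
    return t_ambient + K_ROSS * g_poa

def power(g_poa, t_ambient):
    t_cell = module_temperature(g_poa, t_ambient)
    return P_STC * (g_poa / G_STC) * (1 + GAMMA * (t_cell - 25.0))

def irradiance_from_power(p_meas, t_ambient, tol=1e-9):
    # Invert the power model for plane-of-array irradiance by fixed point
    g = G_STC * p_meas / P_STC   # first guess: ignore temperature
    for _ in range(100):
        t_cell = module_temperature(g, t_ambient)
        g_new = G_STC * p_meas / (P_STC * (1 + GAMMA * (t_cell - 25.0)))
        if abs(g_new - g) < tol:
            break
        g = g_new
    return g

# Round trip: irradiance -> power -> retrieved irradiance
g_true = 800.0
p = power(g_true, t_ambient=20.0)
g_ret = irradiance_from_power(p, t_ambient=20.0)
```

The iteration converges quickly because the temperature correction is a small perturbation of the linear power-irradiance relation; the study's non-linear temperature model for 1 Hz data would replace `module_temperature` while the inversion structure stays the same.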
The mechanical properties of plastic components, especially those made of semi-crystalline polymers, are considerably influenced by the process conditions. The degree of crystallization influences thermal and mechanical properties. Even more important is the orientation of molecules due to stretching of the polymer melt; anisotropic material properties are the result of such orientations. To date, none of these effects have been considered in the simulation models of blow-molded parts.
In this study, we investigate the thermo-mechanical relaxation and crystallization behavior of polyethylene using mesoscale molecular dynamics simulations. Our models specifically mimic constraints that occur in real-life polymer processing: After strong uniaxial stretching of the melt, we quench and release the polymer chains at different loading conditions. These conditions allow for free or hindered shrinkage, respectively. We present the shrinkage and swelling behavior as well as the crystallization kinetics over up to 600 ns simulation time. We are able to precisely evaluate how the interplay of chain length, temperature, local entanglements and orientation of chain segments influences crystallization and relaxation behavior. From our models, we determine the temperature dependent crystallization rate of polyethylene, including crystallization onset temperature.
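The interplay of stretching and segment orientation that the study evaluates can be illustrated with a toy post-processing step (not the mesoscale simulation itself): apply an affine, volume-conserving uniaxial stretch to bead coordinates of a random-walk chain and quantify segment alignment with the P2 orientation order parameter. Function names and sizes are assumptions for the sketch.

```python
import numpy as np

def uniaxial_stretch(coords, lam):
    # Affine, volume-conserving stretch: z scaled by lam,
    # x and y contracted by 1/sqrt(lam)
    scale = np.array([lam**-0.5, lam**-0.5, lam])
    return coords * scale

def p2_order(coords):
    # Orientation of bond vectors relative to the stretch (z) axis:
    # P2 = (3<cos^2 theta> - 1)/2; 0 for isotropic, 1 for full alignment
    bonds = np.diff(coords, axis=0)
    cos2 = (bonds[:, 2] / np.linalg.norm(bonds, axis=1)) ** 2
    return 0.5 * (3.0 * cos2.mean() - 1.0)

rng = np.random.default_rng(1)
# A single coarse-grained chain as a 3D random walk (illustrative)
chain = np.cumsum(rng.normal(size=(5000, 3)), axis=0)

p2_iso = p2_order(chain)                          # near 0
p2_str = p2_order(uniaxial_stretch(chain, 5.0))   # strongly positive
```

In the study such an orientation measure, evaluated per chain segment over the 600 ns trajectories, is what links the imposed stretch and the subsequent crystallization and shrinkage behavior.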
The cube in cube approach was used by Paul and Ishai-Cohen to model and derive formulas for the filler-content-dependent Young's moduli of particle-filled composites, assuming perfect filler-matrix adhesion. Their formulas were chosen for their simplicity and recalculated using an elementary volume (EV) approach which transforms spherical inclusions into cubic inclusions. The EV approach led to expressions for the composite moduli that allow introducing an adhesion factor kadh ranging from 0 to 1 to take reduced filler-matrix adhesion into account. This adhesion factor scales the edge length of the cubic inclusions, thus reducing the stress transfer area between matrix and filler. Fitting the experimental data with the modified Paul model provides reasonable kadh for PA66, PBT, PP, PE-LD and BR, which are in line with their surface energies. Further analysis showed that stiffening only occurs if kadh exceeds [Formula: see text] and depends on the ratio of matrix modulus and filler modulus. The modified model allows a quick calculation for any particle-filled composite with known matrix modulus EM, filler modulus EF, filler volume content vF and adhesion factor kadh. Thus, finite element analysis (FEA) simulations of any particle-filled polymer parts, as well as materials selection, are significantly eased. FEA of cubic and hexagonal EV arrangements shows that stress distributions within the EV exhibit more shear stresses the more one deviates from the cubic arrangement. The assumption that the property of the EV is representative of the whole composite holds only for filler volume contents up to 15 or 20% (corresponding to 30 to 40 weight %). Thus, for the vast majority of commercially available particulate composites, the modified model can be applied.
Furthermore, this indicates that the cube in cube approach reaches two limits: (i) the occurrence of increasing shear stresses at filler contents above 20% due to deviations of EV arrangements or spatial filler distribution from cubic arrangements (singular), and (ii) increasing interaction between particles with the formation of particle network within the matrix violating the EV assumption of their homogeneous dispersion.
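The quick-calculation claim above can be sketched as follows. This uses the classical Paul (1960) cube-in-cube expression, with the adhesion factor applied as an edge-length scaling as the abstract describes; since the paper's exact modified formula is not given here, the way kadh enters below is an assumption for illustration, as are the example material values.

```python
def paul_modulus(E_m, E_f, v_f, k_adh=1.0):
    """Cube-in-cube estimate of the composite Young's modulus.

    E_m, E_f : matrix and filler moduli (same units)
    v_f      : filler volume fraction (0..1)
    k_adh    : adhesion factor 0..1, scaling the effective cube edge
               (k_adh = 1 recovers Paul's perfect-adhesion model)
    """
    m = E_f / E_m
    edge = k_adh * v_f ** (1.0 / 3.0)   # adhesion-scaled cube edge
    area, vol = edge ** 2, edge ** 3    # stress-transfer area, volume terms
    return E_m * (1 + (m - 1) * area) / (1 + (m - 1) * (area - vol))

# Example: a PP-like matrix (~1.5 GPa) with stiff mineral filler
# (~70 GPa) at 20 vol% -- values chosen for illustration only
E_perfect = paul_modulus(1.5, 70.0, 0.20, k_adh=1.0)
E_weak = paul_modulus(1.5, 70.0, 0.20, k_adh=0.5)
```

Two limiting behaviors follow directly from the formula: kadh = 0 removes the stress-transfer area entirely and returns the bare matrix modulus, while kadh = 1 recovers the perfect-adhesion estimate — consistent with the stiffening threshold discussed in the abstract.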
Internal audits can do more
(2024)
This article shows how the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e. V., DLR) conducts satisfaction analyses from two perspectives: that of the auditors and that of the management representatives of the audited institutes and facilities. The results feed into the annual audit programme planning, thereby increasing the value of internal audits.