H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (979)
- Fachbereich Angewandte Naturwissenschaften (630)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (388)
- Fachbereich Ingenieurwissenschaften und Kommunikation (386)
- Fachbereich Wirtschaftswissenschaften (319)
- Institute of Visual Computing (IVC) (286)
- Institut für funktionale Gen-Analytik (IFGA) (231)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (126)
- Institut für Cyber Security & Privacy (ICSP) (107)
- Institut für Verbraucherinformatik (IVI) (104)
Document Type
- Article (1021)
- Conference Object (918)
- Part of a Book (190)
- Preprint (86)
- Doctoral Thesis (52)
- Report (51)
- Book (monograph, edited volume) (43)
- Master's Thesis (30)
- Working Paper (28)
- Research Data (22)
Language
- English (2497)
Keywords
- Virtual Reality (15)
- FPGA (14)
- Machine Learning (14)
- GC/MS (13)
- Robotics (13)
- Sustainability (11)
- virtual reality (11)
- Augmented Reality (9)
- Lignin (9)
- Social Protection (9)
Quantifying Interference in WiLD Networks using Topography Data and Realistic Antenna Patterns
(2019)
Avoiding possible interference is a key aspect of maximizing performance in Wi-Fi-based Long Distance (WiLD) networks. In this paper we quantify self-induced interference based on data derived from our testbed and match the findings against simulations. By enhancing current simulation models with two key elements, we significantly reduce the deviation between testbed and simulation: the use of detailed antenna patterns instead of the cone model, and propagation modeling enhanced by license-free topography data. Based on the gathered data, we discuss several possible optimization approaches, such as physically separating local radios, tuning the sensitivity of the transmitter, and using centralized rather than distributed channel-assignment algorithms. While our testbed is based on 5 GHz Wi-Fi, we briefly discuss the possible impact of our results on other frequency bands.
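The gap between the cone model and a detailed antenna pattern can be illustrated with a toy link-budget calculation. Everything below is hypothetical (gain values, beamwidth, the parabolic main-lobe rolloff, and the free-space propagation assumption) and is not taken from the paper's testbed or simulation models:

```python
import math

def cone_gain_dbi(angle_deg, beamwidth_deg=30.0, main_gain_dbi=19.0):
    """Cone model: full antenna gain inside the beamwidth, none outside."""
    return main_gain_dbi if abs(angle_deg) <= beamwidth_deg / 2 else 0.0

def pattern_gain_dbi(angle_deg, main_gain_dbi=19.0):
    """Toy 'detailed' pattern: parabolic main lobe with a side-lobe floor."""
    rolloff = 12 * (angle_deg / 30.0) ** 2
    return max(main_gain_dbi - rolloff, main_gain_dbi - 25.0)

def received_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, dist_m, freq_hz=5.5e9):
    """Free-space path loss link budget."""
    fspl = 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl

# Interference from a neighbouring link arriving 25 degrees off boresight:
angle = 25.0
cone = received_dbm(20, cone_gain_dbi(angle), 0, 3000)
real = received_dbm(20, pattern_gain_dbi(angle), 0, 3000)
print(f"cone model: {cone:.1f} dBm, pattern model: {real:.1f} dBm")
```

With these made-up numbers the cone model predicts roughly 10 dB less interference than the smooth pattern, illustrating why the choice of antenna model matters for interference estimates.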
The Peren-Clement index (PCI) is a methodology to analyze country-specific risk for businesses engaged in international trade and direct investment. This index, established in 1998, provides a guideline when deciding which foreign markets offer the possibility for additional business engagement and investment, and to what extent existing engagement or investment can be increased or should be reduced.
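The PCI itself defines specific indicator groups and weightings; the sketch below only illustrates the general weighted-scoring idea behind such country-risk indices, with made-up indicators, weights, and scores:

```python
def country_risk_score(indicators, weights):
    """Weighted sum of risk indicators (each scored 0-10, higher = riskier)."""
    assert set(indicators) == set(weights)
    return sum(indicators[k] * weights[k] for k in indicators)

# Hypothetical indicator set and weights -- NOT the actual PCI categories:
weights = {"political_stability": 0.3, "economic_performance": 0.4, "legal_framework": 0.3}
country_a = {"political_stability": 2, "economic_performance": 3, "legal_framework": 2}
country_b = {"political_stability": 7, "economic_performance": 6, "legal_framework": 8}

print(country_risk_score(country_a, weights))  # lower score = lower risk
print(country_risk_score(country_b, weights))
```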
Bond graph software can simulate bond graph models without the user needing to manually derive equations. This offers the power to model larger and more complex systems than in the past. Multibond graphs (those with vector bonds) offer a compact model which further eases handling multibody systems. Although multibond graphs can be simulated successfully, the use of vector bonds can present difficulties. In addition, most qualitative, bond graph–based exploitation relies on the use of scalar bonds. This article discusses the main methods for simulating bond graphs of multibody systems, using a graphical software platform. The transformation between models with vector and scalar bonds is presented. The methods are then compared with respect to both time and accuracy, through simulation of two benchmark models. This article is a tutorial on the existing methods for simulating three-dimensional rigid and holonomic multibody systems using bond graphs and discusses the difficulties encountered. It then proposes and adapts methods for simulating this type of system directly from its bond graph within a software package. The value of this study is in giving practical guidance to modellers, so that they can implement the adapted method in software.
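Independent of the vector-versus-scalar question, the end product of any bond graph is a set of state equations for its storage elements. A minimal scalar example, assuming a standard mass-spring-damper bond graph (I, C, and R elements on a common 1-junction) with made-up parameters:

```python
# State variables of the bond graph's storage elements:
#   p = momentum stored in the I element (mass)
#   q = displacement stored in the C element (spring)
def simulate_msd(m=1.0, k=10.0, b=0.5, p=0.0, q=1.0, dt=1e-3, steps=5000):
    """Explicit Euler on dp/dt = -k*q - (b/m)*p, dq/dt = p/m."""
    for _ in range(steps):
        dp = -k * q - (b / m) * p   # effort balance at the 1-junction
        dq = p / m                  # flow into the C element
        p, q = p + dp * dt, q + dq * dt
    return p, q

p, q = simulate_msd()
print(f"after 5 s: momentum {p:.3f}, displacement {q:.3f}")
```

The damper dissipates energy, so after five simulated seconds the oscillation has visibly decayed; the article's three-dimensional multibody case follows the same pattern but with vector-valued storage elements.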
During the dawn of chemistry when the temperature of the young Universe had fallen below ∼4000 K, the ions of the light elements produced in Big Bang nucleosynthesis recombined in reverse order of their ionization potential. With its higher ionization potentials, He++ (54.5 eV) and He+ (24.6 eV) combined first with free electrons to form the first neutral atom, prior to the recombination of hydrogen (13.6 eV). At that time, in this metal-free and low-density environment, neutral helium atoms formed the Universe's first molecular bond in the helium hydride ion HeH+, by radiative association with protons (He + H+ → HeH+ + hν). As recombination progressed, the destruction of HeH+ (HeH+ + H → He + H+2) created a first path to the formation of molecular hydrogen, marking the beginning of the Molecular Age. Despite its unquestioned importance for the evolution of the early Universe, the HeH+ molecule has so far escaped unequivocal detection in interstellar space. In the laboratory, the ion was discovered as long ago as 1925, but only in the late seventies was the possibility that HeH+ might exist in local astrophysical plasmas discussed. In particular, the conditions in planetary nebulae were shown to be suitable for the production of potentially detectable HeH+ column densities: the hard radiation field from the central hot white dwarf creates overlapping Strömgren spheres, where HeH+ is predicted to form, primarily by radiative association of He+ and H. With the GREAT spectrometer onboard SOFIA, the HeH+ rotational ground-state transition at λ149.1 μm is now accessible. We report here its detection towards the planetary nebula NGC7027.
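From the quoted wavelength alone one can recover the transition frequency and the equivalent temperature of the J = 1–0 photon; this is plain unit arithmetic with standard physical constants, not data from the observation:

```python
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
k_B = 1.380649e-23    # Boltzmann constant, J/K

wavelength = 149.1e-6          # m, HeH+ rotational ground-state transition
freq = c / wavelength          # Hz
energy = h * freq              # J
t_equiv = energy / k_B         # K, equivalent temperature of the photon

print(f"frequency: {freq / 1e12:.3f} THz")
print(f"E/k_B:     {t_equiv:.1f} K")
```

The line falls near 2.01 THz in the far infrared, which is why an airborne platform such as SOFIA (above most atmospheric water vapour) was needed to observe it.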
Humankind, it can be argued, lives beyond its means and often at the expense of future generations. This paper starkly demonstrates, with the aid of a mathematical model, the imperative for a sustainable existence. In the model, consumption of resources is represented as a closed system, just like our planet. Long-term survival is only possible if consumption is below the ability of the system to regenerate.
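The paper's actual model is not reproduced here, but the qualitative claim can be illustrated with a generic closed-system sketch: logistic regeneration with constant consumption, where long-term survival hinges on consumption staying below the system's maximum regeneration rate (all parameters invented):

```python
def simulate(resource, capacity, regen_rate, consumption, steps=500):
    """Logistic regeneration minus constant consumption, clipped at zero."""
    for _ in range(steps):
        growth = regen_rate * resource * (1 - resource / capacity)
        resource = max(resource + growth - consumption, 0.0)
    return resource

# Maximum sustainable yield of logistic growth is regen_rate * capacity / 4.
cap, r = 100.0, 0.1   # here r * cap / 4 = 2.5
print(simulate(50.0, cap, r, consumption=2.0))  # below the limit: stock survives
print(simulate(50.0, cap, r, consumption=3.0))  # above the limit: stock collapses
```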
Surrogate models are used to reduce the burden of expensive-to-evaluate objective functions in optimization. By creating models which map genomes to objective values, these models can estimate the performance of unknown inputs, and so be used in place of expensive objective functions. Evolutionary techniques such as genetic programming or neuroevolution commonly alter the structure of the genome itself. A lack of consistency in the genotype is a fatal blow to data-driven modeling techniques: interpolation between points is impossible without a common input space. However, while the dimensionality of genotypes may differ across individuals, in many domains, such as controllers or classifiers, the dimensionality of the input and output remains constant. In this work we leverage this insight to embed differing neural networks into the same input space. To judge the difference between the behavior of two neural networks, we give them both the same input sequence and examine the difference in output. This difference, the phenotypic distance, can then be used to situate these networks in a common input space, allowing us to produce surrogate models which can predict the performance of neural networks regardless of topology. In a robotic navigation task, we show that models trained using this phenotypic embedding perform as well as or better than those trained on the weight values of a fixed-topology neural network. We establish such phenotypic surrogate models as a promising and flexible approach which enables surrogate modeling even for representations that undergo structural changes.
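A minimal sketch of the phenotypic-distance idea, assuming plain fully connected tanh networks and a mean-squared output difference as the distance measure (the paper's networks, probe inputs, and distance details may differ):

```python
import math
import random

def random_mlp(sizes, rng):
    """Random weights for a fully connected tanh net; last entry per neuron is the bias."""
    return [[[rng.uniform(-1, 1) for _ in range(sizes[i] + 1)]
             for _ in range(sizes[i + 1])]
            for i in range(len(sizes) - 1)]

def forward(net, x):
    for layer in net:
        x = [math.tanh(sum(w * v for w, v in zip(ws[:-1], x)) + ws[-1])
             for ws in layer]
    return x

def phenotypic_distance(net_a, net_b, probe_inputs):
    """Mean squared output difference over a shared probe sequence."""
    total = 0.0
    for x in probe_inputs:
        ya, yb = forward(net_a, x), forward(net_b, x)
        total += sum((a - b) ** 2 for a, b in zip(ya, yb))
    return total / len(probe_inputs)

rng = random.Random(0)
probes = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(32)]
net1 = random_mlp([3, 8, 2], rng)      # different topologies,
net2 = random_mlp([3, 5, 5, 2], rng)   # same input/output dimensions
print(phenotypic_distance(net1, net2, probes))
```

The key point is that the two networks have different hidden structure yet are still comparable, because the distance is computed on their behaviour rather than their weights.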
The initially large number of variants is reduced by applying custom variant annotation and filtering procedures. This requires complex software toolchains to be set up and data sources to be integrated. Furthermore, increasing study sizes require growing effort to manage datasets in a multi-user, multi-institution environment. It is common practice to expect numerous iterations of continued respecification and refinement of filter strategies when the cause of a disease or phenotype is unknown. Data-analysis support during this phase is fundamental, because handling the large volume of data manually is infeasible for users with limited computer literacy. Constant feedback and communication are necessary when filter parameters are adjusted or the study grows with additional samples. Consequently, variant filtering and interpretation become time-consuming and hinder a dynamic, explorative data analysis by experts.
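An iterative filter pass of this kind can be sketched as a simple predicate over annotated variant records; the annotation fields, thresholds, and example variants below are all invented for illustration:

```python
# Hypothetical annotated variants: allele frequency, predicted impact, read depth.
variants = [
    {"gene": "BRCA2", "af": 0.0001, "impact": "HIGH",     "depth": 85},
    {"gene": "TTN",   "af": 0.0300, "impact": "MODERATE", "depth": 60},
    {"gene": "CFTR",  "af": 0.0004, "impact": "MODERATE", "depth": 40},
]

def apply_filters(variants, max_af, impacts, min_depth):
    """One filtering iteration; thresholds are respecified between rounds."""
    return [v for v in variants
            if v["af"] <= max_af
            and v["impact"] in impacts
            and v["depth"] >= min_depth]

# First pass: strict. Second pass: impact set relaxed after expert review.
strict  = apply_filters(variants, 0.001, {"HIGH"}, 20)
relaxed = apply_filters(variants, 0.001, {"HIGH", "MODERATE"}, 20)
print([v["gene"] for v in strict])
print([v["gene"] for v in relaxed])
```

The two calls mimic the iterative respecification described above: the expert relaxes one criterion, reruns the filter, and reviews the new candidate set.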
Are quality diversity algorithms better at generating stepping stones than objective-based search?
(2019)
The route to the solution of complex design problems often lies through intermediate "stepping stones" which bear little resemblance to the final solution. By greedily following the path of greatest fitness improvement, objective-based search overlooks and discards stepping stones which might be critical to solving the problem. Here, we hypothesize that Quality Diversity (QD) algorithms are a better way to generate stepping stones than objective-based search: by maintaining a large set of solutions which are of high-quality, but phenotypically different, these algorithms collect promising stepping stones while protecting them in their own "ecological niche". To demonstrate the capabilities of QD we revisit the challenge of recreating images produced by user-driven evolution, a classic challenge which spurred work in novelty search and illustrated the limits of objective-based search. We show that QD far outperforms objective-based search in matching user-evolved images. Further, our results suggest some intriguing possibilities for leveraging the diversity of solutions created by QD.
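A minimal MAP-Elites-style loop illustrates how QD keeps one elite per behavioural niche instead of a single best solution; the domain, descriptor, and hyperparameters below are toy choices, not those used in the paper:

```python
import random

def map_elites(fitness, behavior, n_bins=10, iters=2000, rng=None):
    """Minimal MAP-Elites: one elite per behaviour bin, mutate random elites."""
    rng = rng or random.Random(0)
    archive = {}  # bin index -> (fitness, genome)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Select a random elite and mutate it -- even poor niches are kept
            # alive as potential stepping stones.
            parent = rng.choice(list(archive.values()))[1]
            g = [x + rng.gauss(0, 0.1) for x in parent]
        else:
            g = [rng.uniform(-1, 1) for _ in range(2)]
        b = min(int(behavior(g) * n_bins), n_bins - 1)
        f = fitness(g)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, g)
    return archive

# Toy domain: fitness rewards x near 0.8; the behaviour descriptor is |y|.
fit = lambda g: -abs(g[0] - 0.8)
beh = lambda g: min(abs(g[1]), 0.999)
archive = map_elites(fit, beh)
print(f"{len(archive)} niches filled, "
      f"best fitness {max(f for f, _ in archive.values()):.3f}")
```

Unlike a greedy objective-based search, the archive retains phenotypically different solutions regardless of their fitness rank, which is exactly the stepping-stone-preserving behaviour the abstract describes.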
The initial phase of real-world engineering optimization and design is a process of discovery in which not all requirements can be stated in advance, or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique chance to support engineers and designers in the search for what is possible and high performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection, it can be determined whether the optimization is drifting away from the user's preferences. The optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that takes the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user-selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
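The penalty mechanism can be sketched generically: the further a candidate lies from the user's selected solution, the larger the deduction from its raw objective value (the quadratic penalty and all numbers are invented; the paper's user-selection model may differ):

```python
def penalized_fitness(raw_fitness, genome, user_selection, weight=1.0):
    """Subtract a penalty growing with distance to the user's selected solution."""
    dist2 = sum((g - s) ** 2 for g, s in zip(genome, user_selection))
    return raw_fitness - weight * dist2

# The raw objective prefers x = 1.0, but the user selected a solution near 0.2:
raw = lambda g: -(g[0] - 1.0) ** 2
user_pick = [0.2]
far  = penalized_fitness(raw([0.9]), [0.9], user_pick, weight=2.0)
near = penalized_fitness(raw([0.3]), [0.3], user_pick, weight=2.0)
print(far, near)
```

With the penalty applied, the candidate near the user's pick scores higher even though its raw fitness is worse, steering the search back toward the user's preferences.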
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which contains many low-power devices, often powered by a battery. These devices mainly communicate with the manufacturer's back end and deliver personal data and secrets such as passwords.
Due to the policy goals for sustainable energy production, renewable energy plants such as photovoltaics are increasingly in use. Energy production from solar radiation depends strongly on atmospheric conditions. As the weather changes constantly, electrical power generation fluctuates, making technical planning and control of power grids a complex problem.
Renewable energies play an increasingly important role in energy production in Europe. Unlike coal or gas power plants, solar energy production is highly variable in space and time. This is due to the strong variability of clouds and their influence on the surface solar irradiance. Especially in regions with a large contribution from photovoltaic power production, the intermittent energy feed-in to the power grid can be a risk for grid stability. Therefore, good forecasts of the temporal and spatial variability of surface irradiance are necessary to be able to properly regulate the power supply.