Year of publication: 2020 (361)
This study examines the appropriation and use of voice assistants such as Google Assistant or Amazon Alexa in private households. Our research is based on ten in-depth interviews with users of voice assistants as well as the evaluation of selected interactions from the interaction history. Our results illustrate the occasions on which voice assistants are used in the home, the strategies users have appropriated for interacting with them, how the interaction unfolds, and which difficulties arose during setup and use. A particular focus of the study lies on failed interactions, i.e. situations in which the interaction breaks down or threatens to break down. Our study shows that the assistants' potential often remains untapped, because interaction frequently fails in more complex use cases. Users therefore employ the voice assistant mainly for simple use cases, and new apps and use cases are never even tried. An analysis of the appropriation strategies, for example a self-compiled list of commands, yields insights for the design of support tools as well as for the further development and optimization of voice-based human-machine interfaces.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2020)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. Two hundred forty-four patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented within the first 2 years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% (157 of 204) of patients showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview of all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with an often favourable outcome when compared with many other organic acidurias.
Background: 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method: We performed a systematic literature search to identify all published cases. Two hundred eleven patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results: More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% of them already in the neonatal period. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable, with 62.6% of patients showing normal development.
Conclusion: This comprehensive data analysis provides a systematic overview of all published cases with HMGCLD, including a list of all known HMGCL mutations.
4GREAT is an extension of the German Receiver for Astronomy at Terahertz frequencies (GREAT) operated aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA). The spectrometer comprises four different detector bands and their associated subsystems for simultaneous and fully independent science operation. All detector beams are co-aligned on the sky. The frequency bands of 4GREAT cover 491-635, 890-1090, 1240-1525 and 2490-2590 GHz, respectively. This paper presents the design and characterization of the instrument, and its in-flight performance. 4GREAT saw first light in June 2018, and has been offered to the interested SOFIA communities starting with observing cycle 6.
At the sixth edition of the scientific workshop "Usable Security und Privacy" at Mensch und Computer 2020, current research and practice contributions will be presented, as in previous years, and then discussed with all participants. Three contributions this year address privacy, one addresses security. The workshop continues and develops an established forum in which experts from different domains, e.g. usability and security engineering, can exchange ideas across disciplines.
The simultaneous operation of multiple different semiconducting metal oxide (MOX) gas sensors is demanding for the readout circuitry. The challenge results from the strongly varying responses of the various sensor types to the target gas. While some sensors change their resistance only slightly, other types can react with a resistive change over a range of several decades. Therefore, a suitable readout circuit has to be able to capture all these resistive variations, requiring it to have a very large dynamic range. This work presents a compact embedded system that provides a complete, high-range input interface (readout and heater management) for MOX sensor operation. The system is modular and consists of a central mainboard that holds up to eight sensor modules, each capable of supporting up to two MOX sensors, for a maximum of 16 sensors in total. Its wide input range is achieved using the resistance-to-time measurement method. The system is built solely with commercial off-the-shelf components and tested over a range spanning from 100 Ω to 5 GΩ (9.7 decades) with an average measurement error of 0.27% and a maximum error of 2.11%. The heater management uses a well-tested power circuit and supports multiple modes of operation, enabling the system to be used in highly automated measurement applications. The experimental part of this work presents the results of an exemplary screening of 16 sensors, performed to evaluate the system's performance.
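The resistance-to-time measurement principle named above can be sketched numerically: the sensor resistance and a known reference capacitor form an RC element, and the time until the capacitor voltage crosses a comparator threshold is inverted for the resistance. A minimal illustration; the circuit topology and component values below are hypothetical, not taken from the paper.

```python
import math

def resistance_from_time(t_charge, c_ref, v_supply, v_threshold):
    """Convert a measured RC charge time back to a resistance value.

    For a simple comparator-based charge circuit the threshold is reached
    at t = R * C * ln(Vs / (Vs - Vth)); solving for R gives the sensor
    resistance from the measured time alone.
    """
    return t_charge / (c_ref * math.log(v_supply / (v_supply - v_threshold)))

# Example: with C = 100 nF, Vs = 3.3 V, Vth = 2.0 V, a 1 MOhm sensor
# produces a charge time of roughly 93 ms, which we invert again.
t = 1e6 * 100e-9 * math.log(3.3 / (3.3 - 2.0))
r = resistance_from_time(t, 100e-9, 3.3, 2.0)
```

Because the measured quantity is a time interval, the same counter hardware covers many decades of resistance, which is what gives the method its large dynamic range.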
A Comparative Study of Uncertainty Estimation Methods in Deep Learning Based Classification Models
(2020)
Deep learning models produce overconfident predictions even for misclassified data. This work aims to improve the safety guarantees of software-intensive systems that use deep learning based classification models for decision making by performing a comparative evaluation of different uncertainty estimation methods to identify possible misclassifications.
In this work, uncertainty estimation methods applicable to deep learning models are reviewed, and those which can be seamlessly integrated into existing deployed deep learning architectures are selected for evaluation. The different uncertainty estimation methods, deep ensembles, test-time data augmentation and Monte Carlo dropout with its variants, are empirically evaluated on two standard datasets (CIFAR-10 and CIFAR-100) and two custom classification datasets (optical inspection and RoboCup@Work dataset). A relative ranking between the methods is provided by evaluating the deep learning classifiers on various aspects such as uncertainty quality, classifier performance and calibration. Standard metrics like entropy, cross-entropy, mutual information, and variance, combined with a rank histogram based method to identify uncertain predictions by thresholding on these metrics, are used to evaluate uncertainty quality.
The results indicate that Monte Carlo dropout combined with test-time data augmentation outperforms all other methods by identifying more than 95% of the misclassifications and representing uncertainty in the highest number of samples in the test set. It also yields a better classifier performance and calibration in terms of higher accuracy and lower Expected Calibration Error (ECE), respectively. A Python-based uncertainty estimation library for training and real-time uncertainty estimation of deep learning based classification models is also developed.
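The uncertainty metrics named above (predictive entropy, mutual information) can be computed directly from stacked stochastic forward passes. A minimal NumPy sketch, assuming softmax outputs are collected with dropout left active at test time; the array shapes are illustrative, and this is not the library developed in the thesis.

```python
import numpy as np

def mc_uncertainty(probs):
    """Uncertainty measures from Monte Carlo forward passes.

    probs: array of shape (T, N, C) -- T stochastic forward passes
    (dropout active at test time), N samples, C classes.
    Returns predictive entropy and mutual information per sample.
    """
    mean_p = probs.mean(axis=0)                                   # (N, C)
    eps = 1e-12
    pred_entropy = -(mean_p * np.log(mean_p + eps)).sum(-1)       # total uncertainty
    exp_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)  # aleatoric part
    mutual_info = pred_entropy - exp_entropy                      # epistemic part
    return pred_entropy, mutual_info

# Toy check: two MC passes that disagree completely give maximal
# mutual information (ln 2 for two classes).
p = np.array([[[1.0, 0.0]], [[0.0, 1.0]]])  # shape (T=2, N=1, C=2)
h, mi = mc_uncertainty(p)
```

Thresholding on such per-sample metrics is one way to flag potential misclassifications, as done in the evaluation described above.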
Computers can help us to trigger our intuition about how to solve a problem. But how does a computer take into account what a user wants and update these triggers? User preferences are hard to model, as they are by nature vague, depend on the user's background, and are not always deterministic, changing with the context and process under which they were established. We posit that the process of preference discovery should be the object of interest in computer aided design or ideation. The process should be transparent, informative, interactive and intuitive. We formulate Hyper-Pref, a cyclic co-creative process between human and computer, which triggers the user's intuition about what is possible and is updated according to what the user wants based on their decisions. We combine quality diversity algorithms, a divergent optimization method that can produce many, diverse solutions, with variational autoencoders to model both that diversity and the user's preferences, discovering the preference hypervolume within large search spaces.
Failure prognostics builds on continuous data acquisition and processing as well as fault diagnosis, and is an essential part of predictive maintenance in smart manufacturing systems, enabling condition-based maintenance, optimised use of plant equipment, improved uptime and yield, and the prevention of safety problems. Given known control inputs into a plant and real sensor outputs or simulated measurements, the model-based part of the proposed hybrid method provides numerical values of unknown parameter degradation functions at sampling time points by evaluating equations that have been derived offline from a bicausal diagnostic bond graph. These numerical values are computed concurrently with the constant monitoring of a system and are stored in a buffer of fixed length. The data-driven part of the method provides a sequence of remaining useful life estimates by repeatedly projecting the parameter degradation into the future based on the values in a sliding time window. Existing software can be used to determine the best fitting function and can account for its random parameters. The continuous parameter estimation and its projection into the future can be performed in parallel for multiple isolated simultaneous parametric faults on a multicore, multiprocessor computer.
The proposed hybrid bond graph model-based, data-driven method is verified by an offline simulation case study of a typical power electronic circuit. It can be used to implement embedded systems that enable cooperating machines in smart manufacturing to perform prognostics themselves.
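The data-driven projection step can be illustrated with a short sketch: fit a trend to the buffered parameter values in the sliding window and extrapolate it to a failure threshold. This is a generic curve-fitting stand-in under toy assumptions, not the bond-graph machinery of the paper.

```python
import numpy as np

def estimate_rul(times, values, failure_threshold, degree=1):
    """Remaining-useful-life estimate from a sliding window of samples.

    Fits a polynomial trend to the buffered degradation values and returns
    the time from the newest sample until the trend crosses the failure
    threshold (None if no future crossing exists).
    """
    trend = np.poly1d(np.polyfit(times, values, degree))
    roots = (trend - failure_threshold).roots
    future = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > times[-1]]
    return min(future) - times[-1] if future else None

# Toy example: a parameter drifting linearly from 10 toward a limit of 5.
t = np.array([0.0, 1.0, 2.0, 3.0])
x = 10.0 - 0.5 * t          # degrades by 0.5 per time unit
rul = estimate_rul(t, x, failure_threshold=5.0)
```

Repeating this fit each time the window slides yields the sequence of RUL estimates described above; a real implementation would also propagate the uncertainty of the fitted parameters.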
Striated muscle contraction is regulated by the translocation of troponin-tropomyosin strands over the thin filament surface. Relaxation relies partly on highly-favorable, conformation-dependent electrostatic contacts between actin and tropomyosin, which position tropomyosin such that it impedes actomyosin associations. Impaired relaxation and hypercontractile properties are hallmarks of various muscle disorders. The α-cardiac actin M305L hypertrophic cardiomyopathy-causing mutation lies near residues that help confine tropomyosin to an inhibitory position along thin filaments. Here, we investigate M305L actin in vivo, in vitro, and in silico to resolve emergent pathological properties and disease mechanisms. Our data suggest the mutation reduces actin flexibility and distorts the actin-tropomyosin electrostatic energy landscape that, in muscle, result in aberrant contractile inhibition and excessive force. Thus, actin flexibility may be required to establish and maintain interfacial contacts with tropomyosin as well as facilitate its movement over distinct actin surface features and is, therefore, likely necessary for proper regulation of contraction.
Solving differential-algebraic equations (DAEs) efficiently by means of appropriate numerical schemes for time-integration is an ongoing topic in applied mathematics. In this context, efficient computation becomes particularly relevant for the large systems that occur in many fields of practical application; corresponding examples arise when simulating network structures describing the transport of fluid and gas, or electrical circuits. Due to the stiffness properties of DAEs, time-integration of such problems generally demands implicit strategies. Among the schemes that prove to be an adequate choice are linearly implicit Runge-Kutta methods in the form of Rosenbrock-Wanner (ROW) schemes. Compared to fully implicit methods, they are easy to implement and avoid the solution of non-linear equations by including Jacobian information within their formulation. However, Jacobian calculations are a costly operation, so the necessity of computing the exact Jacobian with every successful time-step proves to be a considerable drawback. To overcome this drawback, a ROW-type method is introduced that allows for non-exact Jacobian entries when solving semi-explicit DAEs of index one. The resulting scheme thus makes it possible to exploit several strategies for saving computational effort, for example partial explicit integration of non-stiff components, more advantageous sparse Jacobian structures, or time-lagged Jacobian information. In fact, due to the property of allowing for non-exact Jacobian expressions, the given scheme can be interpreted as a generalized ROW-type method for DAEs, since it covers many different ROW-type schemes known from the literature. To derive the order conditions of the ROW-type method introduced, a theory is developed that identifies the occurring differentials and coefficients graphically by means of rooted trees.
Rooted trees for describing numerical methods were originally introduced by J. C. Butcher. They significantly simplify the determination and definition of relevant characteristics because they allow straightforward procedures to be applied. In fact, the theory presented combines strategies used to represent ROW-type methods with exact Jacobian for DAEs and ROW-type methods with non-exact Jacobian for ODEs. For this purpose, new types of vertices are considered in order to describe the occurring non-exact elementary differentials completely. The resulting theory thus automatically comprises relevant approaches known from the literature. As a consequence, it makes it possible to recognize the order conditions of the familiar methods covered and to identify new conditions. With the theory developed, new sets of coefficients are derived that realize the ROW-type method introduced up to orders two and three. Some of them are constructed based on methods known from the literature that satisfy additional conditions for the purpose of avoiding effects of order reduction. It is shown that these methods can be improved by means of the new order conditions derived without having to increase the number of internal stages. Convergence of the resulting methods is analyzed with respect to several academic test problems. The results verify the theory and the order conditions found, as only schemes satisfying the predicted order conditions preserve their order when using non-exact Jacobian expressions.
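For orientation, the generic ROW-type structure the text refers to can be written out. A sketch for an autonomous ODE $y' = f(y)$ (the semi-explicit DAE variant adds the algebraic constraints), with $T$ a possibly non-exact approximation of the Jacobian $f_y(y_0)$ and $\alpha_{ij}$, $\gamma_{ij}$, $b_i$ the method coefficients:

```latex
% One step y_0 -> y_1 of an s-stage ROW-type (W-) method with step size h
% and matrix T \approx f_y(y_0); exactness of T is not required:
\begin{aligned}
  (I - h\,\gamma_{ii}\,T)\, k_i
    &= h\, f\Big(y_0 + \sum_{j=1}^{i-1} \alpha_{ij}\, k_j\Big)
       + h\, T \sum_{j=1}^{i-1} \gamma_{ij}\, k_j,
    \qquad i = 1, \dots, s, \\
  y_1 &= y_0 + \sum_{i=1}^{s} b_i\, k_i.
\end{aligned}
```

Each stage requires only the solution of one linear system with the frozen matrix $I - h\gamma_{ii}T$, with no non-linear iteration; and because $T$ need not be exact, additional elementary differentials appear in the Taylor expansion, which is precisely what the extended rooted-tree theory with new vertex types accounts for.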
Autonomous driving enables new mobility concepts such as shared autonomous services. Although significant research has been done on passenger-car interaction, work on passenger interaction with robo-taxis is still rare. In this paper, we tackle the question of how passengers experience robo-taxis as a service in real-life settings to inform the interaction design. We conducted a Wizard of Oz study with an electric vehicle in which the driver was hidden from the passenger to simulate the service experience of a robo-taxi. 10 participants had the opportunity to use the simulated shared autonomous service in real-life situations for one week. By the week's end, 33 rides were completed and recorded on video. We also flanked the study with interviews conducted with all participants before and after. The findings provide insights into four design themes that can inform the service design of robo-taxis along its different stages, including hailing, pick-up, travel, and drop-off.
Abschlussbericht zum BMBF-Fördervorhaben Enabling Infrastructure for HPC-Applications (EI-HPC)
(2020)
Optimization plays an essential role in industrial design, but is not limited to minimization of a simple function, such as cost or strength. These tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high performing solutions. These techniques often require the evaluation of millions of solutions -- making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method of modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model -- but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency by several times. In our second contribution we combine model-based optimization with MAP-Elites, a QD algorithm. A model of performance is created from known designs, and MAP-Elites creates a new set of designs using this approximation. A subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common “recipe” for high performance. This learned encoding can then be reused by other algorithms for rapid optimization, including MAP-Elites. Throughout this thesis, though the focus of our vision is design, we examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
Graph drawing with spring embedders employs a |V| × |V| computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices within a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
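The cutoff-radius repulsion described above can be written down directly; the quadratic all-pairs loop below is the baseline that a search structure (or, in the paper, hardware ray-tracing traversal) replaces with a neighbour query. The inverse-square force model is a generic illustrative choice, not necessarily the one used in the paper.

```python
import math

def repulsive_forces(pos, radius, strength=1.0):
    """All-pairs repulsion with a cutoff radius (the O(|V|^2) baseline).

    Each vertex only influences vertices closer than `radius`; spatial
    search structures accelerate exactly this neighbour-finding step.
    """
    forces = [[0.0, 0.0] for _ in pos]
    for i, (xi, yi) in enumerate(pos):
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                f = strength / (d * d)      # inverse-square repulsion
                forces[i][0] += f * dx / d  # push i away from j
                forces[i][1] += f * dy / d
    return forces

# Two nearby vertices push each other apart along the x axis.
f = repulsive_forces([(0.0, 0.0), (1.0, 0.0)], radius=2.0)
```

Casting the neighbour query as "which vertices does a short ray from this position hit?" is what lets RT cores traverse the acceleration structure in hardware.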
The term "additive manufacturing" covers all processes that serve to build parts layer by layer from CAD data. The build-up is always selective, following the positions specified by the CAD data. While this technology is already established in industrial use for metals and plastics, it is still in an early stage of development and application for ceramic materials. This makes it all the more important to give a comprehensive account of the current state of the art and of the potential for ceramic components.
AErOmAt Abschlussbericht
(2020)
The AErOmAt project aimed to develop new methods for saving a substantial share of the aerodynamic simulations in computationally expensive optimization domains. In this way, Hochschule Bonn-Rhein-Sieg (H-BRS) has made a contribution to energy-efficiency research that is both societally relevant and commercially exploitable. The project also led to a faster integration of the newly appointed applicants into the existing research structures.
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which searches for diversity in objective space; multimodal optimization, which tries to spread the solutions out in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility for the analysis of diversity, and a simple domain in which to compare the paradigms and gain insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that are capable of producing high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternate regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of boundary-adhering object instances is warranted. However, current object detection techniques seem to ignore this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering when generating the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space whose two coordinate axes are the natural log transformations of the width and height of the ground-truth bounding boxes.
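The proposed log-space clustering can be sketched as plain k-means over (ln w, ln h) pairs: clustering in the log-transformed space matches the logarithmic size encoding that anchor-based detectors regress in. This is an illustrative re-implementation under stated assumptions, not the thesis code.

```python
import math
import random

def log_space_anchors(boxes, k, iters=50, seed=0):
    """Cluster ground-truth box sizes in (ln w, ln h) space.

    Returns k anchor (width, height) pairs obtained by running k-means on
    the log-transformed sizes and mapping the centres back via exp().
    """
    pts = [(math.log(w), math.log(h)) for w, h in boxes]
    rng = random.Random(seed)
    centers = rng.sample(pts, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    # Map the cluster centres back to anchor widths/heights.
    return [(math.exp(cx), math.exp(cy)) for cx, cy in centers]

# Two clearly separated size groups yield one small and one large anchor.
boxes = [(10, 20), (12, 22), (100, 200), (110, 190)]
anchors = sorted(log_space_anchors(boxes, k=2))
```

In log space the cluster centres are geometric rather than arithmetic means of the box sizes, so large boxes no longer dominate the distance metric the way they do when clustering raw pixel dimensions.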
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by a substantial ≈6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all the layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈0.77 mAP. In addition, the forward-pass latencies of both SqueezeDet and SqueezeDetOcta are close to ≈19 ms. Considering boundary adhesion during training improved the baseline SqueezeDet network by ≈2.62 mAP. A SqueezeDet network paired with logarithmically extracted anchors improved the performance of the baseline SqueezeDet network by ≈1.85 mAP.
In summary, this work demonstrates that if given sufficient fine instance annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations, whilst having the same forward pass latency as that of the bounding box predicting network. The results justify the merits of logarithmically extracted anchors to boost the performance of any anchor-based object detection network. The results also showed that the special handling of image boundary adhering object instances produces more performant object detectors.
Analytische Chemie II
(2020)
This workbook accompanies the successful textbook Skoog/Holler/Crouch, Instrumentelle Analytik, and is designed primarily for self-study. In five parts it summarizes the lecture content of advanced analytical chemistry and explains it by means of selected examples: mass spectrometry and nuclear magnetic resonance spectroscopy address the investigation of molecules, and numerous electroanalytical methods such as potentiometry, coulometry, amperometry and voltammetry are also covered. An overview of more specialized analytical techniques discusses, among other things, the use of radioactive substances and of various fluorescence techniques, as well as methods of information retrieval in the increasingly important field of electrochemical and optical sensor technology and its potential for automation. The book concludes with a summary of statistical principles and methods of application that are simply indispensable in analytics. To facilitate independent learning, all parts of the book repeatedly refer to essential sections and figures of the textbook.
Due to the use of fossil fuel resources, many environmental problems have been growing. Recent research therefore focuses on the use of environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that double-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity is discussed with regard to biomass source, pulping process, and degree of purification. Lignin obtained from industrial black liquor is compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). DPPH inhibition is affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both the scavenging and the antimicrobial activity depend on the biomass source: Organosolv of softwood > Kraft of softwood > Organosolv of grass.
Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C as well as at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin leaching from the produced films affected the activity positively, and the addition of chitosan enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food spoilage bacteria.
Are There Extended Cognitive Improvements from Different Kinds of Acute Bouts of Physical Activity?
(2020)
Acute bouts of physical activity of at least moderate intensity have been shown to enhance cognition in young as well as older adults. This effect has been observed for different kinds of activities such as aerobic or strength and coordination training. However, only a few studies have directly compared these activities regarding their effectiveness. Further, most previous studies have mainly focused on inhibition and have not examined other important core executive functions (i.e., updating, switching), which are essential for our behavior in daily life (e.g., staying focused, resisting temptations, thinking before acting). Therefore, this study aimed to directly compare two kinds of activities, aerobic and coordinative, and examine how they might affect executive functions (i.e., inhibition, updating, and switching) in a test-retest protocol. This has practical implications, as coordinative exercises, for example, require little space and would be preferable in settings such as an office or a classroom. Furthermore, we designed our experiment in such a way that learning effects were controlled. We then tested the influence of acute bouts of physical activity on executive functioning in both young and older adults (young 16-22 years, old 65-80 years). Overall, we found no differences between aerobic and coordinative activities; in fact, benefits from physical activity occurred only in the updating tasks in young adults. Additionally, we observed some learning effects that might influence the results. It is thus important to control cognitive tests for learning effects in test-retest studies and to analyze the effects of physical activity at the construct level of executive functions.
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly utilized to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic must be highly scalable and are therefore denoted Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems: in his doctoral thesis, he introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is, besides high scalability, another crucial quality factor.
The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing exploitation of the DDE, which generalizes the knowledge contained in the current archive of elites, against exploration of new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems and provides a low-dimensional representation which can then be re-used. We evaluate the DDE approach by evolving solutions for the inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but also produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
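The interplay of archive, learned encoding, and the exploit/explore balance can be sketched in a minimal toy loop. This is not the authors' implementation: the VAE is replaced here by a linear (SVD/PCA-style) encoding fitted to the elites, and the robotics tasks by a synthetic fitness and behavior function; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, LATENT, BINS = 20, 4, 10  # genome size, encoding size, archive resolution

def fitness_and_behavior(x):
    # Toy domain: fitness rewards small parameter magnitudes;
    # behavior descriptors are the first two genes squashed into (-1, 1).
    return -np.sum(x ** 2), np.tanh(x[:2])

def bin_of(b):
    # Map a behavior descriptor in (-1, 1)^2 to a discrete archive cell.
    idx = np.clip(((b + 1) / 2 * BINS).astype(int), 0, BINS - 1)
    return tuple(idx)

archive = {}  # behavior cell -> (fitness, genome), MAP-Elites style

def try_insert(x):
    f, b = fitness_and_behavior(x)
    k = bin_of(b)
    if k not in archive or f > archive[k][0]:
        archive[k] = (f, x)

# Bootstrap the archive with random genomes (direct encoding).
for _ in range(200):
    try_insert(rng.normal(0, 1, DIM))

for gen in range(50):
    elites = np.array([x for _, x in archive.values()])
    # "Train" the data-driven encoding on the current elites:
    # a linear low-dimensional map standing in for the paper's VAE.
    mean = elites.mean(axis=0)
    _, _, vt = np.linalg.svd(elites - mean, full_matrices=False)
    decode = lambda z: mean + z @ vt[:LATENT]
    for _ in range(50):
        if rng.random() < 0.5:
            # Exploit the DDE: sample in the learned latent space.
            child = decode(rng.normal(0, 1, LATENT))
        else:
            # Explore: mutate an elite directly in genome space.
            parent = elites[rng.integers(len(elites))]
            child = parent + rng.normal(0, 0.2, DIM)
        try_insert(child)

print(f"cells filled: {len(archive)}, "
      f"best fitness: {max(f for f, _ in archive.values()):.3f}")
```

The 50/50 split between latent-space sampling and direct mutation mirrors the balance the abstract describes: the learned encoding compresses what the archive already knows, while direct mutation keeps discovering solutions the encoding cannot yet express.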
Besides the individual importance of health for every person, the relevance of "healthy employees" is also growing. Particularly in times of full employment, skilled-labor shortages, and a rising retirement age, the health of employees and each individual's associated ability to work are coming into sharper focus. The state, social insurance providers, and companies are increasingly interested in designing workplaces and working conditions in a health-promoting way. Here, workplace health promotion (BGF) provides the framework for the many health-promoting interventions found in occupational settings. In this context, the work break can be regarded as a suitable intervention, although it can take many different forms.
Communication is rightly regarded as the supreme discipline of corporate health management (BGM). The first step is to sensitize employees to the topic of health and inform them with relevant materials, in order ultimately to motivate them to participate in health offerings. These three steps are also recommended for communication in the digital age. Health platforms and/or health apps can support this communication. Finding the right amount of communication is a further challenge in digital times, since information can easily get lost in the flood of e-mails. A combination of push and pull communication has proven effective for sparking employees' interest in health, so that they then choose independently from existing offerings (information, courses, etc.).
Against the backdrop of competition and cost pressure, lean and effective business processes are becoming ever more important for companies. To meet this challenge, companies focus on identifying new, innovative potential. Because monotonous, rule-based processes can be automated by software robots, interest in Robotic Process Automation (RPA) has grown steadily in recent years. Before companies decide for or against the use of RPA, however, decision-makers first need to gain an understanding of RPA and be able to assess its potential applications and risks. This article addresses this need by identifying and evaluating both on the basis of a literature review. The outlook assesses the future potential of RPA.
Bionik
(2020)
How do they do that... one may ask in view of the astonishing abilities of some living organisms. Biomimetics goes a step further and asks: and how can we imitate it? This is a focus of this textbook, which not only explains biomimetics through numerous examples but also teaches a procedure for identifying biological solutions and transferring them to technical applications. Basic information on biology and fundamentals of engineering design ensure easy access to the material. With 3D printing as a key technology and a discussion of sustainability, the book also addresses current developments. This holistic view of biomimetics is intended to enable and motivate readers to carry out their own biomimetic projects. (Publisher's description)
Object detectors have improved considerably in recent years by using advanced Convolutional Neural Network (CNN) architectures. However, many detector hyper-parameters are generally not tuned and are used with the values set by the detector authors. Blackbox optimization methods have gained more attention in recent years because of their ability to optimize the hyper-parameters of various machine learning algorithms and deep learning models. However, these methods have not been explored for improving the hyper-parameters of CNN-based object detectors. In this research work, we propose the use of blackbox optimization methods such as Gaussian Process based Bayesian Optimization (BOGP), Sequential Model-based Algorithm Configuration (SMAC), and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to tune the hyper-parameters of Faster R-CNN and the Single Shot MultiBox Detector (SSD). In Faster R-CNN, tuning the input image size and the prior box anchor scales and ratios using BOGP, SMAC, and CMA-ES increased performance by around 1.5% in terms of Mean Average Precision (mAP) on PASCAL VOC. Tuning the anchor scales of SSD increased the mAP by 3% on the PASCAL VOC and marine debris datasets. On the COCO dataset with SSD, mAP improvements are observed for medium and large objects, but mAP decreases by 1% for small objects. The experimental results show that blackbox optimization methods increase mAP performance when optimizing object detectors. Moreover, they achieved better results than the hand-tuned configurations in most cases.
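The tuning setup can be illustrated with a minimal sketch. Everything here is a stand-in: the expensive train-and-evaluate mAP run is replaced by a synthetic objective, a simple (1+λ) evolution strategy stands in for CMA-ES/SMAC/BOGP, and the six SSD-style anchor scales are hypothetical values, not the paper's configuration.

```python
import random

# Hypothetical anchor-scale search space for an SSD-style detector:
# six scales, one per feature map, each constrained to [0.05, 1.05].
DEFAULT_SCALES = [0.1, 0.2, 0.375, 0.55, 0.725, 0.9]

def evaluate_map(scales):
    # Stand-in for a real (expensive) training + mAP evaluation run.
    # This synthetic objective peaks near a shifted version of the defaults,
    # so the default configuration is deliberately suboptimal.
    target = [s * 0.8 for s in DEFAULT_SCALES]
    return 1.0 - sum((a - b) ** 2 for a, b in zip(scales, target))

def one_plus_lambda_es(x0, sigma=0.05, lam=8, iters=40, seed=1):
    """Simple (1+lambda) evolution strategy, a minimal stand-in for CMA-ES."""
    rng = random.Random(seed)
    best, best_f = list(x0), evaluate_map(x0)
    for _ in range(iters):
        # Sample lam mutated children around the current best, clipped
        # to the valid scale range, and keep any that improve.
        children = [
            [min(1.05, max(0.05, x + rng.gauss(0, sigma))) for x in best]
            for _ in range(lam)
        ]
        for c in children:
            f = evaluate_map(c)
            if f > best_f:
                best, best_f = c, f
    return best, best_f

tuned, tuned_map = one_plus_lambda_es(DEFAULT_SCALES)
print(f"default: {evaluate_map(DEFAULT_SCALES):.4f}, tuned: {tuned_map:.4f}")
```

The key property this illustrates is that the optimizer treats the detector purely as a black box: it only sees scale vectors in and a scalar score out, which is why the same loop applies unchanged to BOGP, SMAC, or CMA-ES, with each budgeted evaluation corresponding to one full training-and-validation run.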
Object detectors have improved considerably in recent years by using advanced CNN architectures. However, many detector hyper-parameters are tuned manually or used with the values set by the detector authors. Automatic hyper-parameter optimization has not been explored for improving the hyper-parameters of CNN-based object detectors. In this work, we propose the use of black-box optimization methods to tune the prior/default box scales in Faster R-CNN and SSD, using Bayesian Optimization, SMAC, and CMA-ES. We show that tuning the input image size and prior box anchor scales increases mAP by 2% for Faster R-CNN on PASCAL VOC 2007, and by 3% for SSD. On the COCO dataset with SSD there are mAP improvements for medium and large objects, but mAP decreases by 1% for small objects. We also perform a regression analysis to find the significant hyper-parameters to tune.
Business Management
(2020)