There is a severe prevalence of clinical vitamin A deficiency (VAD) among Ghanaians and in many African countries. Food-based diets have been suggested as a more sustainable approach to solving the VAD situation in Africa. In this study, a participatory action research approach involving orange-fleshed sweet potato farmers, gari processors within the Central Region, and academia was adopted to develop gari containing provitamin A beta-carotene. Gari is a major staple for Ghanaians and people in the West African subregion owing to its affordability and swelling capacity. It is mainly eaten raw with water, sugar, groundnut and milk as gari-soakings, or with hot water to prepare a gelatinized food called gari-kai in Ghana or "eba" among Nigerians. However, gari is limited in provitamin A carotenoids. Orange-fleshed sweet potato (OFSP) is known to contain large amounts of the vitamin A precursor. Therefore, the addition of OFSP to gari has the potential to fight the high prevalence of vitamin A deficiency in less developed regions of Africa. To this end, different proportions of OFSP were used to substitute cassava mash and fermented spontaneously to produce composite gari, a gritty, crispy, ready-to-eat food product. Both the amount of OFSP and the fermentation duration caused significant increases in the β-carotene content of the composite gari. OFSP addition reduced the luminance, while roasting made the composite gari yellower compared with the cake used. Addition of OFSP negatively affected the swelling capacity of the gari, although not significantly. The taste, texture, flavour and overall preference for the composite gari decreased with the addition of OFSP, but fermentation duration (FD) improved them. The sample with 10% OFSP and an FD of 1.81 days was found to produce the optimal gari.
One portion of the optimal gari would contribute 34.75, 23.2, 23.2, 27, 17 and 16% of the vitamin A requirements of children, adolescents, adult males, adult females, pregnant women and lactating mothers, respectively. The study demonstrated that partial substitution of cassava with OFSP in gari production has the potential to fight the high prevalence of vitamin A deficiency in less developed regions of Africa, while the involvement of farmers and processors prior to the research design phase enhanced the adoption of intervention strategies.
Neuromorphic computing aims to mimic the computational principles of the brain in silico and has motivated research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) capture local, independent changes in brightness, and offer superior power consumption, response latencies, and dynamic ranges compared to frame-based cameras. SNNs replicate neuronal dynamics observed in biological neurons and propagate information in sparse sequences of "spikes". Apart from biological fidelity, SNNs have demonstrated potential as an alternative to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Although potentially beneficial for robotics, the novel event-driven and spike-based paradigms remain scarcely explored outside the domain of aerial robots.
To investigate the utility of brain-inspired sensing and data processing in a robotics application, we developed a neuromorphic approach to real-time, online obstacle avoidance on a manipulator with an onboard camera. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans in a dynamic motion primitive formulation. We conducted simulated and real experiments with a Kinova Gen3 arm performing simple reaching tasks involving static and dynamic obstacles. Our implementation was systematically tuned, validated, and tested in sets of distinct task scenarios, and compared to a non-adaptive baseline through formalized quantitative metrics and qualitative criteria.
The neuromorphic implementation facilitated reliable avoidance of imminent collisions in most scenarios, with 84% and 92% median success rates in simulated and real experiments, where the baseline consistently failed. Adapted trajectories were qualitatively similar to baseline trajectories, indicating low impacts on safety, predictability and smoothness criteria. Among notable properties of the SNN were the correlation of processing time with the magnitude of perceived motions (captured in events) and robustness to different event emulation methods. Preliminary tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation method. These results motivate future efforts to incorporate SNN learning, utilize neuromorphic processors, and target other robot tasks to further explore this approach.
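The spike-based processing described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most SNNs. This is a generic sketch, not the authors' implementation; the time constant and threshold values are arbitrary assumptions:

```python
def lif_spikes(inputs, tau=10.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    inputs: sequence of input currents, one per time step.
    Returns a 0/1 spike train of the same length.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * (1.0 - 1.0 / tau) + i  # leak, then integrate input
        if v >= v_th:                  # threshold crossed: emit a spike
            spikes.append(1)
            v = 0.0                    # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant input yields a regular spike train; stronger input spikes more often.
rate_weak = sum(lif_spikes([0.15] * 100))
rate_strong = sum(lif_spikes([0.40] * 100))
```

In the setting above, event-camera pixel activity would drive such neurons in a convolutional SNN, and downstream spike counts would be decoded into avoidance motions.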
An essential measure of autonomy in service robots designed to assist humans is adaptivity to the various contexts of human-oriented tasks. These robots may have to frequently execute the same action, but subject to subtle variations in task parameters that determine optimal behaviour. Such actions are traditionally executed by robots using pre-determined, generic motions, but a better approach could utilize robot arm maneuverability to learn and execute different trajectories that work best in each context.
In this project, we explore a robot skill acquisition procedure that allows incorporating contextual knowledge, adjusting executions according to context, and improving through experience, as a step towards more adaptive service robots. We propose an apprenticeship learning approach to achieving context-aware action generalisation on the task of robot-to-human object hand-over. The procedure combines learning from demonstration, with which a robot learns to imitate a demonstrator's execution of the task, and a reinforcement learning strategy, which enables subsequent experiential learning of contextualized policies, guided by information about context that is integrated into the learning process. By extending the initial, static hand-over policy to a contextually adaptive one, the robot derives and executes variants of the demonstrated action that most appropriately suit the current context. We use dynamic movement primitives (DMPs) as compact motion representations, and a model-based Contextual Relative Entropy Policy Search (C-REPS) algorithm for learning policies that can specify hand-over position, trajectory shape, and execution speed, conditioned on context variables. Policies are learned using simulated task executions before transferring them to the robot and evaluating the emergent behaviours.
We demonstrate the algorithm’s ability to learn context-dependent hand-over positions, and new trajectories, guided by suitable reward functions, and show that the current DMP implementation limits learning context-dependent execution speeds. We additionally conduct a user study involving participants assuming different postures and receiving an object from the robot, which executes hand-overs by either exclusively imitating a demonstrated motion, or selecting hand-over positions based on learned contextual policies and adapting its motion accordingly. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
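As a hedged illustration of the DMP motion representation used here, the sketch below rolls out a one-dimensional discrete DMP with Euler integration. The gain values and the zero forcing term are illustrative assumptions, not the study's parameters; learned weights (e.g. from C-REPS) would reshape the path between start and goal:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, centers, widths,
                alpha=25.0, beta=6.25, alpha_x=4.0, dt=0.01, steps=150):
    """Roll out a 1-D discrete DMP: a spring-damper system pulled
    towards `goal`, shaped by a forcing term of Gaussian basis functions."""
    y, dy, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)           # basis activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha * (beta * (goal - y) - dy) + f           # transformation system
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x * dt                               # canonical system decays
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP converges smoothly from start to goal.
traj = dmp_rollout(0.0, 1.0, np.zeros(10), np.linspace(0, 1, 10), np.full(10, 50.0))
```

The weights of the forcing term are exactly the kind of compact parameter vector a policy-search algorithm can condition on context variables.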
In this paper, a performance evaluation of Frequency Modulated Chaotic On-Off Keying (FM-COOK) in AWGN, Rayleigh and Rician fading channels is presented. The simulation results show that an improvement in BER can be gained by combining FM modulation with COOK for SNR values below 10 dB in the AWGN case and below 6 dB for Rayleigh and Rician fading channels.
ICT integration by university teaching professionals is emerging as a major concern. This study demonstrates the need to address the integration problem by encouraging the use of existing metrics in indexing ICT integration as an ICT governance strategy. The quality of integration depends on quality indexing, which in turn depends on the quality of existing metrics and their use. Considering the role that continuous-improvement indexing by University Information Technology Teaching Professionals (UITTPs) can play in the autonomic governance of the continuously emerging ICTs in university teaching, this study examined the extent of use of existing ICT integration metrics for indexing ICT integration by UITTPs. Six metrics for ICT integration were investigated: time, workshop course content relevance, technical malfunctions, support conditions, support services, and motivation and commitment to student learning and staff professional development. A descriptive survey design was used, in which interviews were conducted with UITTPs in three (3) public and three (3) private purposively selected universities in Kenya. The findings were analyzed descriptively and inferentially using Kendall's coefficient of concordance, tested using Chi-square on the extent of concordance, and presented with the help of frequency tables, figures and percentages. The findings revealed that all the metrics are rarely used for indexing ICT integration (32.8%), and most UITTPs were in discordance on this level of use for all six metrics except support conditions. This implies that the use of metrics for indexing integration has not been formalized across Kenyan universities. Universities need to be encouraged to identify suitable metrics, formalize them, and increase their frequency of use.
Secondly, socio-based metrics such as content relevance are used more frequently for indexing integration than technical metrics; a socio-technical balance therefore needs to be emphasized by university management when determining and using metrics for indexing ICT integration.
YAWL (Yet Another Workflow Language) is an open source Business Process Management System, first released in 2003. YAWL grew out of a university research environment to become a unique system that has been deployed worldwide as a laboratory environment for research in Business Process Management and as a productive system in other scientific domains.
With training and research oriented towards sustainable development since 2006 (water and sanitation, infrastructure, renewable energies and energy processes), Foundation 2iE positions itself as a reference institute that trains innovative engineer-entrepreneurs for the needs and challenges of Africa's development. A Center of Excellence of UEMOA and the World Bank, the institute places CSR at the heart of its strategy and aims to be a showcase in this field in Africa.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants felt that the HeadJoystick could be more physically fatiguing after prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
The program was an exchange between Hochschule Bonn-Rhein-Sieg, University of Applied Sciences and the University of Cape Coast. The program was advertised and we applied. We were shortlisted, interviewed, and selected as candidates for the exchange program. The program took a period of five months. We set off from Accra, Ghana to Germany on 7th September 2015, and returned to Ghana on 25th January 2016.
The role of tourism entrepreneurship in rural development continues to be a subject of interest and debate among academics and practitioners. Theoretically, it is anticipated that tourism entrepreneurship will lead to livelihood diversification, enhancement and ultimately a revitalization of the rural economy. While tourism is posited as an accessible entrepreneurship pathway, there is a dearth of information regarding rural dwellers' actual experiences with it, especially within the Ghanaian context. Using a case study approach and qualitative data from Wli, a rural tourism destination in Ghana, this paper delves into the opportunities and concerns associated with tourism entrepreneurship in rural areas. Data was obtained between November and December 2016 from 27 persons who were either tourism enterprise owners or employees. Findings from the study showed that entrepreneurial activities centred on the provision of accommodation, food and beverage, souvenir and guiding services. The nature of the activities enabled easy transfer of existing skills and knowledge. Further, entry into tourism entrepreneurship was perceived to be easy by the majority of study participants. These findings confirm the potential for tourism to be employed in boosting entrepreneurial activities in rural areas. Nevertheless, there were concerns regarding access to credit, institutional support, unhealthy competition, low incomes, unguaranteed pensions, and the seasonality and skewness of demand. These concerns threatened the growth and sustainability of tourism entrepreneurship within the community. From a policy perspective, there is a need for institutional recognition and support for tourism entrepreneurial intentions and activities in rural areas. Practice-wise, credit facilities need to be designed specifically for tourism-related rural enterprises. Further, periodic skills and knowledge augmentation programmes must be initiated to help expand the skill sets of rural entrepreneurs.
Finally, there is a need to form trade-related networks that provide a platform for knowledge and experience sharing among the entrepreneurs.
In the last two decades, studies that analyse the political economy of sustainable energy transitions have increasingly become available. Yet very few attempts have been made to synthesize the factors discussed in the growing literature. This paper reviews the extant empirical literature on the political economy of sustainable energy transitions. Using a well-defined search strategy, a total of 36 empirical contributions covering the period 2008 to 2022 are reviewed in full text. Overall, the findings highlight the roles of vested interests, advocacy coalitions and green constituencies, path dependency, external shocks, the policy and institutional environment, political institutions, and fossil fuel resource endowments as major political economy factors influencing sustainable energy transitions across both high-income countries and low- and middle-income countries. In addition, the paper highlights and discusses some critical knowledge gaps in the existing literature and provides suggestions for a future research agenda.
Competency-Based Teaching Using Simulation Exercises: Evidence of the University of Cape Coast
(2018)
Tertiary institutions exist to train manpower to solve local, national, and international problems. Graduates of such institutions should not themselves become a problem for their countries, as is the case in some Sub-Saharan African countries, including Ghana, which has a high level of graduate unemployment. Among the causes of the problem are the nature of teaching, the syllabus, and the programs students pursue while in such institutions. The paper discusses one of the teaching strategies used to make a course relevant for a program and for the working world. In this course, students are introduced to practice-oriented learning through simulation exercises. The project activities specifically seek to assess the students' understanding of business formation; examine students' understanding of sustainability, creativity and innovation of business ideas; and assess their understanding of the functional areas of business, including marketing & sales, finance, human resource management, operations, and accounting, among others. Feedback from students who have participated indicates the exercise gave much more exposure and meaning to the concepts they learned in class. In this exercise, students build teams, develop a product, learn to set up a business, and design an organogram, business vision, mission, and core values. The exercise empowers students to learn by doing. It accords students the opportunity to review their own knowledge and skills with respect to the concepts they have learned in the course. More than 3000 students have participated in this project since its inception in the academic year 2013/2014. It is estimated that 1000 students will participate in this project in the academic year 2017/2018.
Conclusion
(2018)
There is a paradigm shift from traditional content-based education and training to competency-based and practice-oriented training. This shift has occurred because practice-oriented teaching has been found to produce training outcomes that are industry focused, generating the relevant occupational standards. Competency-based training programs often comprise modules broken into segments called learning outcomes. These learning outcomes are based on criteria set by industry, and assessment is designed to ensure students become competent in their respective areas of specialization.
Multidisciplinarity, multiculturalism, and multitasking have taken center stage in the global educational debate. Globalization and improvements in communication have affected the way organisations operate and hence influenced whom they hire. Today, it is common practice to work with people from diverse backgrounds, and this requires competencies that go beyond general project management. Intercultural awareness, networking in different global communities, and learning to develop specific communication strategies for different stakeholders are all part of the package of skills and competencies required in today's interconnected world. This has indirect implications for the skills and competencies institutions/universities must equip their students with to enable them to compete successfully in the working world.
Airborne and spaceborne platforms are the primary data sources for large-scale forest mapping, but visual interpretation for individual species determination is labor-intensive. Hence, various studies focusing on forests have investigated the benefits of multiple sensors for automated tree species classification. However, transferable deep learning approaches for large-scale applications are still lacking. This gap motivated us to create a novel dataset for tree species classification in central Europe based on multi-sensor data from aerial, Sentinel-1 and Sentinel-2 imagery. In this paper, we introduce the TreeSatAI Benchmark Archive, which contains labels of 20 European tree species (i.e., 15 tree genera) derived from forest administration data of the federal state of Lower Saxony, Germany. We propose models and guidelines for the application of the latest machine learning techniques for the task of tree species classification with multi-label data. Finally, we provide various benchmark experiments showcasing the information that can be derived from the different sensors, using models that include artificial neural networks and tree-based machine learning methods. We found that residual neural networks (ResNet) perform sufficiently well, with weighted precision scores up to 79 %, using only the RGB bands of aerial imagery. This result indicates that the spatial content present within the 0.2 m resolution data is very informative for tree species classification. With the incorporation of Sentinel-1 and Sentinel-2 imagery, performance improved marginally. However, the sole use of Sentinel-2 still allows for weighted precision scores of up to 74 % using either multi-layer perceptron (MLP) or Light Gradient Boosting Machine (LightGBM) models. Since the dataset is derived from real-world reference data, it contains high class imbalances.
We found that this dataset attribute negatively affects the models' performances for many of the underrepresented classes (i.e., scarce tree species). However, the class-wise precision of the best-performing late fusion model still reached values ranging from 54 % (Acer) to 88 % (Pinus). Based on our results, we conclude that deep learning techniques using aerial imagery could considerably support forestry administration in the provision of large-scale tree species maps at a very high resolution to plan for challenges driven by global environmental change. The original dataset used in this paper is shared via Zenodo (https://doi.org/10.5281/zenodo.6598390, Schulz et al., 2022). For citation of the dataset, we refer to this article.
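The weighted precision scores reported above are computed per class and weighted by class support, which is why heavy class imbalance can mask poor performance on scarce species. A minimal sketch of the metric for multi-label 0/1 predictions (illustrative, equivalent in spirit to scikit-learn's `average='weighted'` option):

```python
import numpy as np

def weighted_precision(y_true, y_pred):
    """Support-weighted precision for multi-label 0/1 matrices
    of shape (n_samples, n_classes)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    support = y_true.sum(axis=0)                 # positive labels per class
    prec = np.zeros(y_true.shape[1])
    for c in range(y_true.shape[1]):
        predicted = y_pred[:, c] == 1
        if predicted.any():
            # fraction of positive predictions that are correct
            prec[c] = (y_true[predicted, c] == 1).mean()
    return float((prec * support).sum() / support.sum())

# Toy example: class 0 has one false positive, class 1 is perfect.
score = weighted_precision([[1, 0], [1, 1], [0, 1]],
                           [[1, 0], [1, 1], [1, 1]])
```

Because each class contributes in proportion to its support, a rare genus like Acer moves the aggregate score far less than a dominant one like Pinus.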
A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite the perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work, we present an approach that increases the reliability of mobile manipulators against unknown external faults. The approach focuses on manipulator actions that involve releasing an object. The proposed approach, which is triggered after the detection of a fault, is formulated as a three-step scheme that takes the definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e., a description vocabulary). The description of the simulation is used by the second step to find the limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object.
Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when it is released in the state corresponding to the example. The description vocabulary is also used to label the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest a releasing state for the object that avoids the occurrence of unknown external faults.
The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach, we also test its prediction ability along with analyzing the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm achieves reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of an object in order to avoid unknown external faults. In this work, we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite the perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities. In this work, we use naive physics to enable robots to reason about external faults. We propose an approach for diagnosing external faults that applies qualitative reasoning to naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) that could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that a robot can detect. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on how physics novices reason about everyday physical phenomena. We also formalize some definitions of object properties in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular.
Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept of the usefulness of naive physics for external fault reasoning in robotics.
While the corporate working world is shifting more and more towards agility, IT controlling remains stuck in old, classical structures. This work examines the question of whether and to what extent agile approaches can be used in IT controlling. This contribution is a modified version of the article "Agiles IT-Controlling" published in the journal "HMD Praxis der Wirtschaftsinformatik" (https://link.springer.com/article/10.1365/s40702-022-00837-0).
Agiles IT-Controlling
(2022)
While agile methods have found acceptance in IT project management practice for many years, IT controlling still predominantly uses classical methods. This contribution examines the question of whether and how the methods used in IT controlling can also follow agile paradigms and whether methods from agile IT project management can be adapted.
The digital transformation is massively changing international cooperation between universities. Beyond the possibilities of virtual mobility, new topic areas are emerging that change, complement, or newly enable international learning and teaching experiences with digital support. To this end, projects and funding formats have emerged in the area of promoting internationalization (DAAD, Erasmus+, BMBF, among others) that combine digitalization and internationalization and address the new topics, e.g. didactic formats, administrative processes (also in the context of the OZG and the DSGVO), virtual and hybrid mobility, international project and team formats, and finally also content that combines international, intercultural and interdisciplinary competencies with digital competencies. The proposed workshop is intended to bring together relevant projects and structure the topics in order to create an overview of the developments and thus contribute to defining the topic area "Digitalisierung & Internationalisierung".
A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work, we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph, and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem from it. Finding the globally optimal solution is an NP-complete problem; to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive a graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).
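The spectral relaxation step can be sketched as a single bipartition using the Fiedler vector of the graph Laplacian. This is a generic illustration of the technique, not the thesis's full hierarchical, constrained algorithm:

```python
import numpy as np

def fiedler_bipartition(W):
    """Split a weighted graph (symmetric adjacency matrix W) into two
    groups by thresholding the Fiedler vector of the Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)          # eigenvectors, ascending eigenvalues
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return fiedler > np.median(fiedler)  # boolean group labels

# Two tight clusters joined by one weak edge are cleanly separated.
W = np.zeros((6, 6))
for group in [(0, 1, 2), (3, 4, 5)]:
    for a in group:
        for b in group:
            if a != b:
                W[a, b] = 1.0
W[2, 3] = W[3, 2] = 0.01                 # weak link, analogous to a concave boundary
labels = fiedler_bipartition(W)
```

Applying such a split recursively to each resulting subgraph yields a divisive hierarchical clustering of the point-cloud graph.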
A company's financial documents use tables along with text to organize data containing key performance indicators (KPIs), such as profit and loss, and the financial quantities linked to them. A KPI's linked quantity in a table might not equal the similarly described KPI's quantity in the text. Auditors take substantial time to manually audit these mistakes, a process called consistency checking. In contrast to existing work, this paper attempts to automate this task with the help of transformer-based models. For consistency checking, it is essential that the table KPIs' embeddings encode both the semantic knowledge of the KPIs and the structural knowledge of the table. Therefore, this paper proposes a pipeline that uses a tabular model to obtain the table KPIs' embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether these KPIs are identical. The pipeline is evaluated on financial documents in German, and a comparative analysis of the quality of the cell embeddings from three tabular models is also presented. From the evaluation results, the experiment that used English-translated text and table KPIs and the Tabbie model to generate table KPI embeddings achieved an accuracy of 72.81% on the consistency checking task, outperforming the benchmark and the other tabular models.
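The final matching step of such a pipeline can be sketched as a similarity check between a table KPI embedding and a text KPI embedding. The vectors and the threshold below are hypothetical stand-ins for the model outputs, not values from the paper:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def kpis_consistent(table_emb, text_emb, threshold=0.8):
    """Declare a table KPI and a text KPI identical when their embeddings
    are sufficiently aligned (the threshold is an illustrative assumption)."""
    return cosine_similarity(table_emb, text_emb) >= threshold

# Identical embeddings match; orthogonal ones do not.
same = kpis_consistent([0.2, 0.9, 0.1], [0.2, 0.9, 0.1])
different = kpis_consistent([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

In practice the embeddings would come from the tabular model (e.g. Tabbie) on the table side and a text encoder on the text side, so the quality of the comparison hinges on how much table structure the cell embeddings capture.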
The epithelial sodium channel (ENaC) is a heterotrimeric ion channel that plays a key role in sodium and water homeostasis in tetrapod vertebrates. In the aldosterone-sensitive distal nephron, hormonally controlled ENaC expression matches dietary sodium intake to its excretion. Furthermore, ENaC mediates sodium absorption across the epithelia of the colon, sweat ducts, reproductive tract, and lung. ENaC is a constitutively active ion channel and its expression, membrane abundance, and open probability (PO) are controlled by multiple intracellular and extracellular mediators and mechanisms [9]. Aberrant ENaC regulation is associated with severe human diseases, including hypertension, cystic fibrosis, pulmonary edema, pseudohypoaldosteronism type 1, and nephrotic syndrome [9].
Lignocellulose feedstock (LCF) provides a sustainable source of components to produce bioenergy, biofuel, and novel biomaterials. Besides hardwood and softwood, so-called low-input plants such as Miscanthus are interesting crops to be investigated as potential feedstock for the second-generation biorefinery. The status quo regarding the availability and composition of different plants, including grasses and fast-growing trees (i.e., Miscanthus, Paulownia), is reviewed here. The second focus of this review is the potential of multivariate data processing for biomass analysis and quality control. Experimental data obtained by spectroscopic methods, such as nuclear magnetic resonance (NMR) and Fourier-transform infrared (FTIR) spectroscopy, can be processed using computational techniques to characterize the 3D structure and energetic properties of the feedstock building blocks, including complex linkages. Here, we provide a brief summary of recently reported experimental data for the structural analysis of LCF biomasses, and give our perspectives on the role of chemometrics in understanding and elucidating LCF composition and lignin 3D structure.
Renewable resources are gaining increasing interest as a source for environmentally benign biomaterials, such as drug encapsulation/release compounds and scaffolds for tissue engineering in regenerative medicine. Lignin being the second most abundant natural polymer, interest in its valorization for biomedical utilization is rapidly growing. Depending on the resource and isolation procedure, lignin shows specific antioxidant and antimicrobial activity. Today, efforts in research and industry are directed toward lignin utilization as a renewable macromolecular building block for the preparation of polymeric drug encapsulation and scaffold materials. Within the last five years, remarkable progress has been made in the isolation, functionalization and modification of lignin and lignin-derived compounds. However, the literature so far mainly focuses on lignin-derived fuels, lubricants and resins. The purpose of this review is to summarize the current state of the art and to highlight the most important results in the field of lignin-based materials for potential use in biomedicine (reported in 2014–2018). Special focus is placed on lignin-derived nanomaterials for drug encapsulation and release as well as lignin hybrid materials used as scaffolds for guided bone regeneration in stem cell-based therapies.
Antioxidant activity is an essential feature required for oxygen-sensitive merchandise and goods, such as food and the corresponding packaging as well as materials used in cosmetics and biomedicine. For example, vanillin, one of the most prominent antioxidants, is fabricated from lignin, the second most abundant natural polymer in the world. Antioxidant potential is primarily related to the termination of oxidation propagation reactions through hydrogen transfer. The application of technical lignin as a natural antioxidant has not yet been implemented in the industrial sector, mainly due to the complex heterogeneous structure and polydispersity of lignin. Thus, current research focuses on various isolation and purification strategies to improve the compatibility of lignin material with substrates and to enhance its stabilizing effect.
Antioxidant activity is an essential aspect of oxygen-sensitive merchandise and goods, such as food and corresponding packaging, cosmetics, and biomedicine. Technical lignin has not yet been applied as a natural antioxidant, mainly due to its complex heterogeneous structure and polydispersity. This report presents antioxidant capacity studies carried out using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. The influence of purification on lignin structure and activity was investigated. The purification procedure showed that two-fold selective extraction is the most efficient (confirmed by ultraviolet-visible (UV/Vis), Fourier-transform infrared (FTIR), heteronuclear single quantum coherence (HSQC) and 31P nuclear magnetic resonance spectroscopy, size exclusion chromatography, and X-ray diffraction), resulting in fractions of very narrow polydispersity (3.2–1.6) with up to four distinct absorption bands in UV/Vis spectroscopy. According to differential scanning calorimetry measurements, the glass transition temperature increased from 123 to 185 °C for the purest fraction. Antioxidant capacity is discussed with regard to the biomass source, pulping process, and degree of purification. Lignins obtained from industrial black liquor are compared with beech wood samples: the antioxidant activities (DPPH inhibition) of the kraft lignin fractions were 62–68%, whereas beech and spruce/pine-mixed lignin showed values of 42% and 64%, respectively. The total phenol content (TPC) of the isolated kraft lignin fractions varied between 26 and 35%, whereas that of beech and spruce/pine lignin was 33% and 34%, respectively. Storage decreased the TPC values but increased the DPPH inhibition.
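The reported DPPH inhibition percentages follow from the standard assay calculation: the relative drop in absorbance of the DPPH radical solution after the sample is added. A minimal sketch (the absorbance readings in the usage note are illustrative, not values from the study):

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging: drop in absorbance relative to
    the control (DPPH solution without sample)."""
    return (a_control - a_sample) / a_control * 100.0
```

For example, a control absorbance of 1.00 and a sample absorbance of 0.35 would correspond to 65% inhibition, in the range reported for the kraft lignin fractions.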
The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food-packaging additives. The polyphenolic structure of lignin, in addition to the presence of O-containing functional groups, is potentially responsible for these activities. This study used DPPH assays to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. The scavenging activity (SA) of both the binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems was affected by the percentage of added lignin: the 5% addition showed the highest activity and the 30% addition the lowest. Both the scavenging and antimicrobial activities depend on the biomass source, showing the following trend: organosolv of softwood > kraft of softwood > organosolv of grass. Testing the antimicrobial activities of lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0–7 °C). Purification of kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin release in the produced films affected the activity positively, and the chitosan addition enhances the activity even further against both Gram-positive and Gram-negative bacteria. Testing the films against spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both B. thermosphacta and P. fluorescens.
This paper presents the preliminary results of the Socialist Republic of Vietnam country case study conducted as part of the research project Sustainable Labour Migration implemented by the University of Applied Sciences Bonn-Rhein-Sieg. The project focuses on stakeholder perspectives on the benefits for countries of origin and the sustainability of different transnational skill partnership schemes. Existing and ongoing small-scale initiatives indicate that opportunities exist for all three types of labour mobility pathways: recruiting youth for apprenticeships and subsequent skilled work; recruitment and recognition of skilled professionals' certificates for direct work contracts; and initial vocational education and training programs in a dual-track approach. While the latter has the highest potential to be more beneficial than the other approaches, pursuing and supporting the scaling up of all three pathways in parallel will have additional, mutually reinforcing and supporting effects. The potential for benefits over and above those already realised by existing skill partnerships appears high, especially considering the favourable framework conditions specific to the long-standing German-Vietnamese relationship. If the potential of well-managed skill partnerships were realised, such sustainable models of skilled labour migration could serve as a unique selling point in the international competition for skilled labour.
SLC6A14 (ATB0,+) is unique among SLC proteins in its ability to transport 18 of the 20 proteinogenic (dipolar and cationic) amino acids and naturally occurring and synthetic analogues (including anti-viral prodrugs and nitric oxide synthase (NOS) inhibitors). SLC6A14 mediates amino acid uptake in multiple cell types where increased expression is associated with pathophysiological conditions including some cancers. Here, we investigated how a key position within the core LeuT-fold structure of SLC6A14 influences substrate specificity. Homology modelling and sequence analysis identified the transmembrane domain 3 residue V128 as equivalent to a position known to influence substrate specificity in distantly related SLC36 and SLC38 amino acid transporters. SLC6A14, with and without V128 mutations, was heterologously expressed and function determined by radiotracer solute uptake and electrophysiological measurement of transporter-associated current. Substituting the amino acid residue occupying the SLC6A14 128 position modified the binding pocket environment and selectively disrupted transport of cationic (but not dipolar) amino acids and related NOS inhibitors. By understanding the molecular basis of amino acid transporter substrate specificity we can improve knowledge of how this multi-functional transporter can be targeted and how the LeuT-fold facilitates such diversity in function among the SLC6 family and other SLC amino acid transporters.
Over the years, entrepreneurship has proven to play a key role in development. The cycle of business start-ups and growth is linked to socio-economic benefits in the global world at large. With a growing world population of over 7 billion people, the number of universities (both public and private) as well as enterprises has increased globally in the 21st century. The mission and purpose behind universities, entrepreneurship and enterprises thrive on development in the areas of capacity building, skill acquisition, training and knowledge, amongst others. Africa alone has a population of over 1.2 billion people with about 650 recognized universities, and there are over 140,000 registered businesses (enterprises) in Ghana alone. A case study in Ghana reveals three key drivers of entrepreneurship and the role university education has played in various business establishments. The drivers are problem statements, resources and research findings. Some of these notions of business include the management of risk, research findings and customer relationships. These are major features that need critical attention and play a role in business and entrepreneurship in Africa. A major success factor in business and entrepreneurship is the utilization of the human resource population and the lifeline support given to households in terms of income, while a barrier is the limited access to credit support from financial companies at the inception stages. In conclusion, this conference should develop a practical book guide on business start-ups and entrepreneurship knowledge to be used at the various universities in Africa to enhance development.
The transport of carbon dioxide through pipelines is one of the important components of the Carbon dioxide Capture and Storage (CCS) systems currently being developed. If high flow rates are desired, transport in the liquid or supercritical phase is preferred. For technical reasons, the fluid must stay in that phase, without transitioning to the gaseous state. In this paper, a numerical simulation of the stationary process of carbon dioxide transport with impurities and phase transitions is considered. We use the Homogeneous Equilibrium Model (HEM) and the GERG-2008 thermodynamic equation of state to describe the transport parameters. The algorithms used make it possible to solve scenarios of carbon dioxide transport in the liquid or supercritical phase, with detection of approaching the phase transition region. Convergence of the solution algorithms is analyzed in connection with fast and abrupt changes of the equation of state and the enthalpy function in the region of phase transitions.
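A drastically simplified sketch of such a stationary transport computation is shown below: it marches a Darcy-Weisbach pressure drop along the pipe and flags an approach to the phase boundary. The constant density, friction factor, mass flow, and the saturation-pressure fit are all placeholder assumptions for illustration, not the HEM/GERG-2008 formulation used in the paper:

```python
import math

# Illustrative constants (assumptions, not GERG-2008 values).
D = 0.3        # pipe diameter, m
f = 0.012      # Darcy friction factor, assumed constant
rho = 800.0    # liquid CO2 density, kg/m^3, assumed constant
mdot = 100.0   # mass flow rate, kg/s

def p_sat(T):
    # Hypothetical CO2 saturation-pressure fit in Pa, anchored at the
    # critical point (~7.38 MPa, 304.13 K); a placeholder for a real EOS.
    return 7.38e6 * math.exp(10.0 * (1.0 - 304.13 / T))

def march(p0, T, length, dx=1000.0):
    """March pressure along the pipe; stop if the saturation line is reached."""
    area = math.pi * D ** 2 / 4.0
    v = mdot / (rho * area)                 # mean flow velocity, m/s
    p, x = p0, 0.0
    while x < length:
        p -= f * rho * v ** 2 / (2.0 * D) * dx   # Darcy-Weisbach gradient
        x += dx
        if p <= p_sat(T):
            return x, p, True               # phase-transition region reached
    return x, p, False
```

A real solver would update density, temperature, and enthalpy from the equation of state at every step; here the point is only the structure of the march with phase-boundary detection.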
Pipeline transport is an efficient method for transporting fluids in energy supply and other technical applications. While natural gas is the classical example, the transport of hydrogen is becoming more and more important; both are transmitted under high pressure in a gaseous state. Also relevant is the transport of carbon dioxide, captured at the places of formation, transferred under high pressure in a liquid or supercritical state and pumped into underground reservoirs for storage. The transport of other fluids is also required in technical applications. Meanwhile, the transport equations for different fluids are essentially the same, and the simulation can be performed using the same methods. In this paper, the effect of control elements such as compressors, regulators and flap traps on the stability of fluid transport simulations is studied. It is shown that modeling these elements can lead to instabilities, both in stationary and in dynamic simulations. Special regularization methods were developed to overcome these problems. Their functionality, also for dynamic simulations, is demonstrated in a number of numerical experiments.
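One generic way to regularize a discontinuous control element, in the spirit of the paper but not its actual method, is to blend the two branches of its characteristic with a smooth switch so that the Jacobian seen by the solver stays continuous. The sketch below does this for a flap trap (check valve); all coefficients are illustrative assumptions:

```python
import math

def smoothstep(x, eps):
    # C-infinity sigmoid switch: ~0 for x << -eps, ~1 for x >> eps.
    return 0.5 * (1.0 + math.tanh(x / eps))

def check_valve_dp(q, k=1e4, k_block=1e9, eps=1e-3):
    """Regularized flap-trap pressure drop: mild quadratic resistance for
    forward flow, very high (but finite and smooth) resistance for reverse
    flow, instead of a hard open/closed switch."""
    s = smoothstep(q, eps)
    resistance = s * k + (1.0 - s) * k_block
    return resistance * q * abs(q)
```

The hard characteristic (zero reverse flow, discontinuous switching) is recovered in the limit `eps -> 0`, `k_block -> infinity`; keeping both finite is what stabilizes Newton-type iterations.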
This study sought to examine the relationship between the components of SMEs' social capital and firm performance. Using the social capital theory and the resource-based view as theoretical foundations, and a census approach, 1,532 SMEs were selected in the Accra Metropolis for the study. Empirical results from 717 SMEs, utilising the hierarchical linear regression model, revealed that the owner/manager's network relationships are beneficial to the firm depending on whether the relationships are closed or open. Moreover, the study found that social capital has a significant impact on the sales and market performance of small and medium-sized enterprises. The results also brought to the fore the fact that most social networks of SME entrepreneurs consist of family, friends and relatives, which most times can only be used for expressive purposes and not for instrumental gain. The practical implications of the results are also discussed.
The aim of the present study is to determine the influence of personality on sustainable behaviour, using streaming consumption as an example. Generally rising streaming consumption and the associated environmental damage on the one hand, and growing societal environmental awareness on the other, constitute a contradiction. 204 participants took part in an online survey on these and related aspects. While high levels of the traits agreeableness and openness had a positive effect on environmental attitude, environmental behaviour and environmental concern, a cluster analysis showed that environmentally friendly measures were preferred more strongly by the group whose agreeableness and openness were comparatively weak. Knowledge about the environmental consequences of streaming was generally low, which serves as a possible explanation for the contradiction mentioned above. The participants called for awareness of this issue to be raised. To make streaming consumption more environmentally friendly, it is advisable to involve all actors in the process. The consumers surveyed primarily favoured the use of green electricity and largely rejected a change to the payment structure.
Recent work in image captioning and scene segmentation has shown significant results in the context of scene understanding. However, most of these developments have not been extrapolated to research areas such as robotics. In this work we review the current state-of-the-art models, datasets and metrics in image captioning and scene segmentation. We introduce an anomaly detection dataset for the purpose of robotic applications, and we present a deep learning architecture that describes and classifies anomalous situations. We report a METEOR score of 16.2 and a classification accuracy of 97%.
Farming communities confronted with climate change adopt formal and informal adaptation strategies to mitigate the effects of climate change. While the environmental and social effects of climate change are well documented, there is still a dearth of literature on girl-child marriage (formal marriage or informal union between a child under the age of 18 and an adult or another child) as a response to the effects of climate change. In this research, we ask if girl-child marriage is promoted as a social protection mechanism first, rather than as simply a response to climate-induced poverty. We use qualitative semi-structured interviews and focus group discussions to explore this question in a rural farming community in Northern Ghana. Our findings reveal that climate change shocks result in poverty and compel farmers to marry off their young daughters. The unmarried girl-child is perceived as an ‘extra mouth to feed’, a liability whose marriage becomes a strategy for protecting the family, the family’s reputation, and the girl child. The emphasis in girl-child marriage is not on the girl-child as an individual but on the family as a group. Hence, what is good for the family is assumed to be in the best interest of the girl-child. We place our analysis at the intersection of climate change, social protection, and the incidence of girl-child marriages. We argue that understanding this link is crucial and can contribute significantly to our knowledge of girl-child marriage as well as our ability to address this in Sub-Saharan Africa.
Ghanaian tertiary graduates' perception of entrepreneurship education on employment opportunities
(2017)
This study focuses on whether entrepreneurship education increases students' entrepreneurial interest in setting up new businesses. Entrepreneurship is a core course taken in the third year by all students of Ho Technical University. Out of the population of 1,329 level-300 students of the 2016/2017 academic year, data were collected by convenience sampling from 325 students (217 males and 108 females) with a mean age of 24.75 years from 14 departments of four faculties. The students responded to 43 survey items derived from the reviewed literature on a 5-point Likert scale. More than 84% of the respondents agreed that entrepreneurship education informed students about entrepreneurship through the acquisition of practical skills, knowledge about personal orientation, knowledge about business management principles and the availability of entrepreneurial support agencies. This shows that the students are highly confident of setting up their own businesses through the knowledge acquired. The study therefore has important implications for policy makers, the management of tertiary institutions, students and educational evaluators on how to ensure that tertiary graduates set up entrepreneurial ventures in order to partially solve the unemployment problem in Ghana.
AErOmAt Abschlussbericht
(2020)
The AErOmAt project aimed to develop new methods to save a substantial share of aerodynamic simulations in computationally expensive optimization domains. In this way, the Hochschule Bonn-Rhein-Sieg (H-BRS) has made a societally relevant and at the same time commercially exploitable contribution to energy-efficiency research. The project also led to a faster integration of the newly appointed applicants into the existing research structures.
The exchange program was aimed at giving students international exposure through teaching and intercultural communication and at enhancing the existing relationship among the partner schools. The program lasted for a period of six months, from September 2016 to February 2017. Its main part was the International Management program, which comprised four courses. The program offered us an opportunity to travel to four European countries and broaden our academic and social networks.
Among the celestial bodies in the Solar System, Mars currently represents the main target for the search for life beyond Earth. However, its surface is constantly exposed to high doses of cosmic rays (CRs) that may pose a threat to any biological system. For this reason, investigations into the limits of the resistance of life to space-relevant radiation are fundamental to speculating on the chance of finding extraterrestrial organisms on Mars. In the present work, as part of the STARLIFE project, the responses of dried colonies of the black fungus Cryomyces antarcticus CCFEE 515 (Culture Collection of Fungi from Extreme Environments) to accelerated iron ions (LET: 200 keV/μm), which mimic part of the CR spectrum, were investigated. Samples were exposed to the iron ions at doses of up to 1000 Gy in the presence of Martian regolith analogues. Our results showed an extraordinary resistance of the fungus in terms of survival, recovery of metabolic activity and DNA integrity. These experiments give new insights into the survival probability of possible terrestrial-like life forms on the present or past Martian surface and in shallow subsurface environments.
Machine learning (ML) projects, particularly in the field of time-series analysis, are becoming increasingly important today. Deploying such projects in a production environment with the same degree of automation as classical software projects is a complex undertaking. Implementation in production environments requires machine learning operations (MLOps) technologies and tools in addition to classical DevOps. The aim of this study is to provide a comprehensive overview of available MLOps tools and to develop a specific tech stack for time-series ML projects. Current trends and tools in the MLOps field are examined and analyzed through a multivocal literature review (MLR). The study identifies suitable MLOps tools and methods for time-series analysis and presents a specific implementation of an MLOps pipeline for stock-price forecasting of the S&P 500. MLOps and DevOps tools play an essential role in the effective construction and management of ML pipelines. When selecting suitable tools, a specific adaptation to the respective project requirements is always necessary. Providing a detailed picture of the current MLOps tool landscape proves to be a valuable resource that enables developers to optimize the efficiency and effectiveness of their ML projects.
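One stage every time-series ML pipeline of this kind must implement is walk-forward evaluation: refitting on a sliding window of history and predicting one step ahead. The minimal sketch below uses a naive last-value baseline and a fixed window as illustrative assumptions; in a real MLOps setup this stage would be wrapped by experiment tracking, model registry, and deployment tooling:

```python
def naive_forecast(history):
    # Baseline "model" for illustration: predict the last observed price.
    return history[-1]

def walk_forward(prices, train_window=5):
    """Walk-forward validation: at each step, use the trailing window as
    training data, predict one step ahead, and accumulate the error."""
    errors = []
    for t in range(train_window, len(prices)):
        pred = naive_forecast(prices[t - train_window:t])
        errors.append(abs(pred - prices[t]))
    return sum(errors) / len(errors)   # mean absolute error
```

Swapping `naive_forecast` for a trained model (and logging each fold's error to an experiment tracker) turns this skeleton into the evaluation stage of a production pipeline.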
XPERSIF: a software integration framework & architecture for robotic learning by experimentation
(2008)
The integration of independently-developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independent of the integration process. The integration of individual applications into a fully-functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts. 
One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.
I had an opportunity to visit Germany in 2016/2017, during which period I was on an exchange staff program between the University of Nairobi; Hochschule Bonn-Rhein-Sieg, University of Applied Sciences, Germany; and the University of Cape Coast, Ghana. My visit took me to the Bonn area, where Hochschule Bonn-Rhein-Sieg has its campuses in the suburban cities of Sankt Augustin, Rheinbach, and Hennef. I was able to interact with faculty members and students. During this period, the discussions I had with faculty mainly focused on the various programs offered by the university and how it has been able to interact and partner with industry and create linkages with various firms in Germany. It emerged from our discussion that the university's curriculum development depends on such partnerships.
A biodegradable blend of PBAT (poly(butylene adipate-co-terephthalate)) and PLA (poly(lactic acid)) for blown film extrusion was modified with four multi-functional chain extending cross-linkers (CECL). The anisotropic morphology introduced during film blowing affects the degradation processes. Given that two CECL, tris(2,4-di-tert-butylphenyl)phosphite (V1) and 1,3-phenylenebisoxazoline (V2), increased the melt flow rate (MFR) and the other two, aromatic polycarbodiimide (V3) and poly(4,4-dicyclohexylmethanecarbodiimide) (V4), reduced it, their compost (bio-)disintegration behavior was investigated. It was significantly altered with respect to the unmodified reference blend (REF). The disintegration behavior at 30 and 60 °C was investigated by determining changes in mass, Young's moduli, tensile strengths, elongations at break and thermal properties. In order to quantify the disintegration behavior, the hole areas of blown films were evaluated after compost storage at 60 °C to calculate the kinetics of the time-dependent degrees of disintegration. The kinetic model of disintegration provides two parameters, initiation time and disintegration time, which quantify the effects of the CECL on the disintegration behavior of the PBAT/PLA compound. Differential scanning calorimetry (DSC) revealed a pronounced annealing effect during storage in compost at 30 °C, as well as the occurrence of an additional step-like increase in the heat flow at 75 °C after storage at 60 °C. The disintegration consists of processes which affect the amorphous and crystalline phases of PBAT in different ways and cannot be explained by hydrolytic chain degradation alone. Furthermore, gel permeation chromatography (GPC) revealed molecular degradation only at 60 °C for REF and V1 after 7 days of compost storage. The observed losses of mass and cross-sectional area seem to be attributable more to mechanical decay than to molecular degradation for the given compost storage times.
Process-induced changes in the morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends modified with various multifunctional chain-extending cross-linkers (CECLs) are presented. The morphology of unmodified and modified films produced by blown film extrusion is examined in the extrusion direction (ED) and the transverse direction (TD). While FTIR analysis showed only small peak shifts, indicating that the CECLs modify the molecular weight of the PBAT/PLA blend, SEM investigations of the fracture surfaces of blown extrusion films revealed their significant effect on the morphology formed during processing. Due to the combined shear and elongation deformation during blown film extrusion, rather spherical PLA islands were partly transformed into long fibrils, which tended to decay into chains of elliptical islands if cooled slowly. The introduction of CECLs into the blend changed the thickness of the PLA fibrils, modified the interface adhesion, and altered the deformation behavior of the PBAT matrix from brittle to ductile. The results proved that CECLs react selectively with PBAT, PLA, and their interface. Furthermore, the reactions of CECLs with PBAT/PLA induced by the processing depended on the deformation direction (ED or TD), thus resulting in further non-uniformities of blown extrusion films.
This study investigates the effects of four multifunctional chain-extending cross-linkers (CECL) on the processability, mechanical performance, and structure of polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blends produced using film blowing technology. The newly developed reference compound (M·VERA® B5029) and the CECL-modified blends are characterized with respect to their initial properties and the corresponding properties after aging at 50 °C for 1 and 2 months. The tensile strength, seal strength, and melt volume rate (MVR) are markedly changed after thermal aging, whereas the storage modulus, elongation at break, and tear resistance remain constant. The degradation of the polymer chains and the cross-linking, associated with increased and decreased MVR, respectively, are examined thoroughly with differential scanning calorimetry (DSC), with the results indicating that the CECL-modified blends do not generally endure thermo-oxidation over time. Further, DSC measurements of 25 µm and 100 µm films reveal that film blowing pronouncedly changes the structures of the compounds. These findings are also confirmed by dynamic mechanical analysis, with the conclusion that tris(2,4-di-tert-butylphenyl)phosphite barely affects the glass transition temperature, while changes are seen with the other CECL. Cross-linking is found for the aromatic polycarbodiimide and poly(4,4-dicyclohexylmethanecarbodiimide) CECL after melting of granules and films, although overall the most synergetic effect is shown by 1,3-phenylenebisoxazoline.
This review is divided into two interconnected parts, namely a biological and a chemical one. The focus of the first part is on the biological background for constructing tissue-engineered vascular grafts to promote vascular healing. Various cell types, such as embryonic, mesenchymal and induced pluripotent stem cells, progenitor cells, and endothelial and smooth muscle cells, will be discussed with respect to their specific markers. The in vitro and in vivo models and their potential to treat vascular diseases are also introduced. The chemical part focuses on strategies using either artificial or natural polymers for scaffold fabrication, including decellularized cardiovascular tissue. An overview will be given of scaffold fabrication, including conventional methods and nanotechnologies. Special attention is given to 3D network formation via different chemical and physical cross-linking methods. In particular, electron beam treatment is introduced as a method to combine 3D network formation and surface modification. The review includes recently published scientific data and patents which have been registered within the last decade.
(1) Background: Autologous bone is supposed to contain vital cells that might improve the osseointegration of dental implants. The aim of this study was to investigate particulate and filtered bone chips collected during oral surgery intervention with respect to their osteogenic potential and the extent of microbial contamination to evaluate their usefulness for jawbone reconstruction prior to implant placement. (2) Methods: Cortical and cortical-cancellous bone chip samples of 84 patients were collected. The stem cell character of outgrowing cells was characterized by expression of CD73, CD90 and CD105, followed by osteogenic differentiation. The degree of bacterial contamination was determined by Gram staining, catalase and oxidase tests, and tests to identify the genera of the bacteria found. (3) Results: Pre-surgical antibiotic treatment of the patients significantly increased the viability of the collected bone chip cells. No significant difference in plasticity was observed between cells isolated from the cortical and cortical-cancellous bone chip samples. Thus, both types of bone tissue can be used for jawbone reconstruction. The osteogenic differentiation was independent of the quantity and quality of the detected microorganisms, which comprise the most common bacteria in the oral cavity. (4) Discussion: This study shows that the quality of bone chip-derived stem cells is independent of the donor site and the extent of present common microorganisms, highlighting autologous bone tissue, obtainable without additional surgical intervention for the patient, as a useful material for dental implantology.
For years, the common logic that underpinned entrepreneurship was to find a niche within a market or sector and then solidify business practice to achieve success in that market segment. The dawn of technologically based disruptive enterprises, such as Uber and Airbnb, coupled with the nearing Fourth Industrial Revolution, seriously calls into question this conventional business logic. In this article, the projected impact of these forces on African entrepreneurs is explored. We look at the role of government, business and education systems in preparing for the impact of the Fourth Industrial Revolution. Specific focus is placed on the need for entrepreneurial skills and training. We also explore the importance of innovation, in terms of both products and processes, to mitigate the impact of these forces.
Small, Medium and Micro Enterprises (SMMEs) are widely recognised as playing a pivotal role in economic development and job creation. This is particularly so in Africa, where SMMEs are responsible for 80% of all formal jobs. While this is recognised by various African continental and national development plans, the nefarious practice of late payment, especially by governments, not only stunts the growth of SMMEs but oftentimes leads to business failure. This article investigates the impact of late payment, with a specific focus on South Africa, and touches on international good practice that may be employed to address this phenomenon.
Recent advances in Natural Language Processing have substantially improved contextualized representations of language. However, the inclusion of factual knowledge, particularly in the biomedical domain, remains challenging. Hence, many Language Models (LMs) are extended by Knowledge Graphs (KGs), but most approaches require entity linking (i.e., explicit alignment between text and KG entities). Inspired by single-stream multimodal Transformers operating on text, image and video data, this thesis proposes the Sophisticated Transformer trained on biomedical text and Knowledge Graphs (STonKGs). STonKGs incorporates a novel multimodal architecture based on a cross encoder, which applies the attention mechanism to a concatenation of input sequences derived from text and KG triples, respectively. Over 13 million so-called text-triple pairs, coming from PubMed and assembled using the Integrated Network and Dynamical Reasoning Assembler (INDRA), were used in an unsupervised pre-training procedure to learn representations of biomedical knowledge in STonKGs. By comparing STonKGs to an NLP- and a KG-baseline (operating on either text or KG data) on a benchmark consisting of eight fine-tuning tasks, the proposed knowledge integration method applied in STonKGs was empirically validated. Specifically, on tasks with a comparatively small dataset size and a larger number of classes, STonKGs resulted in considerable performance gains, beating the F1-score of the best baseline by up to 0.083. The source code used to implement STonKGs is made publicly available so that the proposed method of this thesis can be extended to many other biomedical applications.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
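The concatenated text-KG input at the heart of STonKGs can be illustrated with a toy sketch. The special tokens, segment-ID convention, and sequence length below are illustrative assumptions for exposition, not the actual STonKGs vocabulary or implementation:

```python
# Hypothetical sketch of assembling a single multimodal input sequence:
# text tokens and a KG triple (head, relation, tail) are concatenated,
# separated by special tokens, so one cross encoder can attend across
# both modalities. Token names are illustrative only.

def build_multimodal_sequence(text_tokens, triple, max_len=32):
    """Concatenate text tokens and a KG triple into one input sequence."""
    head, relation, tail = triple
    seq = ["[CLS]"] + text_tokens + ["[SEP]"] + [head, relation, tail] + ["[SEP]"]
    # pad to a fixed length so sequences can be batched
    seq += ["[PAD]"] * (max_len - len(seq))
    # segment ids let the model distinguish text (0) from KG (1) positions
    sep_idx = seq.index("[SEP]")
    segments = [0] * (sep_idx + 1) + [1] * (max_len - sep_idx - 1)
    return seq, segments

tokens, segments = build_multimodal_sequence(
    ["IL6", "activates", "STAT3"], ("IL6", "increases", "STAT3"))
print(tokens[:9])
```

In the real model, each position would of course carry a learned embedding rather than a string, and the attention layers of the cross encoder would operate over the joint sequence.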
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of both, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs extends our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, obtaining F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) in several tasks, such as predicting protein interactions in several contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
With increasing life expectancy, demands for dental tissue and whole-tooth regeneration are becoming more significant. Despite great progress in medicine, including regenerative therapies, the complex structure of dental tissues introduces several challenges to the field of regenerative dentistry. Interdisciplinary efforts from cellular biologists, material scientists, and clinical odontologists are being made to establish strategies and find solutions for dental tissue and/or whole-tooth regeneration. In recent years, many significant discoveries were made regarding signaling pathways and factors shaping calcified tissue genesis, including those of the tooth. Novel biocompatible scaffolds and polymer-based drug release systems are under development and may soon result in clinically applicable biomaterials with the potential to modulate signaling cascades involved in dental tissue genesis and regeneration. Approaches to whole-tooth regeneration utilizing adult stem cells, induced pluripotent stem cells, or tooth germ cell transplantation are emerging as promising alternatives to overcome existing in vitro tissue generation hurdles. In this interdisciplinary review, the most recent advances in cellular signaling guiding dental tissue genesis, novel functionalized scaffolds and drug release materials, various odontogenic cell sources, and methods for tooth regeneration are discussed, providing a multi-faceted, up-to-date, and illustrative overview of the field of tooth regeneration, alongside hints for future directions in the challenging field of regenerative dentistry.
Target meaning representations for semantic parsing tasks are often based on programming or query languages, such as SQL, and can be formalized by a context-free grammar. Assuming a priori knowledge of the target domain, such grammars can be exploited to enforce syntactical constraints when predicting logical forms. To that end, we assess how syntactical parsers can be integrated into modern encoder-decoder frameworks. Specifically, we implement an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure. Compared to other approaches to grammar-guided decoding that modify the underlying neural network architecture or attempt to derive full parse trees, our approach is conceptually simpler, adds less computational overhead during inference and integrates seamlessly with current SEQ2SEQ frameworks. We present preliminary evaluation results against a recurrent SEQ2SEQ baseline on GEOQUERY and ATIS and demonstrate improved performance while enforcing grammatical constraints.
Mobile technologies have evolved into a means of gaining access to information for learning. Their application in higher education is still a novel concept, particularly in underdeveloped countries. This study explores the views of doctoral students regarding their learning experiences with mobile technologies. Focus group interviews were conducted with 24 doctoral students from three different academic institutions. The participants’ responses were recorded, transcribed, and analyzed to draw conclusions. According to the findings of this study, mobile devices play an important part in the learning experiences of doctoral students. The participating students engaged in collaborative learning using mobile technologies. Given the benefits of adopting mobile technologies for learning activities, academic institutions should focus on training faculty members to use them to involve students in the learning process. The implications of this study call for the continued advancement of mobile technologies to facilitate effective learning experiences for the multitude of mobile learners in developing countries. Another implication is that academic institutions, in collaboration with their libraries, should develop a user-friendly mobile app linked to the library management system. Such an application would allow students to make optimal use of their smartphones and tablets to search the library’s resources from their mobile devices. Training should be offered to teaching faculty members so that they come to terms with the benefits of mobile technologies for learning activities.
Photovoltaic (PV) power data are a valuable but as yet under-utilised resource that could be used to characterise global irradiance with unprecedented spatio-temporal resolution. The resulting knowledge of atmospheric conditions can then be fed back into weather models and will ultimately serve to improve forecasts of PV power itself. This provides a data-driven alternative to statistical methods that use post-processing to overcome inconsistencies between ground-based irradiance measurements and the corresponding predictions of regional weather models (see for instance Frank et al., 2018). This work reports first results from an algorithm developed to infer global horizontal irradiance as well as atmospheric optical properties such as aerosol or cloud optical depth from PV power measurements.
The temperature of photovoltaic modules is modelled as a dynamic function of ambient temperature, shortwave and longwave irradiance and wind speed, in order to allow for a more accurate characterisation of their efficiency. A simple dynamic thermal model is developed by extending an existing parametric steady-state model using an exponential smoothing kernel to include the effect of the heat capacity of the system. The four parameters of the model are fitted to measured data from three photovoltaic systems in the Allgäu region in Germany using non-linear optimisation. The dynamic model reduces the root-mean-square error between measured and modelled module temperature to 1.58 K on average, compared to 3.03 K for the steady-state model, whereas the maximum instantaneous error is reduced from 20.02 to 6.58 K.
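The dynamic extension described above can be sketched in a few lines. The Faiman steady-state form is used here as one common parametric choice, and all parameter values are illustrative assumptions; the paper's exact model and fitted parameters may differ:

```python
# Hedged sketch: a steady-state module temperature model is smoothed
# with an exponential kernel to account for the heat capacity of the
# system, implemented recursively as first-order exponential smoothing.

import math

def t_module_steady(t_amb, irradiance, wind, u0=25.0, u1=6.84):
    """Steady-state module temperature (Faiman form, illustrative params)."""
    return t_amb + irradiance / (u0 + u1 * wind)

def t_module_dynamic(t_amb, irradiance, wind, dt=60.0, tau=600.0):
    """Exponentially smoothed module temperature time series.

    dt  -- sample spacing in seconds
    tau -- thermal time constant in seconds (illustrative value)
    """
    alpha = 1.0 - math.exp(-dt / tau)
    temps = []
    for ta, g, v in zip(t_amb, irradiance, wind):
        t_ss = t_module_steady(ta, g, v)
        if not temps:
            temps.append(t_ss)
        else:
            # relax towards the current steady-state value
            temps.append(temps[-1] + alpha * (t_ss - temps[-1]))
    return temps

# A step in irradiance: the dynamic model approaches the new steady
# state gradually instead of jumping to it instantaneously.
temps = t_module_dynamic([20.0] * 5,
                         [0.0, 800.0, 800.0, 800.0, 800.0],
                         [1.0] * 5)
```

The recursive update is equivalent to convolving the steady-state temperature with a discretised exponential kernel, which is what lets a four-parameter model capture the thermal lag reported above.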
Incoming solar radiation is an important driver of our climate and weather. Several studies (see for instance Frank et al. 2018) have revealed discrepancies between ground-based irradiance measurements and the predictions of regional weather models. In the realm of electricity generation, accurate forecasts of solar photovoltaic (PV) energy yield are becoming indispensable for cost-effective grid operation: in Germany there are 1.6 million PV systems installed, with a nominal power of 46 GW (Bundesverband Solarwirtschaft 2019). The proliferation of PV systems provides a unique opportunity to characterise global irradiance with unprecedented spatiotemporal resolution, which in turn will allow for highly resolved PV power forecasts.
In view of the rapid growth of solar power installations worldwide, accurate forecasts of photovoltaic (PV) power generation are becoming increasingly indispensable for the overall stability of the electricity grid. In the context of household energy storage systems, PV power forecasts contribute towards intelligent energy management and control of PV-battery systems, in particular so that self-sufficiency and battery lifetime are maximised. Typical battery control algorithms require day-ahead forecasts of PV power generation, and in most cases a combination of statistical methods and numerical weather prediction (NWP) models are employed. The latter are however often inaccurate, both due to deficiencies in model physics as well as an insufficient description of irradiance variability.
The rapid increase in solar photovoltaic (PV) installations worldwide has resulted in the electricity grid becoming increasingly dependent on atmospheric conditions, thus requiring more accurate forecasts of incoming solar irradiance. In this context, measured data from PV systems are a valuable source of information about the optical properties of the atmosphere, in particular the cloud optical depth (COD). This work reports first results from an inversion algorithm developed to infer global, direct and diffuse irradiance as well as atmospheric optical properties from PV power measurements, with the goal of assimilating this information into numerical weather prediction (NWP) models.
The electricity grid of the future will be built on renewable energy sources, which are highly variable and dependent on atmospheric conditions. In power grids with an increasingly high penetration of solar photovoltaics (PV), an accurate knowledge of the incoming solar irradiance is indispensable for grid operation and planning, and reliable irradiance forecasts are thus invaluable for energy system operators. In order to better characterise shortwave solar radiation in time and space, data from PV systems themselves can be used, since the measured power provides information about both irradiance and the optical properties of the atmosphere, in particular the cloud optical depth (COD). Indeed, in the European context with highly variable cloud cover, the cloud fraction and COD are important parameters in determining the irradiance, whereas aerosol effects are only of secondary importance.
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. The method is tested on data from two measurement campaigns that took place in the Allgäu region in Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 min resolution along with a non-linear photovoltaic module temperature model, global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 5.79 W m−2 (7.35 W m−2) under clear (cloudy) skies, averaged over the two campaigns, whereas for the retrieval using coarser 15 min power data with a linear temperature model the mean bias error is 5.88 and 41.87 W m−2 under clear and cloudy skies, respectively.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a 1D radiative transfer simulation, and the results are compared to both satellite retrievals and data from the Consortium for Small-scale Modelling (COSMO) weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken-cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
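The lookup-table inversion for cloud optical depth can be sketched as follows. A toy transmittance function stands in for the 1D radiative transfer simulation; in practice, tabulated output of a real radiative transfer model would populate the table:

```python
# Illustrative lookup-table (LUT) inversion: a radiative transfer model
# (mocked here by a toy transmittance function) maps cloud optical depth
# (COD) to overcast surface irradiance, and a measured irradiance is
# inverted by linear interpolation in the monotone table.

def toy_irradiance(cod, clear_sky=800.0):
    """Toy overcast irradiance vs. COD (placeholder for a real RT model)."""
    return clear_sky / (1.0 + 0.75 * cod)

# Build the lookup table once: irradiance decreases as COD increases.
COD_GRID = [0.5 * i for i in range(1, 101)]          # COD 0.5 .. 50
LUT = [(toy_irradiance(c), c) for c in COD_GRID]

def retrieve_cod(measured_ghi):
    """Invert measured irradiance to COD by interpolating in the LUT."""
    for (g_hi, c_lo), (g_lo, c_hi) in zip(LUT, LUT[1:]):
        if g_lo <= measured_ghi <= g_hi:
            frac = (g_hi - measured_ghi) / (g_hi - g_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("measured irradiance outside LUT range")

cod = retrieve_cod(toy_irradiance(12.3))  # recovers approximately 12.3
```

Because the simulated irradiance is strictly monotone in COD under fully overcast skies, the inversion is unambiguous; the 3D radiative effects under broken clouds mentioned above are precisely where this one-to-one mapping breaks down.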
Solar photovoltaic power output is modulated by atmospheric aerosols and clouds and thus contains valuable information on the optical properties of the atmosphere. As a ground-based data source with high spatiotemporal resolution it has great potential to complement other ground-based solar irradiance measurements as well as those of weather models and satellites, thus leading to an improved characterisation of global horizontal irradiance. In this work several algorithms are presented that can retrieve global tilted and horizontal irradiance and atmospheric optical properties from solar photovoltaic data and/or pyranometer measurements. Specifically, the aerosol (cloud) optical depth is inferred during clear sky (completely overcast) conditions. The method is tested on data from two measurement campaigns that took place in Allgäu, Germany in autumn 2018 and summer 2019, and the results are compared with local pyranometer measurements as well as satellite and weather model data. Using power data measured at 1 Hz and averaged to 1 minute resolution, the hourly global horizontal irradiance is extracted with a mean bias error compared to concurrent pyranometer measurements of 11.45 W m−2, averaged over the two campaigns, whereas for the retrieval using coarser 15 minute power data the mean bias error is 16.39 W m−2.
During completely overcast periods the cloud optical depth is extracted from photovoltaic power using a lookup table method based on a one-dimensional radiative transfer simulation, and the results are compared to both satellite retrievals as well as data from the COSMO weather model. Potential applications of this approach for extracting cloud optical properties are discussed, as well as certain limitations, such as the representation of 3D radiative effects that occur under broken cloud conditions. In principle this method could provide an unprecedented amount of ground-based data on both irradiance and optical properties of the atmosphere, as long as the required photovoltaic power data are available and are properly pre-screened to remove unwanted artefacts in the signal. Possible solutions to this problem are discussed in the context of future work.
Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments
(2021)
This paper proposes a trace model in the form of an object or class model (in the UML sense) which allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL because it provides a general formalism for modeling traces and for calculating several indicators at the same time. Also, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can allow better sharing of trace databases between the various research institutions in the field of CEHL.
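The key idea that calculated indicators are themselves traces, and can therefore be stored and reused like any other trace, can be sketched with a minimal class model. The class and attribute names below are illustrative, not the paper's actual UML:

```python
# Minimal sketch of the inheritance relationship: Indicator inherits
# from Trace, so a computed indicator can be appended to the same trace
# base and consumed as input by further indicator calculations.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Trace:
    actor: str
    action: str
    value: float

@dataclass
class Indicator(Trace):
    """A calculated indicator is itself a trace and can be reused."""
    sources: List[Trace] = field(default_factory=list)

def activity_indicator(actor, traces):
    """Derive a simple per-actor activity count from raw traces."""
    own = [t for t in traces if t.actor == actor]
    return Indicator(actor=actor, action="activity_count",
                     value=float(len(own)), sources=own)

log = [Trace("alice", "open_resource", 1.0),
       Trace("alice", "post_message", 1.0),
       Trace("bob", "open_resource", 1.0)]
ind = activity_indicator("alice", log)
# Because Indicator IS-A Trace, it can go back into the trace base and
# feed the calculation of higher-level indicators.
log.append(ind)
```

This is the mechanism that lets one trace management system serve several learning platforms: everything, raw or derived, shares the same base type.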
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
Salts and proteins comprise two of the basic molecular components of biological materials. Kosmotropic/chaotropic co-solvation and matching ion water affinities explain basic ionic effects on protein aggregation observed in simple solutions. However, it is unclear how these theories apply to proteins in complex biological environments and what the underlying ionic binding patterns are. Using the positive ion Ca2+ and the negatively charged membrane protein SNAP25, we studied ion effects on protein oligomerization in solution, in native membranes and in molecular dynamics (MD) simulations. We find that concentration-dependent ion-induced protein oligomerization is a fundamental chemico-physical principle applying not only to soluble but also to membrane-anchored proteins in their native environment. Oligomerization is driven by the interaction of Ca2+ ions with the carboxylate groups of aspartate and glutamate. From low up to middle concentrations, salt bridges between Ca2+ ions and two or more protein residues lead to increasingly larger oligomers, while at high concentrations oligomers disperse due to overcharging effects. The insights provide a conceptual framework at the interface of physics, chemistry and biology to explain binding of ions to charged protein surfaces on an atomistic scale, as occurring during protein solubilisation, aggregation and oligomerization both in simple solutions and membrane systems.
In this paper, a model of a photovoltaic (PV)-diesel hybrid system is built. In addition to a PV plant, this system has a battery storage unit and is connected to the public electricity grid. If all three energy sources fail, a diesel generator secures the power supply. Using the model, the influence of the different seasons and weather conditions on the PV yield and the overall system is investigated for the period from February 2016 to February 2017. The measurement data come from a hospital in Akwatia, Ghana, which already operates a PV plant and a diesel generator as backup.
A further aspect of the investigation is the influence of the power outages, which occur frequently in this region, on the use of the generator.
The study demonstrates the relevance of seasonal and infrastructural influences on the operation of the system. The model analysis shows that PV output drops particularly during the rainy season in August, so that much of the energy must then be supplied by the public grid and the generator. A further significant drop in PV yield occurs during the Harmattan period in January.
In times of clearly visible climate change impacts and severe social injustices in parts of the world, the question of options for action to achieve sustainable development is more pressing than ever for the global community. Accordingly, the public sector is also called upon to make its contribution. Establishing sustainable public procurement is one effective contribution. For this contribution to be implemented adequately, the guidelines and laws governing public procurement must, among other things, offer sufficient scope for implementation. The foundation for this was laid with the recent European public procurement law reform and the subsequent adaptation of German legislation. This working paper examines which promising implementation options this legal basis now offers to municipalities in Germany, as the main actors in public procurement, and which further success factors are decisive for implementing sustainable public procurement at the municipal level. The success factors are put to a practical test using the example of the city of Bonn, representative of the municipal level in North Rhine-Westphalia (NRW).
In this contribution, we perform computer simulations to expedite the development of hydrogen storage systems based on metal hydrides. These simulations enable in-depth analysis of the processes within the systems which could not otherwise be achieved, because the determination of crucial process properties requires measurement instruments that are currently not available in the setup. Therefore, we investigate the reliability of reaction values that are determined by a design of experiments.
Specifically, we first explain our model setup in detail. We define the mathematical terms to obtain insights into the thermal processes and reaction kinetics. We then compare the simulated results to measurements of a 5-gram sample of iron-titanium-manganese (FeTiMn) to obtain the values with the highest agreement with the experimental data. In addition, we improve the model by replacing the commonly used van't Hoff equation with a mathematical expression for the pressure-composition isotherms (PCI) to calculate the equilibrium pressure.
Finally, the parameters’ accuracy is checked in yet another comparison with an existing metal hydride system. The simulated results demonstrate high concordance with the experimental data, which advocates the use of kinetic reaction properties approximated by a design of experiments for further design studies. Furthermore, we are able to determine process parameters such as the entropy and enthalpy.
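The substitution of the van't Hoff equation by a PCI expression can be sketched as follows. The functional form chosen for the concentration dependence and all parameter values are illustrative assumptions, not the fitted FeTiMn values from the study:

```python
# Hedged sketch: the van't Hoff relation yields a single plateau
# pressure per temperature, while a PCI (pressure-composition-isotherm)
# expression makes the equilibrium pressure depend on the hydrogen
# concentration as well. Parameters are illustrative only.

import math

R = 8.314  # universal gas constant, J/(mol K)

def p_eq_vant_hoff(T, dH=-28000.0, dS=-107.0, p0=1e5):
    """van't Hoff equilibrium pressure: ln(p/p0) = -dH/(R T) + dS/R."""
    return p0 * math.exp(-dH / (R * T) + dS / R)

def p_eq_pci(T, w, slope=0.8, dH=-28000.0, dS=-107.0, p0=1e5):
    """PCI-based equilibrium pressure: the van't Hoff plateau tilted
    with the normalised hydrogen concentration w (0 <= w <= 1).
    The exponential tilt is an illustrative functional form."""
    plateau = p_eq_vant_hoff(T, dH, dS, p0)
    return plateau * math.exp(slope * (w - 0.5))

# At mid-concentration the PCI form reduces to the van't Hoff plateau;
# away from it, the equilibrium pressure tilts up or down, which is the
# behaviour a flat van't Hoff plateau cannot represent.
p_plateau = p_eq_vant_hoff(300.0)
p_mid = p_eq_pci(300.0, 0.5)
```

The practical benefit is that the simulated reaction rate, which depends on the gap between the applied pressure and the equilibrium pressure, then responds correctly to the filling state of the hydride bed rather than only to its temperature.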
In the course of the migration movement of 2015 and 2016, the humane accommodation of refugees in German municipalities gained attention. The rise in the number of asylum seekers in the municipalities, together with the federal initiative “Schutz von geflüchteten Menschen in Flüchtlingsunterkünften” (protection of refugees in refugee accommodation), has brought about changes regarding protection standards in municipal refugee accommodation. This article explains these changes using an actor-centred approach from the sociology of organisations. It is based on empirical findings from two German municipalities gathered in the project “Organisational Perspectives on Human Security Standards for Refugees in Germany”.
The cooperation between researchers and practitioners during the different stages of the research process is promoted because it can benefit both society and research, supporting processes of ‘transformation’. While acknowledging the important potential of research–practice collaborations (RPCs), this paper reflects on RPCs from a political-economic perspective to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and even help produce or exacerbate existing inequalities. The potential merits and limitations of RPCs therefore need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, attempts to contribute to societal change and ethical concerns of equity lie at the heart of international development research; on the other hand, asymmetries are relatively more likely to be encountered in this field.
Public preferences
(2021)
For reforms to be acceptable and sustainable in the long run, they should be aligned with public preferences. ‘Preferences’ is a technical term used across the social sciences and humanities, in disciplines such as economics, philosophy or psychology. Broadly speaking, preferences refer to an individual’s judgements on liking one alternative more than others. More specifically, preferences are ‘subjective comparative evaluations, in the form of “Agent prefers X to Y”’ (Hansson and Grüne-Yanoff 2018). Here, we are particularly interested in people’s policy preferences concerning social protection, an area which deserves more attention in policy debates and research.
Over the past two decades, many governments of low- and middle-income countries have started to introduce social protection measures or to extend the coverage and improve the functioning of public social protection systems. These reforms are a "global phenomenon" and can be observed in many African, Asian, and Latin American countries. This paper focuses on international determinants of policy change in social protection by assessing the state of the art of both policy diffusion and policy transfer studies. Empirical studies of policy transfer and diffusion in the field of social protection are furthermore assessed in light of this theoretical background.
The paper examines the effectiveness of transgovernmental policy networks as a governance structure for policy diffusion. The analysis is based on a survey of 50 social protection policy makers and technical practitioners who are country delegates to transgovernmental policy networks within the policy area of social protection. The paper provides anecdotal empirical evidence that policy networks contribute to policy diffusion by inducing mutual learning processes.
Political economic analyses of recent social protection reforms in Asian, African, and Latin American countries have increased over the last few years. Yet most contributions focus on a single social protection mechanism and do not provide a comparative approach across policy areas. In addition, most studies are empirical, with no or very limited theoretical linkages. This paper aims to explain multiple trajectories of social protection reform processes by looking at cash transfers and social health protection policies in Kenya. It develops a taxonomy and suggests a conceptual framework to assess and explain reform dynamics across different social protection pillars. In order to allow for a more differentiated typology and to understand different reform dynamics, the article uses the approach of gradual institutional change. While existing approaches to institutional change mostly focus on change prompted by exogenous shocks or environmental shifts, this approach takes account of both exogenous and endogenous sources of change.
Purpose – To describe the development of a novel polyether(meth)acrylate-based resin material class for stereolithography with alterable material characteristics.
Design/methodology/approach – A complete overview of composition parameters, the optimization, and the bandwidth of mechanical and processing parameters is given. Initial biological characterization experiments and future application fields are depicted. Process parameters are studied in a commercial 3D Systems Viper stereolithography system, and a new method to determine these parameters is described herein.
Findings – Initial biological characterizations show the non-toxic behavior in a biological environment, caused mainly by the (meth)acrylate-based core components. These photolithographic resins combine an adjustable low Young's modulus with the advantages of a non-toxic (meth)acrylate-based process material. In contrast to the mostly rigid process materials used today in the rapid prototyping industry, these polymeric formulations are able to fulfill the extended need for a soft engineering material. A short overview of sample applications is given.
Practical implications – These polymeric formulations are able to meet the growing demand for a resin class for rapid manufacturing that covers a bandwidth from softer to stiffer materials.
Originality/value – This paper gives an overview of the newly developed material class for stereolithography and should therefore be of high interest to readers interested in novel rapid manufacturing materials and technology.
Cathepsin K (CatK) is a target for the treatment of osteoporosis, arthritis, and bone metastasis. Peptidomimetics with a cyanohydrazide warhead represent a new class of highly potent CatK inhibitors; however, their binding mechanism is unknown. We investigated two model cyanohydrazide inhibitors with differently positioned warheads: an azadipeptide nitrile Gü1303 and a 3-cyano-3-aza-β-amino acid Gü2602. Crystal structures of their covalent complexes were determined with mature CatK as well as a zymogen-like activation intermediate of CatK. Binding mode analysis, together with quantum chemical calculations, revealed that the extraordinary picomolar potency of Gü2602 is entropically favoured by its conformational flexibility at the nonprimed-primed subsites boundary. Furthermore, we demonstrated by live cell imaging that cyanohydrazides effectively target mature CatK in osteosarcoma cells. Cyanohydrazides also suppressed the maturation of CatK by inhibiting the autoactivation of the CatK zymogen. Our results provide structural insights for the rational design of cyanohydrazide inhibitors of CatK as potential drugs.
Migration policy currently polarizes Germany like hardly any other issue. From a human rights perspective, a central point of criticism is the lack of legally binding and uniform standards for the accommodation of refugees in Germany. The absence of binding nationwide requirements has far-reaching negative consequences, especially for vulnerable groups among refugees such as women, children, the elderly, the chronically ill, and LGBTQ+ persons.
When optimizing the process parameters of the acidic ethanolic organosolv process, the aim is usually to maximize the delignification and/or lignin purity. However, process parameters such as temperature, time, ethanol and catalyst concentration, respectively, can also be used to vary the structural properties of the obtained organosolv lignin, including the molecular weight and the ratio of aliphatic versus phenolic hydroxyl groups, among others. This review particularly focuses on these influencing factors and establishes a trend analysis between the variation of the process parameters and the effect on lignin structure. Especially when larger data sets are available, as for process temperature and time, correlations between the distribution of depolymerization and condensation reactions are found, which allow direct conclusions on the proportion of lignin's structural features, independent of the diversity of the biomass used. The newfound insights gained from this review can be used to tailor organosolv lignins isolated for a specific application.
Miscanthus crops possess very attractive properties such as high photosynthesis yield and carbon fixation rate. Because of these properties, it is currently considered for use in second-generation biorefineries. Here we analyze the differences in chemical composition between M. x giganteus, a commonly studied Miscanthus genotype, and M. nagara, which is relatively understudied but has useful properties such as increased frost resistance and higher stem stability. Samples of M. x giganteus (Gig35) and M. nagara (NagG10) have been separated by plant portion (leaves and stems) in order to isolate the corresponding lignins. The organosolv process was used for biomass pulping (80% ethanol solution, 170 °C, 15 bar). Biomass composition and lignin structure analysis were performed using composition analysis, Fourier-transform infrared (FTIR), ultraviolet-visible (UV-Vis) and nuclear magnetic resonance (NMR) spectroscopy, thermogravimetric analysis (TGA), size exclusion chromatography (SEC) and pyrolysis gas-chromatography/mass spectrometry (Py-GC/MS) to determine the 3D structure of the isolated lignins, monolignol ratio and most abundant linkages depending on genotype and harvesting season. SEC data showed significant differences in the molecular weight and polydispersity indices for stem versus leaf-derived lignins. Py-GC/MS and hetero-nuclear single quantum correlation (HSQC) NMR revealed different monolignol compositions for the two genotypes (Gig35, NagG10). The monolignol ratio is slightly influenced by the time of harvest: stem-derived lignins of M. nagara showed increasing H and decreasing G unit content over the studied harvesting period (December–April).
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time as well as plant component (leaf versus stem), correlations between structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented between lignins isolated from M. x giganteus, M. sinensis, M. robustus, and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with respect to their similarities and differences in monolignol ratio and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). The A content is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion: both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum is M. sinensis Sin2 with over 30%), while in the leaf-derived lignins the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be very attractive lignocellulose feedstock (LCF) for second generation biorefineries and lignin generation in Europe.
Miscanthus x giganteus Stem Versus Leaf-Derived Lignins Differing in Monolignol Ratio and Linkage
(2019)
As a renewable resource, Miscanthus offers numerous advantages such as high photosynthesis activity (as a C4 plant) and an exceptional CO2 fixation rate. These properties make Miscanthus very attractive for industrial exploitation, such as lignin generation. In this paper, we present a systematic study analyzing the correlation of the lignin structure with the Miscanthus genotype and plant portion (stem versus leaf). Specifically, the ratio of the three monolignols and corresponding building blocks as well as the linkages formed between the units have been studied. The lignin amount has been determined for M. x giganteus (Gig17, Gig34, Gig35), M. nagara (NagG10), M. sinensis (Sin2), and M. robustus (Rob4) harvested at different time points (September, December, and April). The influence of the Miscanthus genotype and plant component (leaf vs. stem) has been studied to develop corresponding structure-property relationships (i.e., correlations in molecular weight, polydispersity, and decomposition temperature). Lignin isolation was performed using non-catalyzed organosolv pulping and the structure analysis includes compositional analysis, Fourier transform infrared (FTIR), ultraviolet/visible (UV-Vis), hetero-nuclear single quantum correlation nuclear magnetic resonance (HSQC-NMR), thermogravimetric analysis (TGA), and pyrolysis gas chromatography/mass spectrometry (GC/MS). Structural differences were found for stem and leaf-derived lignins. Compared to beech wood lignins, Miscanthus lignins possess lower molecular weight and narrow polydispersities (<1.5 Miscanthus vs. >2.5 beech) corresponding to improved homogeneity. In addition to conventional univariate analysis of FTIR spectra, multivariate chemometrics revealed distinct differences for aromatic in-plane deformations of stem versus leaf-derived lignins. These results emphasize the potential of Miscanthus as a low-input resource and of Miscanthus-derived lignin as a promising agricultural feedstock.
Consumers appear to have lost the desire to express an individual clothing style, as online retail has multiplied the choices available. One result is the use of virtual styling services. These services serve to "make" customers individual and authentic as efficiently as possible and can thus be understood as a paradoxical process of democratization. An explanation for the success of these services is developed with reference to Reckwitz's singularization thesis.
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
Intelligent dialogue systems, known as chatbots, are increasingly deployed as virtual points of contact by companies and institutions. Drawing on a knowledge base, chatbots can answer a large share of customer enquiries automatically. An analogous use of chatbots as digital points of contact for public administrations is conceivable: they could help citizens find their way through administrative structures and make use of public services efficiently and effectively.
This thesis examines the deployment of a chatbot in public administration with respect to the costs incurred and the benefits to be expected. Based on an extensive literature review and the prototypical implementation of a chatbot for a city portal, it identifies the challenges of this application domain, discusses the concrete workings and implementation strategies of chatbots, and formulates success factors that form the core of a recommendation for decision-makers in public administration.
Dried serum spots that are well prepared can be an attractive alternative to frozen serum samples for shelving specimens in a medical or research center's biobank and for mailing freshly prepared serum to specialized laboratories. During the pre-analytical phase, complications can arise that are often challenging to identify or are entirely overlooked. These complications can lead to reproducibility issues, which can be avoided in serum protein analysis by implementing optimized storage and transfer procedures. With a method that ensures accurate loading of filter paper discs with donor or patient serum, a gap in dried serum spot preparation and subsequent serum analysis shall be filled. Pre-punched filter paper discs with a diameter of 3 mm are loaded within seconds in a highly reproducible fashion (approximately 10% standard deviation) when fully submerged in 10 μl of serum, a procedure named the "Submerge and Dry" protocol. Dried serum spots prepared in this way can store several hundred micrograms of proteins and other serum components. Serum-borne antigens and antibodies are reproducibly released in 20 μl of elution buffer in high yields (approximately 90%). Antigens stored in and eluted from dried serum spots kept their epitopes, and antibodies their antigen-binding abilities, as assessed by SDS-PAGE, 2D gel electrophoresis-based proteomics, and Western blot analysis, suggesting pre-punched filter paper discs as a handy solution for serological tests.
RoCKIn@Work focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a bigger user story painting the picture of a scaled-down real-world factory scenario. Elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure that the gathered data could be evaluated automatically. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility in research in their own laboratory. Tasks and functionality benchmarks are explained in detail, starting with their use case in industry, further detailing their execution, and providing information on scoring and ranking mechanisms for the specific benchmark.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Choosing the best view from multi-camera streams poses the problem of which objective metrics should be considered. Existing work on view selection lacks consensus on this question: the literature describes a diverse set of candidate metrics, and strategies that are purely information-theoretic, instructional-design-based, or aesthetics-motivated each fail to incorporate the other approaches. In this work, we propose a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, for example in 3D rendering. We adapted a similar measure, the viewpoint entropy, for real-world 2D images. Additionally, we incorporated a similarity penalty to obtain a more accurate measure of the entropy of a view, which is one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of our chosen scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional design considerations, we included the trainer's body pose, face, face while instructing, and hand visibility as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are combined into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate these metrics.
Furthermore, the responses from the online study were used to optimize the view recommendation, achieving a normalized discounted cumulative gain (NDCG) of 0.912, which indicates a good match with user choices.
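NDCG, the evaluation metric reported above, compares a recommended ranking against the ideal ordering of the same relevance scores. A minimal sketch of the computation follows; the relevance values are invented for illustration and are not taken from the study:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: items further down the ranking
    # contribute less, discounted by log2 of their position.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    if ideal == 0:
        return 0.0
    return dcg(relevances) / ideal

# Hypothetical user relevance scores for four camera views,
# listed in the order the recommender ranked them.
print(round(ndcg([3, 2, 3, 0]), 3))  # → 0.978
```

A value of 1.0 would mean the recommender reproduced the users' preferred ordering exactly; the reported 0.912 therefore indicates rankings close to the ideal.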
The Information and Communication Technology (ICT) sector is a significant global industry, and addressing climate change is of critical importance. This paper aims to assess the resources utilized by the ICT sector, the associated negative environmental impacts, and potential mitigation measures. In order to understand these aspects, this study attempts to categorize the resources used by ICT, analyze the amount consumed and the resulting negative impacts, and determine what measures exist to mitigate them. An economic and empirical evaluation shows a negative trend in ICT’s resource consumption, mainly due to increased energy consumption and rising carbon emissions from devices such as smartphones and data centers. The investigated countermeasures focus on Green IT strategies that encompass energy efficiency, carbon awareness, and hardware efficiency principles as outlined by the Green Software Foundation. Special attention is given to reducing the environmental footprint of data center operations and smartphones. This paper concludes that Green IT strategies, although promising in theory, are often not implemented at an industry level.
Several species of (poly)saccharides and organic acids are often found simultaneously in various biological matrices, e.g., fruits, plant materials, and biological fluids, and the analysis of such matrices can be a challenging task. Using Aloe vera (A. vera) plant material as an example, the performance of several spectroscopic methods (80 MHz benchtop NMR, NIR, ATR-FTIR, and UV-Vis) for the simultaneous analysis of quality parameters of this plant material was compared. The determined parameters include (poly)saccharides such as aloverose, fructose, and glucose as well as organic acids (malic, lactic, citric, isocitric, acetic, fumaric, benzoic, and sorbic acids). 500 MHz NMR and high-performance liquid chromatography (HPLC) were used as reference methods.
UV-Vis data can be used only for the identification of added preservatives (benzoic and sorbic acids) and the drying agent (maltodextrin), and for the semiquantitative analysis of malic acid. NIR and MIR spectroscopies combined with multivariate regression deliver a more informative overview of A. vera extracts, being able to additionally quantify glucose, aloverose, fructose, and citric, isocitric, malic, and lactic acids. Low-field NMR measurements can be used for the quantification of aloverose, glucose, and malic, lactic, acetic, and benzoic acids. The benchtop NMR method was successfully validated in terms of robustness, stability, precision, reproducibility, and limits of detection (LOD) and quantification (LOQ).
All spectroscopic techniques are useful for the screening of (poly)saccharides and organic acids in plant extracts and should be applied according to their availability as well as the information and confidence required for the specific analytical goal. Benchtop NMR spectroscopy seems to be the most feasible solution for quality control of A. vera products.
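Quantification by any of these spectroscopic methods ultimately rests on a calibration curve relating signal intensity (e.g., an NMR peak integral) to known analyte concentrations. A minimal least-squares sketch of that step follows; the integrals and concentrations below are invented for illustration, not data from the study:

```python
def fit_line(x, y):
    # Ordinary least-squares calibration line: y = a*x + b
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical calibration: NMR peak integrals measured for
# standards of known glucose concentration (g/L).
integrals = [0.0, 1.0, 2.0, 3.0, 4.0]
conc = [0.1, 2.1, 4.0, 6.1, 8.0]
a, b = fit_line(integrals, conc)

# Predict the concentration of an unknown sample from its integral.
unknown_integral = 2.5
print(round(a * unknown_integral + b, 2))  # → 5.05
```

In practice, validation parameters such as LOD and LOQ are derived from the scatter of such calibration points around the fitted line.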
Due to regionalization and global competition, many companies have turned their attention to markets outside their domestic ones in anticipation of securing profitable markets for their products. Cormart (Nigeria) Limited is one such company, seeking to expand beyond its domestic borders. Cormart is a Nigerian trading company specializing in industrial raw materials and chemicals. It represents the business interests of top multinational companies that wish to do business in Nigeria. In line with its expansion strategy, Cormart seeks to introduce its newly developed spray starch product (RENEW) into the Ghanaian market.
This thesis presents the implementation and validation of image processing problems in hardware to estimate the gain in performance and precision. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation cost of development is an important aspect of the validation. The analysis of the flexibility and extendability achievable through a modular FPGA design was another major aspect. This work builds upon approaches from previous work on the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for both the FPGA platform and the GPP architecture, and the two approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and to track the resulting light dots on the projection surface of the immersive environment. The center points of those light dots allow the estimation of the user's position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real time. Therefore, one major target for the implementation was a processing speed of more than 30 frames per second, allowing the system to provide feedback to the user with a response time faster than human visual perception.
One problem that comes with using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a single pixel may cover several square centimeters, so a precision error of only a few pixels can lead to an offset of several centimeters in the estimated user position. In this research work, a detection and tracking system for BLOBs was developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding and for sub-pixel precision of the computed BLOB center points have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation was realized. The system can transmit the detection or tracking results over two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution; for this case, designing high-level problems in hardware is more expensive than implementing them in software. The search and match steps of the tracking approach could be realized more efficiently and reliably in software.
The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.
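The sub-pixel precision module computes BLOB center points with fractional-pixel accuracy; an intensity-weighted centroid is one common way to achieve this. The following is a minimal software sketch of that idea (an illustration of the general technique, not the FPGA implementation described in the thesis):

```python
def subpixel_centroid(image, region):
    # Intensity-weighted centroid over a detected BLOB region.
    # image: 2D list of pixel intensities; region: list of (row, col) pixels.
    total = sum(image[r][c] for r, c in region)
    if total == 0:
        return None  # no signal in the region
    cy = sum(r * image[r][c] for r, c in region) / total
    cx = sum(c * image[r][c] for r, c in region) / total
    return (cy, cx)  # fractional coordinates give sub-pixel precision

# A small blob whose brightness peaks between pixel centers.
img = [[0, 0, 0, 0],
       [0, 9, 3, 0],
       [0, 3, 1, 0],
       [0, 0, 0, 0]]
blob = [(1, 1), (1, 2), (2, 1), (2, 2)]
print(subpixel_centroid(img, blob))  # → (1.25, 1.25)
```

Because the result is fractional, the position error can fall well below one pixel, which matters when a single pixel corresponds to several square centimeters on the projection surface.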
On Thursday, 23 September 2021, the first consumer forum on consumer informatics (Verbraucherinformatik) took place at Hochschule Bonn-Rhein-Sieg. During the one-day online event, more than 30 participants discussed topics and ideas around consumer data protection, with contributions from computer science, the consumer and social sciences, and a regulatory perspective. This article outlines the background of the event, reports on the contents of the talks, and identifies starting points for the further establishment of consumer informatics. The event was organized by the Institut für Verbraucherinformatik at H-BRS in cooperation with the Chair of IT Security at the University of Siegen and the Kompetenzzentrum Verbraucherforschung NRW of the Verbraucherzentrale NRW e. V., funded by the Federal Ministry of Justice and Consumer Protection.
Pollution with anthropogenic waste, particularly persistent plastic, has now reached every remote corner of the world. The French Atlantic coast, given its extensive coastline, is particularly affected. To gain an overview of current plastic pollution, this study examined a stretch of 250 km along the Silver Coast of France. Sampling was conducted at a total of 14 beach sections, each with five sampling sites in a transect. At each collection site, a square of 0.25 m2 was marked. The top 5 cm of beach sediment was collected and sieved on-site using an analysis sieve (mesh size 1 mm), resulting in a total of approximately 0.8 m3 of sediment, corresponding to a total weight of 1300 kg of examined beach sediment. A total of 1972 plastic particles were extracted and analysed using infrared spectroscopy, corresponding to 1.5 particles kg−1 of beach sediment. Pellets (885 particles), polyethylene as the polymer type (1349 particles), and particles in the size range of microplastics (943 particles) were most frequently found. The significant pollution by pellets suggests that the spread of plastic waste is not primarily attributable to tourism (in February/March 2023). The substantial accumulation of meso- and macro-waste (with 863 and 166 particles) also indicates that research focusing on microplastics should be expanded to include these size categories, as microplastics can develop from them over time.
Bond graph modelling was devised by Professor Paynter at the Massachusetts Institute of Technology in 1959 and subsequently developed into a methodology for modelling multidisciplinary systems at a time when nobody was speaking of object-oriented modelling. On the other hand, so-called object-oriented modelling has become increasingly popular during the last few years. By relating the characteristics of both approaches, it is shown that bond graph modelling, although much older, may be viewed as a special form of object-oriented modelling. For that purpose, the new object-oriented modelling language Modelica, which aims at supporting multiple formalisms, is used as a working language. Although it turns out that bond graph models can be described rather easily, it is obvious that Modelica started from generalized networks and was not designed to support bond graphs. The description of bond graph models in Modelica is illustrated by means of a hydraulic drive. Since VHDL-AMS, an important language standardized and supported by the IEEE, has been extended to also support modelling of non-electrical systems, it is briefly investigated whether it can be used for the description of bond graphs. It turns out that it currently does not seem to be suitable.
Multidisciplinary systems are described most suitably by bond graphs. In order to determine unnormalized frequency domain sensitivities in symbolic form, this paper proposes to construct in a systematic manner a bond graph from another bond graph, which is called the associated incremental bond graph in this paper. Contrary to other approaches reported in the literature the variables at the bonds of the incremental bond graph are not sensitivities but variations (incremental changes) in the power variables from their nominal values due to parameter changes. Thus their product is power. For linear elements their corresponding model in the incremental bond graph also has a linear characteristic. By deriving the system equations in symbolic state space form from the incremental bond graph in the same way as they are derived from the initial bond graph, the sensitivity matrix of the system can be set up in symbolic form. Its entries are transfer functions depending on the nominal parameter values and on the nominal states and the inputs of the original model. The sensitivities can be determined automatically by the bond graph preprocessor CAMP-G and the widely used program MATLAB together with the Symbolic Toolbox for symbolic mathematical calculation. No particular program is needed for the approach proposed. The initial bond graph model may be non-linear and may contain controlled sources and multiport elements. In that case the sensitivity model is linear time variant and must be solved in the time domain. The rationale and the generality of the proposed approach are presented. For illustration purposes a mechatronic example system, a load positioned by a constant-excitation d.c. motor, is presented and sensitivities are determined in symbolic form by means of CAMP-G/MATLAB.
In this paper, residual sinks are used in bond graph model-based quantitative fault detection to couple a model of a faultless process engineering system to a bond graph model of the faulty system. In this way, integral causality can be used as the preferred computational causality in both models, and there is no need for numerical differentiation. Furthermore, unknown variables do not need to be eliminated from power continuity equations in order to obtain analytical redundancy relations (ARRs) in symbolic form. Residuals indicating faults are computed numerically as components of a descriptor vector of a differential algebraic equation system derived from the coupled bond graphs. The presented bond graph approach is especially aimed at models with non-linearities that make it cumbersome or even impossible to derive ARRs from model equations by elimination of unknown variables. For illustration, the approach is applied to a non-controlled as well as to a controlled hydraulic two-tank system. Finally, it is shown that bond graph modelling can support not only the numerical computation of residuals but also the simultaneous numerical computation of their sensitivities with respect to a parameter.