An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
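To make the motion-representation step concrete, the following is a minimal sketch of a one-dimensional discrete dynamic movement primitive in Python. It shows how a single demonstration can be encoded as forcing-term weights and then rolled out toward a new goal (e.g. a hand-over position suggested by a learned policy); the class name, parameter values, and the plain least-squares fit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DMP1D:
    """Minimal 1-D discrete dynamic movement primitive (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.alpha, self.beta, self.alpha_x = alpha, beta, alpha_x
        # Gaussian basis functions spaced along the canonical phase x in (0, 1].
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        self.h = 1.0 / (np.gradient(self.c) ** 2 + 1e-8)
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) * x / (psi.sum() + 1e-10)

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstrated trajectory y(t)."""
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        self.y0, self.g = y[0], y[-1]
        x = np.exp(-self.alpha_x * dt * np.arange(len(y)))  # canonical phase
        # Invert the transformation system to obtain the target forcing term.
        f_target = ydd - self.alpha * (self.beta * (self.g - y) - yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        feats = psi * x[:, None] / (psi.sum(axis=1, keepdims=True) + 1e-10)
        self.w, *_ = np.linalg.lstsq(feats, f_target, rcond=None)

    def rollout(self, goal=None, dt=0.01, n_steps=100):
        """Reproduce the motion, optionally adapted toward a new goal."""
        g = self.g if goal is None else goal
        y, v, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(n_steps):
            vd = self.alpha * (self.beta * (g - y) - v) + self._forcing(x)
            v += vd * dt
            y += v * dt
            x += -self.alpha_x * x * dt  # canonical system decays the phase
            traj.append(y)
        return np.array(traj)

# Example: imitate a demonstration, then shift the hand-over goal by 15 cm.
t = np.linspace(0, 1, 100)
demo = 0.5 * (10 * t**3 - 15 * t**4 + 6 * t**5)  # minimum-jerk profile
dmp = DMP1D()
dmp.fit(demo, dt=0.01)
adapted = dmp.rollout(goal=0.65)
```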
Machine learning-based solutions are frequently adopted in applications that involve big data in operations. The performance of a model deployed into operations is subject to degradation due to unanticipated changes in the flow of input data. Hence, monitoring data drift becomes essential to maintain the model's desired performance. Based on the conducted literature review on drift detection, statistical hypothesis testing makes it possible to investigate whether incoming data is drifting from the training data. Because Maximum Mean Discrepancy (MMD) and Kolmogorov-Smirnov (KS) have been shown in the literature to be reliable distance measures between multivariate distributions, both were selected from several existing techniques for experimentation. For the scope of this work, an image classification use case was studied using the Stream-51 dataset. Across the different drift experiments, both MMD and KS showed high Area Under Curve (AUC) values; however, KS ran faster than MMD and produced fewer false positives. The results also showed that using a pre-trained ResNet-18 for feature extraction maintained the high performance of the evaluated drift detectors, and that detector performance depends strongly on the sample sizes of the reference (training) data and of the test data flowing into the pipeline's monitor. Finally, the results showed that if the test data is a mixture of drifting and non-drifting data, the performance of the drift detectors does not depend on how the drifting samples are interspersed with the non-drifting ones, but rather on their proportion in the test set.
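As an illustration of the hypothesis-testing idea, here is a small sketch of a KS-based drift monitor operating on extracted features. The per-dimension two-sample KS test with a Bonferroni correction is one common way to handle multivariate features and is an assumption here, as is feeding it features from a pre-trained extractor such as ResNet-18; it is not necessarily the study's exact pipeline.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_drift_detected(reference, test, alpha=0.05):
    """Flag drift if any feature dimension differs significantly between the
    reference (training) sample and the incoming test sample.

    reference, test: arrays of shape (n_samples, n_features), e.g. feature
    vectors extracted with a pre-trained network. A per-dimension KS test
    with Bonferroni correction is an illustrative multivariate extension.
    """
    n_features = reference.shape[1]
    p_values = np.array([
        ks_2samp(reference[:, j], test[:, j]).pvalue
        for j in range(n_features)
    ])
    # Drift is declared if any dimension rejects at the corrected level.
    return bool((p_values < alpha / n_features).any()), p_values.min()
```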
The design of an efficient digital circuit in terms of low power has become a very challenging issue. For this reason, low-power digital circuit design is a topic addressed in electrical and computer engineering curricula, but it also requires practical experiments in a laboratory. This PhD research investigates a novel approach, the low-power design laboratory system, by developing a new technical and pedagogical system. The low-power design laboratory system is composed of two types of laboratories: the on-site (hands-on) laboratory and the remote laboratory. It has been developed at the Bonn-Rhine-Sieg University of Applied Sciences to teach low-power techniques in the laboratory. Additionally, this thesis contributes a suggestion on how the learning objectives can be complemented by developing a remote system in order to improve the teaching of low-power digital circuit design. This laboratory system enables online experiments that are performed on physical instruments, delivering real measurement data via the internet. The laboratory experiments use a Field Programmable Gate Array (FPGA) as the platform on which students implement their circuits, and use image processing as an application for teaching low-power techniques.
This thesis presents the instructions for the low-power design experiments, which use a top-down hierarchical design methodology. The engineering student designs his or her algorithm at a high level of abstraction, while the experimental results are obtained and measured at a low level (hardware), so that more information is available to estimate the power dissipation correctly, such as the specification, latency, thermal effects, and the technology used. Power dissipation of a digital system is influenced by its specification, its design, the technology used, and the operating temperature. Digital circuit designers can observe the most influential factors in power dissipation during the laboratory exercises in the on-site system and then use the remote system to supplement the investigation of the remaining factors. Furthermore, the remote system has clear benefits: it improves learning outcomes, facilitates new teaching methods, reduces costs and maintenance effort, saves money by reducing the number of instructors required, saves instructor time and simplifies instructors' tasks, facilitates equipment sharing, improves reliability, and provides flexibility in using the laboratories.
For many different applications, current information about the bandwidth-related metrics of the utilized connection is very useful, as these metrics directly impact the performance of throughput-sensitive applications such as streaming servers, IPTV, and VoIP applications. In the literature, several tools have been proposed to estimate major bandwidth-related metrics such as capacity, available bandwidth, and achievable throughput. The vast majority of these tools fall into one of four categories: Packet Pair (PP), Variable Packet Size (VPS), Self-Loading of Periodic Streams (SLoPS), or throughput approaches. In this study, seven popular bandwidth estimation tools (nettimer, pathrate, pathchar, pchar, clink, pathload, and iperf) belonging to these four well-known estimation techniques are presented and experimentally evaluated in a controlled testbed environment. Unlike previous studies in the literature, all tools have been uniformly classified and evaluated according to an objective and systematic classification and evaluation scheme. The performance comparison of the tools incorporates not only the estimation accuracy but also the probing time and the overhead caused.
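To illustrate the simplest of the four techniques, the sketch below applies the packet-pair principle used by tools such as pathrate and nettimer: two back-to-back packets of size L leave the narrow link spaced by L/C, so capacity can be estimated as C ≈ L / dispersion. The function name, the plain median filter, and the example numbers are assumptions for illustration; real tools use far more elaborate filtering.

```python
import statistics

def packet_pair_capacity(pkt_size_bytes, dispersions_s):
    """Estimate path capacity from packet-pair dispersion samples.

    Two back-to-back packets of size L exit the narrow link spaced by
    L/C seconds, so C ~= L / dispersion. Taking the median over many
    pairs reduces the bias that cross traffic adds to single samples.
    """
    d = statistics.median(dispersions_s)
    return 8 * pkt_size_bytes / d  # bits per second

# Example: 1500-byte pairs with ~0.12 ms median dispersion
# correspond to roughly a 100 Mbit/s narrow link.
print(packet_pair_capacity(1500, [0.00012, 0.00011, 0.00013, 0.00012]) / 1e6)
```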
Cancer is one of the leading causes of death worldwide [183], with lung tumors being the most frequent cause of cancer deaths in men as well as one of the most common cancers diagnosed in women [40]. As symptoms often arise only in advanced stages, an early diagnosis is especially important to ensure the best and earliest possible treatment. To achieve this, Computed Tomography (CT) scans are frequently used for tumor detection and diagnosis. We will present examples of publicly available CT image data of lung cancer patients and discuss possible methods for realizing a system for automated cancer diagnosis. We will also look at the recent SPIE-AAPM Lung CT Challenge [10] data set in detail and describe possible methods and challenges for image segmentation and classification based on this data set.
Queueing Theory
(2024)
Entrepreneurship education serves as a conduit for new venture creation, as it provides the knowledge and skills needed to increase the self-efficacy of individuals to start and run new businesses and to grow existing ones. This study therefore sought to assess the relationship between the approaches to the teaching of entrepreneurship and entrepreneurial intention in a cohort of 292 respondents consisting of students who have studied entrepreneurship at three selected universities. A structured questionnaire was used to obtain data from randomly selected students. The canonical correlation results indicate that education for and through entrepreneurship is the best approach to promoting entrepreneurial intensity among university students, if the aim of teaching entrepreneurship is to promote start-up activities. The findings provide valuable insights for institutions of higher learning and policy makers in Ghana with respect to the appropriate methodologies to be adopted in the teaching of entrepreneurship in universities.
As competition for tourists becomes more global, understanding and accommodating the needs of international tourists, with their different cultural backgrounds, has become increasingly important. This study highlights the variations in tourist industry service, particularly as they relate to different cultures. Specifically, service failures experienced by Japanese and German tourists in the U.S. were categorized using the Critical Incident Technique (CIT). The results were compared with earlier studies of service failures experienced by American consumers in the tourist industry. The sample consists of 128 Japanese and 94 “Germanic” (German, Austrian, Swiss-German) respondents. Both the Japanese and the German respondents rated “inappropriate employee behavior” as the most significant category of service failure. More than half of these respondents said that, because of the failure, they would avoid the offending U.S. business, a much stronger response than an American sample had reported in an earlier study. The implications for managers and researchers are discussed.
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including features such as reliable and precise navigation, flexible manipulation, and robust object recognition.
Estimation of Prediction Uncertainty for Semantic Scene Labeling Using Bayesian Approximation
(2018)
With the advancement in technology, autonomous and assisted driving are close to becoming reality. A key component of such systems is the understanding of the surrounding environment, which can be attained by performing semantic labeling of the driving scenes. Deep learning based models have been developed over the years that outperform classical image processing algorithms for the task of semantic labeling. However, the existing models only produce semantic predictions and do not provide a measure of uncertainty about those predictions. Hence, this work focuses on developing a deep learning based semantic labeling model that can produce semantic predictions and their corresponding uncertainties. Autonomous driving needs a model that operates in real time; however, the Full Resolution Residual Network (FRRN) [4] architecture, found to be the best-performing architecture during the literature search, cannot satisfy this condition. Hence, a smaller network, similar to FRRN, has been developed and used in this work. Based on the work of [13], the developed network is then extended by adding dropout layers, and the dropouts are used during testing to perform approximate Bayesian inference. Existing works on uncertainty do not have quantitative metrics to evaluate the quality of the uncertainties estimated by a model. Hence, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is proposed and used as an evaluation metric in this work. Further, a comparative analysis of the influence of dropout layer position, drop probability, and the number of samples on the quality of uncertainty estimation is performed. Finally, based on the insights gained from the analysis, a model with an optimal configuration of dropout is developed. It is evaluated on the Cityscapes dataset and shown to outperform the baseline model, with an AUC-ROC of about 90% compared to about 80% for the latter.
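The test-time mechanism described above (Monte Carlo dropout, following [13]) can be sketched in a few lines of PyTorch: keep the dropout layers active during inference, average the softmax outputs over several stochastic forward passes, and use the per-pixel predictive entropy as the uncertainty estimate. The function below is a generic sketch, not the thesis's network; `model` stands for any segmentation network containing nn.Dropout layers.

```python
import torch

def mc_dropout_predict(model, image, n_samples=20):
    """Approximate Bayesian inference via Monte Carlo dropout.

    Re-enables dropout at test time, averages softmax outputs over
    n_samples stochastic forward passes, and returns the per-pixel
    class prediction together with a predictive-entropy uncertainty map.
    """
    model.eval()
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=1) for _ in range(n_samples)
        ]).mean(dim=0)                     # mean softmax, shape (B, C, H, W)
    prediction = probs.argmax(dim=1)       # per-pixel class labels
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return prediction, entropy             # entropy acts as the uncertainty map
```

The resulting entropy map is the kind of score that can be ranked against per-pixel misclassification labels to compute the AUC-ROC metric proposed above.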
Despite perfect functioning of its internal components, a robot can be unsuccessful in performing its tasks because of unforeseen situations. These situations occur when the behavior of the objects in the robot’s environment deviates from its expected values. For robots, such deviations are exhibited in the form of unknown external faults which prohibit them from performing their tasks successfully. In this work we propose to use naive physics knowledge to reason about such faults in the robotics domain. We propose an approach that uses naive physics concepts to find information about the situations which result in a detected unknown fault. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties for reasoning about the detected fault. By interpreting the reasoning results, the robot finds information about the situations which can cause the fault. We apply the proposed approach to scenarios in which a robot performs manipulation tasks of picking and placing objects. Results of this application show that naive physics holds great promise for reasoning about unknown external faults in robotics.
Due to the use of fossil fuel resources, many environmental problems have been growing. Recent research therefore focuses on the use of environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers, and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that double-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity was discussed with regard to the biomass source, the pulping process, and the degree of purification. Lignins obtained from industrial black liquor were compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). DPPH inhibition is affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both the scavenging activity and the antimicrobial activity depend on the biomass source: Organosolv softwood > Kraft softwood > Organosolv grass. Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin leaching from the produced films affected the activity positively, and the chitosan addition enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food spoilage bacteria.
Are small and medium-sized enterprises (SMEs) already prepared for the digital transformation?
(2018)
A study conducted by the authors found clear indications that many small and medium-sized enterprises (SMEs) do not yet have sufficient maturity for the digital transformation. As a solution, it is proposed to develop an agile IT management concept in order to steer the IT domain dynamically and without the formal ballast of classical IT management.
Multi-merger scenarios as a challenge for IT controlling - checklists for IT integration
(2006)
Digitalization for small and medium-sized enterprises (SMEs): requirements for IT management
(2018)
Mergers and acquisitions take place all over the world and in many industries, typically motivated by corporate politics. While IT management is often not involved in the decision-making, it has to solve a wide range of problems in the post-merger phase. Indeed, merging two or more companies implies not only merging their core businesses, but also creating a new and efficiently integrated IT organisation from the individual ones, since persistence of the current IT organisations usually does not make sense. In addition, corporate management frequently imposes constraints, e.g., cost reductions, on the IT infrastructure. The principal critical success factor when merging IT organisations is the uninterrupted operation of the IT business, because a service gap is acceptable neither for in-house functional departments nor for external customers. Therefore, the IT rebuilding phase has to focus on IT services that facilitate the processes of functional departments, support processes, and processes of customers and suppliers, so that any transformation work is transparent to internal and external customers. In this article we describe a real-world but anonymized case study. Our goals are to highlight the points important for merging IT organisations, and to help decision-makers, particularly in the areas of IT organisation and IT personnel. We focus on the arising organisational and non-technical issues from a management perspective, i.e., the CIO's view, and provide checklists intended to help IT managers address the most pressing issues. To assist CIOs in surviving the post-merger phase, we give checklists for merging IT organisations, merging IT human resources, and IT budgets and reporting, and we assess activities in a merger scenario. IT hardware, software and IT infrastructure as well as running IT projects are not considered in this paper.
IT performance measurement is often associated by chief executive officers with IT cost cutting, although its purpose is to protect business processes from increasing IT costs; IT cost cutting alone only endangers the company’s efficiency. This view stigmatizes those who carry out IT performance measurement in companies as bean-counters. The present paper describes an integrated reference model for IT performance measurement based on a life cycle model and a performance-oriented framework. The model was created from a practical point of view; it is designed lean compared with other known concepts and is therefore very appropriate for small and medium enterprises (SMEs).
A plethora of architectural patterns and elements for developing service-oriented applications can be gathered from the state of the art. Most of these approaches are applicable merely to single-tenant applications; less methodical support is provided for scenarios in which multiple tenants with varying requirements access the same application stack concurrently. In order to fill this gap, both novel and existing architectural patterns and elements, as well as fundamental design decisions, must be considered and integrated into a framework that supports the development of multi-tenant applications. This paper addresses this demand and presents the SOAdapt framework, which promotes the development of adaptable multi-tenant applications based on a service-oriented architecture capable of incorporating the specific requirements of new tenants in a flexible manner.
Trueness and precision of milled and 3D printed root-analogue implants: A comparative in vitro study
(2023)
The need for innovation around the control functions of inverters is great. PV inverters were initially expected to be passive followers of the grid and to disconnect as soon as abnormal conditions occurred. Since future power systems will be dominated by generation and storage resources interfaced through inverters, these converters must move from following the grid to forming and sustaining it. As “digital natives”, PV inverters can also play an important role in the digitalisation of distribution networks. In this short review we identified a large potential to make the PV inverter the smart local hub in a distributed energy system. At the micro level, costs and coordination can be improved with bidirectional inverters between the AC grid and PV production, stationary storage, car chargers, and DC loads. At the macro level, the distributed nature of PV generation means that the same devices will support both the local distribution network and the global stability of the grid. Much success has been achieved in the former; the latter remains a challenge, in particular in terms of scaling. Yet there is some urgency in researching and demonstrating such solutions. And while digitalisation offers promise in all control aspects, it also raises significant cybersecurity concerns.
In 1991, researchers at the Center for the Learning Sciences at Carnegie Mellon University were confronted with the confusing question “where is AI” from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users’ awareness and of a mutual understanding of AI-enabled systems between designers and users, users’ informal theories about how a system works (“folk theories”) become inevitable, but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of the current users’ perception of AI is still missing. In this study, we introduce the term “Perceived AI” (PAI) as “AI defined from the perspective of its users”. We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users’ folk theories.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users; only this time they have to prove their very humanity, and there is no court or jury to judge, only an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing and argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We challenge this practice by offering a juxtaposition with the science-fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention: “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In two weeks of fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to ask why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users’ needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be designed as explanatory dialogs that account for the VA’s unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust, in order to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey a calibrated level of trust in interactions with AI? And (3) how can we develop a standardized framework to address new challenges?
In the fermentation process, sugars are transformed into lactic acid. pH meters have traditionally been used to monitor fermentation based on acidity. More recently, near-infrared (NIR) spectroscopy has proven to provide an accurate and non-invasive method to detect when the transformation of sugars into lactic acid is finished. This research proposes the use of simplified NIR spectroscopy based on multispectral optical sensors as a simpler and less expensive means of deciding when to end the fermentation process. The NIR spectra of milk and yogurt are compared to find and extract features that can be used to design a simple sensor to monitor the yogurt fermentation process. Multispectral images in four selected wavebands within the NIR spectrum are captured and show different spectral remission characteristics for milk, yogurt, and water, which supports the selection of these wavebands for milk and yogurt classification.
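As a sketch of how the four-waveband remission features could drive a simple sensor, the following nearest-centroid classifier assigns a sample to the class with the closest reference spectrum. The waveband count follows the text; the reference remission values and the class set are invented for illustration.

```python
import numpy as np

# Mean remission per class in four NIR wavebands (illustrative values only).
REFERENCE = {
    "milk":   np.array([0.82, 0.78, 0.74, 0.70]),
    "yogurt": np.array([0.88, 0.86, 0.83, 0.80]),
    "water":  np.array([0.10, 0.08, 0.06, 0.05]),
}

def classify(remission):
    """Return the class whose reference spectrum is closest (Euclidean)."""
    return min(REFERENCE, key=lambda k: np.linalg.norm(remission - REFERENCE[k]))

# A sample whose remission vector sits near the yogurt reference.
print(classify(np.array([0.87, 0.85, 0.82, 0.79])))  # -> "yogurt"
```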
The core concern of data protection is to shield natural persons from the adverse effects of storing and processing data concerning them. But many people do not seem to want to be protected at all. On the contrary, many end users consent "voluntarily", consciously or unconsciously, to extensive processing of their personal data. Why do people do this? Various causes are discussed (for example in [79]), including lack of information, lack of sensitivity, a feeling of helplessness, unwillingness to pay, and a lack of alternatives. Even if this is true in individual cases, privacy-friendly alternatives often do exist. For example, Threema is an alternative to WhatsApp (as an instant messaging app). Threema is considered compliant with the EU GDPR and functionally quite comparable to WhatsApp [62]. However, the current network size has become a decisive selection criterion: in January 2018, Threema had 4.5 million users [172], whereas WhatsApp had 1.5 billion [171]. This is an indication that WhatsApp has effectively become the de facto standard, and that it is hardly possible for an individual to persuade many others "to switch to another product. [...] For services with user numbers in the billions, one can speak of 'voluntariness' only to a limited extent." [9]
Information reliability and automatic computation are two important aspects that are continuously pushing the Web to become more semantic. Information uploaded to the Web should be reusable and automatically extractable by other applications, platforms, etc. Several tools exist to explicitly mark up Web content. Web services may also play a positive role in the automatic processing of Web content, especially when they act as flexible and agile agents. However, Web services themselves should be developed with semantics in mind: they should include and provide structured information to facilitate their use, reuse, composition, querying, etc. In this chapter, the authors focus on evaluating state-of-the-art semantic aspects and approaches in Web services. Ultimately, this contributes to the goal of Web knowledge management, execution, and transfer.
Software testing in a web services environment faces different challenges compared with testing in traditional software environments. Regression testing activities are triggered by software changes or evolutions. In web services, evolution is not a choice for service clients: they always have to use the currently deployed version of the software. In addition, test execution or invocation is expensive in web services, and hence providing algorithms to optimize test case generation and execution is vital. In this environment, we proposed several approaches for test case selection in web services regression testing. Testing in this new environment should evolve to become part of the service contract. Service providers should supply data or usage sessions that can help service clients reduce testing expenses by optimizing the test cases that are selected and executed.
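One way to operationalize the last point is a coverage-based selection over provider-supplied usage sessions: pick a small set of test cases that covers every changed service operation. The greedy set-cover sketch below is an illustrative strategy under that assumption, not the chapter's exact algorithm.

```python
def select_regression_tests(test_coverage, changed_ops):
    """Greedy selection of test cases covering changed service operations.

    test_coverage: dict mapping test-case id -> set of service operations
    it invokes (e.g. derived from provider usage sessions).
    changed_ops: set of operations modified in the new service version.
    Returns a small subset of test cases covering every changed operation.
    """
    uncovered = set(changed_ops)
    selected = []
    while uncovered:
        # Pick the test that covers the most still-uncovered operations.
        best = max(test_coverage, key=lambda t: len(test_coverage[t] & uncovered))
        if not test_coverage[best] & uncovered:
            break  # remaining changed operations are not covered by any test
        selected.append(best)
        uncovered -= test_coverage[best]
    return selected
```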
The gasotransmitter hydrogen sulphide decreases Na⁺ transport across pulmonary epithelial cells
(2012)
BACKGROUND AND PURPOSE: The transepithelial absorption of Na⁺ in the lungs is crucial for the maintenance of the volume and composition of the epithelial lining fluid. The regulation of Na⁺ transport is essential, because hypo- or hyperabsorption of Na⁺ is associated with lung diseases such as pulmonary oedema or cystic fibrosis. This study investigated the effects of the gaseous signalling molecule hydrogen sulphide (H₂S) on Na⁺ absorption across pulmonary epithelial cells. EXPERIMENTAL APPROACH: Ion transport processes were assessed electrophysiologically in Ussing chambers on H441 cells grown on permeable supports at the air/liquid interface and on native tracheal preparations of pigs and mice. The effects of H₂S were further investigated on Na⁺ channels expressed in Xenopus oocytes and on Na⁺/K⁺-ATPase activity in vitro. Membrane abundance of the Na⁺/K⁺-ATPase was determined by surface biotinylation and Western blot. Cellular ATP concentrations were measured colorimetrically, and cytosolic Ca²⁺ concentrations were measured with Fura-2. KEY RESULTS: H₂S rapidly and reversibly inhibited Na⁺ transport in all the models employed. H₂S had no effect on Na⁺ channels, whereas it decreased Na⁺/K⁺-ATPase currents. H₂S did not affect the membrane abundance of the Na⁺/K⁺-ATPase, its metabolic or calcium-dependent regulation, or its direct activity. However, H₂S inhibited basolateral calcium-dependent K⁺ channels, which consequently decreased Na⁺ absorption by H441 monolayers. CONCLUSIONS AND IMPLICATIONS: H₂S impairs pulmonary transepithelial Na⁺ absorption, mainly by inhibiting basolateral Ca²⁺-dependent K⁺ channels. These data suggest that the H₂S signalling system might represent a novel pharmacological target for modifying pulmonary transepithelial Na⁺ transport.
The vectorial transport of Na⁺ across epithelia is crucial for the maintenance of Na⁺ and water homeostasis in organs such as the kidneys, lung, or intestine. Dysregulated Na⁺ transport processes are associated with various human diseases such as hypertension, the salt-wasting syndrome pseudohypoaldosteronism type 1, pulmonary edema, cystic fibrosis, or intestinal disorders, which indicates that a precise regulation of epithelial Na⁺ transport is essential. Gasotransmitters are novel regulatory signaling molecules. There are currently three known gasotransmitters: nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H₂S). These molecules are endogenously produced in mammalian cells by specific enzymes and have been shown to regulate various physiological processes. There is a growing body of evidence indicating that gasotransmitters may also regulate Na⁺ transport across epithelia. This review will summarize the available data concerning the NO-, CO-, and H₂S-dependent regulation of epithelial Na⁺ transport processes and will discuss whether or not these mediators can be considered true physiological regulators of epithelial Na⁺ transport biology.
ENaC channels
(2023)
More than 25 years ago, it was a big surprise for physiologists that nitric oxide (NO) was identified as the endothelium-derived relaxing factor responsible for endothelium-induced smooth muscle relaxation (Ignarro et al., 1987). Until then, small gaseous molecules were simply regarded as byproducts of cellular metabolism that were unlikely to be of any physiological relevance. The discovery that NO is synthesized by specific enzymes (NO synthases) upon stimulation by specific, physiologically relevant stimuli (e.g., acetylcholine stimulation of endothelial cells), as well as the fact that it acts on specific cellular targets (e.g., soluble guanylate cyclase), set the course for numerous studies investigating the physiological roles of gaseous signaling molecules, in other words, gasotransmitters (Wang, 2002).
The development of pulmonary edema can be considered a combination of alveolar flooding via increased fluid filtration, impaired alveolar-capillary barrier integrity, and disturbed resolution due to decreased alveolar fluid clearance. An important mechanism regulating alveolar fluid clearance is sodium transport across the alveolar epithelium. Transepithelial sodium transport is largely dependent on the activity of sodium channels in alveolar epithelial cells. This paper describes how sodium channels contribute to alveolar fluid clearance under physiological conditions and how dysregulation of sodium channel activity might contribute to the pathogenesis of lung diseases associated with pulmonary edema. Furthermore, sodium channels as putative molecular targets for the treatment of pulmonary edema are discussed.
Carbon Monoxide Rapidly Impairs Alveolar Fluid Clearance by Inhibiting Epithelial Sodium Channels
(2009)
Carbon monoxide (CO) is currently being evaluated as a therapeutic modality in the treatment of patients with acute lung injury and acute respiratory distress syndrome. No study has assessed the effects of CO on transepithelial ion transport and alveolar fluid reabsorption, two key aspects of alveolocapillary barrier function that are perturbed in acute lung injury/acute respiratory distress syndrome. Both CO gas (250 ppm) and CO donated by the CO donor CO-releasing molecule (CORM)-3 (100 µM in epithelial lining fluid), applied to healthy, isolated, ventilated, and perfused rabbit lungs, significantly blocked ²²Na⁺ clearance from the alveolar compartment and blocked alveolar fluid reabsorption after fluid challenge. Apical application of two CO donors, CORM-3 or CORM-A1 (100 µM), irreversibly inhibited amiloride-sensitive short-circuit currents in H441 human bronchiolar epithelial cells and primary rat alveolar type II cells by up to 40%. Using a nystatin permeabilization approach, the CO effect was localized to amiloride-sensitive channels on the apical surface. This effect was abolished by hemoglobin, a scavenger of CO, and was not observed when inactive forms of the CO donors were employed. The effects of CO were not blocked by 8-bromoguanosine-3',5'-cyclic monophosphate, soluble guanylate cyclase inhibitors (methylene blue and 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one), or inhibitors of trafficking events (phalloidin oleate, MG-132, and brefeldin A), but the amiloride affinity of H441 cells was reduced after CO exposure. These data indicate that CO rapidly inhibits sodium absorption across the airway epithelium by cyclic guanosine monophosphate- and trafficking-independent mechanisms, which may rely on critical histidine residues in amiloride-sensitive channels or associated regulatory proteins on the apical surface of lung epithelial cells.
Nitric oxide (NO) is an important regulator of Na⁺ reabsorption by pulmonary epithelial cells and therefore of alveolar fluid clearance. The mechanisms by which NO affects epithelial ion transport are poorly understood and vary from model to model. In this study, the effects of NO on sodium reabsorption by H441 cell monolayers were studied in an Ussing chamber. Two NO donors, (Z)-1-[N-(3-aminopropyl)-N-(n-propyl)amino]diazen-1-ium-1,2-diolate and diethylammonium (Z)-1-(N,N-diethylamino)diazen-1-ium-1,2-diolate, rapidly, reversibly, and dose-dependently reduced amiloride-sensitive short-circuit currents across H441 cell monolayers. This effect was neutralized by the NO scavenger hemoglobin and was not observed with inactive NO donors. The effects of NO were not blocked by 8-bromoguanosine-3',5'-cyclic monophosphate or by soluble guanylate cyclase inhibitors (methylene blue and 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one) and were therefore independent of soluble guanylate cyclase signaling. NO targeted apical, highly selective, amiloride-sensitive Na⁺ channels in basolaterally permeabilized H441 cell monolayers. NO had no effect on the activity of the human epithelial sodium channel heterologously expressed in Xenopus oocytes. NO decreased Na⁺/K⁺-ATPase activity in apically permeabilized H441 cell monolayers. The inhibition of Na⁺/K⁺-ATPase activity by NO was reversed by mercury and was mimicked by N-ethylmaleimide, agents that reverse and mimic, respectively, the reaction of NO with thiol groups. Consistent with these data, S-NO groups were detected on the Na⁺/K⁺-ATPase α subunit in response to NO donor application, using a biotin-switch approach coupled to a Western blot. These data demonstrate that, in the H441 cell model, NO impairs Na⁺ reabsorption by interfering with the activity of highly selective Na⁺ channels and the Na⁺/K⁺-ATPase.
Introduction: After cellulose, lignin is the most abundant biopolymer on earth, accounting for 18-35% by weight of lignocellulosic biomass. Today, it is a by-product of the paper and pulping industry. Although lignin is available in huge amounts, mainly in the form of so-called black liquor produced via Kraft pulping, processes for the valorization of lignin are still limited [1]. Due to its hyperbranched, polyphenol-like structure, lignin has gained increasing interest as a biobased building block for polymer synthesis [2]. The present work focuses on the extraction and purification of lignin from industrial black liquor and the synthesis of lignin-based polyurethanes.