H-BRS Bibliography
This study examines the appropriation and use of voice assistants such as Google Assistant or Amazon Alexa in private households. Our research is based on ten in-depth interviews with users of voice assistants as well as the evaluation of selected interactions from the interaction history. Our results illustrate on which occasions voice assistants are used in the home, which strategies users have appropriated in interacting with them, how the interaction unfolds, and which difficulties arose during setup and use. A particular focus of the study lies on failed interactions, i.e. situations in which the interaction breaks down or threatens to break down. Our study shows that the assistants' potential often remains untapped, because interaction in more complex use cases frequently fails. Users therefore employ the voice assistant mainly for simple use cases, and new apps and use cases are never even tried out. An analysis of the appropriation strategies, for example a self-compiled list of commands, yields insights for the design of support tools as well as for the further development and optimization of voice-based human-machine interfaces.
2-Methyl-3-hydroxybutyryl-CoA dehydrogenase deficiency is caused by mutations in the HADH2 gene
(2003)
3D Printers as Sociable Technologies: Taking Appropriation Infrastructures to the Internet of Things
(2017)
3D time of flight distance measurement with custom solid state image sensors in CMOS, CCD technology
(2000)
Since we are living in a three-dimensional world, an adequate description of our environment for many applications includes the relative position and motion of the different objects in a scene. Nature has satisfied this need for spatial perception by providing most animals with at least two eyes. This stereo vision ability is the basis that allows the brain to calculate qualitative depth information of the observed scene. Another important parameter in the complex human depth perception is our experience and memory. Although it is far more difficult, a human being is even able to recognize depth information without stereo vision. For example, we can qualitatively deduce the 3D scene from most photos, assuming that the photos contain known objects [COR]. The acquisition, storage, processing and comparison of such a huge amount of information requires enormous computational power - with which nature fortunately provides us. Therefore, for a technical implementation, one should resort to other simpler measurement principles. Additionally, the qualitative distance estimates of such knowledge-based passive vision systems can be replaced by accurate range measurements.
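The active ranging principle that such ToF sensors use instead of knowledge-based vision can be sketched in a few lines. This is a generic illustration of time-of-flight geometry, not code from the paper; the 20 MHz modulation frequency is an assumed example value:

```python
# Generic time-of-flight ranging: distance follows from the round-trip time
# of emitted light (pulsed mode) or from the phase shift between emitted and
# received modulated light (continuous-wave mode).
import math

C = 299_792_458.0  # speed of light in m/s

def pulse_tof_distance(round_trip_time_s: float) -> float:
    """Distance from a pulsed measurement: light travels there and back."""
    return C * round_trip_time_s / 2.0

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from a continuous-wave phase measurement.
    Unambiguous only up to half the modulation wavelength."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A 20 MHz modulation gives an unambiguous range of c / (2 * 20e6) = 7.5 m;
# a phase shift of pi corresponds to half of that range.
print(cw_tof_distance(math.pi, 20e6))
```

Unlike the stereo or knowledge-based approaches discussed above, this measurement is direct and per pixel, which is what makes solid-state ToF imagers attractive.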
3D Time-of-Flight (ToF)
(2012)
3D Time-of-Flight (ToF)
(2015)
3D User Interfaces
(2005)
3D-Imaging
(2009)
Continuing the three successful "Usable Security und Privacy" workshops of the past three years, a fourth full-day scientific workshop at this year's Mensch und Computer conference will present and discuss six to eight papers in the field of usable security and privacy. Contributions from research and practice are envisaged that address new user-centred approaches as well as practically relevant solutions for the user-centred development and design of digital protection mechanisms. The workshop aims to further develop the established forum in which experts from different domains, e.g. usability engineering and security engineering, can exchange ideas across disciplines. The organizers run the workshop as a classical scientific workshop: a programme committee reviews the submissions and selects the contributions accepted for presentation. These are also published in the poster and workshop proceedings of Mensch und Computer 2018.
Recent years have seen extensive adoption of domain generation algorithms (DGA) by modern botnets. The main goal is to generate a large number of domain names and then use a small subset for actual C&C communication. This makes DGAs very compelling for botmasters to harden the infrastructure of their botnets and make it resilient to blacklisting and attacks such as takedown efforts. While early DGAs were used as a backup communication mechanism, several new botnets use them as their primary communication method, making it extremely important to study DGAs in detail.
In this paper, we perform a comprehensive measurement study of the DGA landscape by analyzing 43 DGA-based malware families and variants. We also present a taxonomy for DGAs and use it to characterize and compare the properties of the studied families. By reimplementing the algorithms, we pre-compute all possible domains they generate, covering the majority of known and active DGAs. Then, we study the registration status of over 18 million DGA domains and show that corresponding malware families and related campaigns can be reliably identified by pre-computing future DGA domains. We also give insights into botmasters’ strategies regarding domain registration and identify several pitfalls in previous takedown efforts of DGA-based botnets. We will share the dataset for future research and will also provide a web service to check domains for potential DGA identity.
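The pre-computation idea above works because a DGA is deterministic given its seed. The following hypothetical, minimal DGA (the family name and all parameters are invented for illustration; real DGAs vary widely) shows how a date-seeded generator yields the same candidate domains for attacker and defender alike:

```python
# A toy date-seeded DGA: hashing a date-derived state repeatedly produces a
# deterministic list of candidate C&C domains. A defender who reimplements
# the algorithm can pre-compute exactly the same list in advance.
import hashlib
from datetime import date

def generate_domains(seed_date, count=5, tld=".com"):
    domains = []
    state = "{}-examplebotnet".format(seed_date.isoformat()).encode()
    for i in range(count):
        state = hashlib.sha256(state + bytes([i])).digest()
        # Map the first 12 hash bytes to lowercase letters.
        name = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(name + tld)
    return domains

print(generate_domains(date(2016, 1, 1)))
```

Because the output depends only on the date and the hard-coded seed string, registering or sinkholing tomorrow's domains today is exactly the strategy the study exploits.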
The aim of design science research (DSR) in information systems is the user-centred creation of IT artifacts for specific social environments. For the culture research in the field that a proper localization of IT artifacts requires, models and research approaches are usually adopted from the social sciences. Descriptive, dimension-based culture models are most commonly applied for this purpose; they assume culture to be a national phenomenon and tend to reduce it to basic values. Such models are useful in behavioural culture research, which aims to isolate, describe and explain culture-specific attitudes and characteristics within a selected society. In contrast, since concrete decisions for artifact design must be deduced from them, research results in DSR need to go beyond this aim. As its hypothesis, this contribution questions the general applicability of such generic culture-dimension models for DSR and focuses on their theoretical foundation, which goes back to Hofstede’s conceptual Onion Model of Culture. The literature-based analysis applied here confirms the hypothesis. Consequently, an alternative conceptual culture model is introduced and discussed as a theoretical foundation for culture research in DSR.
A trace of the execution of a concurrent object-oriented program can be displayed in two dimensions as a diagram of a non-metric finite geometry. The actions of a program are represented by points, its objects and threads by vertical lines, its transactions by horizontal lines, its communications and resource sharing by sloping arrows, and its partial traces by rectangular figures.
A framework of decision‐support systems in advanced manufacturing enterprises ‐ a systems view
(1997)
This paper describes a dynamic, model-based approach for estimating intensities of 22 out of 44 different basic facial muscle movements. These movements are defined as Action Units (AU) in the Facial Action Coding System (FACS) [1]. The maximum facial shape deformations that can be caused by the 22 AUs are represented as vectors in an anatomically based, deformable, point-based face model. The amount of deformation along these vectors represents the AU intensities, whose valid range is [0, 1]. An Extended Kalman Filter (EKF) with state constraints is used to estimate the AU intensities. The focus of this paper is on the modeling of constraints in order to impose the anatomically valid AU intensity range of [0, 1]. Two process models are considered, namely constant velocity and driven mass-spring-damper. The results show the temporal smoothing and disambiguation effect of the constrained EKF approach, when compared to the frame-by-frame model fitting approach ‘Regularized Landmark Mean-Shift (RLMS)’ [2]. This effect led to more than a 35% increase in performance on a database of posed facial expressions.
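The constraint idea above can be illustrated with a deliberately simplified sketch. This is not the paper's model (which uses constant-velocity and mass-spring-damper process models over many AUs); it is a scalar Kalman filter with the posterior mean projected back onto [0, 1], the simplest way to impose such a state constraint:

```python
# Toy scalar Kalman filter tracking one AU intensity with a random-walk
# process model; after each measurement update, the estimate is clipped
# (projected) onto the anatomically valid range [0, 1].
def constrained_kalman_step(x, p, z, q=1e-3, r=1e-2):
    """One predict/update cycle: state mean x, variance p, measurement z."""
    p = p + q                      # predict: add process noise
    k = p / (p + r)                # Kalman gain
    x = x + k * (z - x)            # measurement update
    p = (1.0 - k) * p
    x = min(1.0, max(0.0, x))      # project onto the constraint set [0, 1]
    return x, p

x, p = 0.5, 1.0
for z in [0.9, 1.1, 1.3, 0.8]:     # noisy intensities, some out of range
    x, p = constrained_kalman_step(x, p, z)
print(round(x, 3))
```

Even when measurements exceed 1.0, the filtered estimate stays anatomically valid while still smoothing over time, which is the qualitative effect the paper reports.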
The Internet Engineering Task Force (IETF) is currently working on the development of Differentiated Services (DiffServ). DiffServ seems to be a promising technology for next-generation IP networks supporting Quality of Service (QoS). Emerging applications such as IP telephony and time-critical business applications can benefit significantly from the DiffServ approach, since the current Internet often cannot provide the required QoS. This paper describes an implementation of Differentiated Services for Linux routers and end systems. The implementation is based on the Linux traffic control package and is, therefore, very flexible. It can be used in different network environments as a first-hop, boundary or interior router for Differentiated Services. In addition to the implementation architecture, the paper describes performance results demonstrating the usefulness of the DiffServ concept in general and the implementation in particular.
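For flavour, DSCP-based classification with the Linux traffic control package looks roughly like the following. This is an illustrative config fragment in the spirit of the implementation described above, not taken from the paper; the interface name and rates are placeholders:

```shell
# Hierarchical token bucket as root qdisc; unclassified traffic goes to 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 2mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 8mbit
# Steer the DiffServ EF codepoint (DSCP 46, i.e. ToS byte 0xb8 under the
# 0xfc DSCP mask) into the reserved-bandwidth class.
tc filter add dev eth0 parent 1: protocol ip u32 match ip tos 0xb8 0xfc flowid 1:10
```

The same filter/class machinery lets one host act as boundary router (marking and policing) or interior router (per-hop scheduling only), which is the flexibility the abstract refers to.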
A method for minimum range extension with improved accuracy in triangulation laser range finder
(2011)
A Method of Lines Flux-Difference Splitting Finite Volume Approach for 1D and 2D River Flow Problems
(2001)
In this paper, we present a solution for testing cultural influences on e-learning in a global context. Based on a metadata approach, we show how cultural influence factors, specifically, can be determined in order to transfer and adapt learning environments. We present a method for validating those influence factors, both to improve the dynamic metadata specification and for use in the development of (international) e-learning scenarios.
One of the biggest challenges faced by many tech start-ups from developed markets is to have validated market-fit products/services and to see their solutions implemented. In several sectors, stringent regulations and the law of the handicap of a head start at home can be hurdles that limit the development and even the survival potential of these start-ups. Tech start-ups seeking implementation, learning, and legitimacy may find a solution in expanding into emerging markets. Emerging markets offer business opportunities in sectors in need of new technologies and are “fertile grounds” for developing and testing internationalisation business models. We present here a process designed to help tech start-ups identify, access, shape and seize these opportunities and to overcome both their own specificities and those of emerging markets. The three phases of the proposed process cover the entry-node concept, partnership, and the joint development of business, operating and revenue models. The Design Science Research paradigm is used for the design and evaluation of the process. To show the relevance of this process, a case study on the expansion into Morocco of a Dutch start-up active in e-health is used. The study shows the importance of the process for embeddedness in a locally relevant value network with a relevant adopter system, a key enabler for achieving time- and cost-effective expansion in that specific business and institutional context. A pilot to assess the proposed models and evidence of benefits is under development. To boost their chances of growth, tech start-ups from developed markets should consider expansion into emerging markets in their strategy. It would be beneficial for policy makers to adopt a strategy of assisting tech start-ups in accessing value networks in emerging markets. It is also important for policy makers from emerging markets to consider developing schemes to attract tech start-ups from developed markets.
Mobile technologies have evolved into a means of gaining access to information for learning. Their application in higher education is still a novel concept, particularly in underdeveloped countries. This study explores the views of doctoral students regarding their learning experiences with mobile technologies. Focus group interviews were conducted with 24 doctoral students from 3 different academic institutions. The participants’ responses were recorded, transcribed, and analyzed to draw conclusions. According to the findings of this study, mobile devices play an important part in the learning experiences of doctoral students. The participating students engaged in collaborative learning using mobile technologies. Given the benefits of adopting mobile technologies for learning activities, academic institutions should focus on training faculty members to use them to involve students in the learning process. The implications of this study call for the continued advancement of mobile technologies to facilitate effective learning experiences for the multitude of mobile learners in developing countries. Another implication is that academic institutions, in collaboration with libraries, should develop a user-friendly mobile app linked to the library management system. Such an application would allow students to optimally use their smartphones and tablets to search the library’s resources from their mobile devices. Training should be offered to teaching faculty members so that they come to terms with the benefits of mobile technologies for learning activities.
Striated muscle contraction is regulated by the translocation of troponin-tropomyosin strands over the thin filament surface. Relaxation relies partly on highly-favorable, conformation-dependent electrostatic contacts between actin and tropomyosin, which position tropomyosin such that it impedes actomyosin associations. Impaired relaxation and hypercontractile properties are hallmarks of various muscle disorders. The α-cardiac actin M305L hypertrophic cardiomyopathy-causing mutation lies near residues that help confine tropomyosin to an inhibitory position along thin filaments. Here, we investigate M305L actin in vivo, in vitro, and in silico to resolve emergent pathological properties and disease mechanisms. Our data suggest the mutation reduces actin flexibility and distorts the actin-tropomyosin electrostatic energy landscape that, in muscle, result in aberrant contractile inhibition and excessive force. Thus, actin flexibility may be required to establish and maintain interfacial contacts with tropomyosin as well as facilitate its movement over distinct actin surface features and is, therefore, likely necessary for proper regulation of contraction.
Today’s computer systems face a vast array of severe threats posed by automated attacks performed by malicious software as well as manual attacks by individual humans. These attacks not only differ in their technical implementation but may also be location-dependent. Consequently, it is necessary to join the information from heterogeneous and distributed attack sensors in order to acquire comprehensive information on current ongoing cyber attacks.
Currently, there are many research activities dealing with gamma titanium aluminide (γ-TiAl) alloys as new materials for low pressure turbine (LPT) blades. Even though the scatter in mechanical properties of such intermetallic alloys is more pronounced than in conventional metallic alloys, stochastic investigations on γ-TiAl alloys are very rare. For this reason, we analyzed the scatter in static and dynamic mechanical properties of the cast alloy Ti-48Al-2Cr-2Nb. It was found that this alloy shows a size effect in strength which is less pronounced than the size effect of brittle materials. A weakest-link approach is enhanced to describe a scalable size effect under multiaxial stress states and implemented in a post-processing tool for reliability analysis of real components. The presented approach is a first applicable reliability model for semi-brittle materials. The developed reliability tool was integrated into a multidisciplinary optimization of the geometry of an LPT blade. Some processes of the optimization were distributed in a wide area network, so that specialized tools for each discipline could be employed. The optimization results show that it is possible to increase the aerodynamic efficiency and the structural mechanics reliability at the same time, while ensuring the blade can be manufactured in an investment casting process.
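The size effect behind the weakest-link idea can be made concrete with the classical two-parameter Weibull model. This is a hedged illustration only; the paper's enhanced model for semi-brittle TiAl under multiaxial stress is more elaborate, and the numbers below are invented:

```python
# Classical Weibull weakest-link model: the failure probability of a
# component scales with its stressed volume, so larger specimens fail at
# lower stresses on average.
import math

def weibull_failure_prob(sigma, sigma0, m, volume_ratio=1.0):
    """P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m) for stress sigma,
    scale sigma0, Weibull modulus m, and volume ratio V/V0."""
    return 1.0 - math.exp(-volume_ratio * (sigma / sigma0) ** m)

# Doubling the stressed volume raises the failure risk at the same stress:
p_small = weibull_failure_prob(300.0, 400.0, m=10, volume_ratio=1.0)
p_large = weibull_failure_prob(300.0, 400.0, m=10, volume_ratio=2.0)
print(p_small < p_large)  # True
```

A "less pronounced" size effect, as reported for Ti-48Al-2Cr-2Nb, corresponds to the volume term entering more weakly than this linear scaling in V/V0.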
A structural mapping of mutations causing succinyl-CoA:3-ketoacid CoA transferase (SCOT) deficiency
(2013)
Succinyl-CoA:3-ketoacid CoA transferase (SCOT) deficiency is a rare inherited metabolic disorder of ketone metabolism, characterized by ketoacidotic episodes and often permanent ketosis. To date there are ~20 disease-associated alleles on the OXCT1 gene that encodes the mitochondrial enzyme SCOT. SCOT catalyzes the first, rate-limiting step of ketone body utilization in peripheral tissues, by transferring a CoA moiety from succinyl-CoA to form acetoacetyl-CoA, for entry into the tricarboxylic acid cycle for energy production. We have determined the crystal structure of human SCOT, providing a molecular understanding of the reported mutations based on their potential structural effects. An interactive version of this manuscript (which may contain additional mutations appended after acceptance) may be found at the web address:
http://www.thesgc.org/jimd/SCOT
The article explores SME (Small and Medium-Sized Enterprises) brand strategies as a means to position and successfully engage in competitive markets. A derived typology of brand strategy types deals with social profiling and sheds light on brand strategy internalization of two current managerial paradigms: sustainability and co-creation. N = 895 German SME wineries were examined, drawing on a netnographic analysis of predominantly websites and social media interactions. A two-step clustering method thereby identified eight winery SME brand strategy types. The importance of sustainability across the identified eight brand strategy types is significant. Co-creation turned out to be a key profiling trait characterizing one brand strategy type. The typology illustrates strategic richness, with brand strategies leaning predominantly on traditional values, on sustainability, on external reputation, or on more innovative customer-centric concepts such as co-creation. Hereby, the typology and the identified brand levers invite the strategic design of brand management, governance, and sustainability. Wineries which focus on traditional positioning and legitimacy were found to be cautious in deploying co-creation through social media. Winery brands that are characterized by engagement in digital co-creation apparently either tend to expand their scope or partially combine it with traditional values, making them the most diverse type identified. Sustainability obviously needs to be addressed by all brand strategies. Despite the industry and country focus, the analyses illustrate the relevance of socially oriented profiling and highlight that sustainability has reached the status of a fundamental business approach while still allowing differentiation. Furthermore, the business models of the SMEs need to deliver the communicated values.
We present a universal modular robot architecture. A robot consists of the following intelligent modules: central control unit (CCU), drive, actuators, a vision unit and sensor input unit. Software and hardware of the robot fit into this structure. We define generic interface protocols between these units. If the robot has to solve a new application and is equipped with a different drive, new actuators and different sensors, only the program for the new application has to be loaded into the CCU. The interfaces to the drive, the vision unit and the other sensors are plug-and-play interfaces. The only constraint for the CCU-program is the set of commands for the actuators.
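The plug-and-play interface idea above can be sketched as a common module protocol. All class and method names here are invented for illustration; the paper's actual interface protocols are not reproduced:

```python
# Hypothetical sketch of a generic module interface: the CCU program talks
# to any unit (drive, actuator, sensor) through one protocol, so swapping a
# module does not require changing the CCU code, only the application program.
from abc import ABC, abstractmethod

class Module(ABC):
    @abstractmethod
    def command(self, name, **params):
        """Execute a named command with keyword parameters."""

class Drive(Module):
    def command(self, name, **params):
        # A real drive would translate this into motor control.
        return "drive:{}:{}".format(name, params)

class CCU:
    """Central control unit: routes commands to plugged-in modules."""
    def __init__(self, modules):
        self.modules = modules  # plug-and-play: any Module implementation works

    def run(self, unit, name, **params):
        return self.modules[unit].command(name, **params)

ccu = CCU({"drive": Drive()})
print(ccu.run("drive", "move", speed=0.5))
```

The only coupling left is the command vocabulary of the actuators, mirroring the paper's constraint that the CCU program must know the actuator command set.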
AAV-encoded expression of TRAIL in experimental human colorectal cancer leads to tumor regression
(2004)
Ableitung von Klassendiagrammen für objektorientierte Modelle - Kommunikationsorientierte Methode
(1997)
Abstandsmeßsystem
(2017)
Hydrogen sulfide (H2S) is well known as a highly toxic environmental chemical threat. Prolonged exposure to H2S can lead to the formation of pulmonary edema. However, the mechanisms of how H2S facilitates edema formation are poorly understood. Since edema formation can be enhanced by an impaired clearance of electrolytes and, consequently, fluid across the alveolar epithelium, it was questioned whether H2S may interfere with transepithelial electrolyte absorption. Electrolyte absorption was electrophysiologically measured across native distal lung preparations (Xenopus laevis) in Ussing chambers. The exposure of lung epithelia to H2S decreased net transepithelial electrolyte absorption. This was due to an impairment of amiloride-sensitive sodium transport. H2S inhibited the activity of the Na+/K+-ATPase as well as lidocaine-sensitive potassium channels located in the basolateral membrane of the epithelium. Inhibition of these transport molecules diminishes the electrochemical gradient which is necessary for transepithelial sodium absorption. Since sodium absorption osmotically facilitates alveolar fluid clearance, interference of H2S with the epithelial transport machinery provides a mechanism which enhances edema formation in H2S-exposed lungs.
Adaptability as a Special Demand on Open Educational Resources: The Cultural Context of e-Learning
(2011)
Producing and providing Open Educational Resources (OERs) is driven by the concepts of openness and sharing. Although a lot of free high-quality resources are already available, practitioners often rewrite learning resources rather than creatively embedding (and thus reusing) existing OERs. In this paper, we analyse the reasons for this in two different educational contexts. As a result of this analysis, we found that uncertainty about possible adaptation needs is one of the major barriers. In order to overcome this barrier and make different learning contexts comparable, we analysed the context of learners and in particular, in the research project ‘Learning Culture’, we investigated the field of culturally motivated expectations and attitudes of learners. This paper shows the results of this research project and discusses which cultural issues should be taken into consideration when OERs are to be adapted from one cultural context to another.
Adaptation of e-Learning Environments: Determining National Differences through Context Metadata
(2008)
The paper shows how existing e-learning modules can be internationalized using structured information on the context and, specifically, the culture. Reusing e-learning content is a promising concept for internationalization and cross-cultural purposes. However, most adaptation efforts are limited to pure language translation.
As the only alternative is rewriting, reusability allows a massive cost reduction by implementing and adapting already established courses, for example in developing countries at low cost. Our approach provides a basis for international and cross-cultural adaptation. In the approach, we identify, collect and store as many parameters about the source and target context and culture as are available. After comparing the contexts, we determine adaptation needs by analyzing the impacting differences.
To implement this approach on a large scale, we plan a public database containing the necessary information for the comparison process. In our research, we have identified a set of around 170 parameters describing national and, more specifically, cultural attributes related to various situations. Utilizing those in an adequate way will lead to an easier and more efficient adaptation process.
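The comparison step can be sketched as a diff over context metadata. The parameter names and values below are invented for illustration; they are not taken from the project's actual catalogue of ~170 parameters:

```python
# Minimal context comparison: parameters whose values differ between the
# source and target context mark the potential adaptation needs of a course.
def context_differences(source, target):
    """Return {parameter: (source_value, target_value)} for every parameter
    present in both contexts whose values differ."""
    shared = source.keys() & target.keys()
    return {k: (source[k], target[k]) for k in shared if source[k] != target[k]}

source_ctx = {"language": "de", "power_distance": "low", "bandwidth": "high"}
target_ctx = {"language": "en", "power_distance": "low", "bandwidth": "low"}
print(context_differences(source_ctx, target_ctx))
```

In the envisioned public database, each differing parameter would then be mapped to concrete adaptation actions (translation, media re-encoding, didactic changes, and so on).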
Online media consumption is the main driving force behind the recent growth of the Web. As real-time media in particular is becoming accessible from a wide range of devices, with contrasting screen resolutions, processing resources and network connectivity, a necessary requirement is to provide users with a seamless multimedia experience at the best possible quality, and hence to adapt to the specific device and network conditions. This paper introduces a novel approach for adaptive media streaming in the Web. Despite the pervasive pull-based designs built on HTTP, this paper builds upon a Web-native push-based approach by which both the communication and processing overheads are reduced significantly in comparison to the pull-based counterparts. In order to maintain these properties when enhancing the scheme with adaptation features, server-side monitoring and control need to be developed as a consequence. Such an adaptive push-based media streaming approach is introduced as the main contribution of this work. Moreover, the obtained evaluation results provide evidence that with adaptive push-based media delivery, on the one hand, an equivalent quality of experience can be provided at lower costs than by adopting pull-based media streaming. On the other hand, improved responsiveness in switching between quality levels can be obtained at no extra cost.
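A server-side adaptation rule of the kind the approach requires can be sketched as follows. This toy rule (throughput-based rate selection with a safety margin) is a generic illustration, not the paper's actual monitoring and control logic, and the bitrate ladder is an assumed example:

```python
# Toy server-side adaptation: pick the highest bitrate representation that
# fits within a safety fraction of the throughput measured for the client.
def select_quality(throughput_kbps, bitrates_kbps, safety=0.8):
    """Return the best representation whose bitrate stays below
    safety * throughput; fall back to the lowest representation."""
    feasible = [b for b in sorted(bitrates_kbps) if b <= safety * throughput_kbps]
    return feasible[-1] if feasible else min(bitrates_kbps)

ladder = [350, 800, 1500, 3000]        # example bitrate ladder in kbit/s
print(select_quality(2000, ladder))    # 1500: 3000 exceeds 0.8 * 2000
```

In a push-based design this decision lives on the server, which is why the paper argues that quality switches can be more responsive than in client-driven pull-based streaming.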