Fachbereich Wirtschaftswissenschaften
Queueing Theory (2024)
In the last two decades, studies analysing the political economy of sustainable energy transitions have become increasingly available. Yet very few attempts have been made to synthesize the factors discussed in this growing literature. This paper reviews the extant empirical literature on the political economy of sustainable energy transitions. Using a well-defined search strategy, a total of 36 empirical contributions covering the period 2008 to 2022 are reviewed in full text. Overall, the findings highlight the role of vested interests, advocacy coalitions and green constituencies, path dependency, external shocks, the policy and institutional environment, political institutions, and fossil fuel resource endowments as the major political economy factors influencing sustainable energy transitions in both high-income countries and low- and middle-income countries. In addition, the paper highlights and discusses critical knowledge gaps in the existing literature and provides suggestions for a future research agenda.
Mergers and acquisitions take place all over the world and in many industries, typically motivated by corporate politics. While IT management is often not involved in the decision-making, it has to solve a wide range of problems in the post-merger phase. Indeed, merging two or more companies implies not only merging their core businesses, but also creating a new, efficiently integrated IT organisation from the individual ones, since retaining the existing IT organisations usually does not make sense. In addition, corporate management frequently imposes constraints, e.g., cost reductions, on the IT infrastructure. The principal critical success factor when merging IT organisations is the uninterrupted operation of the IT business, because a service gap is acceptable neither for in-house functional departments nor for external customers. Therefore, the IT rebuilding phase has to focus on IT services that facilitate the processes of functional departments, support processes, and the processes of customers and suppliers, so that any transformation work is transparent to internal and external customers. In this article we describe a real-world but anonymized case study. Our goals are to highlight the points important for merging IT organisations and to help decision-makers, particularly in the areas of IT organisation and IT personnel. We focus on the arising organisational and non-technical issues from a management perspective, i.e., the CIO's view, and provide checklists intended to help IT managers address the most pressing issues. To help CIOs survive the post-merger phase, we give checklists for merging IT organisations, checklists for merging IT human resources, checklists for IT budgets and reporting, and an assessment of activities in a merger scenario. IT hardware, software and IT infrastructure, as well as running IT projects, are not considered in this paper.
IT performance measurement is often associated by chief executive officers with IT cost cutting, although it actually protects business processes from rising IT costs; IT cost cutting alone only endangers the company's efficiency. This attitude stigmatizes those who carry out IT performance measurement in companies as bean-counters. The present paper describes an integrated reference model for IT performance measurement based on a life cycle model and a performance-oriented framework. The presented model was created from a practical point of view. It is lean compared with other known concepts and is therefore well suited to small and medium-sized enterprises (SMEs).
In 1991, researchers at the Center for the Learning Sciences at Carnegie Mellon University were confronted with the confusing question "Where is the AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness of AI-enabled systems, and of a mutual understanding between designers and users, users' informal theories about how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI), defined as "AI from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research towards a better understanding of PAI and users' folk theories.
For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof falls on users, only this time they have to prove their very humanity, and there is no court or jury to judge them, only an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We challenge this thesis by offering a juxtaposition with the science-fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.
Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI on appropriation, our work pinpoints an area that needs more attention: "outdated technologies." In other words, we assert that design practices can profit as much from imaginaries of the future as they can from critically reassessing artefacts from the past. In a two-week field study with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.
When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to inquire about why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users' needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogs that account for a VA's unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?
This paper gives an overview of the development of Fair Trade in six European countries: Austria, France, Germany, the Netherlands, Switzerland and the United Kingdom. After a description of the food retail industry and its market structures in these countries, the main European Fair Trade organizations are analyzed with regard to their role within the Fair Trade system. The following part deals with the development of Fair Trade sales in general and with respect to the products coffee, tea, bananas, fruit juice and sugar. An overview of the main activities of national Fair Trade organizations, e.g. public relations activities, completes the analysis. This study shows the enormous upswing of Fair Trade during the last decade and the reasons for this development. Nevertheless, it comes to the conclusion that Fair Trade is still far from being an essential part of the food retail industry in Europe.
Cooperation between researchers and practitioners during the different stages of the research process is promoted because it can benefit both society and research supporting processes of 'transformation'. While acknowledging the important potential of research–practice collaborations (RPCs), this paper reflects on RPCs from a political-economic perspective in order to also address potential unintended adverse effects on knowledge generation due to divergent interests, incomplete information or the unequal distribution of resources. Asymmetries between actors may induce distorted and biased knowledge and even help produce or exacerbate existing inequalities. Potential merits and limitations of RPCs therefore need to be gauged. Taking RPCs seriously requires paying attention to these possible tensions, both in general and with respect to international development research in particular: on the one hand, there are attempts to contribute to societal change and ethical concerns of equity at the heart of international development research, and on the other hand, there is a heightened risk of encountering such asymmetries.
Over the past two decades, many low- and middle-income countries worldwide have started to extend the coverage and improve the functioning of public social protection systems. The research program on international policy diffusion provides empirical evidence that, apart from domestic factors, international interdependencies also matter for national policy change in social protection. However, little is known about the governance structures mediating international policy diffusion in social protection.
Public preferences (2021)
For reforms to be acceptable and sustainable in the long run, they should be aligned with public preferences. ‘Preferences’ is a technical term used in social sciences or humanities including for example disciplines such as economics, philosophy or psychology. Broadly speaking, preferences refer to an individual’s judgements on liking one alternative more than others. More specifically, preferences are ‘subjective comparative evaluations, in the form of “Agent prefers X to Y”’ (Hansson and Grüne-Yanoff 2018). Here, we are particularly interested in people’s policy preferences concerning social protection, an area which deserves more attention in policy debates and research.
Over the past two decades many governments of low and middle income countries have started to introduce social protection measures or to extend the coverage and improve the functioning of public social protection systems. These reforms are a "global phenomenon" and can be observed in many African, Asian and Latin American countries. This paper focuses on international determinants for policy change within social protection by assessing the state of the art of both policy diffusion and policy transfer studies. Empirical studies of policy transfer and diffusion in the field of social protection are furthermore assessed in light of the theoretical background.
The paper examines the effectiveness of transgovernmental policy networks as a governance structure for policy diffusion. The analysis is based on a survey of 50 social protection policy makers and technical practitioners who are country delegates to transgovernmental policy networks in the policy area of social protection. The paper provides anecdotal empirical evidence that policy networks contribute to policy diffusion by inducing mutual learning processes.
Political economy analyses of recent social protection reforms in Asian, African and Latin American countries have increased over the last few years. Yet most contributions focus on one social protection mechanism only and do not provide a comparative approach across policy areas. In addition, most studies are empirical, with no or very limited theoretical linkages. This paper aims to explain multiple trajectories of social protection reform processes by looking at cash transfers and social health protection policies in Kenya. It develops a taxonomy and suggests a conceptual framework to assess and explain reform dynamics across different social protection pillars. In order to allow for a more differentiated typology and to enable us to understand different reform dynamics, the article uses the approach of gradual institutional change. While existing approaches to institutional change mostly focus on change prompted by exogenous shocks or environmental shifts, this approach takes account of both exogenous and endogenous sources of change.
There has been growing interest in taste research in the HCI and CSCW communities. However, the focus lies more on stimulating the senses, while the socio-cultural aspects have received less attention. Yet individual taste perception is mediated through social interaction and collective negotiation and does not depend on physical stimulation alone. We therefore study the digital mediation of taste by drawing on ethnographic research on four online wine tastings and one self-organized event. We investigated the materials, associated meanings, competences, procedures, and engagements that shaped the performative character of tasting practices. We illustrate how the tastings are built around the taste-making process and how online contexts differ in providing a more diverse and distributed environment. We then explore the implications of our findings for the further mediation of taste as a social and democratized phenomenon through online interaction.
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two co-design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalizing features leads to reflection on the individual taste experience. The self-reflection enabled by our design allows consumers to develop their taste competencies and thus strengthen their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a sensory phenomenon.
Well-prepared dried serum spots can be attractive alternatives to frozen serum samples for shelving specimens in a medical or research center's biobank and for mailing freshly prepared serum to specialized laboratories. During the pre-analytical phase, complications can arise that are often challenging to identify or are entirely overlooked. These complications can lead to reproducibility issues, which can be avoided in serum protein analysis by implementing optimized storage and transfer procedures. With a method that ensures accurate loading of filter paper discs with donor or patient serum, we fill a gap in dried serum spot preparation and subsequent serum analysis. Pre-punched filter paper discs with a diameter of 3 mm are loaded within seconds in a highly reproducible fashion (approximately 10% standard deviation) when fully submerged in 10 μl of serum, a procedure we name the "Submerge and Dry" protocol. Dried serum spots prepared this way can store several hundred micrograms of proteins and other serum components. Serum-borne antigens and antibodies are reproducibly released into 20 μl of elution buffer in high yields (approximately 90%). Antigens stored in and eluted from dried serum spots retained their epitopes, and antibodies their antigen-binding abilities, as assessed by SDS-PAGE, 2D gel electrophoresis-based proteomics, and Western blot analysis, suggesting pre-punched filter paper discs as a handy solution for serological tests.
Although most individuals who gamble do so without any adverse consequences, some individuals develop a recurrent, maladaptive pattern of gambling behaviour, often called pathological gambling or gambling disorder, that is associated with financial losses, disruption of family and interpersonal relationships, and co-occurring psychiatric disorders. Identifying whether different types of gambling modalities vary in their ability to lead to maladaptive patterns of gambling behaviour is essential for developing public policies that seek to balance access to gambling opportunities with minimizing the risk of the potential adverse consequences of gambling behaviour. Until recently, assessing the risk potential of different types of gambling products was nearly impossible. ASTERIG, initially developed in Germany in 2006-2010, is an assessment tool to measure and evaluate the risk potential of any gambling product based on scores on ten dimensions. In doing so, it also allows a comparison to be drawn between the addictive potential of different gambling products. Furthermore, the tool highlights where the specific risk potential of each gambling product lies. This makes it a valuable tool at the legislative, case law, and administrative levels, as it allows the risk potential of individual gambling products to be identified and compared globally and across ten different dimensions of risk potential. We note that specific gambling products should always be evaluated rather than product groups (lotteries, slot machines) or providers, as there may be variations within those product groups that impact their risk potential. For example, slot machines may vary in the size of the jackpot, which may influence their risk potential.
A central objective of German gambling law is to ensure the protection of minors and players (§ 1 Sentence 1 No. 3 GlüStV 2012). Since 2014, gambling facilities for commercial games of chance in gambling halls and restaurants have been certified by the German Safety Standards Authority [Technischer Überwachungsverein – TÜV]. Certification by government-accredited testing organizations, based on the internationally validated, interdisciplinary scientific expertise "Safeguarding the Protection of Minors and Players with Respect to Commercial Gambling in Germany – 2.0", is a quality assurance instrument from a regulatory perspective. It is in the interest of high-quality providers in particular to ensure that they are also perceived as providing this level of quality. Certification leads to market separation. In the process, the advantages of end-to-end certification should outweigh any disadvantages. Analysis of the international environment shows that certification initiatives are necessary and have been put in place in other sectors of the gambling industry.
The primary aim is quality assurance for the responsible handling of commercial games of chance offerings (responsible gambling). The presented testing catalogue for commercial gambling can also provide an impetus in the international context and, where appropriate, set a standard.