006 Special computer methods
Refine
Departments, institutes and facilities
- Fachbereich Informatik (24)
- Institute of Visual Computing (IVC) (11)
- Fachbereich Wirtschaftswissenschaften (5)
- Institut für Sicherheitsforschung (ISF) (4)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (4)
- Institut für Verbraucherinformatik (IVI) (4)
- Fachbereich Ingenieurwissenschaften und Kommunikation (2)
- Institut für KI und Autonome Systeme (A2S) (2)
- Fachbereich Angewandte Naturwissenschaften (1)
- Institut für Cyber Security & Privacy (ICSP) (1)
Document Type
- Conference Object (40)
Language
- English (40)
Keywords
- Augmented Reality (3)
- Machine Learning (2)
- Robotics (2)
- authoring tools (2)
- prototyping (2)
- 450 MHz (1)
- AR design (1)
- AR development (1)
- AR/VR (1)
- Applications in Energy Transport (1)
ProtSTonKGs: A Sophisticated Transformer Trained on Protein Sequences, Text, and Knowledge Graphs
(2022)
While most approaches individually exploit unstructured data from the biomedical literature or structured data from biomedical knowledge graphs, their union can better exploit the advantages of both, ultimately improving representations of biology. Using multimodal transformers for such purposes can improve performance on context-dependent classification tasks, as demonstrated by our previous model, the Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (STonKGs). In this work, we introduce ProtSTonKGs, a transformer aimed at learning all-encompassing representations of protein-protein interactions. ProtSTonKGs extends our previous work by adding textual protein descriptions and amino acid sequences (i.e., structural information) to the text- and knowledge graph-based input sequence used in STonKGs. We benchmark ProtSTonKGs against STonKGs, obtaining F1 scores improved by up to 0.066 (i.e., from 0.204 to 0.270) on several tasks, such as predicting protein interactions in different contexts. Our work demonstrates how multimodal transformers can be used to integrate heterogeneous sources of information, laying the foundation for future approaches that use multiple modalities for biomedical applications.
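The multimodal input described in the abstract can be illustrated with a small sketch. The exact sequence layout, special tokens, and function names below are assumptions for illustration, not the actual ProtSTonKGs implementation:

```python
def build_protstonkgs_input(text_tokens, kg_tokens, seq_tokens, max_len=512):
    """Concatenate three modalities into one transformer input sequence:
    literature text tokens, knowledge-graph entity/relation tokens, and
    amino acid characters. [CLS]/[SEP] delimiters follow BERT convention;
    the concrete layout used by ProtSTonKGs is an assumption here."""
    combined = (["[CLS]"] + list(text_tokens) + ["[SEP]"]
                + list(kg_tokens) + ["[SEP]"]
                + list(seq_tokens) + ["[SEP]"])
    # Truncate to the model's maximum context length.
    return combined[:max_len]

# Hypothetical example: a text snippet, a KG triple, and a short sequence.
seq = build_protstonkgs_input(["protein", "binds"],
                              ["P12345", "interacts", "P67890"],
                              "MKT")
```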
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two Co-Design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalizing features leads to a reflection on the individual taste experience. The self-reflection enabled by our design allows consumers to develop their taste competencies and thus strengthens their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a purely sensory phenomenon.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
Towards an Interaction-Centered and Dynamically Constructed Episodic Memory for Social Robots
(2020)
This paper describes a dynamic, model-based approach for estimating intensities of 22 out of 44 different basic facial muscle movements. These movements are defined as Action Units (AU) in the Facial Action Coding System (FACS) [1]. The maximum facial shape deformations that can be caused by the 22 AUs are represented as vectors in an anatomically based, deformable, point-based face model. The amount of deformation along these vectors represents the AU intensities, whose valid range is [0, 1]. An Extended Kalman Filter (EKF) with state constraints is used to estimate the AU intensities. The focus of this paper is on the modeling of constraints in order to impose the anatomically valid AU intensity range of [0, 1]. Two process models are considered, namely constant velocity and driven mass-spring-damper. The results show the temporal smoothing and disambiguation effect of the constrained EKF approach when compared to the frame-by-frame model fitting approach 'Regularized Landmark Mean-Shift (RLMS)' [2]. This effect led to a more than 35% increase in performance on a database of posed facial expressions.
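A constrained EKF measurement update of the kind the abstract describes could be sketched as follows. This is a minimal NumPy sketch under assumptions: the constraint handling here is a simple projection (clipping) of the posterior state onto [0, 1], which is only one of several ways to enforce box constraints and is not necessarily the method the paper uses; all names are illustrative:

```python
import numpy as np

def ekf_update_clipped(x, P, z, H, R):
    """One EKF measurement update with a box constraint on the state.

    x : (n,) prior state (e.g. AU intensities), P : (n, n) covariance,
    z : (m,) measurement, H : (m, n) linearized measurement matrix,
    R : (m, m) measurement noise covariance.
    The final clip projects the posterior back into the valid AU
    intensity range [0, 1] (an illustrative constraint step)."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x + K @ (z - H @ x)         # unconstrained posterior mean
    P_post = (np.eye(len(x)) - K @ H) @ P
    x_post = np.clip(x_post, 0.0, 1.0)   # enforce anatomical range
    return x_post, P_post
```

An overshooting measurement (e.g. an AU intensity estimate above 1) is pulled back to the boundary, which is exactly the anatomical validity the paper's constraints are meant to guarantee.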
Most VE frameworks try to support many different input and output devices. They do not concentrate so much on the rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers. This includes sound renderers, physics renderers, and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and Mac OS X.