In the conceptual design phase, designers routinely generate dozens of alternatives based on a single idea. This is especially relevant in generative design, where an algorithm can generate a large number of viable design options. While solutions for creating and managing a small number of simple alternatives have been proposed, practical applications of these solutions are limited. To address this, we present GEM-NI+, an extension to the original GEM-NI system for creating and managing alternatives in generative design. GEM-NI+ is designed to enable editing, managing, and comparing up to 24 alternatives simultaneously using a multi-monitor setup. GEM-NI+ also features a new "jamming spaces" technique for assigning individual monitors to different visualization states, which makes organizing a large workspace easier. Finally, GEM-NI+ enables comparison of complex alternatives using recursive group node difference visualization.
We present GEM-NI, a graph-based generative-design tool that supports parallel exploration of alternative designs. Producing alternatives is a key feature of creative work, yet it is not strongly supported in most extant tools. GEM-NI enables various forms of exploration with alternatives, such as parallel editing, recalling history, branching, merging, comparing, and Cartesian products of and for alternatives. Further, GEM-NI provides a modal graphical user interface and a design gallery, which both allow designers to control and manage their design exploration. We conducted an exploratory user study followed by in-depth one-on-one interviews with moderately and highly skilled participants and obtained positive feedback for the system's features, showing that GEM-NI supports creative design work well.
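To make the Cartesian-product operation on alternatives concrete, here is a minimal Python sketch under our own assumptions; the parameter names and data model are hypothetical and are not taken from GEM-NI itself:

```python
from itertools import product

# Hypothetical illustration: each alternative is a dict of parameter values.
# A Cartesian product of alternatives combines every explored value of one
# parameter with every explored value of another, yielding new alternatives.
radii = [1.0, 2.0, 4.0]          # values explored for a "radius" parameter
counts = [6, 8]                  # values explored for a "count" parameter

alternatives = [
    {"radius": r, "count": c}    # one design alternative per combination
    for r, c in product(radii, counts)
]

print(len(alternatives))  # 3 x 2 = 6 alternatives
```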
Nowadays, we input text not only on stationary devices but also on handheld devices while walking, driving, or commuting. Text entry on the move, which we term nomadic text entry, is generally slower. This is partially due to the need for users to move their visual focus from the device to their surroundings for navigational purposes and back. To investigate whether better feedback about users' surroundings on the device can improve performance, we present a number of new and existing feedback systems: textual, visual, textual & visual, and textual & visual via a translucent keyboard. Experimental comparisons of these techniques with the conventional one established that increased ambient awareness for mobile users enhances nomadic text entry performance. Results showed that the textual and the textual & visual via translucent keyboard conditions increased text entry speed by 14% and 11%, respectively, and reduced the error rate by 13% compared to the regular technique. The two methods also significantly reduced the number of collisions with obstacles.
We present a new interface for interactive comparisons of more than two alternative documents in the context of a generative design system that uses generative data-flow networks defined via directed acyclic graphs. To better show differences between such networks, we emphasize added, deleted, and (un)changed nodes and edges. We highlight differences in the output as well as in parameters, and enable post-hoc merging of the state of a parameter across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability and keep visual clutter low. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that the number of similarities outweighs the number of differences, which motivates the use of subtractive encoding. We ran a user study to evaluate the two main proposed difference visualization encodings and found that they are equally effective.
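The underlying classification of graph elements can be sketched with plain set operations. The following is a minimal illustration under our own assumptions (node-id matching only), not the paper's implementation, which must also detect changed nodes and parameters:

```python
def diff_graphs(nodes_a, edges_a, nodes_b, edges_b):
    """Classify nodes and edges of two DAGs (given as sets of node ids and
    (src, dst) pairs) into added, deleted, and unchanged sets."""
    return {
        "nodes_added":     nodes_b - nodes_a,
        "nodes_deleted":   nodes_a - nodes_b,
        "nodes_unchanged": nodes_a & nodes_b,
        "edges_added":     edges_b - edges_a,
        "edges_deleted":   edges_a - edges_b,
        "edges_unchanged": edges_a & edges_b,
    }

# Example: alternative B adds node "d" and rewires one edge of alternative A.
a_nodes, a_edges = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
b_nodes, b_edges = {"a", "b", "c", "d"}, {("a", "b"), ("b", "d"), ("d", "c")}
print(diff_graphs(a_nodes, a_edges, b_nodes, b_edges))
```

A subtractive encoding would render only the small `*_added` and `*_deleted` sets on top of the shared structure, which is why many similarities favor that encoding.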
We present a novel multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues like texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels and can perceive object contour characteristics. The results, also demonstrated through an art application, show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool, and surface properties, and for user guidance.
Touchscreen interaction suffers from occlusion problems, as fingers can cover small targets, which makes interacting with such targets challenging. To improve touchscreen interaction accuracy, and consequently the selection of small or hidden objects, we introduce a back-of-device force feedback system for smartphones that combines force feedback on the back with touch input on the front screen. The interface includes three actuated pins at the back of a smartphone. All three pins are driven by micro servos and can be actuated at frequencies up to 50 Hz with a maximum amplitude of 5 mm. In a first psychophysical user study, we explored the limits of the system. Thereafter, we demonstrate through a performance study that the proposed interface can enhance touchscreen interaction precision compared to state-of-the-art methods; in particular, the selection of small targets performed remarkably well with force feedback. The study additionally shows that users subjectively felt significantly more accurate with force feedback. Based on the results, we discuss back-to-front feedback design issues and demonstrate potential applications through several prototypical concepts that illustrate where back-of-device force feedback could be beneficial.
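One way such a back-to-front mapping could work is to drive each pin's height from its distance to the touch point. This is a speculative sketch under our own assumptions; the pin layout, the linear falloff, and the function names are hypothetical and only the 5 mm amplitude limit comes from the abstract:

```python
import math

# Hypothetical sketch: map a front-screen touch to back-of-device pin heights.
PIN_POSITIONS = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]  # normalized (x, y)
MAX_AMPLITUDE_MM = 5.0  # amplitude limit stated in the abstract

def pin_amplitudes(touch_x, touch_y, falloff=0.3):
    """Raise pins near the touch point; amplitude decays linearly with
    distance and reaches zero beyond the falloff radius."""
    amps = []
    for px, py in PIN_POSITIONS:
        d = math.hypot(touch_x - px, touch_y - py)
        amps.append(MAX_AMPLITUDE_MM * max(0.0, 1.0 - d / falloff))
    return amps

print(pin_amplitudes(0.5, 0.5))  # a center touch drives the middle pin fully
```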
In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues that inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
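A proximity cue of this kind is typically a monotonic mapping from hand-object distance to feedback intensity. The following is a minimal sketch under our own assumptions; the 10 cm activation range and the linear ramp are illustrative choices, not the glove interface described in the paper:

```python
# Hypothetical sketch: scale tactile cue intensity with hand-object proximity.
def proximity_intensity(distance_m, activation_range_m=0.10):
    """Return a 0..1 cue intensity: 0 outside the activation range,
    ramping linearly to 1 at contact."""
    if distance_m >= activation_range_m:
        return 0.0
    return 1.0 - distance_m / activation_range_m

for d in (0.12, 0.08, 0.02, 0.0):
    print(f"{d:.2f} m -> intensity {proximity_intensity(d):.2f}")
```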
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when it gets closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study, we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from participants as they navigated through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be offered as an option, especially in situations involving social interactions.
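The core idea of distance-proportional speed limiting can be sketched in a few lines. This is a minimal illustration under our own assumptions; the slow-zone and stop-zone thresholds below are invented for the example and are not the study's tuned parameters:

```python
# Hypothetical sketch of automatic speed control: cap the robot's speed in
# proportion to the distance to the nearest obstacle.
def limited_speed(requested, nearest_obstacle_m,
                  slow_zone_m=1.5, stop_zone_m=0.3, max_speed=1.0):
    """Scale the driver's requested speed down inside the slow zone and
    stop entirely inside the stop zone."""
    if nearest_obstacle_m <= stop_zone_m:
        return 0.0
    if nearest_obstacle_m >= slow_zone_m:
        return min(requested, max_speed)
    scale = (nearest_obstacle_m - stop_zone_m) / (slow_zone_m - stop_zone_m)
    return min(requested, max_speed) * scale

print(limited_speed(1.0, 2.0))   # open space: full requested speed
print(limited_speed(1.0, 0.9))   # near an obstacle: scaled to 0.5
print(limited_speed(1.0, 0.2))   # too close: 0.0
```

Because the driver's input is only attenuated, never overridden with a new direction, the operator keeps steering authority, which matches the finding that the method should remain optional in socially sensitive situations.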