
Miguel Nacenta
Dr. Miguel Nacenta is a lecturer at the University of St Andrews. He is a co-founder of the SACHI research group.
Miguel’s research interests focus on developing input and output technology that can extend human capabilities. He is interested in applying perceptual and social principles to novel multi-display, multi-touch, multi-modal, and haptic interfaces, and in perception applied to interface design. For more information, see his full website (www.nacenta.com), visit his blog (miguelissimo.wordpress.com), or follow him on Twitter (@miguelnacenta).
Supervisors: Carl Gutwin
Papers by Miguel Nacenta
view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration where visual output is distributed across displays is worst or equivalent to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across different UI configurations, we give recommendations for the design of distributed user interfaces.
advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switches in MDUIs. Our analysis and taxonomy draw attention to the often-ignored implications of visual attention switches and collect existing evidence to facilitate research and implementation of effective MDUIs.
such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the HAPTICTOUCH toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time.
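To give a flavour of the three-layer structure described above, here is a minimal Python sketch of how a layered haptic tabletop toolkit of this kind could be organised; all class and method names (HapticPuck, SoftnessBehavior, HapticRegion) are invented for illustration and are not the HAPTICTOUCH toolkit's actual API.

```python
# Hypothetical sketch of a three-layer haptic tabletop toolkit in the spirit
# of the abstract above; class and method names are invented for illustration
# and do not correspond to the HAPTICTOUCH toolkit's real interface.

class HapticPuck:
    """Layer 1: direct control of the puck's rod height and friction brake."""
    def set_rod_height(self, height: float) -> None:   # 0.0 (fully down) .. 1.0 (fully up)
        print(f"rod -> {height:.2f}")

    def set_friction(self, friction: float) -> None:   # 0.0 (free) .. 1.0 (locked)
        print(f"friction -> {friction:.2f}")

class SoftnessBehavior:
    """Layer 2: a combinable behaviour that renders material softness."""
    def __init__(self, softness: float):
        self.softness = softness

    def apply(self, puck: HapticPuck, pressure: float) -> None:
        # Softer materials let the rod sink further under the same pressure.
        puck.set_rod_height(max(0.0, 1.0 - pressure * self.softness))

class HapticRegion:
    """Layer 3: a visual shape on the tabletop with an attached behaviour."""
    def __init__(self, bounds, behavior):
        self.bounds, self.behavior = bounds, behavior

    def on_touch(self, puck: HapticPuck, x: float, y: float, pressure: float) -> None:
        x0, y0, x1, y1 = self.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            self.behavior.apply(puck, pressure)

# Usage: a "soft" rectangular button that drives the puck's rod when touched.
puck = HapticPuck()
button = HapticRegion(bounds=(100, 100, 200, 150), behavior=SoftnessBehavior(softness=0.8))
button.on_touch(puck, x=150, y=120, pressure=0.5)   # prints "rod -> 0.60"
```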
of the original geometry without affecting any distortion-based lenses currently used in the presentation. The undistort lens is designed to allow interactive access to the underlying undistorted data within the context of the distorted space, and to enable a better understanding of the distortions. The paper describes a generic back-mapping mechanism that enables the implementation of undistort lenses for arbitrary distortion-based techniques, including those presented in the lens literature. We also provide a series of use-case scenarios that demonstrate the situations in which the technique can complement existing lenses.
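As an illustration of the back-mapping idea, the following Python sketch inverts an arbitrary forward distortion function by sampling it; the fisheye example and the nearest-sample inversion are assumptions for illustration and do not reproduce the paper's actual mechanism.

```python
# A minimal sketch of generic back-mapping for an "undistort" lens, assuming
# only that a forward distortion function distort(x, y) -> (x', y') is known.
# The fisheye example and the nearest-sample inversion are illustrative
# assumptions, not the paper's actual mechanism.

from math import hypot, atan2, cos, sin

def fisheye_distort(x, y, cx=0.5, cy=0.5, strength=2.0):
    """Example forward distortion: a simple radial fisheye centred at (cx, cy)."""
    dx, dy = x - cx, y - cy
    r, a = hypot(dx, dy), atan2(dy, dx)
    r_new = r ** (1.0 / strength)               # magnification near the centre
    return cx + r_new * cos(a), cy + r_new * sin(a)

def build_back_map(distort, resolution=200):
    """Sample the forward transform so presentation-space points can be mapped
    back to data space without knowing the distortion's internals."""
    samples = []
    for i in range(resolution + 1):
        for j in range(resolution + 1):
            x, y = i / resolution, j / resolution
            samples.append((distort(x, y), (x, y)))

    def undistort(px, py):
        # The nearest forward sample approximates the inverse mapping.
        _, original = min(samples, key=lambda s: hypot(s[0][0] - px, s[0][1] - py))
        return original

    return undistort

# Inside the undistort lens, each presentation-space pixel is filled with the
# data found at its back-mapped (undistorted) location.
undistort = build_back_map(fisheye_distort)
print(undistort(0.7, 0.5))   # ~ (0.54, 0.5): the original point the fisheye pushed out to (0.7, 0.5)
```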
a single digital workspace. One of the main problems to be solved in an MDE’s design is how to enable movement of objects from one display to another. When the real-world space between displays is modeled as part of the workspace (i.e., Mouse Ether), it becomes difficult for users to keep track of their cursors during a transition between displays. To address this problem, we developed the Ubiquitous Cursor system, which uses a projector and a hemispherical mirror to completely cover the interior of a room with usable low-resolution pixels. Ubiquitous Cursor allows us to provide direct feedback about the location of the cursor between displays. To assess the effectiveness of this direct feedback approach, we carried out a study that compared Ubiquitous Cursor with two other standard approaches: Halos, which provide indirect feedback about the cursor’s location; and Stitching, which warps the cursor between displays, similar to the way that current operating systems address multiple monitors. Our study tested simple cross-display pointing tasks in an MDE; the results showed that Ubiquitous Cursor was significantly faster than both other approaches. Our work shows the feasibility and the value of providing direct feedback for cross-display movement, and adds to our understanding of the principles underlying targeting performance in MDEs.
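The contrast between direct feedback and warped (Stitching-style) feedback can be sketched in a few lines of Python; the display layout, the Display class, and the clamping rule below are hypothetical simplifications rather than the studied systems' implementations.

```python
# A hypothetical sketch contrasting direct feedback (Ubiquitous Cursor) with
# Stitching, assuming a shared 2D "ether" coordinate space containing every
# display. The layout, the Display class, and the clamping rule are invented
# simplifications, not the studied systems' implementations.

from dataclasses import dataclass

@dataclass
class Display:
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

DISPLAYS = [Display("tabletop", 0, 0, 100, 80), Display("wall", 150, 0, 200, 120)]

def ubiquitous_cursor(px, py):
    """Direct feedback: the cursor is always visible, on a display when over
    one, otherwise projected onto the room at its true ether position."""
    for d in DISPLAYS:
        if d.contains(px, py):
            return ("display", d.name, px - d.x, py - d.y)
    return ("room-projection", px, py)

def stitching(px, py):
    """Stitching: the gap between displays is skipped; the cursor is warped to
    the nearest display instead of being shown in the space between them."""
    nearest = min(DISPLAYS, key=lambda d: abs(px - (d.x + d.w / 2)))
    clamped_x = min(max(px, nearest.x), nearest.x + nearest.w)
    clamped_y = min(max(py, nearest.y), nearest.y + nearest.h)
    return ("display", nearest.name, clamped_x - nearest.x, clamped_y - nearest.y)

print(ubiquitous_cursor(125, 40))   # in the gap: ('room-projection', 125, 40)
print(stitching(125, 40))           # in the gap: ('display', 'tabletop', 100, 40)
```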
This dissertation focuses on understanding how human performance of cross-display actions is affected by the design of cross-display object movement interaction techniques. Three main aspects of cross-display actions are studied: how displays are referred to by the system and the users, how spatial actions are planned, and how actions are executed. Each of these three aspects is analyzed through laboratory experiments that provide empirical evidence on how different characteristics of interaction techniques affect performance.
The results further our understanding of cross-display interaction and can be used by designers of new MDEs to create more efficient multi-display interfaces.