Memory Stones
2015, Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications
https://doi.org/10.1145/2699343.2699352…
Abstract
The paper introduces 'Memory Stones,' a novel user-interface method for transferring information objects between devices using multi-touch input that simulates the physical actions of picking up and transporting solid objects. The method extends copy-and-paste functionality by letting users handle intangible data as if it were a tangible item, with visual feedback provided through virtual representations of stones. The implementation is straightforward, requiring only existing multi-touch devices and little additional hardware, which suggests the method could be adopted broadly across computing platforms.
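As a rough illustration of the pick-up-and-place idea summarized above, the sketch below (TypeScript, browser pointer events) treats three simultaneous touches on an object as "picking up" a stone and a later touch on a target surface as "placing" it. The three-finger threshold, the element wiring, and the in-memory hand-off are illustrative assumptions; the paper's actual gesture set, visual feedback, and cross-device transfer mechanism are not reproduced here.

```typescript
// Illustrative sketch only: a three-finger grab "picks up" an information
// object as a virtual stone; touching a target surface "places" it.
// In the real cross-device case the stone would travel over a network
// channel rather than a shared variable.

interface Stone {
  payload: string;      // the information object being carried
  pickedUpAt: number;   // timestamp of the pick-up gesture
}

let carriedStone: Stone | null = null;
const activeTouches = new Set<number>();

function attachPickUp(el: HTMLElement, payload: string): void {
  el.addEventListener("pointerdown", (e: PointerEvent) => {
    if (e.pointerType !== "touch") return;
    activeTouches.add(e.pointerId);
    // Assumption: three fingers down on the same object means "pick up".
    if (activeTouches.size === 3 && carriedStone === null) {
      carriedStone = { payload, pickedUpAt: Date.now() };
      el.classList.add("picked-up");   // stand-in for the stone-shaped feedback
    }
  });
  const release = (e: PointerEvent) => activeTouches.delete(e.pointerId);
  el.addEventListener("pointerup", release);
  el.addEventListener("pointercancel", release);
}

function attachPlace(target: HTMLElement, onPlace: (payload: string) => void): void {
  target.addEventListener("pointerdown", (e: PointerEvent) => {
    if (e.pointerType === "touch" && carriedStone !== null) {
      onPlace(carriedStone.payload);   // "place" the stone on the target surface
      carriedStone = null;
    }
  });
}
```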
Related papers
Proceeding of the twenty-sixth annual CHI conference on Human factors in computing systems - CHI '08, 2008
Figure 1. Rubbing and tapping gestures activate operations while the user is touching the display, so that additional parameter control and functionality can be activated during the fluid interaction. (a) Rubbing in and (b) rubbing out support two operations. (c) Bimanual interaction on single-touch displays is simulated with a set of "tapping" techniques, where operations are executed by tapping with a secondary finger (left), while the primary finger (right) is touching the display.
Proceedings of the 23rd annual ACM symposium on User interface software and technology, 2010
We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
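A minimal sketch of the "pen writes, touch manipulates, pen + touch yields new tools" division of labor is given below, assuming browser pointer events and an illustrative "hold a photo, act on it with the pen" command; this is not the Microsoft Surface implementation the paper describes.

```typescript
// Illustrative sketch: while a photo is held by touch, the pen switches from
// writing to a contextual tool; the held touch acts as the mode "glue".

type PenMode = "write" | "contextual-tool";

const heldPhotos = new Set<HTMLElement>();

function currentPenMode(): PenMode {
  // Pen writes by default; any touch-held photo switches it into a tool.
  return heldPhotos.size > 0 ? "contextual-tool" : "write";
}

function attachPhoto(photo: HTMLElement, onCommand: (photo: HTMLElement) => void): void {
  photo.addEventListener("pointerdown", (e: PointerEvent) => {
    if (e.pointerType === "touch") {
      heldPhotos.add(photo);             // touch selects and holds the object
    } else if (e.pointerType === "pen" && heldPhotos.has(photo)) {
      onCommand(photo);                  // pen + touch: contextual command (e.g. copy)
    }
  });
  photo.addEventListener("pointerup", (e: PointerEvent) => {
    if (e.pointerType === "touch") heldPhotos.delete(photo);
  });
}
```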
Journal of Computer Science and Technology, 2012
IEEE Pervasive Computing
Mid-air gestures have been largely overlooked for transferring content between large displays and personal mobile devices. To fully utilize the ubiquitous nature of mid-air gestures for this purpose, we developed SimSense, a smart space system which automatically pairs users with their mobile devices based on location data. Users can then interact with a gesture-controlled large display, and move content onto their handheld devices. We investigated two mid-air gestures for content transfer, grab-and-pull and grab-and-drop, in a user study. Our results show that i) mid-air gestures are well suited for content retrieval scenarios and offer an impressive user experience, ii) grab-and-pull is preferred for scenarios where content is transferred to the user, whereas grab-and-drop is presumably ideal when the recipient is another person or a device, and iii) distinct gestures can be successfully combined with common point-and-dwell mechanics prominent in many gesture-controlled applications.

Index Terms: content transfer, large displays, mid-air gestures, mobile devices, smart spaces, ubiquitous computing.

I. INTRODUCTION

Exchanging information between large displays and personal devices is a feature of growing importance in smart spaces. Past research has shown great interest in studying different interaction mechanics for such tasks. Previous research has largely focused on proxemics-based devices, such as NFC (Near Field Communication) readers, to enable communication between displays and mobile devices [1][2][3]. Some novel solutions have also been proposed, such as one utilizing the camera of the mobile device to enable drag-and-drop interactions [4]. However, while these solutions may work well for a specific purpose, they also require users to spend time establishing a connection, or require users to specifically walk up to the device to carry out the task. Most notably, they require users to interact with their mobile device
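The grab-and-pull versus grab-and-drop distinction can be pictured as a small recognizer over hand-tracking samples, as sketched below. The sample format, the grab flag, the coordinate convention (z grows toward the user), and the pull threshold are assumptions for illustration; SimSense's actual tracking and device-pairing pipeline is not shown.

```typescript
// Illustrative recognizer: a closed hand grabs content; how the hand moves
// before opening again decides between "pull to self" and "drop on target".

interface HandSample {
  x: number;
  y: number;
  z: number;          // assumption: z increases toward the user
  grabbing: boolean;  // hand closed, as reported by an external tracker
}

type Transfer = "grab-and-pull" | "grab-and-drop" | null;

class GrabRecognizer {
  private start: HandSample | null = null;

  update(sample: HandSample): Transfer {
    if (sample.grabbing && this.start === null) {
      this.start = sample;                    // hand closed: content grabbed
      return null;
    }
    if (!sample.grabbing && this.start !== null) {
      const pull = sample.z - this.start.z;   // displacement toward the user
      this.start = null;
      // Assumption: pulling more than 25 cm toward the body means "to me".
      return pull > 0.25 ? "grab-and-pull" : "grab-and-drop";
    }
    return null;
  }
}
```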
Conventional scrolling methods for the small displays of PDAs and mobile phones are difficult to use when frequent switching between scrolling and editing operations is required, for example when browsing and operating large WWW pages.
2006
We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.
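As a simplified sketch of using the pen's tracking state as a separate command layer, the code below treats a short hover dwell followed by a pen-down as activating a localized widget; the dwell stands in for the paper's hover gestures, and the threshold value is an illustrative assumption.

```typescript
// Illustrative sketch: pen movement above the surface (buttons === 0) is the
// tracking state; a dwell there followed by pen-down activates a widget
// instead of producing ink.

const HOVER_DWELL_MS = 300;   // assumption, not the paper's tuned value
let hoverSince: number | null = null;

function attachHoverWidget(
  surface: HTMLElement,
  activate: (x: number, y: number) => void
): void {
  surface.addEventListener("pointermove", (e: PointerEvent) => {
    if (e.pointerType !== "pen") return;
    if (e.buttons === 0) {
      // Pen hovering above the surface: the tracking-state command layer.
      if (hoverSince === null) hoverSince = performance.now();
    } else {
      hoverSince = null;      // pen is down: ordinary ink input layer
    }
  });
  surface.addEventListener("pointerdown", (e: PointerEvent) => {
    if (e.pointerType !== "pen" || hoverSince === null) return;
    if (performance.now() - hoverSince >= HOVER_DWELL_MS) {
      activate(e.clientX, e.clientY);   // hover gesture then pen-down: widget
    }
    hoverSince = null;
  });
}
```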
2007
Tangible User Interfaces (TUIs) are emerging as a new paradigm of interaction with the digital world aiming at facilitating traditional GUI-based interaction. Interaction with TUIs relies on users' existing skills of interaction with the real world [9], thereby offering the promise of interfaces that are quicker to learn and easier to use. Recently it has been demonstrated [1] that the use of personal objects as tangible interfaces will be even more straightforward, since users already have a mental model associated with the physical objects, thus facilitating the comprehension and usage of those objects. However, TUIs are currently very challenging to build, and this limits their widespread diffusion and exploitation. In order to address this issue we propose a user-oriented framework, called the Memodules Framework, which allows the easy creation and management of Personal TUIs, providing end users with the ability to dynamically configure and reconfigure their TUIs. The framework is based on a model, called MemoML (Memodules Markup Language), which guarantees framework flexibility, extensibility and evolution over time.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13, 2013
Clipboards are omnipresent on today's personal computing platforms. They provide copy-and-paste functionalities that let users easily reorganize information and quickly transfer data across applications. In this work, we introduce personal clipboards to multi-user surfaces. Personal clipboards enable individual and independent copy-and-paste operations, in the presence of multiple users concurrently sharing the same direct-touch interface. As common surface computing platforms do not distinguish touch input of different users, we have developed clipboards that leverage complementary personalization strategies. Specifically, we have built a context menu clipboard based on implicit user identification of every touch, a clipboard based on personal subareas dynamically placed on the surface, and a handheld clipboard based on integration of personal devices for surface interaction. In a user study, we demonstrate the effectiveness of personal clipboards for shared surfaces, and show that different personalization strategies enable clipboards, albeit with different impacts on interaction characteristics.
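The core of independent copy-and-paste on a shared surface can be pictured as a clipboard map keyed by user identity, as in the sketch below; attributing each touch to a user, which is what the paper's personalization strategies actually address, is simply assumed here.

```typescript
// Illustrative sketch: one clipboard slot per identified user, so concurrent
// copy-and-paste operations on the same surface do not interfere.

type UserId = string;

class PersonalClipboards<T> {
  private clips = new Map<UserId, T>();

  copy(user: UserId, item: T): void {
    this.clips.set(user, item);       // overwrites only this user's clip
  }

  paste(user: UserId): T | undefined {
    return this.clips.get(user);      // independent of other users' clipboards
  }
}

// Two users copy different items concurrently without interference.
const clipboards = new PersonalClipboards<string>();
clipboards.copy("alice", "photo-42");
clipboards.copy("bob", "note-7");
console.log(clipboards.paste("alice")); // "photo-42"
```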
IEEE Computer Graphics and Applications, 2014
