Papers by François Guimbretière

As information visualization tools are used to visualize datasets of increasing size, there is a growing need for techniques that facilitate efficient navigation. Pan and zoom navigation enables users to display areas of interest at different resolutions. Focus+context techniques aim to overcome the drawbacks of pan and zoom by dynamically integrating areas of interest and context regions. To date, empirical comparisons of these two navigation paradigms have been limited in scope and inconclusive. In two controlled studies, we evaluated navigation techniques representative of the pan and zoom and focus+context approaches. The particular focus+context technique examined was rubber sheet navigation, implemented in a way that afforded a set of navigation actions similar to pan and zoom navigation. The two techniques were used by 40 subjects in each study to perform a navigation-intensive task in a large tree dataset. Study 1 investigated the effect of the amount of screen real estate d...
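
As a concrete reference for the viewport manipulation being compared, the sketch below shows the basic arithmetic behind pan and zoom navigation (a zoom-to-cursor variant). It is purely illustrative; the function names and the exact behavior of the interfaces evaluated in the studies are assumptions, not taken from the papers.

```python
# Minimal pan-and-zoom viewport sketch (illustrative only; not the interfaces
# evaluated in the studies). World coordinates map to screen coordinates via a
# pan offset and a uniform zoom factor.

def world_to_screen(x, y, pan_x, pan_y, zoom):
    """Map a world-space point to screen space for a given pan/zoom state."""
    return (x - pan_x) * zoom, (y - pan_y) * zoom

def zoom_about(pan_x, pan_y, zoom, cursor_x, cursor_y, factor):
    """Zoom in or out while keeping the world point under the cursor fixed on
    screen, the usual behaviour of zoom-to-cursor navigation."""
    wx = cursor_x / zoom + pan_x           # world point under the cursor
    wy = cursor_y / zoom + pan_y
    new_zoom = zoom * factor
    new_pan_x = wx - cursor_x / new_zoom   # re-solve pan so (wx, wy) stays put
    new_pan_y = wy - cursor_y / new_zoom
    return new_pan_x, new_pan_y, new_zoom
```
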
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
We present EchoSpeech, a minimally-obtrusive silent speech interface (SSI) powered by low-power active acoustic sensing. EchoSpeech uses speakers and microphones mounted on a glass-frame and emits inaudible sound waves towards the skin. By analyzing echoes from multiple paths, EchoSpeech captures subtle skin deformations caused by silent utterances and uses them to infer silent speech. With a user study of 12 participants, we demonstrate that EchoSpeech can recognize 31 isolated commands and 3-6 figure connected digits with 4.5% (std 3.5%) and 6.1% (std 4.2%) Word Error Rate (WER), respectively. We further evaluated EchoSpeech under scenarios including walking and noise injection to test its robustness. We then demonstrated using EchoSpeech in demo applications in real-time operating at 73.3 mW, where the real-time pipeline was implemented on a smartphone with only 1-6 minutes of training data. We believe that EchoSpeech takes a solid step towards minimally-obtrusive wearable SSI for real-life deployment. CCS CONCEPTS • Human-centered computing → Ubiquitous and mobile computing systems and tools; Gestural input; • Computing methodologies → Speech recognition.
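
The abstract does not spell out the signal-processing pipeline, so the sketch below shows one generic way active acoustic echo profiles are often computed: cross-correlating each received frame with the transmitted template and differencing consecutive profiles so that only motion-induced changes remain. The sample rate, frame length, and function names are hypothetical and should not be read as EchoSpeech's actual implementation.

```python
# Illustrative sketch of active acoustic echo profiling (not EchoSpeech's
# published pipeline). A known inaudible signal is emitted; cross-correlating
# each received frame with the template yields an "echo profile" whose
# frame-to-frame changes reflect nearby motion such as skin deformation.
import numpy as np

FS = 48_000    # hypothetical sample rate (Hz)
FRAME = 512    # hypothetical frame length (samples)

def echo_profile(received_frame: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Cross-correlate a received frame with the transmitted template."""
    corr = np.correlate(received_frame, template, mode="full")
    return np.abs(corr)

def differential_profiles(frames: list, template: np.ndarray) -> np.ndarray:
    """Stack per-frame echo profiles and difference consecutive ones, so that
    static echoes cancel and only changes (e.g., articulator motion) remain."""
    profiles = np.stack([echo_profile(f, template) for f in frames])
    return np.diff(profiles, axis=0)
```
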
Tactile device for scrolling
Tactile scroll bar with illuminated document position indicator
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2009
The Codex is a dual-screen tablet computer, about the size of a 4"x6" day planner, with a self-supporting binding and embedded sensors. The device can be oriented in a variety of postures to support different nuances of individual work, ambient display, or collaboration with another user. In the context of a pen-operated note taking application, we demonstrate interaction techniques that support a fluid division of labor for tasks and information across the two displays while minimizing disruption to the primary experience of authoring notes.

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2006
We present a study that evaluates conventional Pan and Zoom Navigation and Rubber Sheet Navigation, a rectilinear Focus+Context technique. Each of the two navigation techniques was evaluated both with and without an overview. All interfaces guaranteed that regions of interest would remain visible, at least as a compressed landmark, independent of navigation actions. Interfaces implementing these techniques were used by 40 subjects to perform a task that involved navigating a large hierarchical tree dataset and making topological comparisons between nodes in the tree. Our results show that Pan and Zoom Navigation was significantly faster and required less mental effort than Rubber Sheet Navigation, independent of the presence or absence of an overview. Also, overviews did not appear to improve performance, but were still perceived as beneficial by users. We discuss the implications of our task and guaranteed visibility on the results and the limitations of our study, and we propose preliminary design guidelines and recommendations for future work.
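
To make "rubber sheet" distortion concrete, here is a one-dimensional sketch in which a focus interval receives a fixed share of the screen and the remaining context is uniformly compressed, so off-focus regions remain visible as squeezed landmarks. The study's actual technique is two-dimensional and richer; the function and its parameters are illustrative assumptions only.

```python
# Minimal 1-D sketch of a rubber-sheet style (focus+context) mapping.
# Assumes 0 < focus_lo < focus_hi < world; purely illustrative.

def rubber_sheet_1d(x, focus_lo, focus_hi, focus_share=0.6, world=1.0, screen=1.0):
    """Map world position x in [0, world] to screen position in [0, screen]."""
    ctx_left = focus_lo                    # world extent left of the focus
    ctx_right = world - focus_hi           # world extent right of the focus
    s_focus = focus_share * screen         # screen devoted to the focus
    s_ctx = (1.0 - focus_share) * screen   # screen shared by both context sides
    s_left = s_ctx * ctx_left / (ctx_left + ctx_right)
    s_right = s_ctx - s_left
    if x < focus_lo:                       # left context, uniformly compressed
        return s_left * (x / ctx_left)
    if x <= focus_hi:                      # magnified focus region
        return s_left + s_focus * (x - focus_lo) / (focus_hi - focus_lo)
    return s_left + s_focus + s_right * (x - focus_hi) / ctx_right
```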

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2005
We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.
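
The pigtail delimiter amounts to noticing a small self-intersecting loop at the end of the pen stroke. The sketch below shows one way such a loop might be detected geometrically; the window size, thresholds, and recognition rules actually used in Scriboli are not given in the abstract and are assumptions here.

```python
# Illustrative pigtail detection: look for an intersection between the newest
# stroke segment and a recent earlier segment. Thresholds are assumptions.

def _segments_intersect(p1, p2, p3, p4):
    """Strict intersection test for segments p1p2 and p3p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def pigtail_detected(stroke, max_loop_points=30):
    """stroke: list of (x, y) points. True if the newest segment crosses a
    recent, non-adjacent earlier segment (i.e., the stroke loops back)."""
    if len(stroke) < 4:
        return False
    a, b = stroke[-2], stroke[-1]                # newest segment
    start = max(0, len(stroke) - max_loop_points)
    for i in range(start, len(stroke) - 3):      # skip the adjacent segment
        if _segments_intersect(stroke[i], stroke[i + 1], a, b):
            return True
    return False
```
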
Impact of Handedness and Merging in Command Selection Speed

NeckFace
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021
Facial expressions are highly informative for computers to understand and interpret a person's mental and physical activities. However, continuously tracking facial expressions, especially when the user is in motion, is challenging. This paper presents NeckFace, a wearable sensing technology that can continuously track the full facial expressions using a neck-piece embedded with infrared (IR) cameras. A customized deep learning pipeline called NeckNet based on ResNet34 is developed to learn the captured infrared (IR) images of the chin and face and output 52 parameters representing the facial expressions. We demonstrated NeckFace on two common neck-mounted form factors: a necklace and a neckband (e.g., neck-mounted headphones), which was evaluated in a user study with 13 participants. The study results showed that NeckFace worked well when the participants were sitting, walking, or after remounting the device. We discuss the challenges and opportunities of using NeckFace in real...
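
The abstract states only that NeckNet is based on ResNet34 and outputs 52 expression parameters. The sketch below shows one plausible way to set up such a regression model in PyTorch; the input handling, head shape, and loss are assumptions, not the paper's architecture.

```python
# Hedged sketch of a ResNet34-based regressor mapping an IR image to 52
# facial-expression parameters, in the spirit of the NeckNet description.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ExpressionRegressor(nn.Module):
    def __init__(self, num_params: int = 52):
        super().__init__()
        self.backbone = resnet34(weights=None)   # trained from scratch on IR frames
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 3, H, W)
        return self.backbone(x)

model = ExpressionRegressor()
loss_fn = nn.MSELoss()                           # regress the 52 parameters
pred = model(torch.randn(4, 3, 224, 224))        # dummy batch
loss = loss_fn(pred, torch.randn(4, 52))
```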

ACM Transactions on Computer-Human Interaction, 2012
Despite predictions of the paperless office, most knowledge workers and students still rely heavily on paper in most of their document practices. Research has shown that paper's dominance can be attributed to the fact that it supports a broad range of these users' diverse reading requirements. Our analysis of the literature suggests that a new class of reading device consisting of an interconnected environment of thin and lightweight electronic slates could potentially unify the distinct advantages of e-books, PCs, and tabletop computers to offer an electronic reading solution providing functionality comparable to, or even exceeding, that of paper. This article presents the design and construction of such a system. In it, we explain how data can be mapped to slates, detail interactions for linking the slates, and describe tools that leverage the connectivity between slates. A preliminary study of the system indicates that such a system has the potential of being an electroni...

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '06, 2006
Modes allow a few inputs to invoke many operations, yet if a user misclassifies or forgets the state of a system, modes can result in errors. Spring-loaded modes (quasimodes) maintain a mode while the user holds a control such as a button or key. The Springboard is an interaction technique for tablet computers that extends quasimodes to encompass multiple tool modes in a single spring-loaded control. The Springboard allows the user to continue holding down a nonpreferred-hand command button after selecting a tool from a menu as a way to repeatedly apply the same tool. We find the Springboard improves performance for both a local marking menu and for a non-local marking menu ("lagoon") at the lower left corner of the screen. Despite the round-trip costs incurred to move the pen to a tool lagoon, a keystroke-level analysis of the true cost of each technique reveals the local marking menu is not significantly faster.
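
For readers unfamiliar with keystroke-level modeling, the sketch below shows the kind of operator-level accounting such an analysis performs: each step of an interaction is assigned a standard operator time and the totals are summed. The operator durations are the commonly cited KLM estimates; the example operator sequence is hypothetical and is not the paper's model of either technique.

```python
# Sketch of keystroke-level (KLM) accounting. Operator durations are the
# commonly cited KLM estimates; the example sequence is illustrative only.
KLM = {
    "K": 0.20,   # keystroke / button press (s)
    "P": 1.10,   # point to a target (s)
    "H": 0.40,   # home hand on a device (s)
    "M": 1.35,   # mental preparation (s)
}

def klm_time(sequence: str) -> float:
    """Total predicted time for a string of KLM operators, e.g. 'MPKP'."""
    return sum(KLM[op] for op in sequence)

# e.g. prepare, travel to a corner lagoon, tap a tool, travel back to the ink
print(f"{klm_time('MPKP'):.2f} s")
```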

New multi-modal annotation tools hold the promise of bringing the benefits of face-to-face contact to remote, asynchronous interactions. One such system, RichReview++, incorporates new techniques to improve access to the embedded multimedia commentary and allows users to annotate with new modalities, like deictic gestures. We conducted a series of field deployments of RichReview++ to characterize how these features benefit students using them for activities in the university classroom. Our first deployment investigated the use of multi-modal annotations as a way for instructors to provide feedback on student term papers. Our second deployment used annotations to support peer discussion about assigned readings in a graduate-level course. We found that presenting voice comments as interactive waveforms seems to facilitate students' consumption of the instructor's voice comments. We also found that gestural annotations clarify voice and give annotators a quick and lightwei...

Several systems have illustrated the concept of interactive fabrication, i.e., rather than working through a digital editor, users make edits directly on the physical workpiece. However, so far the interaction has been limited to turn-taking, i.e., users first perform a command and then the system responds with physical feedback. In this paper, we present a first step towards interactive fabrication that changes the workpiece while the user is manipulating it. To achieve this, our system FormFab does not add or subtract material but instead reshapes it (formative fabrication). A heat gun attached to a robotic arm warms up a thermoplastic sheet until it becomes compliant; users then control a pneumatic system that applies either pressure or vacuum, thereby pushing the material outwards or pulling it inwards. Since FormFab reshapes the workpiece while users are moving their hands, users can interactively explore different sizes of a shape with a single interaction.

2017 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2017
The advent of high-speed input sensor and display technologies and the drive for faster interactive response suggests that human-computer interaction (HCI) task processing deadlines of a few milliseconds or less may be required in future handheld devices. At the same time, users will expect the same, if not better, battery life than today's devices under these more stringent response requirements. In this paper, we present a toolbox for exploring the design space of HCI event processors. We first describe the simulation platform for interactive environments that runs mobile user interface code with inputs recorded from human users. We validate it against a hardware platform from prior work. Given system-level constraints on latency, we demonstrate how this toolbox can be used to design a custom heterogeneous event processor that maximizes battery life. We show that our toolbox can pick design points that are 1.5–2.5x more energy-efficient than general-purpose big.LITTLE archite...
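
The core selection problem the toolbox supports can be illustrated with a toy design-space search: among candidate event-processor configurations, keep those that meet a latency deadline and pick the one with the lowest energy per event. The candidate names and numbers below are made up; the toolbox's actual simulation models and inputs are not shown here.

```python
# Toy design-space selection under a latency deadline; all values hypothetical.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    name: str
    latency_ms: float   # worst-case HCI task latency
    energy_uj: float    # energy per processed input event

def pick_design(points, deadline_ms):
    """Lowest-energy design point that meets the deadline, or None."""
    feasible = [p for p in points if p.latency_ms <= deadline_ms]
    return min(feasible, key=lambda p: p.energy_uj) if feasible else None

candidates = [
    DesignPoint("big_core", 1.2, 95.0),
    DesignPoint("little_core", 4.8, 40.0),
    DesignPoint("custom_accel", 2.5, 18.0),
]
print(pick_design(candidates, deadline_ms=3.0))   # -> custom_accel
```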

Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019
We created a quiz-based intervention to help secondary school students in Cameroon with exam practice. We sent regularly-spaced, multiple-choice questions to students' own mobile devices and examined factors which influenced quiz participation. These quizzes were delivered via either SMS or WhatsApp per each student's preference. We conducted a 3-week deployment with 546 students at 3 schools during their month of independent study prior to their graduating exam. We found that participation rates were heavily impacted by trust in the intervening organization and perceptions of personal security in the socio-technical environment. Parents also played a key gate-keeping role on students' digital activities. We describe how this role, along with different perceptions of smartphones versus basic phones, may manifest in lower participation rates among WhatsApp-based users as compared to SMS. Finally, we discuss design implications for future educational interventions that target students' personal cellphones outside of the classroom. CCS CONCEPTS • Human-centered computing → Empirical studies in ubiquitous and mobile computing; Mobile phones; • Applied computing → Computer-assisted instruction.

Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017
Dominant approaches to programming education emphasize program construction over language comprehension. We present Reduct, an educational game embodying a new, comprehension-first approach to teaching novices core programming concepts which include functions, Booleans, equality, conditionals, and mapping functions over sets. In this novel teaching strategy, the player executes code using reduction-based operational semantics. During gameplay, code representations fade from concrete, block-based graphics to the actual syntax of JavaScript ES2015. We describe our design rationale in depth and report on the results of a study evaluating the efficacy of our approach on young adults (18+) without prior coding experience. In a short timeframe, novices demonstrated promising learning of core concepts expressed in actual JavaScript. We discuss ramifications for the design of future computational thinking games.
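
To give a feel for "reduction-based operational semantics", here is a tiny reducer for a boolean/conditional term language: a program evaluates by repeatedly rewriting redexes until only a value remains. Reduct's own term language and reduction rules are richer and target JavaScript ES2015; this sketch only mirrors the general idea and its representation is an assumption.

```python
# Tiny reducer for a boolean/conditional term language (illustrative only).
def reduce_expr(expr):
    """Rewrite redexes in a nested-tuple expression; booleans are values."""
    if isinstance(expr, bool):
        return expr
    op = expr[0]
    if op == "not":
        arg = reduce_expr(expr[1])
        return (not arg) if isinstance(arg, bool) else ("not", arg)
    if op == "eq":
        a, b = reduce_expr(expr[1]), reduce_expr(expr[2])
        return (a == b) if isinstance(a, bool) and isinstance(b, bool) else ("eq", a, b)
    if op == "if":
        cond = reduce_expr(expr[1])
        if isinstance(cond, bool):
            return expr[2] if cond else expr[3]   # pick a branch, still unreduced
        return ("if", cond, expr[2], expr[3])
    raise ValueError(f"unknown operator: {op}")

def evaluate(expr):
    """Repeatedly reduce until the expression is a value."""
    while not isinstance(expr, bool):
        expr = reduce_expr(expr)
    return expr

print(evaluate(("if", ("eq", True, ("not", False)), True, False)))   # -> True
```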

Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017
Speech commenting systems have been shown to facilitate asynchronous online communication from educational discussion to writing feedback. However, the production of speech comments introduces several challenges to users, including overcoming self-consciousness and time-consuming editing. In this paper, we introduce TypeTalker, a speech commenting interface that presents speech as a synthesized generic voice to reduce speaker self-consciousness, while retaining the expressivity of the original speech with natural breaks and co-expressive gestures. TypeTalker streamlines speech editing through a simple textbox that respects temporal alignment across edits. A comparative evaluation shows that TypeTalker reduces speech anxiety during live-recording, and offers easier and more effective speech editing facilities than the previous state-of-the-art interface technique. A follow-up study on recipient perceptions of the produced comments suggests that while TypeTalker's generic voice may be traded off against a loss of personal touch, it can also enhance the clarity of speech by refining the original speech's speed and accent.
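
Text-aligned speech editing hinges on keeping word-level timestamps attached to the transcript so that deleting words also drops the corresponding audio spans. The sketch below shows a plausible minimal version of that bookkeeping; TypeTalker's actual editing model and data structures are not described in the abstract, so the representation here is an assumption.

```python
# Illustrative transcript/audio alignment for edit propagation (not TypeTalker's
# actual data model). Each word carries start/end times in seconds.

def apply_text_edit(words, kept_indices):
    """words: list of (text, start_s, end_s); kept_indices: indices surviving
    the edit. Returns the edited transcript and the audio spans to keep."""
    kept = [words[i] for i in sorted(kept_indices)]
    transcript = " ".join(w for w, _, _ in kept)
    audio_spans = [(start, end) for _, start, end in kept]
    return transcript, audio_spans

words = [("thanks", 0.0, 0.4), ("um", 0.4, 0.7), ("for", 0.7, 0.9), ("reading", 0.9, 1.5)]
print(apply_text_edit(words, [0, 2, 3]))   # drop the filler "um"
```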

Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 2016
We introduce a new form of low-cost 3D printer to print interactive electromechanical objects with wound-in-place coils. At the heart of this printer is a mechanism for depositing wire within a five degree of freedom (5DOF) fused deposition modeling (FDM) 3D printer. Copper wire can be used with this mechanism to form coils which induce magnetic fields as a current is passed through them. Soft iron wire can additionally be used to form components with high magnetic permeability which are thus able to shape and direct these magnetic fields to where they are needed. When fabricated with structural plastic elements, this allows simple but complete custom electromagnetic devices to be 3D printed. As examples, we demonstrate the fabrication of a solenoid actuator for the arm of a Lucky Cat figurine, a 6-pole stepper motor stator, a reluctance motor rotor and a Ferrofluid display. In addition, we show how printed coils which generate small currents in response to user actions can be used as input sensors in interactive devices.
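
A quick way to reason about what a printed coil can do is the ideal solenoid formula B = mu0 * n * I, where n is the turn density. The sketch below works through that formula; the turn count, coil length, and current are hypothetical, not measurements from the printed devices.

```python
# Ideal air-core solenoid field: B = mu0 * n * I (hypothetical numbers).
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)

def solenoid_field(turns: int, length_m: float, current_a: float) -> float:
    """Field (tesla) inside an ideal air-core solenoid."""
    return MU0 * (turns / length_m) * current_a

# e.g. 200 turns wound over 10 mm carrying 0.5 A -> about 12.6 mT
print(f"{solenoid_field(200, 0.010, 0.5) * 1000:.2f} mT")
```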

Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018
Emotions play a major role in how interpersonal conflicts unfold. Although several strategies and technological approaches have been proposed for emotion regulation, they often require conscious attention and effort. This often limits their efficacy in practice. In this paper, we propose a different approach inspired by self-perception theory: noticing that people are often reacting to the perception of their own behavior, we artificially change their perceptions to influence their emotions. We conducted two studies to evaluate the potential of this approach by automatically and subtly altering how people perceive their own voice. In one study, participants who received voice feedback with a calmer tone during relationship conflicts felt less anxious. In the other study, participants who listened to their own voices with a lower pitch during contentious debates felt more powerful. We discuss the implications of our findings and the opportunities for designing automatic and less perceptible emotion regulation systems.
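
As an offline illustration of the kind of voice manipulation used in the second study (lowering pitch), the snippet below applies a two-semitone downward shift with librosa. The studies themselves ran in real time with their own processing chain; the file name and shift amount here are placeholders.

```python
# Offline pitch lowering with librosa (placeholders, not the studies' pipeline).
import librosa
import soundfile as sf

y, sr = librosa.load("debate_clip.wav", sr=None)              # hypothetical input file
y_lower = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)   # shift down two semitones
sf.write("debate_clip_lower.wav", y_lower, sr)
```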