Touchless medical images interaction in surgery
2017, Proceedings of the 2nd Gamification & Serious Games Symposium (GSGS 17), 30 June–1 July 2017, Neuchâtel, Switzerland
Abstract
Surgeons are exploring novel human-machine interfaces, particularly those using augmented reality and depth sensors, to improve how they interact with medical images during surgery. This paper discusses the development of KiOP, a touchless user interface that employs the Microsoft Kinect 2.0 to let surgeons manipulate radiological images without physical contact, minimizing contamination risks and efficiency loss. The design challenges addressed include reliable gesture recognition, intuitive use, and operational sterility, with future applications envisioned in surgical training.
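The paper's implementation details are not reproduced here, but the interaction it describes follows a familiar pattern: hand positions reported by a depth sensor drive pan and zoom on the displayed image. The sketch below illustrates only that mapping; the Hand fields, gain, and zoom limits are illustrative assumptions, not KiOP's actual code.

```python
from dataclasses import dataclass

@dataclass
class Hand:
    x: float  # camera-space coordinates, metres
    y: float
    z: float

class ImageView:
    """Viewport state for a radiological image."""
    def __init__(self) -> None:
        self.pan_x, self.pan_y, self.zoom = 0.0, 0.0, 1.0

    def one_hand_pan(self, prev: Hand, curr: Hand, gain: float = 800.0) -> None:
        # One tracked hand: translate the image proportionally to hand motion.
        self.pan_x += (curr.x - prev.x) * gain
        self.pan_y += (curr.y - prev.y) * gain

    def two_hand_zoom(self, prev_dist: float, curr_dist: float) -> None:
        # Two hands: spreading or closing them scales the image, clamped
        # to a safe range so a tracking glitch cannot blow up the view.
        if prev_dist > 1e-6:
            self.zoom = min(8.0, max(0.25, self.zoom * curr_dist / prev_dist))
```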
Related papers
Journal of Clinical Monitoring and Computing, 2014
Surgeons need various types of information to be available rapidly, efficiently, and safely during surgical procedures, yet they must keep their hands free and cannot reach for a mouse to control applications without breaking sterility. They also need to record audio and video and to enter and save data. This paper describes a comprehensive operating room information system, MediNav, developed to address all of these issues. The system is compatible with Health Level 7 (HL7) and Digital Imaging and Communications in Medicine (DICOM), the standard for handling, storing, printing, and transmitting medical imaging information. In addition, a natural user interface (NUI) designed specifically for operating rooms supports touchless interaction with finger and hand tracking. The system both records procedural data automatically and presents the acquired information graphically from multiple perspectives. A prototype was tested in a live operating room at an Iranian teaching hospital, and contextual interviews and usability-satisfaction questionnaires were conducted to investigate how useful the proposed system could be. The results reveal that integrating these functions into a complete solution is key not only to streamlining data and workflow but also to maximizing the surgical team's effectiveness. It is now possible to collect and visualize medical information comprehensively and to access a management tool through a touchless NUI in a quick, practical, and harmless manner.
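MediNav's source is not shown in the abstract, but its DICOM side can be pictured with a short sketch. The example below uses the real pydicom library to load and sort a CT series the way such a viewer backend might; the folder name and the InstanceNumber-based ordering are assumptions for illustration.

```python
from pathlib import Path

import pydicom  # real library: pip install pydicom

def load_series(folder: str) -> list[pydicom.Dataset]:
    """Read every DICOM slice in a folder, sorted into acquisition order."""
    slices = [pydicom.dcmread(p) for p in Path(folder).glob("*.dcm")]
    slices.sort(key=lambda ds: int(getattr(ds, "InstanceNumber", 0)))
    return slices

series = load_series("ct_study")  # hypothetical folder name
if series:
    first = series[0]
    print(first.Modality, first.get("PatientID", "anonymous"),
          first.pixel_array.shape)  # pixel data as a NumPy array
```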
Surgical Innovation, 2020
Background. Touchless interaction devices have garnered increasing attention for intraoperative imaging interaction, but there are limited recommendations on which touchless interaction mechanisms should be implemented in the operating room. The objective of this study was to evaluate the efficiency, accuracy, and satisfaction of two current touchless interaction mechanisms, hand motion and body motion, for intraoperative image interaction. Methods. We used the TedCas plugin for the ClearCanvas DICOM viewer to display and manipulate CT images. Ten surgeons performed five image interaction tasks (step-through, pan, zoom, circle measure, and line measure) on three input devices: the Microsoft Kinect, the Leap Motion, and a mouse. Results. The Kinect shared similar accuracy with the Leap Motion for most of the tasks, but it had an increased error rate in the step-through task. The Leap Motion led to shorter task completion time than the Kinect and was preferred by the surgeons, especia...
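The study's headline metrics, task completion time and error rate per device, are straightforward to compute from logged trials. The snippet below is illustrative only; the trial tuples and field layout are assumptions, not the authors' data schema.

```python
from collections import defaultdict

# (device, task, seconds, had_error) -- hypothetical log entries
trials = [
    ("Kinect",      "step-through", 14.2, True),
    ("Leap Motion", "step-through",  9.8, False),
    ("Mouse",       "pan",           3.1, False),
]

times, errors = defaultdict(list), defaultdict(list)
for device, task, seconds, had_error in trials:
    times[device].append(seconds)
    errors[device].append(had_error)

for device in times:
    mean_t = sum(times[device]) / len(times[device])
    err_rate = sum(errors[device]) / len(errors[device])
    print(f"{device}: mean completion {mean_t:.1f} s, error rate {err_rate:.0%}")
```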
Revista Facultad de Ingeniería Universidad de Antioquia, 2017
Computer Supported Cooperative Work (CSCW), 2014
While surgical practices are increasingly reliant on a range of digital imaging technologies, the ability of clinicians to interact with and manipulate these digital representations in the operating theatre using traditional touch-based interaction devices is constrained by the need to maintain sterility. To overcome these concerns, a number of researchers have been developing ways of enabling interaction in the operating theatre through touchless techniques such as gesture and voice. While there have been important technical strides in the area, there has been little work on understanding the use of these touchless systems in practice. With this in mind, we present a touchless system developed for use during vascular surgery. We deployed the system in the endovascular suite of a large hospital for use in real procedures. We present findings from a study of the system in use, focusing on how, with touchless interaction, the visual resources were embedded and made meaningful in the collaborative practices of surgery. In particular, we discuss the importance of direct and dynamic control of the images by the clinicians in the context of talk and of other artefact use, as well as the work performed by members of the clinical team to make themselves sensable by the system. We discuss the broader implications of these findings for how we think about the design, evaluation, and use of these systems.
The International Journal of Medical Robotics and Computer Assisted Surgery
Background: Recent tele-mentoring technologies for minimally invasive surgery (MIS) augment the operative field with movements of virtual surgical instruments as visual cues. The objective of this work is to assess different user interfaces that effectively transfer the mentor's hand gestures to the movements of virtual surgical instruments. Methods: A user study was conducted to assess three user-interface devices (Oculus Rift, SpaceMouse, and Touch haptic device) under various scenarios. The devices were integrated with an MIS tele-mentoring framework for control of both manual and robotic virtual surgical instruments. Results: The user study revealed that the Oculus Rift is preferred during robotic scenarios, whereas the Touch haptic device is more suitable during manual scenarios. Conclusion: A user-interface device in the form of a finger-controlled stylus for pointing in 3D space is more suitable for manual MIS, whereas a user interface that can be moved and oriented easily in 3D space by wrist motion is more suitable for robotic MIS. Keywords: minimally invasive surgery, surgical simulations, tele-mentoring, user-interfaces, virtual surgical instruments.
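The core of any such interface is a mapping from the mentor's device pose to the virtual instrument's motion, usually with motion scaling and a clutch so the mentor can reposition the device without moving the instrument. The sketch below shows that mapping in minimal form; the class names and the 0.5 scale factor are illustrative assumptions, not the framework described in the paper.

```python
import numpy as np

class VirtualInstrument:
    def __init__(self) -> None:
        self.tip = np.zeros(3)  # instrument tip position, scene coordinates

class TeleMentorMapper:
    def __init__(self, scale: float = 0.5) -> None:
        self.scale = scale    # motion scaling, mentor device -> instrument
        self.last_pos = None  # device position on the previous frame

    def update(self, instrument: VirtualInstrument,
               device_pos: np.ndarray, clutch_held: bool) -> None:
        """Apply incremental, scaled device motion while the clutch is held."""
        if clutch_held and self.last_pos is not None:
            instrument.tip += self.scale * (device_pos - self.last_pos)
        # Releasing the clutch lets the mentor reposition without effect.
        self.last_pos = device_pos.copy() if clutch_held else None
```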
2008
In this paper we focus on the design of Computer Assisted Surgery (CAS) systems and, more generally, Augmented Reality (AR) systems that assist a user in performing a task on a physical object. Digital information or new actions are defined by the AR system to facilitate or enrich the natural way the user would interact with the real environment. We focus on the outputs of such systems, so that additional digital information is smoothly integrated with the user's real environment, by considering an innovative device for displaying guidance information: the mini-screen. We first motivate the choice of the mini-screen based on the ergonomic property of perceptual continuity, and then present a design space useful for creating mini-screen-based interaction techniques. Two versions of a Computer ASsisted PERicardial (CASPER) puncture application, as well as a computer-assisted renal puncture application, developed in our teams, are used to illustrate the discussion.
Future healthcare journal, 2022
Coupled with advances in federated, on-device computer vision, the convenience and ease of access of cameras integrated into existing computers and tablets will increase the uptake of touchless computing, in the form of gesture-recognition software, in healthcare for both clinicians and patients.
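As one concrete instance of such camera-based gesture recognition, an ordinary webcam and an on-device hand-tracking model are enough to obtain a touchless pointer. The sketch below uses the real OpenCV and MediaPipe libraries; treating the index fingertip as a cursor is an illustrative assumption.

```python
import cv2                 # real library: pip install opencv-python
import mediapipe as mp     # real library: pip install mediapipe

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)  # built-in camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[
            mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
        # Normalised [0, 1] image coordinates, usable as a touchless pointer.
        print(f"fingertip at ({tip.x:.2f}, {tip.y:.2f})")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break
cap.release()
```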
World Journal of Urology, 2012
Human Vision and Electronic Imaging XIII, 2008
Minimally invasive therapy (MIT) is one of the most important trends in modern medicine. It encompasses a wide range of therapies in videoscopic surgery and interventional radiology, all performed through small incisions. It reduces hospital stays by allowing faster recovery and offers substantially improved cost-effectiveness for hospitals and society. However, the introduction of MIT has also led to new problems. Manipulating structures within the body through small incisions reduces dexterity and tactile feedback, and it requires a different approach from conventional surgical procedures, since eye-hand coordination is based not on direct vision but predominantly on image guidance via endoscopes or radiological imaging modalities. ARIS*ER is a multidisciplinary consortium developing a new generation of decision support tools for MIT that augment visual and sensorial feedback. We present tools based on novel concepts in visualization, robotics, and haptics, providing tailored solutions for a range of clinical applications, with examples from radio-frequency ablation of liver tumours, laparoscopic liver surgery, and minimally invasive cardiac surgery. Demonstrators were developed with the aim of providing a seamless workflow for the clinical user conducting image-guided therapy.
