Kinect’s Augmented Real Time Human Interaction Mirror
Abstract
This paper presents an intuitive and engaging Human Interaction Mirror (HIM) built around the Microsoft Kinect sensor. Our work is based on extracting the human body from the video stream and enabling user interaction. The fusion of the user's body motion with a 3D cloth model is displayed virtually in the HIM mirror. The virtual image is created by combining a skeletal-tracking algorithm with a PCA-based face-recognition algorithm. The 3D cloth is matched to the superimposed image using skin-color detection, so the clothes adapt to the body of the user standing in front of the interactive mirror. The Kinect SDK is used for the fundamental functions and for the tracking process, and the entire application is developed on the .NET Framework.
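The abstract does not detail its skin-color detection step. A minimal sketch of one common approach is thresholding pixels in the YCbCr color space; the threshold ranges below are classic values from the skin-detection literature, not the paper's own, so treat this as an illustrative assumption:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to full-range YCbCr (ITU-R BT.601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin if its chrominance falls in a commonly
    used skin cluster: Cb in [77, 127] and Cr in [133, 173]."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_mask(image):
    """Boolean mask for an image given as rows of (r, g, b) tuples."""
    return [[is_skin(*px) for px in row] for row in image]
```

A per-pixel mask like this can locate exposed skin (face, hands) so that the virtual cloth is composited only over the clothing regions of the mirrored image.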
Related papers
2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), 2017
AIP Conf. Proc. 2845, 030008, 2023
Motion detection and tracking systems are used to quantify the mechanics of motion in many fields of research. Industrial systems are highly accurate but expensive and complicated to use, and they can still be imprecise for fine, activity-sensitive motions. The Microsoft Kinect sensor is a practical, low-cost device for accessing skeletal data, so it can be used to detect and track the body in domains such as medicine, sports, and motion analysis; it offers good accuracy and can track up to six people in real time. Research uses single or multiple Kinect devices together with different classification methods and approaches, such as machine-learning algorithms and neural networks. Some studies use public databases such as CAD-60, MSRAction3D, and 3D Action Pairs, while others build their own databases collected from subjects of different ages and genders. Some work connects a Kinect device to a robot to imitate movements, or performs the process in virtual reality through an avatar built with the Unreal Engine. In this paper, we present the related work on this subject, along with the methods, databases, and applications used.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2014
In order to conduct ergonomic assessments of products using real-time data on human behavior in digital virtual systems, this study developed a Kinect-based system for real-time acquisition and simulation of human behavior. The paper studied joint-point matching, the digital-human data structure, and motion-data calculation. Combined with the Open Inventor graphics engine, the research used human-behavior data to drive dynamic simulation of a digital human model (DHM). The accuracy of the Kinect data-processing method was also analyzed against manual measurement. The results show that the method achieves accurate real-time acquisition of human behavior. The method has been integrated into the human-factors analysis software SAMMIE, where case studies were conducted with good results.
IAEME PUBLICATION, 2020
The Human Machine Interface (HMI) based framework describes the design and development of a modern smart mirror that serves as an unobtrusive interface to the ambient home environment. The mirror provides natural modes of interaction through which occupants can control household smart appliances and access personalized services. The smart mirror demonstrates an extended home-automation framework that combines household appliances with various customized information services. In this paper, the HMI is built on a smart-mirror framework that presents weather forecasts, the date and time, a clock, news feeds, and user information. These data are fetched from the web and rendered using Python, which provides the programming and display logic. An ARM processor connects to the cloud, gathers data from the web, and displays the information on the mirror. For security, the mirror is protected by a face-recognition system. Convolutional-neural-network-based face recognition is employed, and experimental results showed that 100% facial-recognition accuracy was achieved.
With the advancement of technology, the low-cost Microsoft Kinect sensor has revolutionized the field of 3D vision. The Kinect gives computers eyes, ears, and a brain that respond to simple hand gestures and speech. It has brought a new era of Natural User Interfaces (NUI) in gaming, and the associated SDK provides access to its powerful sensors, which is especially valuable for research. Thousands of people around the world play with its built-in multimodal sensors, although a complete Kinect system still requires a physical device to do its work. The Kinect can recognize individual users by who is talking and what they say, and the information it provides opens new approaches to fundamental problems in computer vision. The sensor incorporates several pieces of advanced sensing hardware: most notably a depth sensor, a color camera, and a four-microphone array, which together provide full-body 3D motion capture along with facial-recognition and voice-recognition capabilities. With its robust 3D sensing for face recognition, the Kinect can be used to build effective rehabilitation systems. Beyond gaming, it has applications in many fields, such as clothing, medical imaging, and effective presentations in organizations. The innovation behind the Kinect hinges on advances in skeletal tracking.
Augmenting deformable surfaces such as cloth and the body in real video is a challenging task. This paper presents a system for cloth and body augmentation in single-view video. The system allows users to change their clothing by changing the color, the texture, or the whole garment; it augments the user with virtual clothes, so users can enjoy trying on any other garment they want. As a prerequisite, the user wears a special suit and passes through our motion-capture system, which records the user's movements. From the captured data, an animated 3D character model is created, which serves as the new body. The model is rendered with the new clothing but without the head; we extract the user's real face and place it on the virtual model. This system can be used in film production and advertisement.
Technology is becoming more natural and intuitive. People already use gestures and speech to interact with their PCs and other devices. In this paper, we have developed an application interface to recognize human body gestures and relay those gestures, detected through a Kinect sensor, to a Lego robot. The main contribution of this paper is the implementation of the system and the development of the application interface. Visual Studio 2013 and C# are used to control the Lego robot via gestures detected through the Kinect API. Finally, we test the system and present the results graphically.
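The gesture-to-command mapping described above can be sketched by comparing tracked joint positions. This is a hypothetical illustration in Python, not the paper's C# code; the joint names, poses, and command strings are all assumptions:

```python
def classify_gesture(joints):
    """Map a tracked skeleton to a simple robot command.

    `joints` maps a joint name to an (x, y, z) position in meters,
    with the y axis pointing up.
    """
    head = joints["head"]
    right = joints["hand_right"]
    left = joints["hand_left"]
    if right[1] > head[1] and left[1] > head[1]:
        return "STOP"      # both hands raised above the head
    if right[1] > head[1]:
        return "FORWARD"   # only the right hand raised
    if left[1] > head[1]:
        return "BACKWARD"  # only the left hand raised
    return "IDLE"          # no commanding pose detected
```

In a real system, each recognized command would then be sent to the Lego robot over its control interface; thresholds and a short debounce window are usually added so that noisy joint estimates do not trigger spurious commands.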
IOSR Journal of Computer Engineering, 2017
In this work we implemented a system that obtains human-body measurements without physical contact with the user. The implementation contains methods for obtaining 3D measurements using the Kinect v2 depth sensor. At this initial stage, the system can detect and obtain personalized body parameters such as height, shoulder length, neck-to-hip length, hip-to-leg length, and arm length from the relevant skeleton joints, as well as the front perimeter at the chest, stomach, and waist from the relevant 3D pixels. According to the results, the height and arm-length measurements agree well with the actual values, with errors below 5% (measurements taken in centimeters). A maximum error of 12% occurred when calculating the front perimeter at the chest. The experimental results from the developed system are in an acceptable range for dressing purposes and are ultimately helpful for designing a real-time 3D virtual dressing room.
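The skeleton-based length measurements above amount to Euclidean distances between tracked 3D joint positions, summed along joint chains. A minimal sketch, with hypothetical joint names rather than the authors' actual code:

```python
import math

def joint_distance(a, b):
    """Euclidean distance between two 3D joint positions in meters."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def segment_length(joints, chain):
    """Sum the distances along a chain of joint names,
    e.g. shoulder -> elbow -> wrist for arm length."""
    return sum(joint_distance(joints[a], joints[b])
               for a, b in zip(chain, chain[1:]))
```

For example, arm length would be `segment_length(joints, ["shoulder", "elbow", "wrist"])`; height and the other linear parameters follow the same pattern over different joint chains, while the front perimeters require the depth pixels rather than the skeleton.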
Existing video-communication systems, used in private business or video teleconferencing, show only part of each user's body on the display. This lacks realism: showing only part of the body does not make users feel they are communicating close to one another; rather, it makes them feel the other party is far away. Furthermore, although these systems offer file transfer, such as sending and receiving file data, the manipulation is not intuitive: it relies only on a mouse or a touch display. To solve these problems, we propose a 3D communication system based on Kinect and an HMD (Head-Mounted Display) that provides communication with realistic sensation and intuitive manipulation. The system can show the whole body of each user on the HMD, as if they were in the same room, via 3D reconstruction. It also enables users to transfer and share information through intuitive manipulation of virtualized objects directed toward the other users. The result of this paper is a system that extracts the human body using Kinect, reconstructs the extracted body on the HMD, and recognizes users' hands so that AR objects can be manipulated by hand.
International Journal of Pattern Recognition and Artificial Intelligence
Microsoft Kinect, a low-cost motion-sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has commanded intense interest in research and development on Kinect technology. In this article, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review uses of Kinect technology in a variety of areas, including healthcare, education and the performing arts, robotics, sign-language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies they use, and review the literature on human-motion-recognition techniques used in Kinect applications. We provide a classification of motion-recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer-vision tasks such as segmentation, object detection, and human pose estimation.