Real Time Sign Language Processing System
2018, International Journal for Research in Applied Science and Engineering Technology
https://doi.org/10.22214/IJRASET.2018.4414…
Abstract
Sign language is the primary means of communication for people with speech and hearing impairments (i.e., deaf and mute people). The main purpose of this project is to ease communication for mute people in situations where they must interact with people who do not understand sign language. The idea is to build a system through which mute people can communicate with everyone else using their normal gestures. The system does not require a perfectly black background; it works on any background. The project uses artificial intelligence, machine learning, and image processing to identify the sign language gestures used by deaf people and convert them into text so that hearing people can understand.
Related papers
Communication between deaf-mute and hearing people has always been a challenging task. This paper reviews different methods adopted to reduce the communication barrier by developing an assistive device for deaf-mute persons. Although a number of assistive tools already exist, advances in embedded systems make it possible to design and develop a sign language translator system to aid mute people. The main objective is to develop a real-time embedded device that helps the physically challenged communicate effectively.
For many deaf and mute people, sign language is the principal means of communication, and hearing people often face problems when communicating with them. The proposed system automatically recognizes sign language to help hearing people communicate more effectively with speech-impaired people. It recognizes hand signs with the help of specially designed gloves, and the recognized gestures are translated into text and voice in real time, reducing the communication gap between hearing and speech-impaired people.
Hearing-impaired people generally use sign language for communication, but they find it difficult to communicate with others who do not understand sign language. This project aims to lower this barrier by developing an electronic device that translates sign language into text, making communication between the mute community and the general public possible. Computer recognition of sign language is an important research problem for enabling communication with hearing-impaired people. The project introduces an efficient and fast algorithm for identifying the number of fingers opened in a gesture representing text of the binary sign language. The system requires neither the hand to be perfectly aligned with the camera nor any specific background. It uses image processing to identify English alphabetic signs used by deaf people to communicate. The basic objective is to develop a computer-based intelligent system that enables hearing-impaired people to communicate with others using their natural hand gestures. The design combines image processing, machine learning, and artificial intelligence to take visual input of sign language hand gestures and generate easily recognizable output. The system thus acts as a dynamic translator between sign language and spoken language, making communication between hearing-impaired and hearing people both effective and efficient. It is implemented for binary sign language, but with prior image processing it can detect any sign language.
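The finger-counting idea described above can be sketched minimally. The helper below is a hypothetical illustration, not the paper's actual algorithm: it counts raised fingers as runs of foreground pixels along a scan line through a binary hand silhouette (real systems typically use contours and convexity defects).

```python
# Toy sketch of finger counting in a binary hand silhouette.
# A horizontal scan line near the fingertips crosses each raised finger
# once, so counting runs of foreground pixels on that line approximates
# the number of open fingers.

def count_fingers(mask, scan_row):
    """Count runs of 1s along the given row of a binary mask."""
    runs = 0
    prev = 0
    for px in mask[scan_row]:
        if px == 1 and prev == 0:   # rising edge: entered a finger
            runs += 1
        prev = px
    return runs

# Synthetic mask: three "fingers" crossing row 0, merging into a palm.
mask = [
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1],
]

print(count_fingers(mask, 0))  # 3
```

The scan row would in practice be chosen relative to the detected hand bounding box rather than hard-coded.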
International Journal of Engineering Applied Sciences and Technology
In daily life we encounter many people who are deaf or mute and who face difficulty communicating with each other and with hearing people. Sign language is the medium used to bridge this gap, but since many hearing people are not familiar with it, communication remains difficult. A real-time application is therefore required to translate sign language into text or speech so that hearing people can understand what a deaf or mute person is trying to say, and the same application should let hearing people communicate in the other direction. Many communication techniques exist on the market, but they do not give a proper real-time solution for converting sign language to voice or text, and some are sensor-based, costly, and complex. The proposed system is a new technique of virtual talking without any sensors. It uses an image processing technique, the Histogram of Oriented Gradients (HOG), together with an Artificial Neural Network (ANN); these two techniques are used to train the system and build a database that is stored in memory. A web camera captures continuous images of different sign gestures and sends them to a Raspberry Pi, which recognizes each image by comparing it with the database stored in its memory. This is how the sign language of deaf and mute people is detected. In the other direction, a Voice Recognition Module (VRM) converts the speech of a hearing person to text, and with the help of the ANN that text is converted into the corresponding signs. In this way, two-way communication between deaf-mute people and hearing people takes place.
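The HOG feature stage of the pipeline above can be sketched as follows. This is a simplified, hypothetical HOG (unsigned orientations, per-cell histograms, a single global normalisation, no block normalisation) written with NumPy only; a real system would use a library implementation and feed the resulting descriptors to the ANN.

```python
import numpy as np

def hog_features(img, cell=4, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, concatenated and L2-normalised."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)

img = np.zeros((8, 8))
img[:, 4:] = 1.0              # vertical step edge as a toy "gesture"
desc = hog_features(img)      # 2x2 cells x 9 bins = 36-dim descriptor
```

Descriptors like `desc` would be computed for every training image and stored as the gesture database the Raspberry Pi compares against.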
ITM Web of Conferences
Human beings know each other and interact through thoughts and ideas, and the best way to present an idea is through speech. Some people do not have the power of speech; the only way they can communicate with others is through sign language. Nowadays technology has reduced this gap through systems that convert the sign language used by these people into speech. Sign language recognition (SLR) and gesture-based control are two major applications of hand gesture recognition technologies. On the other side, the controller converts the sign language into text and speech with the help of text-to-speech and analog-to-digital conversion. Mute people throughout the world use sign language for communication.
2015
Communication is the primary medium people use to interact with each other, and a problem arises when hearing people and deaf-mute people want to communicate. Sign language is the language used for communication by deaf and mute people, and this project aims to reduce the communication barrier between them and hearing people. The sign language interpreter we developed uses a hand glove fitted with flex sensors that can interpret the English letters and numbers of American Sign Language (ASL) and some one-handed letters of Indian Sign Language (ISL). Index Terms: Gesture, Flex Sensor, ARM7TDMI, Text-to-Speech Conversion
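A minimal sketch of how glove readings might be mapped to letters, assuming hypothetical calibrated flex values per letter (the ARM7-based implementation in the abstract is not shown): classify a new reading by its nearest calibrated template.

```python
# Hypothetical flex-sensor templates: one calibrated reading per letter,
# one value per finger (0.0 = straight, 1.0 = fully bent). The values
# and letters here are illustrative, not from the paper.
TEMPLATES = {
    "A": (0.0, 1.0, 1.0, 1.0, 1.0),   # thumb out, four fingers bent
    "B": (1.0, 0.0, 0.0, 0.0, 0.0),   # thumb bent, four fingers straight
    "V": (1.0, 0.0, 0.0, 1.0, 1.0),   # index and middle straight
}

def classify(reading):
    """Return the letter whose template is nearest in squared
    Euclidean distance to the raw sensor reading."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(TEMPLATES, key=lambda letter: dist(TEMPLATES[letter]))

print(classify((0.9, 0.1, 0.0, 0.8, 0.9)))  # nearest template is "V"
```

Real glove firmware would first calibrate per user and debounce readings over time before emitting a letter.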
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
Rising incidence of visual and hearing impairment is a matter of global concern. India alone has around 12 million visually impaired people, and over 21 million people who are blind, deaf, or both. For the blind there are solutions such as eye donation, and hearing aids exist for the deaf, but not everyone can afford them. The purpose of our project is to provide an effective method of communication between hearing people and impaired people. According to a research article in the "Global World" on January 4, 2017, with a deaf community of millions, hearing India is only just beginning to sign. To address this problem, we present a model based on modern technologies such as machine learning, image processing, and artificial intelligence to provide a potential solution and bridge the communication gap. Signing is the most widely accepted means of communication for impaired people. The model gives output as text and voice, in regional languages as well as English, so it can reach the vast majority of the population in both rural and urban India. This project will provide accessibility, convenience, and safety to our visually impaired brothers and sisters, who are often looked down upon by society merely because of their disability.
Deaf and mute people use sign language for their communication, but it is difficult for hearing people to understand. The aim of this project is to reduce the barrier between them, and the main objective is to produce an algorithm that recognizes hand gestures accurately. The project uses a hand-glove model for gesture recognition: a MEMS sensor detects hand motions based on stress, and the measured values are stored in the microcontroller's memory unit. The output voices are pre-stored in a voice processor unit; depending on the hand motion, the output is displayed on the LCD and also played through the speaker. The project thus enables deaf and mute people to communicate with hearing people.
International Journal of Recent Technology and Engineering (IJRTE), 2020
Deaf-mute people can communicate with hearing people with the help of sign languages. Our project objective is to analyse and translate the sign language, that is, hand gestures, into text and voice. A real-time image made by a deaf-mute person is captured and given as input to the pre-processor. Feature extraction is then performed using Otsu's algorithm, and classification is done with an SVM (Support Vector Machine). After the text for the corresponding sign has been produced, the obtained text is converted into voice using a MATLAB function. Thus, the hand gestures made by deaf-mute people are analysed and translated into text and voice for better communication.
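Otsu's algorithm, used above to segment the hand before the SVM stage, picks the gray-level threshold that maximises between-class variance. A NumPy-only sketch (the abstract's system uses MATLAB; the function and variable names here are illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximising between-class variance
    for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = 0
    sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                  # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0                # pixels above t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                 # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background (10) and bright hand (200).
gray = np.array([10] * 60 + [200] * 40, dtype=np.uint8).reshape(10, 10)
t = otsu_threshold(gray)
mask = gray > t                        # binary hand segmentation
```

The binary `mask` would then be the input to shape-feature extraction for the SVM classifier.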
2016
Communication between hearing people and handicapped people such as the deaf, the mute, and the blind has always been a challenging task. They often find it difficult to interact with hearing people through gestures, as only a very few gestures are recognized by most people. Since deaf people cannot talk like hearing people, they have to depend on some form of visual communication most of the time. Sign language is the primary means of communication in the deaf and mute community. Like any other language, it has its own grammar and vocabulary, but it uses the visual modality for exchanging information. The importance of sign language is emphasized by growing public approval and funding for international projects. Interesting technologies are being developed for speech recognition, but no real commercial product for sign recognition is currently on the market. So, to take this field of research to an...
