Speech to Sign Language
2020
https://doi.org/10.13140/RG.2.2.10331.72487…
Abstract
Goal: to bridge the communication gap between hearing-impaired people in India and the hearing population.
Related papers
Sign languages are natural languages that use visual means of expression for communication in everyday life. In particular, sign language is often the only means of communication for the hearing impaired, serving as a replacement for speech among deaf and mute people. Several research efforts are under way on sign language in order to make communication between a deaf person and a hearing person easier. Examples of sign languages include American Sign Language, British Sign Language, Indian Sign Language and Japanese Sign Language. In general, the semantic meanings of the language components differ across these sign languages, but some signs have a universal syntax. For example, a simple one-handed gesture expressing 'hi' or 'goodbye' has the same meaning all over the world and in all forms of sign language. This paper outlines the current status of sign language and the Deaf community in India, focusing on: a) what sign language is, b) what the existing problems are, c) what actions are being undertaken or planned that will hopefully lead to solutions, and d) the tools that will be used.
IRJET, 2022
Sign language is an integral part of human communication, as it allows people to communicate with the hard-of-speaking and hard-of-hearing community and understand them better. However, not everyone is capable of using sign language, which creates a barrier between the two groups: one finds it hard to communicate without an interpreter. With the help of deep learning and machine learning systems, we can eliminate such barriers. The purpose of our machine learning project is to create a web/phone camera based sign language recognition and translation system that converts sign language gestures to text and vice versa in real time. Such systems can be implemented in two ways: vision-based or glove-based. Capturing and translating signs from the real world is the core objective of this project. A Convolutional Neural Network (CNN) is used to implement our project. An OpenCV video stream captures the real-time gestures through the web camera or the phone camera. The preprocessed images are then fed to the Keras CNN model, and the output is text predicting the sign. Not only does each country have its own sign language, but there are also many regional sign languages. Due to the Covid-19 pandemic, the alternative to normal communication is video calling, FaceTime, etc. Hard-of-speaking and hard-of-hearing people are not able to use such facilities effectively, causing a hindrance in communication. Our paper aims to solve this problem and proposes a system for the translation of sign language using a webcam, mic, smart mobile phones, etc.
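As a rough illustration of the pipeline this abstract describes (camera frame in, sign text out), the sketch below shows the preprocessing step between the OpenCV stream and the Keras model, plus the decoding of the model's softmax output into text. The input size, label set, and function names are assumptions for illustration, not the paper's actual code; only numpy is used so the sketch stays self-contained.

```python
import numpy as np

# Hypothetical label set; the real model would be trained on full sign vocabularies.
SIGN_LABELS = ["hello", "thanks", "yes", "no"]

def preprocess(frame, size=64):
    """Grayscale, resize (nearest-neighbour) and normalise one H x W x 3 frame,
    standing in for the cv2.cvtColor / cv2.resize calls before model.predict."""
    gray = frame.mean(axis=2)                    # crude grayscale conversion
    h, w = gray.shape
    rows = np.arange(size) * h // size           # nearest-neighbour row indices
    cols = np.arange(size) * w // size           # nearest-neighbour column indices
    small = gray[rows[:, None], cols] / 255.0    # scale pixel values to [0, 1]
    return small[np.newaxis, :, :, np.newaxis]   # (1, size, size, 1) batch for Keras

def decode(probabilities):
    """Map the CNN's softmax output vector to the predicted sign's text."""
    return SIGN_LABELS[int(np.argmax(probabilities))]
```

In the real-time loop, each `preprocess(frame)` batch would be passed to the Keras model and `decode` applied to its prediction to produce the on-screen text.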
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
A total of 63 million people in India have hearing impairment, which is a common cause of disability. Due to communication barriers, these individuals are at risk of reduced cognitive skills and language deficits, which may contribute to their poor general and oral health. Instructing these individuals about oral health concerns, treatment options and prognosis is often challenging for dental professionals. An Automatic Speech to Sign Language Translation Software (ASSiST) was developed to improve brushing technique among children with hearing impairment. The Automatic Speech Recognition (ASR) system was programmed using the Linear Predictive Coding (LPC) algorithm for extraction of speech features, with Artificial Neural Networks as the classifier and recognizer. The tool was tested for imparting chair-side oral hygiene instructions, in the form of a sequence-based brushing technique, to children: instructions spoken by the dentist are translated into the respective regional sign language and displayed on the screen.
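The LPC front end mentioned here is typically computed with the autocorrelation (Levinson-Durbin) method; the sketch below is a generic illustration of that standard algorithm, not ASSiST's actual code, and assumes numpy is available.

```python
import numpy as np

def lpc(signal, order):
    """Linear Predictive Coding coefficients via the autocorrelation
    (Levinson-Durbin) method. Returns a[0..order] with a[0] fixed at 1,
    so that the prediction-error filter is A(z) = sum_j a[j] z^-j."""
    n = len(signal)
    # Autocorrelation at lags 0..order only (all the recursion needs).
    r = np.array([signal[: n - k] @ signal[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                   # prediction-error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                           # reflection coefficient
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]       # update lower-order coefficients
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k                       # shrink the error energy
    return a
```

For speech, the coefficients of short windowed frames (the feature vectors) would then be fed to the neural-network classifier.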
IRJET, 2021
Communication plays a critical role for people and is regarded as a life skill. With this important aspect of life in mind, we present our project article, which focuses primarily on supporting people who cannot speak or hear. Our research work leads to improved contact with the deaf and the mute. Sign language conveys meaning through visually transmitted sign patterns: combinations of hand gestures, arm and body movements, and facial expressions. Our system is able to recognize sign-language gestures, and these signs can be used to interact with the hearing impaired. Our article proposes a program that allows ordinary people to communicate effectively with those who are hard to understand. In this case, we implement the Indian Sign Language (ISL) method using a microphone and a camera. The ISL translation system translates voice into Indian Sign Language: it uses a microphone to get speech input (from ordinary people) or continuous video clips, which the application interprets.
Sign language is the native language of deaf and dumb people, which they prefer to use in their daily life. This paper describes the architecture of a smartphone-based Indian sign language maker for aiding deaf and dumb people (special people). The system comprises two features: speech to gesture (Indian Sign Language) and text to speech. Speech-to-gesture conversion is done with the help of a speech recognizer, a semantic analyzer, a gesture sequence generator and a gesture player. Text to speech is done with the Google text-to-speech engine. The aim of the project is to aid special people by translating speech into Indian Sign Language and converting text to voice. Speech recognition and semantic analysis are done with the help of Google voice. Gesture sequence generation and gesture playing are where we focused our work. The gesture sequence is generated from the semantic analyzer's output, in a form the gesture sequence generator can recognize. The gesture sequence generator produces sequences that are read by the gesture player, an animated human agent that performs hand signs corresponding to the input. The gesture sequence generator also learns from the user when speech is not properly recognized or words are not found in its dictionary. This paper is an initial work of the project, in which gesture generation is done for particular domains such as railway, airport and bus transport kiosks, where a deaf person can respond to phone calls from various customers. Future work will be a complete system that can produce hand signs for any domain.
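The gesture-sequence-generator stage this abstract describes (dictionary lookup with a fallback and a learning path) can be sketched as below. The gesture IDs, the railway-kiosk vocabulary, and the fingerspelling fallback for unknown words are all illustrative assumptions, not the paper's actual design.

```python
# Hypothetical gesture dictionary for a railway-kiosk domain.
GESTURE_DICT = {"train": "G_TRAIN", "ticket": "G_TICKET", "platform": "G_PLATFORM"}

def gesture_sequence(words, dictionary=GESTURE_DICT):
    """Turn recognised words into a gesture-ID sequence for the gesture player.
    Unknown words fall back to letter-by-letter fingerspelling gestures."""
    seq = []
    for word in words:
        w = word.lower()
        if w in dictionary:
            seq.append(dictionary[w])
        else:
            seq.extend(f"G_LETTER_{c.upper()}" for c in w if c.isalpha())
    return seq

def learn(word, gesture_id, dictionary=GESTURE_DICT):
    """User-correction path: add a word missing from the dictionary,
    so later sentences containing it map to a single gesture."""
    dictionary[word.lower()] = gesture_id
```

The animated agent would then play the gesture IDs in order, one clip or animation per entry.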
International Journal for Research in Applied Science and Engineering Technology IJRASET, 2020
Verbal communication is the only way people have interacted with each other over the years, but the case is different for the disabled. The barrier created between the impaired and the normal people is one of the setbacks of society. For the impaired people (deaf & mute), sign language is the only way to communicate. In order to help the deaf and mute communicate efficiently with normal people, an effective solution has been devised. Our aim is to design a system which analyses and recognizes various alphabets from a database of sign images. To accomplish this, the application uses various techniques of image processing such as segmentation & feature extraction. We use a machine learning technique, the Convolutional Neural Network, for detection of sign language. We crop away the background, keeping only the gesture, and then convert the gesture image to black & white in PNG format at 55*60 resolution. This system will help to eradicate the barrier between the deaf-mute & normal people. This system will standardize the Indian Sign Language in India. It will also improve the quality of teaching and learning in deaf and mute institutes. Just as Hindi is recognized as the standard language for conversation throughout India, ISL will be recognized as the standard sign language throughout India. The main aim of this work is serving mankind, which is achieved by providing better teaching and better learning.
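The crop/binarize/resize preprocessing step described above can be sketched as follows. The foreground threshold and the interpretation of "55*60" as 55 pixels wide by 60 pixels tall are assumptions for illustration; only numpy is used.

```python
import numpy as np

def to_sign_input(image, out_h=60, out_w=55):
    """Crop a grayscale image to the gesture's bounding box, binarise it to
    black & white, and resize (nearest-neighbour) to 55x60 for the CNN."""
    ys, xs = np.nonzero(image > 127)              # pixels treated as the gesture
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    bw = np.where(crop > 127, 255, 0)             # black & white scale
    h, w = bw.shape
    rows = np.arange(out_h) * h // out_h          # nearest-neighbour row indices
    cols = np.arange(out_w) * w // out_w          # nearest-neighbour column indices
    return bw[rows[:, None], cols]
```

A real pipeline would get the gesture mask from segmentation rather than a fixed threshold, but the crop-then-resize structure is the same.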
International Journal of Innovative Research in Science, Engineering and Technology (IJIRSET), 2023
Sign language is a universal way of communication for challenged people with speaking and hearing limitations. Multiple mediums are available to translate or recognize sign language and convert it to text. However, text-to-sign conversion systems have rarely been developed; this is often due to the scarcity of sign language dictionaries. Our project aims at creating a system that consists of a module that initially transforms voice input to English text and parses the sentence, after which Indian Sign Language grammar rules are applied. This is done by eliminating stop words from the reordered sentence. Indian Sign Language (ISL) does not sustain the inflections of the word; hence, stemming is applied to reduce the words to their root/stem class. All words of the sentence are then checked against the labels in the dictionary containing videos representing each of the words. The present systems are limited to straight conversion of words into ISL, whereas the proposed system is innovative, as it aims to rework these sentences into ISL according to its grammar in the real domain.
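The text-processing steps this abstract lists (stop-word removal, stemming, dictionary lookup of video labels) can be sketched as below. The stop-word set, the toy suffix-stripping stemmer (a stand-in for a real stemmer such as Porter's), and the video labels are illustrative assumptions, not the paper's actual resources.

```python
# Hypothetical stop-word set; real systems use a fuller list.
STOP_WORDS = {"am", "is", "are", "the", "a", "an", "to", "of"}

def stem(word):
    """Tiny suffix-stripping stemmer: ISL signs are looked up by root form."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

# Hypothetical dictionary mapping root words to sign videos.
ISL_VIDEOS = {"i": "i.mp4", "go": "go.mp4", "school": "school.mp4"}

def to_isl(sentence):
    """Drop stop words, stem the rest, and look up each root's sign video
    (None marks a word missing from the dictionary)."""
    words = [w.lower() for w in sentence.split()]
    roots = [stem(w) for w in words if w not in STOP_WORDS]
    return [ISL_VIDEOS.get(r) for r in roots]
```

A full system would also reorder the parsed sentence to ISL grammar before the lookup; the sketch covers only the per-word stages.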
