Deep Learning-Based Sign Language Communication System with Multi-Language Support
Pranathi Hegde, Nidhi N P, Bhanu M, Sahana P. Shankar
Published 2025 in IEEE India Conference

ABSTRACT
Speech impairment is a significant disability. To communicate with others, people with this limitation use sign language; however, they cannot communicate with those who do not understand it. Our initiative aims to bridge this communication gap. The primary goal of this work is to develop a vision-based system that recognises sign language motions in real time and translates them into multiple languages. We trained a CNN model on both spatial and temporal features extracted from real-time video sequences, achieving an accuracy of 97.5%. For multilingual translation, we employed the T5 transformer, achieving an average BLEU score of 0.193, a ROUGE-L score of 0.038, and 0.224 seconds per sentence for English-to-Hindi translation, and a BLEU score of 0.048, a ROUGE-L score of 0.022, and 0.434 seconds per sentence for English-to-Kannada translation.
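The abstract reports sentence-level BLEU scores for the translation step. As a rough illustration of how such a score is computed, here is a minimal pure-Python sketch of sentence-level BLEU with uniform n-gram weights, a tiny smoothing constant, and a brevity penalty. The authors' actual evaluation setup and tokenisation are not specified, so this is an assumption for illustration, not their code.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty.
    Whitespace tokenisation is an assumption for this sketch."""
    cand = candidate.split()
    ref = reference.split()

    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped (modified) precision: each candidate n-gram counts
        # at most as often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Tiny smoothing constant so log() is defined with zero matches.
        precisions.append(max(overlap, 1e-9) / total)

    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalise candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

An exact match scores 1.0, while an unrelated sentence scores near 0; production evaluations typically use a standard implementation (e.g. sacreBLEU or NLTK) rather than hand-rolled code.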
PUBLICATION RECORD
- Publication year: 2025
- Venue: IEEE India Conference
- Publication date: 2025-12-18
- Source metadata: Semantic Scholar