Research Project on 'Real-Time Translation of Sinhala Sign Language to Sinhala Text'
**Main Objective**
Bridging the communication barrier between the hearing-impaired community and the general public.
**Main Research Questions**
1. How to preprocess the video and determine the sign type in real time
2. How to classify static and dynamic gestures in real time
3. How to generate meaningful Sinhala text from the classified gestures
4. How to handle uninterpreted gestures
**Individual Research Questions**
1. Video Preprocessing and Sign Type Determination
How to identify key moments of a sign in a real-time video feed?
How to identify hands in a given image frame?
How to detect whether a sign is static or dynamic?
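The static-versus-dynamic decision above could be sketched as a simple motion test over a buffered window of hand landmarks. This is an illustrative assumption, not the project's confirmed method: it presumes per-frame hand landmarks are already available (e.g. from a hand-tracking library such as MediaPipe), and the threshold value is a placeholder, not a tuned parameter.

```python
# Sketch: deciding whether a sign is static or dynamic from hand-landmark
# motion. Assumes landmarks are already extracted per frame as (x, y)
# tuples; the threshold is illustrative, not tuned.
import math

def mean_displacement(frames):
    """Average per-landmark movement between consecutive frames.

    frames: list of frames, each a list of (x, y) landmark tuples.
    """
    if len(frames) < 2:
        return 0.0
    total, count = 0.0, 0
    for prev, curr in zip(frames, frames[1:]):
        for (px, py), (cx, cy) in zip(prev, curr):
            total += math.hypot(cx - px, cy - py)
            count += 1
    return total / count

def sign_type(frames, threshold=0.01):
    """Classify a buffered window of frames as 'static' or 'dynamic'."""
    return "dynamic" if mean_displacement(frames) > threshold else "static"

# A nearly motionless hand is labelled static; a moving one dynamic.
still = [[(0.5, 0.5), (0.6, 0.5)]] * 10
moving = [[(0.5 + 0.02 * t, 0.5), (0.6 + 0.02 * t, 0.5)] for t in range(10)]
print(sign_type(still))   # -> static
print(sign_type(moving))  # -> dynamic
```

Buffering a short window of frames before deciding also gives the pipeline a natural point at which to route the clip to the static or dynamic classifier.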
2. Static Gesture Classifier
How to create an efficient dataset?
How to train the model for accurate and efficient classification?
How to ensure accurate sign prediction and classification?
How to handle uninterpreted gestures?
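One way the last two questions could be approached together is a distance-based classifier with a rejection threshold: a gesture too far from every known sign is reported as uninterpreted rather than forced into the nearest class. The feature vectors, labels, and threshold below are illustrative placeholders, not the project's actual dataset or model.

```python
# Sketch: nearest-neighbour static-gesture classification with rejection
# of uninterpreted gestures. Dataset entries are placeholder vectors.
import math

def distance(a, b):
    """Euclidean distance between two flattened feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_static(features, dataset, reject_threshold=0.5):
    """Return the label of the closest training sample, or None when the
    gesture is too far from everything seen during training."""
    label, best = None, float("inf")
    for sample, sample_label in dataset:
        d = distance(features, sample)
        if d < best:
            best, label = d, sample_label
    return label if best <= reject_threshold else None

# Toy dataset: flattened landmark vectors mapped to placeholder sign labels.
dataset = [
    ([0.1, 0.2, 0.3], "ayubowan"),   # hypothetical label
    ([0.8, 0.7, 0.9], "sthuthiyi"),  # hypothetical label
]
print(classify_static([0.12, 0.21, 0.29], dataset))  # close -> "ayubowan"
print(classify_static([5.0, 5.0, 5.0], dataset))     # far -> None
```

A trained neural classifier would replace the nearest-neighbour search, but the same rejection idea applies there via a confidence threshold on the softmax output.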
3. Dynamic Gesture Classifier
How to create an efficient dataset?
How to train the model for accurate and efficient classification?
How to ensure accurate video classification?
How to handle uninterpreted gestures?
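For dynamic gestures, one candidate baseline is template matching with dynamic time warping (DTW), which tolerates differences in signing speed, again with a rejection threshold for uninterpreted sequences. The templates, labels, and threshold are illustrative assumptions, not the project's chosen model.

```python
# Sketch: matching a dynamic gesture (a sequence of per-frame feature
# vectors) against stored templates with dynamic time warping (DTW).
import math

def dtw(seq_a, seq_b):
    """Classic DTW distance between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]

def classify_dynamic(sequence, templates, reject_threshold=1.0):
    """Nearest template by DTW distance, or None if nothing is close."""
    label, best = None, float("inf")
    for template, template_label in templates:
        d = dtw(sequence, template)
        if d < best:
            best, label = d, template_label
    return label if best <= reject_threshold else None

# Toy template: a one-dimensional "wave" trajectory with a placeholder label.
templates = [([[0.0], [0.5], [1.0]], "wave")]
print(classify_dynamic([[0.0], [0.4], [0.9]], templates))  # -> "wave"
print(classify_dynamic([[5.0], [5.0], [5.0]], templates))  # -> None
```

A sequence model (e.g. an LSTM) would likely supersede this baseline, but DTW gives a cheap reference point when building the dataset.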
4. Generate Sinhala Text
How to generate Sinhala text?
How to form a grammatically correct sentence?
How to check the grammar of the generated text?
What is the best way to present the correct output?
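A minimal sketch of the text-generation step could map classifier labels to Sinhala words and concatenate them. The label-to-word dictionary and the example sentence are illustrative assumptions; a real system would need a proper Sinhala lexicon and grammar rules. Here `None` marks an uninterpreted gesture, which is simply skipped.

```python
# Sketch: turning a stream of classified gesture labels into a Sinhala
# sentence. The mapping below is a hypothetical placeholder lexicon.
SINHALA_WORDS = {
    "I": "මම",
    "school": "පාසලට",
    "go": "යනවා",
}

def generate_sentence(labels):
    """Join recognised labels into a sentence, skipping uninterpreted
    gestures (None) and unknown labels."""
    words = [SINHALA_WORDS[l] for l in labels if l in SINHALA_WORDS]
    if not words:
        return ""
    return " ".join(words) + "."

# Sinhala follows subject-object-verb order, which matches the sign order
# in this toy example, so plain concatenation already reads naturally.
print(generate_sentence(["I", "school", None, "go"]))  # -> මම පාසලට යනවා.
```

Grammar checking would sit after this step, e.g. validating word order and verb agreement before displaying the sentence.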
**Individual Objectives**
1. Video Preprocessing and Sign Type Determination : Identify signs in a real-time video feed and detect whether each sign is static or dynamic.
2. Static Gesture Classifier : Classify static gestures in real time to provide an accurate interpretation and an appropriate prediction for each recognized sign.
3. Dynamic Gesture Classifier : Identify dynamic gestures in real time with optimal accuracy.
4. Generate Sinhala Text : Display the correct Sinhala word or sentence for the recognized signs.