# TMP-23-029
# Project Name: "Sign Language Translation with Emotion-Based Multi-Model, 3D Avatar, and Adaptive Learning"
SLIIT Final Year Project
## Main Objective
The main objective of this project is to develop a Sign Language Translation System that bridges the communication gap between individuals who use sign language and those who do not. The system aims to facilitate effective communication and understanding between deaf/mute individuals and the wider community.
## Main Research Questions
1. How can finger-spelled sign language be accurately translated into spoken languages?
2. How can text or audio be transformed into sign language for effective comprehension by deaf individuals?
3. How can sign language learning be personalized and interactive to meet the individual needs and preferences of users?
4. How can sign language gestures be accurately identified and translated using image and video processing techniques?
5. How can emotions be incorporated into sign language translation to enhance communication and expression?
## Individual Research Questions and Objectives
### Member / Leader: Gamage B G J (IT20402266)
**Research Question:**
- How can sign language learning be personalized and interactive to meet the individual needs and preferences of users?
**Objectives:**
- Conduct an analysis of sign language grammar and lexicon to create a curriculum for personalized sign language learning.
- Utilize techniques such as gamification, human-computer interaction, and reinforcement learning to develop an interactive and engaging learning interface.
- Design and implement an intuitive and user-friendly interface that allows users to learn sign language at their own pace and according to their individual needs and preferences.
- Incorporate multimedia elements, such as videos, images, and animations, to enhance the learning experience.
- Implement reinforcement learning algorithms to tailor the sign language learning experience to the user's progress and preferences.
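The adaptive-learning objective above could be prototyped with a very small epsilon-greedy selector that chooses the next lesson topic from the learner's running quiz scores. This is only an illustrative sketch, not the project's actual design: the topic names, the 0–1 quiz score, and the "pick the weakest topic" heuristic are all assumptions, and a full reinforcement learning formulation would be richer.

```python
# Minimal sketch: epsilon-greedy lesson selector driven by quiz scores.
# Topic names and the reward scale are illustrative assumptions only.
import random

TOPICS = ["fingerspelling", "numbers", "greetings", "family_signs", "emotions"]

class LessonSelector:
    def __init__(self, topics, epsilon=0.2):
        self.epsilon = epsilon                   # exploration rate
        self.counts = {t: 0 for t in topics}     # lessons served per topic
        self.values = {t: 0.0 for t in topics}   # running mean quiz score

    def next_topic(self):
        # Occasionally explore a random topic; otherwise pick the topic
        # where the learner currently scores lowest (needs most practice).
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return min(self.values, key=self.values.get)

    def record_result(self, topic, score):
        # score in [0, 1] from the lesson quiz; update the running mean.
        self.counts[topic] += 1
        n = self.counts[topic]
        self.values[topic] += (score - self.values[topic]) / n

selector = LessonSelector(TOPICS)
topic = selector.next_topic()
selector.record_result(topic, score=0.6)   # e.g. learner scored 60% on the quiz
```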
### Member: Ranaweera R M S H (IT20251000)
**Research Question:**
- How can sign language translation be achieved by identifying audio and text components in a video and translating them into sign language using 3D components?
**Objectives:**
- Pre-process video data to extract audio and text components.
- Develop techniques to translate audio and text into sign language (a minimal text-to-gloss sketch follows this list).
- Integrate sign language translation with 3D components to create a visually appealing sign language video.
- Evaluate the accuracy of sign language translation and the overall effectiveness of the demonstration.
- Apply computer graphics techniques to enhance the realism and visual quality of the sign language animations.
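One simple way to picture the text-to-sign step is a text-to-gloss mapping that the 3D avatar layer then animates clip by clip. The sketch below assumes a transcript has already been extracted from the video; the gloss dictionary, the fingerspelling fallback, and the avatar hook are illustrative stand-ins rather than the project's actual pipeline.

```python
# Minimal sketch: map an extracted transcript to a sequence of sign glosses
# that a 3D avatar could animate. The gloss dictionary and animation hook
# are illustrative assumptions only.
GLOSS_DICT = {
    "hello": "HELLO",
    "how": "HOW",
    "are": None,        # function words are often dropped in sign glossing
    "you": "YOU",
    "thank": "THANK-YOU",
}

def text_to_glosses(transcript: str) -> list[str]:
    glosses = []
    for word in transcript.lower().split():
        word = word.strip(".,!?")
        if word in GLOSS_DICT:
            if GLOSS_DICT[word] is not None:
                glosses.append(GLOSS_DICT[word])
        else:
            # Unknown word: fall back to fingerspelling it letter by letter.
            glosses.extend(list(word.upper()))
    return glosses

def render_with_avatar(glosses: list[str]) -> None:
    # Placeholder for the 3D-avatar step: each gloss would be mapped to a
    # stored animation clip and played in sequence.
    for gloss in glosses:
        print(f"play animation clip for: {gloss}")

render_with_avatar(text_to_glosses("Hello, how are you?"))
```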
### Member: A V R Dilshan (IT20005276)
**Research Question:**
- How can emotions expressed through audio and text components be translated into sign language using emotional data sets and 3D components?
**Objectives:**
- Pre-process video data to extract audio, text components, and emotions.
- Develop techniques to translate audio and text into sign language, incorporating emotions (see the emotion-tagging sketch after this list).
- Integrate sign language translation with 3D components to create a sign language video that effectively conveys emotions.
- Evaluate the accuracy of sign language translation, including the emotional aspect.
- Enhance the realism and application of the system by capturing emotions through video and audio.
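To make the emotion objective concrete, each sentence can carry a coarse emotion label that the avatar layer uses to pick a facial expression alongside the sign animations. The keyword lexicon and emotion labels below are illustrative placeholders for the emotional data sets and models mentioned above, not the actual approach.

```python
# Minimal sketch: attach a coarse emotion label to each sentence so the
# avatar can adjust facial expression while signing. Keywords and labels
# are illustrative assumptions only.
EMOTION_KEYWORDS = {
    "happy": {"great", "happy", "thanks", "love"},
    "sad":   {"sad", "sorry", "miss"},
    "angry": {"angry", "hate", "terrible"},
}

def detect_emotion(sentence: str) -> str:
    words = {w.strip(".,!?") for w in sentence.lower().split()}
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def annotate_for_avatar(sentences: list[str]) -> list[dict]:
    # Pair each sentence with its emotion so the rendering layer can choose
    # the matching facial expression for the signed output.
    return [{"text": s, "emotion": detect_emotion(s)} for s in sentences]

print(annotate_for_avatar(["I am so happy to see you!", "Sorry, I missed the bus."]))
```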
### Member: Paranagama R P S D (IT20254384)
**Research Question:**
- How can hand gestures and emotional gestures in sign language be accurately identified using image sequences, and how can they be translated into text?
**Objectives:**
- Pre-process image sequences to isolate hand gestures and facial expressions used in sign language.
- Utilize action prediction by analyzing frame sequences to accurately identify sign language gestures.
- Train a model on sign language data to recognize gestures with high accuracy (a minimal sequence-model sketch follows this list).
- Integrate the trained model into an application that processes images/video sequences and translates recognized sign language gestures into text.
- Optimize the deployed models and their serving pipeline to improve performance and efficiency throughout the entire process.
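A common shape for the gesture-recognition step is a sequence model over per-frame hand/face keypoints extracted from the video. The sketch below is a minimal PyTorch example under assumed sizes (42 keypoints per frame, 50 gesture classes, 30-frame clips); the real keypoint extractor, dataset, and training loop are not shown.

```python
# Minimal sketch: an LSTM classifier over per-frame keypoints that predicts
# a sign-language gesture for a short clip. Input sizes and class count are
# illustrative assumptions only.
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    def __init__(self, n_keypoints=42, hidden=128, n_classes=50):
        super().__init__()
        # Each frame is flattened to (x, y) for every tracked keypoint.
        self.lstm = nn.LSTM(input_size=n_keypoints * 2,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):              # frames: (batch, time, 84)
        _, (h_n, _) = self.lstm(frames)     # keep only the final hidden state
        return self.head(h_n[-1])           # (batch, n_classes) logits

model = GestureClassifier()
clip = torch.randn(1, 30, 84)               # one 30-frame clip of keypoints
logits = model(clip)
predicted_gesture = logits.argmax(dim=1)    # index of the most likely sign
```

The predicted gesture index would then be looked up in the sign vocabulary and emitted as text by the application described in the objectives above.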
## Novelty and Contributions
- Comprehensive analysis of Sri Lankan Sign Language, including grammar and lexicon, to provide valuable insights into its linguistic properties.
- Personalized sign language learning curriculum tailored to the proficiency level and learning style of users.
- Integration of reinforcement learning algorithms to enhance the sign language learning experience and provide personalized feedback.
- Utilization of 3D components and computer graphics techniques to improve the visual quality and realism of sign language translations.
- Incorporation of emotions into sign language translation, providing a more expressive and accurate communication experience.
- Advanced image and video processing techniques for accurate identification and translation of sign language gestures.
- Optimization of deployments and models to increase the overall performance and efficiency of the Sign Language Translation System.