Commit 9684ed1e authored by Prabuddha Gimhan

Replace README.md relevant to dyslexia module (function 3)

parent bf57cadd
# Learn Joy - Function 3
Machine Learning Model for the LearnJoy app
## Table of Contents
[Function 3 Development](#function-3-development)
* [1. About Function 3](#1-about-function-3)
* [2. Function 3 Processing & ML Model Development](#2-function-3-processing--ml-model-development)
* [3. Service Files](#3-service-files)
* [4. FastAPI & App File](#4-fastapi--app-file)
-------------------------------------------------
## Function 3 Development
### 1. About Function 3
- This function includes three main parts:
  1. Speech to text: converts the voice of children with dyslexia to text.
  2. Accuracy: the accuracy of the pronunciation is given as a percentage.
  3. Text to speech: converts the missing or mispronounced words back to voice.
- To run the API, please ensure you have installed the required packages, `fastapi` and `uvicorn`. If you haven't already installed them, you can do so using the following commands:
```bash
pip install fastapi
pip install "uvicorn[standard]"
```
- Once these packages are installed, you can proceed to run the API.
1. To run the API, use the following command inside the API folder:
```bash
python main.py
```
2. You can access the project output in your browser using the following URL:
```bash
http://127.0.0.1:8000
```
3. To explore the API documentation, visit:
```bash
http://127.0.0.1:8000/docs
```
------------------------
### 2. Function 3 Processing & ML Model Development
- Inside this folder, you'll find one Jupyter notebook dedicated to data preprocessing and model training:
  - `lj_function03.ipynb`: this notebook loads the speech-to-text and text-to-speech pre-trained models and finally returns the accuracy score.
### 3. Service Files
- Within this directory, there is one Python `.py` file:
  - `lj_functiono3.py`: this service file includes three main functions (speech to text, scoring, text to speech).
  - Input: sound file and text: `def speech_to_text(audio_file)`, `def scoring(words, transcriptions)`, `def text_to_speech(text, return_tensors="pt")`
  - Attached models:
    - Speech to text ([Link](https://drive.google.com/drive/folders/1jvP4lyhkyLbhtv0fUohLHaeQOnjGsttH?usp=drive_link))
      - License: Apache License ([Link](https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/models/wav2vec2/processing_wav2vec2.py))
    - Text to speech ([Link](https://drive.google.com/drive/folders/1pWtfsLg4IyvPTjKC-a-0-PEIWbZTyYE_?usp=drive_link))
      - License: MIT License ([Link](https://github.com/microsoft/SpeechT5?tab=MIT-1-ov-file#readme))
  - Process: this method takes the speech, converts it to text, compares it with the original text, and returns the speech accuracy and the missing-word speech.
  - Output: sentence and word scoring, missing words, and the voice of the missing words.
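Since the service code itself is not reproduced in this README, here is a minimal sketch of how the three functions could look when built on the linked wav2vec2 and SpeechT5 checkpoints. The model IDs, the zero speaker embedding, and the word-overlap scoring heuristic are illustrative assumptions, not the project's verified implementation:
```python
# Sketch of the three service functions, assuming the wav2vec2 and SpeechT5
# checkpoints from the linked repositories; model IDs and the scoring
# heuristic are assumptions for illustration.
import torch
import soundfile as sf
from transformers import (
    Wav2Vec2Processor, Wav2Vec2ForCTC,
    SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan,
)

stt_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
stt_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tts_processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
tts_model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

def speech_to_text(audio_file):
    # wav2vec2 expects 16 kHz mono audio.
    speech, sample_rate = sf.read(audio_file)
    inputs = stt_processor(speech, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        logits = stt_model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return stt_processor.batch_decode(ids)[0].lower()

def scoring(words, transcriptions):
    # Word-level accuracy: the fraction of expected words heard in the
    # transcript, as a percentage. (The real service also computes a
    # sentence-level score.)
    expected = words.lower().split()
    heard = set(transcriptions.lower().split())
    missing = [w for w in expected if w not in heard]
    word_score = round(100 * (len(expected) - len(missing)) / max(len(expected), 1))
    return word_score, missing

def text_to_speech(text, return_tensors="pt"):
    inputs = tts_processor(text=text, return_tensors=return_tensors)
    # A zero vector stands in for a real x-vector speaker embedding here.
    speaker_embeddings = torch.zeros((1, 512))
    speech = tts_model.generate_speech(
        inputs["input_ids"], speaker_embeddings, vocoder=vocoder
    )
    sf.write("speech.wav", speech.numpy(), samplerate=16000)
    return "speech.wav"
```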
---------------------
### 4. FastAPI & App File
- The `app.py` module includes one FastAPI endpoint for function 3:
  1. Request body: original words, audio file
```python
@app.post("/function3/STT")
async def main_fun03(words: str, audio_file: UploadFile = File(...)):
    ...
```
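The snippet above shows only the signature. Below is a sketch of how the endpoint might glue the three service functions together; the temp-file handling, the import from `lj_functiono3`, and the exact response keys are assumptions. Nesting a `FileResponse` inside the returned dict is consistent with the serialized `audio_file` object in the response body shown below:
```python
# Hypothetical fleshed-out endpoint, assuming the service functions from
# lj_functiono3.py described in section 3.
import shutil
import tempfile

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import FileResponse

from lj_functiono3 import speech_to_text, scoring, text_to_speech

app = FastAPI()

@app.post("/function3/STT")
async def main_fun03(words: str, audio_file: UploadFile = File(...)):
    # Persist the upload to a temp .wav so the STT model can read it from disk.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        shutil.copyfileobj(audio_file.file, tmp)
        tmp_path = tmp.name
    transcription = speech_to_text(tmp_path)
    word_score, missing = scoring(words, transcription)
    # Synthesize only the words the child missed or mispronounced.
    missing_wav = text_to_speech(" ".join(missing)) if missing else None
    return {
        "Scoring": {"final_word_score": word_score, "missing_voice2": missing},
        "audio_file": FileResponse(missing_wav, media_type="audio/wav",
                                   filename="speech.wav") if missing_wav else None,
    }
```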
- Request URL: `https://Public_or_local_host.app`
- Response body:
```json
{
  "Scoring": {
    "final_sent_score": 100,
    "final_word_score": 100,
    "missing_voice2": []
  },
  "audio_file": {
    "path": "C:\\Users\\KAUSH\\AppData\\Local\\Temp\\tmpefzwhc1x.wav",
    "status_code": 200,
    "filename": "speech.wav",
    "media_type": "audio/wav",
    "background": null,
    "raw_headers": [
      ["content-type", "audio/wav"],
      ["content-disposition", "attachment; filename=\"speech.wav\""]
    ],
    "_headers": {
      "content-type": "audio/wav",
      "content-disposition": "attachment; filename=\"speech.wav\""
    },
    "stat_result": null
  }
}
```
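Once the server is running locally, the endpoint can be exercised with a short client script; the sample words and file name below are placeholders:
```python
# Example client call: words go in the query string, audio as multipart.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/function3/STT",
    params={"words": "the cat sat on the mat"},
    files={"audio_file": open("speech.wav", "rb")},
)
print(resp.json()["Scoring"])
```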
## Setting Up the Development Environment
Follow these steps to set up your development environment for this project:
### Create a New `venv`
1. Navigate to your project directory: `cd /path/to/your/project`
2. Create a virtual environment: `python -m venv <venv_name>`
### Activate and Deactivate `venv`
- In `cmd`:
```bash
<venv_name>\Scripts\activate
```
- In bash:
```bash
source <venv_name>/Scripts/activate
# To deactivate the virtual environment:
deactivate
```
### Create, Activate & Deactivate `venv` using conda
- Create the environment with Anaconda Navigator, then activate or deactivate it from the command line:
```bash
# Activate the conda environment
conda activate <venv_name>
# To deactivate the conda environment
conda deactivate
```
### Install the Dependencies
- You can also use a `requirements.txt` file to manage your project's dependencies. This file lists all the required packages and their versions.
1. Install packages from `requirements.txt`:
```bash
pip install -r requirements.txt
```
This ensures that your development environment matches the exact package versions specified in `requirements.txt`.
2. Verify installed packages:
```bash
pip list
```
This will display a list of packages currently installed in your virtual environment, including the ones from `requirements.txt`.