Commit 8a6265df authored by Chamodi Yapa

Replaced the README file relevant to dyscalculia (function 01)

parent bf57cadd
# LearnJoy ML - Function 1

## Table of Contents
- [Function 1 Development](#function-1-development)
  * [1. Dataset Creation](#1-dataset-creation-folder)
  * [2. Data Preprocessing](#2-data-preprocessing-folder)
  * [3. ML Model Development](#3-ml-model-development-folder)
  * [4. Service Files](#4-service-files-folder)
  * [5. Fast API & app file](#5-fast-api--app-file-folder)
- [Executing Function 1 on Google Colab](#executing-function-1-on-google-colab)
## Function 1 Development
### 1. Dataset Creation (folder)
- Within this folder, you'll discover two Python scripts developed for dataset generation:
  - `dataset_gen.py`: This script generates synthetic data for two games, "Clock Challenge" and "Number Sequence," and stores the data in a CSV file.
  - `dataset_gen_add_improvement.py`: This script calculates an improvement score for each row in the dataset; the final dataset incorporates this score, quantifying the child's progress on a scale of 1 to 10 based on their interactions with the game.
- You can also access the saved CSV files in this folder.
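As a rough illustration of what these two scripts produce, here is a minimal, self-contained sketch of synthetic row generation plus a 1-10 improvement score. The column names, value ranges, and scoring formula are assumptions for illustration, not the scripts' actual schema:

```python
import csv
import random

# Hypothetical schema; the real dataset_gen.py may use different columns.
def generate_row(game):
    attempt_count = random.randint(1, 10)
    success_count = random.randint(0, attempt_count)
    return {
        "game": game,
        "success_count": success_count,
        "attempt_count": attempt_count,
        "game_score_xp": random.randint(0, 100),
        "game_level": random.randint(1, 10),
        "engagement_time_mins": random.randint(1, 30),
    }

def improvement_score(row):
    # Map the success rate onto a 1-10 progress scale, as
    # dataset_gen_add_improvement.py is described to do (formula assumed).
    rate = row["success_count"] / row["attempt_count"]
    return round(1 + 9 * rate, 1)

rows = [generate_row(g)
        for g in ("Clock Challenge", "Number Sequence")
        for _ in range(50)]
for r in rows:
    r["improvement_score"] = improvement_score(r)

with open("game_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```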
### 2. Data Preprocessing (folder)
- Inside this folder, you'll come across two Jupyter Notebooks dedicated to data preprocessing:
  - `dataset_analysis.ipynb`: This notebook loads the dataset created in the dataset creation step, removes the timestamp, and saves the result as 'final_game_dataset.csv'. It also includes some dataset analysis.
  - `dataset_preprocess.ipynb`: This notebook loads 'final_game_dataset.csv' and applies standard scaling. The resulting scalers are saved as 'scaler_X.pkl' and 'scaler_y.pkl', respectively.
- Additionally, you can locate 'final_game_dataset.csv' in this folder.
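The standard-scaling step can be sketched as follows, assuming scikit-learn's `StandardScaler` and `pickle` (which the `.pkl` artifact names suggest); the stand-in data and column layout here are made up:

```python
import pickle
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-in for the feature columns of final_game_dataset.csv (assumed layout).
X = np.array([[8, 10, 73, 9, 5],
              [3, 10, 40, 2, 12],
              [6,  8, 55, 5, 7]], dtype=float)
y = np.array([[7.2], [3.1], [5.4]])  # stand-in improvement scores

scaler_X = StandardScaler().fit(X)
scaler_y = StandardScaler().fit(y)

# Persist the fitted scalers, as dataset_preprocess.ipynb does.
with open("scaler_X.pkl", "wb") as f:
    pickle.dump(scaler_X, f)
with open("scaler_y.pkl", "wb") as f:
    pickle.dump(scaler_y, f)

X_scaled = scaler_X.transform(X)  # zero mean, unit variance per column
```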
### 3. ML Model Development (folder)
- Within this directory, there are two crucial files:
- `ml_model_dev.ipynb`: This file is responsible for developing the model for function 1.
- `final_model_deploy.ipynb`: In this file, you will discover the deployment code to test the saved model with user inputs:
```python
# Example user inputs:
success_count = 8
attempt_count = 10
game_score_xp = 73
game_level = 9
engagement_time_mins = 5
```
- The saved model exceeds the size limit for GitHub uploads, so you'll need to download it from [Google Drive]() and place it inside the 'MLModelDev/fun1' folder.
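A hedged sketch of how the saved model might be used for prediction. The model type (a random forest, suggested by the `rf_model.pkl` file name), the feature order, and the stand-in training data are all assumptions; a tiny model is trained inline so the example runs without the real artifacts:

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

# Stand-in training data; in the notebook these come from the real dataset
# and the pickled artifacts (rf_model.pkl, scaler_X.pkl).
X = np.array([[8, 10, 73, 9, 5],
              [3, 10, 40, 2, 12],
              [6,  8, 55, 5, 7],
              [9, 10, 90, 8, 4]], dtype=float)
y = np.array([7.2, 3.1, 5.4, 8.8])

scaler_X = StandardScaler().fit(X)
model = RandomForestRegressor(n_estimators=10, random_state=0)
model.fit(scaler_X.transform(X), y)

# Round-trip through pickle, mirroring how the notebook saves/loads the model.
model = pickle.loads(pickle.dumps(model))

# Predict for the example inputs shown above.
features = np.array([[8, 10, 73, 9, 5]], dtype=float)
prediction = model.predict(scaler_X.transform(features))[0]
print(f"Predicted improvement score: {prediction:.2f}")
```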
---------------------
### 4. Service Files (folder)
- Within this directory, there is one Python ".py" file:
  - `self_improvement.py`: This service file includes the complete process of function 01.
    - Input: Player database
    - Attached models: rf_model.pkl ([Link](https://drive.google.com/drive/folders/1ENOy5CNJOgi1d9vrjqf7RRoieljmm7FJ?usp=drive_link))
    - Process: First get the player dataset and calculate the real-time improvement, then compare it with the previous improvement score and predict the improvement for a future period (one or two weeks).
    - Output: ID, Real_Time_predictions, Future_weeks_Predictions
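The compare-with-previous step described above can be sketched roughly as follows. The field names mirror the API's `Real_Time_predictions` output, but the exact formula in `self_improvement.py` is not documented here, so this is an assumption:

```python
# Hedged sketch: scores are treated as 0-1 fractions (e.g. 0.4917 -> "49.17%").
# The real self_improvement.py may compute the difference differently.
def compare_improvement(current, previous):
    """Compare this attempt's improvement score with the previous one."""
    result = {"This_attend_improvement_score": f"{current:.2%}"}
    if previous is None:
        # First recorded attempt: nothing to compare against.
        result["previouse_attend_improvement_score"] = None
        result["improvement_presentage"] = None
        result["trend"] = None
    else:
        diff = current - previous
        result["previouse_attend_improvement_score"] = f"{previous:.2%}"
        result["improvement_presentage"] = f"{abs(diff):.2%}"
        result["trend"] = "Positive" if diff >= 0 else "Negative"
    return result
```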
---------------------
### 5. Fast API & app file (folder)
- The `app.py` module includes the following FastAPI endpoints for function 1:
- 1. Request body: Connects to the real-time play database and uses the player_name to run this function (player_name: rQIRSQe3ZYTRDGfvRb8dvzaCKkA2).
  - Special note: We need to create that database as mentioned in the link below: player_game_dataset.csv ([Link](https://drive.google.com/drive/folders/1mRzLkdMJLO-5pc4vBxvdmv1B2WD5i2Lg?usp=drive_link))
```python
@app.post("/function01/self_improvement")
async def function1_self_improvement(player_name: str):
```
- Request URL: `https://Public_or_local_host.app/function01/self_improvement`
- Response body:
- Type 1:
```json
{
"Massage": "There is not enough data to process"
}
```
- Type 2:
```json
{
"ID": 0,
"Real_Time_predictions": {
"Massage": "This time you improve your workout completion to 49.17%",
"This_attend_improvement_score": "49.17%",
"previouse_attend_improvement_score": null,
"improvement_presentage": null,
"trend": null
},
"Future_weeks_Predictions": null
}
```
- Type 3:
```json
{
"ID": 1,
"Real_Time_predictions": {
"Massage": "This time you decrease your exercise completion rate by 0.01% compared to the previous time.",
"This_attend_improvement_score": "47.50%",
"previouse_attend_improvement_score": "49.17%",
"improvement_presentage": "0.01%",
"trend": "Negative"
},
"Future_weeks_Predictions": null
}
```
- Type 4:
```json
{
"ID": 2,
"Real_Time_predictions": {
"Massage": "This time you improve your workout completion rate by 0.13% compared to the previous time.",
"This_attend_improvement_score": "55.67%",
"previouse_attend_improvement_score": "52.50%",
"improvement_presentage": "0.13%",
"trend": "Positive"
},
"Future_weeks_Predictions": {
"Improvment": "Normal",
"Completion frequency": "0.2857142857142857 perday",
"After_two_two_week_Success_engagements_Time_Min": 7.191901811847413,
"future_week_Success_improvement_score": "53.25%",
"Predict_weeks": 2
}
}
```
- Note:
  - Type 01:
    - ID: None
    - Real_Time_predictions: None
    - Future_weeks_Predictions: None
  - Type 02:
    - ID: 0
    - Real_Time_predictions: only this attempt's improvement score
    - Future_weeks_Predictions: None
  - Type 03:
    - ID: 1
    - Real_Time_predictions: returns all real-time predictions
    - Future_weeks_Predictions: None
  - Type 04:
    - ID: 2
    - Real_Time_predictions: returns all real-time predictions
    - Future_weeks_Predictions: returns all future predictions
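A minimal client sketch for this endpoint, assuming the API is served locally on port 8000; the helper function and its behavior for the "not enough data" case are hypothetical:

```python
import requests

def call_self_improvement(player_name, base_url="http://127.0.0.1:8000"):
    """Call the function01 self-improvement endpoint.

    player_name is sent as a query parameter, matching the endpoint's
    plain `player_name: str` signature.
    """
    url = f"{base_url}/function01/self_improvement"
    resp = requests.post(url, params={"player_name": player_name})
    resp.raise_for_status()
    body = resp.json()
    if "Massage" in body and "ID" not in body:
        return None  # Type 1 response: not enough data to process
    return body

# Example (requires the API to be running):
# result = call_self_improvement("rQIRSQe3ZYTRDGfvRb8dvzaCKkA2")
```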
- 2. Request body: Converts speech to a number.
```python
@app.post("/function01/STnumber/")
async def ST_number(audio_file: UploadFile = File(...)):
```
- Request URL: `https://Public_or_local_host.app/function01/STnumber/`
- Response body:
```json
{
"ID": 1,
"Number": [
"42"
]
}
```
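The endpoint presumably transcribes the uploaded audio first; this hypothetical sketch covers only the transcript-to-number step, since the transcription backend is not documented:

```python
import re

# Small word-to-value table for simple spoken numbers (illustrative only).
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9,
         "ten": 10, "twenty": 20, "thirty": 30, "forty": 40}

def transcript_to_numbers(text):
    """Pull spoken numbers out of a transcript, handling digit strings
    ("42") and simple word forms ("forty two")."""
    numbers = [int(d) for d in re.findall(r"\d+", text)]
    spoken, pending = [], 0
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in WORDS:
            pending += WORDS[token]   # accumulate e.g. forty + two
        elif pending:
            spoken.append(pending)    # a non-number word ends the run
            pending = 0
    if pending:
        spoken.append(pending)
    return [str(n) for n in numbers + spoken]
```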
- 3. Request body: Converts speech to a time.
```python
@app.post("/function01/STtime/")
async def ST_time(audio_file: UploadFile = File(...)):
```
- Request URL: `https://Public_or_local_host.app/function01/STtime/`
- Response body:
```json
{
  "ID": 1,
  "Time": [
    "08:00"
  ]
}
```
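Similarly, a hypothetical transcript-to-"HH:MM" parsing step, under the same assumption that transcription happens elsewhere and the parsing rules here are illustrative:

```python
import re

HOUR_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
              "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
              "eleven": 11, "twelve": 12}

def transcript_to_time(text):
    """Extract a time from a transcript as "HH:MM", or None if absent."""
    text = text.lower()
    m = re.search(r"(\d{1,2})[:.](\d{2})", text)       # "8:30" / "8.30"
    if m:
        return f"{int(m.group(1)):02d}:{m.group(2)}"
    m = re.search(r"(\w+)\s+o'?clock", text)           # "eight o'clock"
    if m and m.group(1) in HOUR_WORDS:
        return f"{HOUR_WORDS[m.group(1)]:02d}:00"
    m = re.search(r"\b(\d{1,2})\s+o'?clock", text)     # "8 o'clock"
    if m:
        return f"{int(m.group(1)):02d}:00"
    return None
```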
## Executing Function 1 on Google Colab
- Upload the file named [`Colab_LearnJoy_Function1_Final_Deploy.ipynb`](/Function1/MLModelDev/Colab_LearnJoy_Function1_Final_Deploy.ipynb)
to your Google Colab environment.
- To execute Function 1, upload the `requirements.txt` file to Colab by following these steps:
  1. Download [`requirements.txt`](/requirements.txt) from the GitHub repo.
  2. Drag and drop the file into Google Colab.
  3. Run the first cell to install the requirements.
  4. After the installation, restart the Runtime.
  5. Upload the saved .pkl files to Drive and update the paths inside the code.
  6. You can now run the code.