Project: 2023-029

Commit b677141b, authored Sep 05, 2023 by Sumudu-Himasha-Ranaweera
Commit message: fix:update
Parent: ff3f7356

14 changed files with 34561 additions and 0 deletions (+34561 -0)
Changed files:

Project/Backend/ML_Models/Emotion_Detection_Model/emotion_model.json (+1 -0)
Project/Backend/ML_Models/Emotion_Detection_Model/haarcascade_frontalface_default.xml (+33271 -0)
Project/Backend/Server_Python/controllers/audio_detect_controler.py (+44 -0)
Project/Backend/Server_Python/controllers/video_detect_controler.py (+61 -0)
Project/Backend/Server_Python/node_modules/.yarn-integrity (+10 -0)
Project/Backend/Server_Python/services/audio_detect_service.py (+80 -0)
Project/Backend/Server_Python/services/video_detection_service.py (+115 -0)
Project/Backend/Server_Python/yarn.lock (+4 -0)
Project/Frontend/SignConnectPlus/src/pages/video-to-sign-language/VideoTranslate/VideoTranslate.tsx.bkp (+336 -0)
Project/Frontend/SignConnectPlus/src/store/reducers/curriculum.ts (+193 -0)
Project/Frontend/SignConnectPlus/src/store/reducers/marksCalculator.ts (+93 -0)
Project/Frontend/SignConnectPlus/src/store/reducers/tutorial.ts (+193 -0)
Project/Frontend/SignConnectPlus/src/store/reducers/userProgress.ts (+142 -0)
Project/Frontend/SignConnectPlus/src/types/marksCalculator.ts (+18 -0)
Project/Backend/ML_Models/Emotion_Detection_Model/emotion_model.json (new file, mode 100644)

```json
{"class_name": "Sequential", "config": {"name": "sequential", "layers": [
  {"class_name": "InputLayer", "config": {"batch_input_shape": [null, 48, 48, 1], "dtype": "float32", "sparse": false, "ragged": false, "name": "conv2d_input"}},
  {"class_name": "Conv2D", "config": {"name": "conv2d", "trainable": true, "batch_input_shape": [null, 48, 48, 1], "dtype": "float32", "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
  {"class_name": "Conv2D", "config": {"name": "conv2d_1", "trainable": true, "dtype": "float32", "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
  {"class_name": "MaxPooling2D", "config": {"name": "max_pooling2d", "trainable": true, "dtype": "float32", "pool_size": [2, 2], "padding": "valid", "strides": [2, 2], "data_format": "channels_last"}},
  {"class_name": "Dropout", "config": {"name": "dropout", "trainable": true, "dtype": "float32", "rate": 0.25, "noise_shape": null, "seed": null}},
  {"class_name": "Conv2D", "config": {"name": "conv2d_2", "trainable": true, "dtype": "float32", "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
  {"class_name": "MaxPooling2D", "config": {"name": "max_pooling2d_1", "trainable": true, "dtype": "float32", "pool_size": [2, 2], "padding": "valid", "strides": [2, 2], "data_format": "channels_last"}},
  {"class_name": "Conv2D", "config": {"name": "conv2d_3", "trainable": true, "dtype": "float32", "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
  {"class_name": "MaxPooling2D", "config": {"name": "max_pooling2d_2", "trainable": true, "dtype": "float32", "pool_size": [2, 2], "padding": "valid", "strides": [2, 2], "data_format": "channels_last"}},
  {"class_name": "Dropout", "config": {"name": "dropout_1", "trainable": true, "dtype": "float32", "rate": 0.25, "noise_shape": null, "seed": null}},
  {"class_name": "Flatten", "config": {"name": "flatten", "trainable": true, "dtype": "float32", "data_format": "channels_last"}},
  {"class_name": "Dense", "config": {"name": "dense", "trainable": true, "dtype": "float32", "units": 1024, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
  {"class_name": "Dropout", "config": {"name": "dropout_2", "trainable": true, "dtype": "float32", "rate": 0.5, "noise_shape": null, "seed": null}},
  {"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "dtype": "float32", "units": 7, "activation": "softmax", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}
]}, "keras_version": "2.4.0", "backend": "tensorflow"}
```
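The serialized architecture above can be inspected without TensorFlow by walking the JSON config directly. A minimal sketch (the `summarize` helper is hypothetical, not part of the commit; the embedded config is trimmed to two layers for brevity, while the real file lists fourteen):

```python
import json

# A trimmed-down stand-in for emotion_model.json (two layers instead of fourteen).
MODEL_JSON = json.dumps({
    "class_name": "Sequential",
    "config": {"name": "sequential", "layers": [
        {"class_name": "Conv2D", "config": {"name": "conv2d", "filters": 32}},
        {"class_name": "Dense", "config": {"name": "dense_1", "units": 7}},
    ]},
    "keras_version": "2.4.0",
})


def summarize(model_json: str) -> list:
    """Return (layer class, layer name) pairs from a Keras model-JSON string."""
    cfg = json.loads(model_json)
    return [(layer["class_name"], layer["config"]["name"])
            for layer in cfg["config"]["layers"]]


print(summarize(MODEL_JSON))
# → [('Conv2D', 'conv2d'), ('Dense', 'dense_1')]
```

Running the same helper over the full file would show the stack ending in a 7-unit softmax `Dense` layer, matching the seven emotion classes used by the video service below.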
Project/Backend/ML_Models/Emotion_Detection_Model/haarcascade_frontalface_default.xml (new file, mode 100644)

(Diff collapsed on the page: OpenCV Haar cascade XML for frontal-face detection, +33271 lines.)
Project/Backend/Server_Python/controllers/audio_detect_controler.py (new file, mode 100644)

```python
from fastapi import APIRouter, FastAPI, UploadFile, File, HTTPException
from fastapi.responses import FileResponse
import os
from core.logger import setup_logger
from services.audio_detect_service import EmotionPredictionService
import tensorflow as tf

app = FastAPI()
router = APIRouter()
audio: UploadFile
logger = setup_logger()

model = tf.keras.models.load_model('../ML_Models/Emotion_Detection_Model/mymodel.h5')
prediction_service = EmotionPredictionService(model)


@router.post("/upload_emotion/audio", tags=["Emotion Detection"])
async def upload_audio(audio: UploadFile = File(...)):
    try:
        file_location = f"files/emotion/audio/{audio.filename}"
        with open(file_location, "wb") as file:
            file.write(audio.file.read())
        return {"text": "OK"}
    except Exception as e:
        logger.info(f"Failed to upload file. {e}")
        raise HTTPException(status_code=500, detail="Failed to upload the audio")


@router.post('/predict_emotion/audio', tags=["Emotion Detection"])
def predict_using_audio(audio_request: UploadFile = File(...)):
    try:
        return prediction_service.predict_emotion_detection_audio_new(audio_request)
    except Exception as e:
        logger.info(f"Error. {e}")
        raise HTTPException(status_code=500, detail="Request Failed.")
```
Project/Backend/Server_Python/controllers/video_detect_controler.py (new file, mode 100644)

```python
from fastapi import APIRouter, FastAPI, UploadFile, File, HTTPException
from fastapi.responses import FileResponse
from keras.models import model_from_json
import os
from core.logger import setup_logger
from services.video_detection_service import EmotionPredictionService
import tensorflow as tf
import os

# Get the absolute path to the 'model' directory
model_directory = os.path.abspath('model')

# Construct the absolute path to 'emotion_model.json'
json_file_path = os.path.join(model_directory, 'emotion_model.json')

# Open the JSON file
# json_file = open(json_file_path, 'r')

app = FastAPI()
router = APIRouter()
video: UploadFile
logger = setup_logger()

# Load emotion detection model
json_file = open('../ML_Models/Emotion_Detection_Model/emotion_model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
emotion_model = model_from_json(loaded_model_json)
emotion_model.load_weights("../ML_Models/Emotion_Detection_Model/emotion_model.h5")
prediction_service = EmotionPredictionService(emotion_model)


@router.post("/upload_emotion/video", tags=["Emotion Detection"])
async def upload_video(video: UploadFile = File(...)):
    try:
        file_location = f"files/emotion/video/{video.filename}"
        with open(file_location, "wb") as file:
            file.write(video.file.read())
        return {"text": "OK2"}
    except Exception as e:
        logger.info(f"Failed to upload file. {e}")
        raise HTTPException(status_code=500, detail="Failed to upload the video")


@router.post('/predict_emotion/video', tags=["Emotion Detection"])
def predict_using_video(video_request: UploadFile = File(...)):
    try:
        return prediction_service.predict_emotion_detection_video_new(video_request=video_request)
        return {"text": "OK5"}  # unreachable: the function returns on the line above
    except Exception as e:
        logger.info(f"Error. {e}")
        raise HTTPException(status_code=500, detail="Request Failed.")
```
Project/Backend/Server_Python/node_modules/.yarn-integrity (new file, mode 100644)

```json
{
  "systemParams": "win32-x64-93",
  "modulesFolders": [],
  "flags": [],
  "linkedModules": [],
  "topLevelPatterns": [],
  "lockfileEntries": {},
  "files": [],
  "artifacts": {}
}
```
Project/Backend/Server_Python/services/audio_detect_service.py (new file, mode 100644)

```python
# from fastapi.types import ModelNameMap
# from sklearn import model_selection
# import tensorflow as tf
import numpy as np
import librosa
from fastapi import HTTPException, UploadFile
from typing import Dict
import os
from core.logger import setup_logger

logger = setup_logger()


class EmotionPredictionService:
    def __init__(self, model):
        self.model = model

    def predict_emotion_detection_audio(self, audio_request: UploadFile) -> Dict[str, str]:
        try:
            # Create a temporary file to save the audio
            audio_location = f"files/emotion/audio/{audio_request.filename}"
            with open(audio_location, "wb") as file:
                file.write(audio_request.file.read())

            # Load the audio data from the saved file
            y, sr = librosa.load(audio_location)
            mfccs = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)
            test_point = np.reshape(mfccs, newshape=(1, 40, 1))
            predictions = self.model.predict(test_point)
            emotions = {1: 'neutral', 2: 'calm', 3: 'happy', 4: 'sad',
                        5: 'angry', 6: 'fearful', 7: 'disgust', 8: 'surprised'}
            predicted_emotion = emotions[np.argmax(predictions[0]) + 1]
            return {"predicted_emotion": predicted_emotion}
        except Exception as e:
            logger.error(f"Failed to make predictions. {str(e)}")
            raise HTTPException(status_code=500,
                                detail=f"Failed to make predictions. Error: {str(e)}")

    # Body identical to predict_emotion_detection_audio above; duplicated as in the commit.
    def predict_emotion_detection_audio_new(self, audio_request: UploadFile) -> Dict[str, str]:
        try:
            # Create a temporary file to save the audio
            audio_location = f"files/emotion/audio/{audio_request.filename}"
            with open(audio_location, "wb") as file:
                file.write(audio_request.file.read())

            # Load the audio data from the saved file
            y, sr = librosa.load(audio_location)
            mfccs = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)
            test_point = np.reshape(mfccs, newshape=(1, 40, 1))
            predictions = self.model.predict(test_point)
            emotions = {1: 'neutral', 2: 'calm', 3: 'happy', 4: 'sad',
                        5: 'angry', 6: 'fearful', 7: 'disgust', 8: 'surprised'}
            predicted_emotion = emotions[np.argmax(predictions[0]) + 1]
            return {"predicted_emotion": predicted_emotion}
        except Exception as e:
            logger.error(f"Failed to make predictions. {str(e)}")
            raise HTTPException(status_code=500,
                                detail=f"Failed to make predictions. Error: {str(e)}")
```
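Note the off-by-one convention in the audio service above: `np.argmax` returns a 0-based index, while the `emotions` dict is keyed 1 through 8, hence the `+ 1`. A dependency-free sketch of that mapping (`label_from_scores` is an illustrative helper, not part of the commit):

```python
# 1-based emotion labels, as in audio_detect_service.py
EMOTIONS = {1: 'neutral', 2: 'calm', 3: 'happy', 4: 'sad',
            5: 'angry', 6: 'fearful', 7: 'disgust', 8: 'surprised'}


def label_from_scores(scores):
    """Map a list of class scores to a label: argmax is 0-based, dict keys are 1-based."""
    argmax = scores.index(max(scores))
    return EMOTIONS[argmax + 1]


print(label_from_scores([0.05, 0.1, 0.6, 0.05, 0.1, 0.04, 0.03, 0.03]))
# → 'happy'
```

Dropping the `+ 1` would shift every prediction by one class, so the offset matters whenever the label table is edited.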
Project/Backend/Server_Python/services/video_detection_service.py (new file, mode 100644)

```python
from fastapi import FastAPI, UploadFile, HTTPException
from typing import Dict
import cv2
import numpy as np
from keras.models import model_from_json
import os

app = FastAPI()

from core.logger import setup_logger

logger = setup_logger()

# Define the emotion labels
emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy",
                4: "Neutral", 5: "Sad", 6: "Surprised"}

# Load the emotion detection model
json_file = open('../ML_Models/Emotion_Detection_Model/emotion_model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
emotion_model = model_from_json(loaded_model_json)
emotion_model.load_weights("../ML_Models/Emotion_Detection_Model/emotion_model.h5")


class EmotionPredictionService:
    def __init__(self, model):
        self.model = model

    # Note: this method is missing `self` in the commit.
    def predict_emotion_detection_video(video_request: UploadFile) -> Dict[str, str]:
        try:
            # Create a temporary file to save the video
            video_location = f"files/emotion/video/{video_request.filename}"
            with open(video_location, "wb") as file:
                file.write(video_request.file.read())

            # Initialize video capture
            cap = cv2.VideoCapture(video_location)
            if not cap.isOpened():
                raise HTTPException(status_code=400, detail="Failed to open video file.")

            predicted_emotions = []
            while True:
                ret, frame = cap.read()
                if not ret:
                    break
                emotions = predict_emotion_from_frame(frame)
                predicted_emotions.extend(emotions)

            cap.release()
            os.remove(video_location)
            return {"predicted_emotions": predicted_emotions}
        except Exception as e:
            logger.error(f"Failed to make predictions. {str(e)}")
            raise HTTPException(status_code=500,
                                detail=f"Failed to make predictions. Error: {str(e)}")

    # Same body as above, with `self`; duplicated as in the commit.
    def predict_emotion_detection_video_new(self, video_request: UploadFile) -> Dict[str, str]:
        try:
            # Create a temporary file to save the video
            video_location = f"files/emotion/video/{video_request.filename}"
            with open(video_location, "wb") as file:
                file.write(video_request.file.read())

            # Initialize video capture
            cap = cv2.VideoCapture(video_location)
            if not cap.isOpened():
                raise HTTPException(status_code=400, detail="Failed to open video file.")

            predicted_emotions = []
            while True:
                ret, frame = cap.read()
                if not ret:
                    break
                emotions = predict_emotion_from_frame(frame)
                predicted_emotions.extend(emotions)

            cap.release()
            os.remove(video_location)
            return {"predicted_emotions": predicted_emotions}
        except Exception as e:
            logger.error(f"Failed to make predictions. {str(e)}")
            raise HTTPException(status_code=500,
                                detail=f"Failed to make predictions. Error: {str(e)}")


# Function to predict emotion from a video frame
def predict_emotion_from_frame(frame):
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face_detector = cv2.CascadeClassifier(
        '../ML_Models/Emotion_Detection_Model/haarcascade_frontalface_default.xml')
    num_faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)
    emotions = []
    for (x, y, w, h) in num_faces:
        roi_gray_frame = gray_frame[y:y + h, x:x + w]
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
        emotion_prediction = emotion_model.predict(cropped_img)
        maxindex = int(np.argmax(emotion_prediction))
        emotions.append(emotion_dict[maxindex])
    return emotions
```
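The video service returns one label per detected face per frame and leaves the list unaggregated. One common way to collapse it into a single clip-level label is a majority vote; a sketch under that assumption (`majority_emotion` is not part of the commit):

```python
from collections import Counter


def majority_emotion(frame_emotions):
    """Collapse per-frame emotion labels into one clip-level label by majority vote.

    Returns None for an empty list (e.g. no faces detected in any frame).
    """
    if not frame_emotions:
        return None
    return Counter(frame_emotions).most_common(1)[0][0]


print(majority_emotion(["Happy", "Happy", "Neutral", "Happy", "Sad"]))
# → 'Happy'
```

A vote smooths over single-frame detector noise; time-weighted or confidence-weighted schemes are alternatives if per-frame scores were kept.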
Project/Backend/Server_Python/yarn.lock (new file, mode 100644)

```
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
```
Project/Frontend/SignConnectPlus/src/pages/video-to-sign-language/VideoTranslate/VideoTranslate.tsx.bkp (new file, mode 100644)

(Diff collapsed on the page: backup copy of the VideoTranslate component, +336 lines.)
Project/Frontend/SignConnectPlus/src/store/reducers/curriculum.ts (new file, mode 100644)

```typescript
// third-party
import { createSlice } from '@reduxjs/toolkit';

// project imports
import { axiosServices } from 'utils/axios';
import { dispatch } from '../index';

// types
import { Curriculum, DefaultRootStateProps } from 'types/curriculum';

// ----------------------------------------------------------------------

const initialState: DefaultRootStateProps['curriculum'] = {
    error: null,
    success: null,
    curriculums: [],
    curriculum: null,
    isLoading: false
};

const slice = createSlice({
    name: 'curriculum',
    initialState,
    reducers: {
        // TO INITIAL STATE
        hasInitialState(state) {
            state.error = null;
            state.success = null;
            state.isLoading = false;
        },

        // HAS ERROR
        hasError(state, action) {
            state.error = action.payload;
        },

        startLoading(state) {
            state.isLoading = true;
        },

        finishLoading(state) {
            state.isLoading = false;
        },

        // POST CURRICULUM
        addCurriculumSuccess(state, action) {
            state.curriculums.push(action.payload);
            state.success = "Curriculum created successfully."
        },

        // GET CURRICULUM
        fetchCurriculumSuccess(state, action) {
            state.curriculum = action.payload;
            state.success = null
        },

        // GET ALL CURRICULUM
        fetchCurriculumsSuccess(state, action) {
            state.curriculums = action.payload;
            state.success = null
        },

        // UPDATE CURRICULUM
        updateCurriculumSuccess(state, action) {
            const updatedCurriculumIndex = state.curriculums.findIndex(curriculum => curriculum._id === action.payload._id);
            if (updatedCurriculumIndex !== -1) {
                state.curriculums[updatedCurriculumIndex] = action.payload;
            }
            state.success = "Curriculum updated successfully."
        },

        // DELETE CURRICULUM
        deleteCurriculumSuccess(state, action) {
            state.curriculums = state.curriculums.filter(curriculum => curriculum._id !== action.payload);
            state.success = "Curriculum deleted successfully."
        },
    }
});

// Reducer
export default slice.reducer;

// ----------------------------------------------------------------------

/**
 * TO INITIAL STATE
 * @returns
 */
export function toInitialState() {
    return async () => {
        dispatch(slice.actions.hasInitialState())
    }
}

/**
 * POST CURRICULUM
 * @param newCurriculum
 * @returns
 */
export function addCurriculum(newCurriculum: Curriculum) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.post('/rest_node/curriculum', newCurriculum);
            dispatch(slice.actions.addCurriculumSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * GET CURRICULUM
 * @param id
 * @returns
 */
export function fetchCurriculum(id: number) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.get(`/rest_node/curriculum/${id}`);
            dispatch(slice.actions.fetchCurriculumSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * GET ALL CURRICULUMS
 * @returns
 */
export function fetchCurriculums() {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.get('/rest_node/curriculum');
            dispatch(slice.actions.fetchCurriculumsSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * UPDATE CURRICULUM
 * @param updatedCurriculum
 * @returns
 */
export function updateCurriculum(updatedCurriculum: Curriculum) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.put(`/rest_node/curriculum/${updatedCurriculum._id}`, updatedCurriculum);
            dispatch(slice.actions.updateCurriculumSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * DELETE CURRICULUM
 * @param curriculumId
 * @returns
 */
export function deleteCurriculum(curriculumId: string) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            await axiosServices.delete(`/rest_node/curriculum/${curriculumId}`);
            dispatch(slice.actions.deleteCurriculumSuccess(curriculumId));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}
```
Project/Frontend/SignConnectPlus/src/store/reducers/marksCalculator.ts (new file, mode 100644)

```typescript
// third-party
import { createSlice } from '@reduxjs/toolkit';

// project imports
import { axiosServices } from 'utils/axios';
import { dispatch } from '../index';

// types
import { DefaultRootStateProps } from 'types/marksCalculator';

// ----------------------------------------------------------------------

const initialState: DefaultRootStateProps['marksCalculator'] = {
    error: null,
    success: null,
    marksCalculator: null,
    isLoading: false
};

const slice = createSlice({
    name: 'marksCalculator',
    initialState,
    reducers: {
        // TO INITIAL STATE
        hasInitialState(state) {
            state.error = null;
            state.success = null;
            state.isLoading = false;
        },

        // HAS ERROR
        hasError(state, action) {
            state.error = action.payload;
        },

        startLoading(state) {
            state.isLoading = true;
        },

        finishLoading(state) {
            state.isLoading = false;
        },

        // POST USER
        marksCalculatorSuccess(state, action) {
            state.marksCalculator = action.payload.result;
            state.success = "Marks Calculated Successfully."
        },
    }
});

// Reducer
export default slice.reducer;

// ----------------------------------------------------------------------

/**
 * TO INITIAL STATE
 * @returns
 */
export function toInitialState() {
    return async () => {
        dispatch(slice.actions.hasInitialState())
    }
}

/**
 * POST Marks Calculator
 * @param curriculumIndex
 * @param tutorialIndex
 * @param imageData
 * @param targetClass
 * @returns
 */
export function CalculateMarks(curriculumIndex: number, tutorialIndex: number, imageData: any, targetClass: string) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            // Construct the request body as needed (e.g., for formData)
            const formData = new FormData();
            formData.append('image', imageData);
            formData.append('class', targetClass);
            const response = await axiosServices.post(`/rest_node/marks-calculator/curriculum/${curriculumIndex}/tutorial/${tutorialIndex}`, formData);
            dispatch(slice.actions.marksCalculatorSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
};
```
Project/Frontend/SignConnectPlus/src/store/reducers/tutorial.ts (new file, mode 100644)

```typescript
// third-party
import { createSlice } from '@reduxjs/toolkit';

// project imports
import { axiosServices } from 'utils/axios';
import { dispatch } from '../index';

// types
import { DefaultRootStateProps, Tutorial } from 'types/tutorial';

// ----------------------------------------------------------------------

const initialState: DefaultRootStateProps['tutorial'] = {
    error: null,
    success: null,
    tutorials: [],
    tutorial: null,
    isLoading: false
};

const slice = createSlice({
    name: 'tutorial',
    initialState,
    reducers: {
        // TO INITIAL STATE
        hasInitialState(state) {
            state.error = null;
            state.success = null;
            state.isLoading = false;
        },

        // HAS ERROR
        hasError(state, action) {
            state.error = action.payload;
        },

        startLoading(state) {
            state.isLoading = true;
        },

        finishLoading(state) {
            state.isLoading = false;
        },

        // POST TUTORIAL
        addTutorialSuccess(state, action) {
            state.tutorials.push(action.payload);
            state.success = "Tutorial created successfully."
        },

        // GET TUTORIAL
        fetchTutorialSuccess(state, action) {
            state.tutorial = action.payload;
            state.success = null
        },

        // GET ALL TUTORIAL
        fetchTutorialsSuccess(state, action) {
            state.tutorials = action.payload;
            state.success = null
        },

        // UPDATE TUTORIAL
        updateTutorialSuccess(state, action) {
            const updatedTutorialIndex = state.tutorials.findIndex(tutorial => tutorial._id === action.payload._id);
            if (updatedTutorialIndex !== -1) {
                state.tutorials[updatedTutorialIndex] = action.payload;
            }
            state.success = "Tutorial updated successfully."
        },

        // DELETE TUTORIAL
        deleteTutorialSuccess(state, action) {
            state.tutorials = state.tutorials.filter(tutorial => tutorial._id !== action.payload);
            state.success = "Tutorial deleted successfully."
        },
    }
});

// Reducer
export default slice.reducer;

// ----------------------------------------------------------------------

/**
 * TO INITIAL STATE
 * @returns
 */
export function toInitialState() {
    return async () => {
        dispatch(slice.actions.hasInitialState())
    }
}

/**
 * POST TUTORIAL
 * @param newTutorial
 * @returns
 */
export function addTutorial(newTutorial: Tutorial) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.post('/rest_node/tutorial', newTutorial);
            dispatch(slice.actions.addTutorialSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * GET TUTORIAL
 * @param id
 * @returns
 */
export function fetchTutorial(id: number) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.get(`/rest_node/tutorial/${id}`);
            dispatch(slice.actions.fetchTutorialSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * GET ALL TUTORIALS
 * @returns
 */
export function fetchTutorials() {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.get('/rest_node/tutorial');
            dispatch(slice.actions.fetchTutorialsSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * UPDATE TUTORIAL
 * @param updatedTutorial
 * @returns
 */
export function updateTutorial(updatedTutorial: Tutorial) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.put(`/rest_node/tutorial/${updatedTutorial._id}`, updatedTutorial);
            dispatch(slice.actions.updateTutorialSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * DELETE TUTORIAL
 * @param tutorialId
 * @returns
 */
export function deleteTutorial(tutorialId: string) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            await axiosServices.delete(`/rest_node/tutorial/${tutorialId}`);
            dispatch(slice.actions.deleteTutorialSuccess(tutorialId));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}
```
Project/Frontend/SignConnectPlus/src/store/reducers/userProgress.ts (new file, mode 100644)

```typescript
// third-party
import { createSlice } from '@reduxjs/toolkit';

// project imports
import { axiosServices } from 'utils/axios';
import { dispatch } from '../index';

// types
import { DefaultRootStateProps, curriculumTypeUserProgress } from 'types/userProgress';

// ----------------------------------------------------------------------

const initialState: DefaultRootStateProps['userProgress'] = {
    error: null,
    success: null,
    userProgresses: [],
    userProgress: null,
    isLoading: false
};

const slice = createSlice({
    name: 'userProgress',
    initialState,
    reducers: {
        // TO INITIAL STATE
        hasInitialState(state) {
            state.error = null;
            state.success = null;
            state.isLoading = false;
        },

        // HAS ERROR
        hasError(state, action) {
            state.error = action.payload;
        },

        startLoading(state) {
            state.isLoading = true;
        },

        finishLoading(state) {
            state.isLoading = false;
        },

        // POST USER_PROGRESS
        addUpdateUserProgressSuccess(state, action) {
            // state.userProgresses.push(action.payload);
            state.success = "User Progress created or updated successfully."
        },

        // GET USER_PROGRESS
        fetchUserProgressSuccess(state, action) {
            state.userProgress = action.payload.userProgress;
            state.success = null
        },

        // UPDATE USER_PROGRESS
        updateUserProgressSuccess(state, action) {
            // const updatedUserProgressIndex = state.userProgresses.findIndex(userProgress => userProgress._id === action.payload._id);
            // if (updatedUserProgressIndex !== -1) {
            //     state.userProgresses[updatedUserProgressIndex] = action.payload;
            // }
            state.success = "UserProgress updated successfully."
        },
    }
});

// Reducer
export default slice.reducer;

// ----------------------------------------------------------------------

/**
 * TO INITIAL STATE
 * @returns
 */
export function toInitialState() {
    return async () => {
        dispatch(slice.actions.hasInitialState())
    }
}

/**
 * POST USER_PROGRESS
 * @param newUserProgress
 * @returns
 */
export function addUserProgress(userId: string, curriculum: curriculumTypeUserProgress) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.post('/rest_node/user-progress/subscribe-curriculum', { userId, curriculum });
            dispatch(slice.actions.addUpdateUserProgressSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * GET USER_PROGRESS
 * @param userId
 * @returns
 */
export function fetchUserProgress(userId: string) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.get(`/rest_node/user-progress/${userId}`);
            dispatch(slice.actions.fetchUserProgressSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}

/**
 * UPDATE USER_PROGRESS
 * @param updatedUserProgress
 * @returns
 */
export function updateUserProgress(userId: string, curriculumCode: string, tutorialCode: string, taskItemTitle: string, taskItemMarkUser: number, taskItemSpentTime: number) {
    return async () => {
        dispatch(slice.actions.startLoading());
        try {
            const response = await axiosServices.put(`/rest_node/user-progress/update-task-item-progress`, { userId, curriculumCode, tutorialCode, taskItemTitle, taskItemMarkUser, taskItemSpentTime });
            dispatch(slice.actions.updateUserProgressSuccess(response.data));
        } catch (error) {
            dispatch(slice.actions.hasError(error));
        } finally {
            dispatch(slice.actions.finishLoading());
        }
    };
}
```
Project/Frontend/SignConnectPlus/src/types/marksCalculator.ts (new file, mode 100644)

```typescript
// Marks Calculator Type
export type MarksCalculator = {
    predicted_class_name: string,
    confidence: string,
    status: string
};

export interface MarksCalculatorProps {
    marksCalculator: MarksCalculator | null;
    error: object | string | null;
    success: object | string | null;
    isLoading: boolean
}

export interface DefaultRootStateProps {
    marksCalculator: MarksCalculatorProps;
}
```