K.Tharmikan / TMP-23-105 · Commits

Commit b5d216a0 authored Sep 06, 2023 by Heisapirashoban Nadarajah
Upload Detect.py
parent 90128a31

Showing 1 changed file with 43 additions and 0 deletions.

Detect.py  0 → 100644  (+43, -0)
import os
import numpy as np
import librosa
import tensorflow as tf

# Load the saved model
model = tf.keras.models.load_model('model.h5')

# Define a function to extract features from the audio file
def extract_features(audio_path):
    # Load the audio file
    y, sr = librosa.load(audio_path, sr=22050, duration=3)
    # Extract features using Mel-frequency cepstral coefficients (MFCC)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Compute the mean of each feature dimension
    mfcc_mean = np.mean(mfcc, axis=1)
    # Compute the first and second derivatives of the MFCC features
    mfcc_delta = librosa.feature.delta(mfcc)
    mfcc_delta2 = librosa.feature.delta(mfcc, order=2)
    # Compute the mean of each derivative feature dimension
    mfcc_delta_mean = np.mean(mfcc_delta, axis=1)
    mfcc_delta2_mean = np.mean(mfcc_delta2, axis=1)
    # Concatenate the feature vectors
    features = np.concatenate((mfcc_mean, mfcc_delta_mean, mfcc_delta2_mean))
    return features

# Define a function to predict the mood from the audio file
def predict_mood(audio_path):
    # Extract features from the audio file
    features = extract_features(audio_path)
    # Reshape the features to match the input shape of the model
    features = features.reshape(1, -1)
    # Make the prediction
    predictions = model.predict(features)
    # Convert the prediction to an emotion label
    emotion_labels = ['angry', 'calm', 'disgust', 'fearful', 'happy', 'neutral', 'sad']
    predicted_label = emotion_labels[np.argmax(predictions)]
    return predicted_label

# Test the function on an example audio file
audio_file = 'audio.wav'
predicted_mood = predict_mood(audio_file)
print('Predicted mood:', predicted_mood)
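The feature vector this script feeds to the model is the concatenation of three 20-dimensional means (MFCC, first delta, second delta), so the saved network's input layer must accept exactly 60 features. A minimal numpy-only sketch of that shape arithmetic, using `np.gradient` as a stand-in for `librosa.feature.delta` and a random matrix in place of a real MFCC array (both are illustrative assumptions, not part of the committed code):

```python
import numpy as np

# Stand-in for an MFCC matrix: 20 coefficients over 130 frames,
# the same (n_mfcc, n_frames) layout librosa.feature.mfcc returns.
mfcc = np.random.rand(20, 130)

# Frame-axis gradients approximate the first and second delta features.
mfcc_delta = np.gradient(mfcc, axis=1)
mfcc_delta2 = np.gradient(mfcc_delta, axis=1)

# Averaging over frames collapses each matrix to a 20-dim vector;
# concatenating the three vectors yields the model's 60-dim input.
features = np.concatenate((
    mfcc.mean(axis=1),
    mfcc_delta.mean(axis=1),
    mfcc_delta2.mean(axis=1),
))
print(features.shape)  # (60,)
```

If the model in `model.h5` was trained on a different `n_mfcc` or a different set of statistics, the `reshape(1, -1)` call in `predict_mood` will still succeed but the prediction will fail or be meaningless, so the 60-feature contract is worth checking before loading real audio.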