Commit 8e68fddf authored by K.Tharmikan

Update README.md

parent 9400f71d
3. To train and evaluate the SVM model on the labeled facial expression dataset.
> Heisapirashoban.n
**Main objectives**
1. To develop a mood-based music recommendation system that uses live voice recognition techniques to detect the user's mood and recommend music accordingly.
2. To research and identify suitable machine learning algorithms (CNN) for mood detection using live voice recognition techniques.
**Sub objectives**
1. Developing a database of music tracks categorized based on their mood states.
2. Designing and implementing a live voice recognition module to detect the user's mood state.
3. Developing a CNN model to classify the user's mood state based on the live voice input.
4. Designing and implementing a music recommendation module that recommends appropriate music based on the user's mood state and preferences.
5. Evaluating the performance of the proposed system in terms of accuracy, precision, and user satisfaction.
> M.A. Miqdad Ali Riza
**Main objectives**
1. To develop a system that creates a song playlist for an individual user using collaborative algorithm filtering
**Sub objectives**
1. To study the existing playlist creation systems and their limitations.
2. To understand the concepts of collaborative algorithm filtering and its application in playlist creation.
3. To identify the factors that influence the user's music preferences.
4. To collect and analyze the user's listening history and music preferences.
5. To design and develop a collaborative algorithm filtering system for playlist creation.
6. To evaluate the performance of the developed system and compare it with existing playlist creation systems.
7. To collect user feedback on the developed system and identify areas for improvement.
8. To suggest future research directions for improving the efficiency and effectiveness of the developed system.
> R.R.Stelin Dinoshan
**Main objectives**
1. To develop a multiclass classification model for song classification based on mood using base and frequency features.
2. To achieve high accuracy and efficiency in the multiclass classification model.
3. To compare the performance of the proposed model with existing classification models in the field.
**Sub objectives**
1. To collect a diverse dataset of songs from various genres and moods.
2. To preprocess the audio signals and extract base and frequency features.
3. To design and train a deep learning model, such as a convolutional neural network (CNN) or recurrent neural network (RNN), for multiclass classification of songs based on mood.
4. To optimize the model parameters and evaluate its performance using various metrics such as accuracy, precision, recall, and F1 score.
5. To investigate the impact of different audio features, such as spectral features, rhythm features, and tempo features, on the classification accuracy.
6. To explore the effectiveness of transfer learning techniques for mood classification of songs using base and frequency features.
7. To analyze the potential applications and implications of the proposed model in the music industry and related fields.
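The metrics named in sub objective 4 can be computed directly with scikit-learn. A minimal sketch on invented label vectors for a four-class mood task (the `y_true`/`y_pred` values below are illustrative only, not real results):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground truth and predictions for a 4-class mood task (classes 0-3).
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 0, 3, 3]

# "macro" averages each per-class score equally, which suits balanced mood classes.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1       :", f1_score(y_true, y_pred, average="macro"))
```

With imbalanced mood classes, `average="weighted"` would weight each class by its support instead.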
**Other necessary information**
> Research Problem
1. Manually changing songs to fit one's mood is tedious, and music information retrieval is difficult because of the variety of music and the emotions attached to it.
2. Music should be recommended according to the user's current emotional needs, yet existing approaches are based on search queries rather than emotional state.
3. The proposal is to show how music can affect the user's mood and how to choose the right music tracks to improve it, instead of manual sorting and playing.
> Nature of solution
The proposed system enables direct interaction between the user and the music player. The system captures the user's face with the camera and feeds the captured images into a Convolutional Neural Network, which predicts the emotion. The emotion derived from the captured image is then used to build a playlist of songs. The main aim of the proposed system is to provide a music playlist automatically to improve the user's mood, which can be happy, sad, neutral, or surprised. If the system detects a negative emotion, a curated playlist is presented containing the kinds of music most likely to lift the person's mood. Music recommendation based on facial emotion recognition contains four modules.
* **Face emotion recognition based mood detection:** implementation of an emotion classifier using OpenCV and scikit-learn. It uses a support vector machine (SVM) classifier to recognize mood based on images of faces.

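A minimal sketch of such an SVM classifier, with stated assumptions: the feature vectors below are synthetic stand-ins for flattened face crops, and the mood labels follow the four moods named above (a real pipeline would first detect and crop faces with OpenCV before extracting features):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

MOODS = ["happy", "sad", "neutral", "surprised"]  # moods named in this README

rng = np.random.default_rng(0)
# Synthetic stand-ins for flattened, preprocessed face crops.
X = rng.normal(size=(200, 16))
y = rng.integers(0, len(MOODS), size=200)
X[:, 0] += 4.0 * y  # make the toy classes separable for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear", C=1.0)  # linear kernel; RBF is another common choice
clf.fit(X_tr, y_tr)
acc = (clf.predict(X_te) == y_te).mean()
print("test accuracy:", round(acc, 2))
```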
* **Mood detection with voice recognition techniques:** the algorithm uses a neural network to classify mood in speech, using the RAVDESS emotional speech audio dataset as training data. The input to the model is a voice sample, which is pre-processed to extract features, and the output is the detected mood.

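A rough sketch of this module's pipeline under heavy assumptions: the waveforms and the spectrogram helper below are synthetic stand-ins (a real system would load RAVDESS clips with an audio library such as librosa), and scikit-learn's `MLPClassifier` stands in for the CNN described above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectrogram_features(wave, frame=64, n_frames=8):
    """Toy waveform -> log-magnitude spectrogram, flattened to one vector."""
    frames = wave[: frame * n_frames].reshape(n_frames, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
    return np.log1p(spec).ravel()               # log compression, flattened

rng = np.random.default_rng(1)
MOODS = ["calm", "happy", "sad", "angry"]  # a subset of RAVDESS emotion labels
# Synthetic "recordings": each mood gets a different dominant frequency.
X, y = [], []
for label in range(len(MOODS)):
    for _ in range(30):
        t = np.arange(512)
        tone = np.sin(2 * np.pi * (0.02 + 0.03 * label) * t)
        X.append(spectrogram_features(tone + 0.3 * rng.normal(size=512)))
        y.append(label)
X, y = np.array(X), np.array(y)

# MLPClassifier stands in for the CNN on real mel-spectrogram inputs.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X, y)
print("train accuracy:", net.score(X, y))
```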
* **Create song playlist on user wish list using machine learning techniques:** recommend songs to a user based on their preferences using collaborative filtering. The process involves building a dataset of user-song ratings and using this data to train the collaborative filtering algorithm.

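The collaborative filtering step can be sketched as user-based filtering over a toy rating matrix. The ratings and the similarity weighting below are illustrative; a production system would use a larger dataset and a tuned recommender library:

```python
import numpy as np

# Toy user-song rating matrix (rows: users, cols: songs; 0 = unrated).
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 0, 0, 1],
    [1, 0, 5, 4, 0],
    [0, 1, 4, 5, 5],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=2):
    """Score unrated songs by similarity-weighted ratings of other users."""
    sims = np.array([cosine_sim(ratings[user], ratings[v])
                     for v in range(len(ratings))])
    sims[user] = 0.0                     # exclude the user themselves
    scores = sims @ ratings              # weighted sum over similar users
    scores[ratings[user] > 0] = -np.inf  # never re-recommend rated songs
    return np.argsort(scores)[::-1][:k]

print("songs for user 0:", recommend(0))
```

Only songs the user has not yet rated are returned, ranked by the ratings of the most similar users.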
* **Song mood classification with base and frequency:** classify songs based on their audio features using a neural network for multi-class classification. The process involves building a dataset of songs and using their audio features as inputs to the neural network, which outputs the classification of the songs.
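A rough sketch of the multi-class song classifier, assuming hypothetical per-song features (tempo, spectral centroid, low-frequency energy) and an invented three-mood taxonomy; scikit-learn's `MLPClassifier` again stands in for the neural network:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

MOOD_CLASSES = ["energetic", "calm", "melancholic"]  # hypothetical taxonomy

rng = np.random.default_rng(2)
# Synthetic per-song features: [tempo (BPM), spectral centroid (Hz), bass energy].
profiles = {0: (140, 3000, 0.8), 1: (80, 1200, 0.3), 2: (60, 900, 0.6)}
rows, labels = [], []
for label, (tempo, centroid, bass) in profiles.items():
    for _ in range(40):
        rows.append([tempo + rng.normal(0, 10),
                     centroid + rng.normal(0, 200),
                     bass + rng.normal(0, 0.1)])
        labels.append(label)

X = StandardScaler().fit_transform(np.array(rows))  # scale features to zero mean
y = np.array(labels)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=0)
net.fit(X, y)
print("train accuracy:", net.score(X, y))
```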