With the rise of personalized music streaming services, there is a growing need for systems that can recommend music based on users' emotional states. To meet this need, Moodify was developed by Son Nguyen in 2024 to provide personalized music recommendations based on users' detected emotions.
The Moodify project is an integrated emotion-based music recommendation system that combines frontend, backend, AI/ML models, and data analytics to provide personalized music recommendations based on user emotions. The application analyzes text, speech, or facial expressions and suggests music that aligns with the detected emotions.
Supporting both desktop and mobile platforms, Moodify offers a seamless user experience with real-time emotion detection and music recommendations. The project leverages React for the frontend, Django for the backend, and three advanced, self-trained AI/ML models for emotion detection. Data analytics scripts are used to visualize emotion trends and model performance. Users will be directed to Spotify to listen to the recommended music, and they can save their favorite tracks to their Spotify account.
Moodify provides personalized music recommendations based on users' emotional states detected through text, speech, and facial expressions. It interacts with a Django-based backend, AI/ML models for emotion detection, and utilizes data analytics for visual insights into emotion trends and model performance.
The Moodify app is currently live and deployed on Vercel. You can access the live app using the following link: Moodify.
Feel free to also visit the backend at Moodify Backend API.
For your information, the frontend's production (deployment) branch is `frontend-deployment/production`, and the backend's production (deployment) branch is `main-deployment-branch/production`.
Disclaimer: The backend of Moodify is currently hosted on the Free Tier of Render, so it may take a few seconds to load initially. Additionally, it may spin down after a period of inactivity or high traffic, so please be patient if the backend takes a few seconds to respond.
Additionally, Render allocates only 512MB of memory and 0.1 CPU, so the backend may run out of memory if there are too many requests at once, which may cause the server to restart. The facial and speech emotion detection models may also fail due to these memory constraints, which can likewise cause the server to restart.
There is no guarantee of uptime or performance with the current deployment unless I get more resources (money) to upgrade the server :( Feel free to contact me if you encounter any issues or need further assistance.
The project has a comprehensive file structure combining frontend, backend, AI/ML models, and data analytics components:
```
Moodify-Emotion-Music-App/
├── frontend/                       # React frontend for the web application
│   ├── public/
│   │   ├── index.html              # Main HTML file
│   │   ├── manifest.json           # Web app manifest
│   │   └── favicon.ico             # Favicon for the app
│   │
│   ├── src/
│   │   ├── components/             # Contains all React components
│   │   ├── pages/                  # Contains main pages of the app
│   │   ├── styles/                 # Contains global styles and themes
│   │   ├── context/                # Contains React Context API
│   │   ├── App.js                  # Main App component
│   │   ├── index.js                # Entry point for React
│   │   └── theme.js                # Material UI theme configuration
│   │
│   ├── .gitignore                  # Git ignore file
│   ├── Dockerfile                  # Dockerfile for containerization
│   ├── package.json                # NPM dependencies and scripts
│   └── README.md                   # Project documentation
│
├── backend/                        # Django backend for API services and database management
│   ├── manage.py                   # Django's command-line utility
│   ├── requirements.txt            # Backend dependencies
│   ├── backend/
│   │   ├── settings.py             # Django settings for the project
│   │   ├── urls.py                 # URL declarations for the project
│   │   ├── users/                  # User management components
│   │   └── api/                    # Emotion detection and recommendation APIs
│   │
│   ├── .gitignore                  # Git ignore file
│   ├── Dockerfile                  # Dockerfile for containerization
│   └── db.sqlite3                  # SQLite database (if used)
│
├── ai_ml/                          # AI/ML models for emotion detection
│   ├── data/                       # Datasets for training and testing
│   ├── models/                     # Trained models for emotion detection
│   ├── src/                        # Source files for emotion detection and recommendation
│   │   ├── api/                    # API scripts for running emotion detection services
│   │   ├── recommendation/         # Music recommendation logic
│   │   └── data_processing/        # Data preprocessing scripts
│   │
│   ├── Dockerfile                  # Dockerfile for containerization
│   └── README.md                   # AI/ML documentation
│
├── data_analytics/                 # Data analytics scripts and visualizations
│   ├── emotion_distribution.py     # Script for visualizing emotion distribution
│   ├── training_visualization.py   # Script for visualizing training and validation metrics
│   ├── predictions_analysis.py     # Script for analyzing model predictions
│   ├── recommendation_analysis.py  # Script for visualizing music recommendations
│   ├── spark-hadoop/               # Spark and Hadoop integration scripts
│   └── visualizations/             # Generated visualizations
│
├── kubernetes/                     # Kubernetes deployment files
│   ├── backend-deployment.yaml     # Deployment file for the backend service
│   ├── backend-service.yaml        # Service file for the backend service
│   ├── frontend-deployment.yaml    # Deployment file for the frontend service
│   ├── frontend-service.yaml       # Service file for the frontend service
│   └── configmap.yaml              # ConfigMap for environment variables
│
├── mobile/                         # React Native mobile application
│   ├── App.js                      # Main entry point for React Native app
│   ├── index.js                    # App registry for React Native
│   ├── package.json                # NPM dependencies and scripts
│   ├── components/                 # React Native components
│   │   ├── Footer.js               # Footer component
│   │   ├── Navbar.js               # Header component
│   │   ├── Auth/                   # Authentication components (e.g., Login, Register)
│   │   └── Profile/                # Profile-related components
│   │
│   ├── context/                    # React Context API for state management
│   │   └── DarkModeContext.js      # Dark mode context provider
│   │
│   ├── pages/                      # Main pages of the app
│   │   ├── HomePage.js             # Home page component
│   │   ├── ProfilePage.js          # Profile page component
│   │   ├── ResultsPage.js          # Results page component
│   │   ├── LandingPage.js          # Landing page component
│   │   └── (and more...)
│   │
│   ├── assets/                     # Images, fonts, and other assets
│   ├── styles/                     # Styling files (similar to CSS for web)
│   ├── .gitignore                  # Git ignore file
│   ├── package.json                # Dependencies and scripts
│   └── README.md                   # Mobile app documentation
│
├── nginx/                          # NGINX configuration files (for load balancing and reverse proxy)
│   ├── nginx.conf                  # Main NGINX configuration file
│   └── Dockerfile                  # Dockerfile for NGINX container
│
├── images/                         # Images used in the README documentation
├── docker-compose.yml              # Docker Compose file for containerization
└── README.md                       # Comprehensive README file for the entire project
```
Note: You will also need to create your own `venv` virtual environment and a `.env` file for environment variables (create your own credentials following the example file, or contact me for mine).

Start with setting up and training the AI/ML models, as they will be required for the backend to function properly.
Or, you can download the pre-trained models from the Google Drive links provided in the Pre-Trained Models section. If you choose to do so, you can skip this section for now.
```bash
git clone https://github.com/hoangsonww/Moodify-Emotion-Music-App.git
cd Moodify-Emotion-Music-App/ai_ml
python -m venv venv
source venv/bin/activate   # For macOS/Linux
.\venv\Scripts\activate    # For Windows
pip install -r requirements.txt
```
1. Update the `src/config.py` file: visit the `src/config.py` file and update the configurations as needed, especially your Spotify API keys, and configure ALL the paths.
2. Visit the scripts in the `src/models` directory and update the paths to the datasets and output paths as needed.
3. Train the text emotion model:

   ```bash
   python src/models/train_text_emotion.py
   ```

   Repeat similar commands for the other models as needed (e.g., the facial and speech emotion models).
4. Ensure all trained models are placed in the `models` directory, and that you have trained all necessary models before moving to the next step!
5. Run the `src/models/test_emotion_models.py` script to test the trained models.

Once the AI/ML models are ready, proceed with setting up the backend.
```bash
cd ../backend
python -m venv venv
source venv/bin/activate   # For macOS/Linux
.\venv\Scripts\activate    # For Windows
pip install -r requirements.txt
```
Configure your secrets and environment:

1. Create a `.env` file in the `backend` directory.
2. Add the following to the `.env` file:

   ```
   SECRET_KEY=your_secret_key
   DEBUG=True
   ALLOWED_HOSTS=<your_hosts>
   MONGODB_URI=<your_mongodb_uri>
   ```

3. Visit `backend/settings.py`, add your `SECRET_KEY`, and set `DEBUG` to `True`.
4. Apply the database migrations and start the server:

   ```bash
   python manage.py migrate
   python manage.py runserver
   ```

The backend server will be running at `http://127.0.0.1:8000/`.
Finally, set up the frontend to interact with the backend.
```bash
cd ../frontend
npm install
npm start
```

The frontend will start at `http://localhost:3000`.
Note: If you encounter any problems or need my `.env` file, feel free to contact me.
| HTTP Method | Endpoint | Description |
|---|---|---|
| POST | `/users/register/` | Register a new user |
| POST | `/users/login/` | Login a user and obtain a JWT token |
| GET | `/users/user/profile/` | Retrieve the authenticated user's profile |
| PUT | `/users/user/profile/update/` | Update the authenticated user's profile |
| DELETE | `/users/user/profile/delete/` | Delete the authenticated user's profile |
| POST | `/users/recommendations/` | Save recommendations for a user |
| GET | `/users/recommendations/<str:username>/` | Retrieve recommendations for a user by username |
| DELETE | `/users/recommendations/<str:username>/<str:recommendation_id>/` | Delete a specific recommendation for a user |
| DELETE | `/users/recommendations/<str:username>/` | Delete all recommendations for a user |
| POST | `/users/mood_history/<str:user_id>/` | Add a mood to the user's mood history |
| GET | `/users/mood_history/<str:user_id>/` | Retrieve mood history for a user |
| DELETE | `/users/mood_history/<str:user_id>/` | Delete a specific mood from the user's history |
| POST | `/users/listening_history/<str:user_id>/` | Add a track to the user's listening history |
| GET | `/users/listening_history/<str:user_id>/` | Retrieve listening history for a user |
| DELETE | `/users/listening_history/<str:user_id>/` | Delete a specific track from the user's history |
| POST | `/users/user_recommendations/<str:user_id>/` | Save a user's recommendations |
| GET | `/users/user_recommendations/<str:user_id>/` | Retrieve a user's recommendations |
| DELETE | `/users/user_recommendations/<str:user_id>/` | Delete all recommendations for a user |
| POST | `/users/verify-username-email/` | Verify if a username and email are valid |
| POST | `/users/reset-password/` | Reset a user's password |
| GET | `/users/verify-token/` | Verify a user's token |
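For example, here is a minimal sketch of registering, logging in, and fetching a profile against a locally running backend. The payload field names (`username`, `email`, `password`) and the token key are assumptions based on a typical Django/JWT setup, so check the Swagger docs for the exact schema:

```python
import requests

BASE_URL = "http://127.0.0.1:8000"  # local backend; swap in the Render URL for production

# Register a new user (payload fields are assumed - see /swagger/ for the exact schema)
requests.post(f"{BASE_URL}/users/register/", json={
    "username": "demo_user",
    "email": "demo@example.com",
    "password": "a-strong-password",
})

# Log in to obtain a JWT token
resp = requests.post(f"{BASE_URL}/users/login/", json={
    "username": "demo_user",
    "password": "a-strong-password",
})
token = resp.json().get("token")  # exact key may differ (e.g., "access")

# Use the token on authenticated endpoints
profile = requests.get(
    f"{BASE_URL}/users/user/profile/",
    headers={"Authorization": f"Bearer {token}"},
)
print(profile.json())
```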
| HTTP Method | Endpoint | Description |
|---|---|---|
| POST | `/api/text_emotion/` | Analyze text for emotional content |
| POST | `/api/speech_emotion/` | Analyze speech for emotional content |
| POST | `/api/facial_emotion/` | Analyze facial expressions for emotions |
| POST | `/api/music_recommendation/` | Get music recommendations based on emotion |
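As an illustration, a hedged sketch of calling the text emotion endpoint with Python's `requests` (the `text` field name and the response shape are assumptions; consult the Swagger docs for the real schema):

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8000/api/text_emotion/",
    json={"text": "I just got great news and I can't stop smiling!"},  # field name assumed
)
print(resp.json())  # e.g., a detected emotion plus recommended tracks (shape may differ)
```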
| HTTP Method | Endpoint | Description |
|---|---|---|
| GET | `/admin/` | Access the Django Admin interface |
| HTTP Method | Endpoint | Description |
|---|---|---|
| GET | `/swagger/` | Access the Swagger UI API documentation |
| GET | `/redoc/` | Access the Redoc API documentation |
| GET | `/` | Access the API root endpoint (Swagger UI) |
To access the Django Admin interface, create a superuser first:

```bash
python manage.py createsuperuser
```

Then access the admin panel at `http://127.0.0.1:8000/admin/`.
Our backend APIs are all well-documented using Swagger UI and Redoc. You can access the API documentation at the following URLs:
- Swagger UI: https://moodify-emotion-music-app.onrender.com/swagger
- Redoc: https://moodify-emotion-music-app.onrender.com/redoc

Alternatively, you can run the backend server locally and access the API documentation at the following endpoints:

- Swagger UI: http://127.0.0.1:8000/swagger
- Redoc: http://127.0.0.1:8000/redoc

Regardless of your choice, you should see the following API documentation if everything is running correctly:
Swagger UI:
Redoc:
The AI/ML models are built using PyTorch, TensorFlow, Keras, and Hugging Face Transformers, and are trained on various datasets to detect emotions from text, speech, and facial expressions. These emotion detection models analyze user inputs, capture the nuances of human emotions, and are integrated into the backend API services to provide real-time emotion detection and music recommendations.
The models must be trained before they can be used in the backend services. Ensure that the models are trained and placed in the `models` directory before running the backend server. Refer to the [Getting Started](#getting-started) section for more details.
Examples of training the text emotion model.
To train the models, you can run the provided scripts in the `ai_ml/src/models` directory. These scripts preprocess the data, train the models, and save the trained models for later use. They include:

- `train_text_emotion.py`: Trains the text emotion detection model.
- `train_speech_emotion.py`: Trains the speech emotion detection model.
- `train_facial_emotion.py`: Trains the facial emotion detection model.

Ensure that you have the necessary dependencies, datasets, and configurations set up before training the models. Specifically, make sure to visit the `config.py` file and update the paths to the datasets and output directories to the correct ones on your system.
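For illustration only, the kinds of entries you would update might look like the sketch below. These variable names are hypothetical, not the actual contents of `config.py`; edit the keys the file actually defines:

```python
# Hypothetical illustration of the kinds of settings to update in ai_ml/src/config.py.
# The real variable names may differ - edit the keys the file actually defines.

TEXT_DATASET_PATH = "/path/to/your/text_emotion_dataset.csv"
SPEECH_DATASET_PATH = "/path/to/your/speech_emotion_dataset"
FACIAL_DATASET_PATH = "/path/to/your/facial_emotion_dataset"

TEXT_MODEL_OUTPUT_DIR = "models/text_emotion_model"
SPEECH_MODEL_OUTPUT_DIR = "models/speech_emotion_model"
FACIAL_MODEL_OUTPUT_DIR = "models/facial_emotion_model"

SPOTIFY_CLIENT_ID = "your_spotify_client_id"
SPOTIFY_CLIENT_SECRET = "your_spotify_client_secret"
```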
Note: By default, these scripts will prioritize using your GPU with CUDA (if available) for faster training. However, if that is not available on your machine, the scripts will automatically fall back to using the CPU for training. To ensure that you have the necessary dependencies for GPU training, install PyTorch with CUDA support using the following command:
```bash
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
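A quick way to confirm whether CUDA is visible to PyTorch before kicking off a long training run:

```python
import torch

# Report whether training will run on the GPU (CUDA) or fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training device: {device}")
if device.type == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```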
After that, you can run the `test_emotion_models.py` script to test the trained models and ensure they are providing accurate predictions:

```bash
python src/models/test_emotion_models.py
```
Alternatively, you can run the simple Flask API to test the models via RESTful API endpoints:

```bash
python ai_ml/src/api/emotion_api.py
```
The endpoints are as follows:

- `/text_emotion`: Detects emotion from text input
- `/speech_emotion`: Detects emotion from speech audio
- `/facial_emotion`: Detects emotion from an image
- `/music_recommendation`: Provides music recommendations based on the detected emotion

Important: For more information about training and using the models, please refer to the AI/ML documentation in the `ai_ml` directory.
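As a quick smoke test, here is a sketch of querying the local Flask API. The port and the `text` field name are assumptions (Flask defaults to port 5000), so check `emotion_api.py` for the actual host, port, and payload schema:

```python
import requests

# Assumed defaults: Flask usually serves on port 5000, and the payload
# field name "text" is a guess - confirm both in ai_ml/src/api/emotion_api.py.
resp = requests.post(
    "http://127.0.0.1:5000/text_emotion",
    json={"text": "This rainy weather makes me feel a bit down."},
)
print(resp.json())
```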
However, if training the models is too resource-intensive for you, you can use the following Google Drive links to download the pre-trained models:

- Text emotion model: `model.safetensors`. Please download this `model.safetensors` file and place it into the `ai_ml/models/text_emotion_model` directory.
- Speech emotion scaler: `scaler.pkl`. Please download this and place it into the `ai_ml/models/speech_emotion_model` directory.
- Speech emotion model: `trained_speech_emotion_model.pkl`. Please download this and place it into the `ai_ml/models/speech_emotion_model` directory.
- Facial emotion model: `trained_facial_emotion_model.pt`. Please download this and place it into the `ai_ml/models/facial_emotion_model` directory.

These have been pre-trained on the datasets for you and are ready to use in the backend services or for testing purposes once downloaded and correctly placed in the `models` directory.
Feel free to contact me if you encounter any issues or need further assistance with the AI/ML models.
The `data_analytics` folder provides data analysis and visualization scripts to gain insights into the emotion detection model's performance.

Run the analytics script:

```bash
python data_analytics/main.py
```

View the generated visualizations in the `visualizations` folder.
Emotion Distribution Visualization
Training Loss Curve Visualization
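For a rough idea of what these analytics scripts do, here is a minimal sketch of an emotion-distribution plot. It assumes a CSV of predictions with an `emotion` column, which is an illustrative assumption; the real scripts read their own data sources:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: a CSV of model predictions with an "emotion" column.
# The real scripts read their own data sources - adjust the path and column name.
predictions = pd.read_csv("predictions.csv")

counts = predictions["emotion"].value_counts()
counts.plot(kind="bar", title="Emotion Distribution")
plt.xlabel("Emotion")
plt.ylabel("Count")
plt.tight_layout()
plt.savefig("data_analytics/visualizations/emotion_distribution.png")  # folder must exist
```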
There is also a mobile version of the Moodify app built using React Native and Expo. You can find the mobile app in the `mobile` directory.
```bash
cd ../mobile
yarn install
yarn start
```
If successful, you should see the following home screen:
Feel free to explore the mobile app and test its functionalities!
The project uses NGINX and Gunicorn for load balancing and serving the Django backend. NGINX acts as a reverse proxy server, while Gunicorn serves the Django application.
1. Install NGINX and Gunicorn:

   ```bash
   sudo apt-get update
   sudo apt-get install nginx
   pip install gunicorn
   ```

2. Update `/nginx/nginx.conf` with your configuration (see the sketch after this list).
3. Start NGINX:

   ```bash
   sudo systemctl start nginx
   ```

4. Start Gunicorn:

   ```bash
   gunicorn backend.wsgi:application
   ```

5. Access the application at `http://<server_ip>:8000/`.

Feel free to customize the NGINX configuration and Gunicorn settings as needed for your deployment.
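For reference, a minimal reverse-proxy sketch of what such a server block might contain. This is an illustrative assumption, not the repository's actual `nginx.conf`; ports and server names will vary with your setup:

```nginx
# Illustrative sketch only - not the repository's actual nginx.conf.
# Proxies incoming HTTP traffic to a Gunicorn server on port 8000.
server {
    listen 80;
    server_name example.com;  # replace with your domain or server IP

    location / {
        proxy_pass http://127.0.0.1:8000;  # Gunicorn serving backend.wsgi:application
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```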
The project can be containerized using Docker for easy deployment and scaling. You can create Docker images for the frontend, backend, and AI/ML models.
Build and run all services:

```bash
docker compose up --build
```

Verify that the images have been created:

```bash
docker images
```

If you encounter any errors, try rebuilding your images without using the cache, since Docker's cache may cause issues:

```bash
docker-compose build --no-cache
```
We also added Kubernetes deployment files for the backend and frontend services. You can deploy the services on a Kubernetes cluster using the provided YAML files:

```bash
kubectl apply -f kubernetes/backend-deployment.yaml
kubectl apply -f kubernetes/frontend-deployment.yaml
kubectl expose deployment moodify-backend --type=LoadBalancer --port=8000
kubectl expose deployment moodify-frontend --type=LoadBalancer --port=3000
```

Once deployed, access the backend at `http://<backend_loadbalancer_ip>:8000` and the frontend at `http://<frontend_loadbalancer_ip>:3000`.

Feel free to visit the `kubernetes` directory for more information about the deployment files and configurations.
We have also included a Jenkins pipeline script for automating the build and deployment process. You can use Jenkins to automate the CI/CD process for the Moodify app:

1. Install Jenkins on your server or local machine.
2. Set up a pipeline using the `Jenkinsfile` in the `jenkins` directory.

Feel free to explore the Jenkins pipeline script in the `Jenkinsfile` and customize it as needed for your deployment process.
Contributions are welcome! Feel free to fork the repository and submit a pull request.
Note that this project is still under active development, and any contributions are appreciated.
If you have any suggestions, feature requests, or bug reports, feel free to open an issue here.
Happy Coding and Vibin'! 🎶

Created with ❤️ by Son Nguyen in 2024.