🎵 AI-Powered Music Recommendation

Music That Understands Your Emotions

Harness the power of cutting-edge AI to discover music perfectly matched to your mood. Three specialized models analyze your emotions through text, voice, and facial expressions.

🎯 95%+ Accuracy
Real-Time Analysis
🔒 Enterprise Security
😊 Happy   😢 Sad   😡 Angry   😮 Surprised
🤖 3 AI Models
99.9% Uptime
<2s Response Time
🚀 20+ Technologies
About the Project

What is Moodify?

An intelligent music recommendation system powered by cutting-edge AI/ML technology

🎯

Mission

To revolutionize music discovery by creating an intelligent system that understands and responds to human emotions through advanced AI technology.

🚀

Vision

Providing personalized music recommendations that perfectly match users' emotional states, detected through text, speech, and facial expressions.

💡

Innovation

Combines three self-trained AI/ML models (BERT, CNN-LSTM, ResNet50) with real-time Spotify integration for seamless music discovery.

🌐

Accessibility

Available on web and mobile platforms with progressive web app features, ensuring a seamless experience across all devices.

Important Notice: Be aware of impersonation attempts. The official developer is Son Nguyen (@hoangsonww on GitHub). Do not send money or personal information to anyone claiming to be associated with this project.
Capabilities

Key Features

Comprehensive emotion detection and music recommendation system

📝

Text Emotion Analysis

Advanced BERT-based model analyzes text input to detect emotional content with high accuracy. Supports multiple languages and contexts.

NLP BERT Transformers
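
A minimal sketch of how such a BERT classifier can be invoked through Hugging Face Transformers; the checkpoint path is a placeholder for the project's self-trained weights, not a published model:

from transformers import pipeline

# "models/bert-emotion" is a hypothetical path standing in for the
# project's fine-tuned BERT emotion checkpoint.
classifier = pipeline("text-classification", model="models/bert-emotion")

def detect_text_emotion(text: str) -> dict:
    """Return the top emotion label and its confidence score."""
    best = classifier(text)[0]   # e.g. {"label": "happy", "score": 0.97}
    return {"emotion": best["label"], "confidence": round(best["score"], 3)}

print(detect_text_emotion("I can't stop smiling today!"))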
🎤

Speech Emotion Detection

CNN-LSTM architecture processes audio features (MFCC, pitch, energy) to identify emotions from voice recordings in real-time.

CNN LSTM Librosa
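
A minimal sketch of the Librosa feature-extraction step (MFCC, pitch, energy) that feeds the CNN-LSTM; the sampling rate, feature counts, and time pooling are illustrative assumptions, and model inference is omitted:

import librosa
import numpy as np

def extract_audio_features(path: str, sr: int = 22050, n_mfcc: int = 40) -> np.ndarray:
    """Compute a fixed-length feature vector for one voice recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    pitch = librosa.yin(y, fmin=50, fmax=500, sr=sr)          # per-frame pitch estimate
    energy = librosa.feature.rms(y=y)[0]                      # per-frame RMS energy
    # Average over time so every clip yields the same vector length.
    return np.concatenate([mfcc.mean(axis=1), [pitch.mean()], [energy.mean()]])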
📸

Facial Expression Recognition

ResNet50-based deep learning model detects emotions from facial expressions with state-of-the-art accuracy using computer vision.

ResNet50 OpenCV FER
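
A minimal sketch of the vision pipeline under stated assumptions: OpenCV's Haar cascade finds the face, and a torchvision ResNet50 with a hypothetical 7-class head (the label set and weights path are placeholders) classifies the crop:

import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprised"]  # assumed label set

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))
# model.load_state_dict(torch.load("facial_emotion_resnet50.pt"))  # hypothetical weights file
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def detect_facial_emotion(image_path: str) -> str:
    """Classify the first detected face in an image (assumes one is present)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = cascade.detectMultiScale(gray, 1.3, 5)[0]
    face = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(face).unsqueeze(0))
    return EMOTIONS[int(logits.argmax())]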
🎵

Spotify Integration

Seamless integration with Spotify API provides personalized music recommendations based on detected emotions and user preferences.

Spotify API OAuth
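
A minimal sketch of mapping a detected emotion to Spotify recommendation parameters through the Web API's /recommendations endpoint; the emotion-to-audio-feature mapping and genre seeds are illustrative rather than the project's exact tuning, and OAuth token acquisition is omitted:

import requests

# Illustrative mapping from detected emotion to Spotify tuning parameters.
EMOTION_PARAMS = {
    "happy":     {"seed_genres": "pop",      "target_valence": 0.9, "target_energy": 0.8},
    "sad":       {"seed_genres": "acoustic", "target_valence": 0.2, "target_energy": 0.3},
    "angry":     {"seed_genres": "rock",     "target_valence": 0.3, "target_energy": 0.9},
    "surprised": {"seed_genres": "dance",    "target_valence": 0.7, "target_energy": 0.7},
}

def recommend_tracks(emotion: str, access_token: str, limit: int = 10) -> list:
    """Return track names matched to the detected emotion."""
    params = {**EMOTION_PARAMS.get(emotion, EMOTION_PARAMS["happy"]), "limit": limit}
    resp = requests.get(
        "https://api.spotify.com/v1/recommendations",
        headers={"Authorization": f"Bearer {access_token}"},
        params=params,
        timeout=10,
    )
    resp.raise_for_status()
    return [track["name"] for track in resp.json()["tracks"]]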
👤

User Management

Complete user authentication system with JWT tokens, profile management, mood history tracking, and listening preferences.

JWT MongoDB Redis
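
A minimal sketch of issuing and verifying such tokens with PyJWT; the secret, claims, and expiry are placeholders, and the deployed backend may rely on a framework helper (e.g. DRF Simple JWT) with RS256 key pairs instead:

import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"  # hypothetical; production uses asymmetric keys

def issue_token(user_id: str) -> str:
    """Create a short-lived access token for an authenticated user."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {"sub": user_id, "iat": now, "exp": now + datetime.timedelta(hours=1)}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Return the user id if the token is valid; raises jwt.InvalidTokenError otherwise."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]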
📊

Analytics Dashboard

Comprehensive data analytics using Apache Spark and Hadoop for emotion trends, model performance, and user insights visualization.

Spark Hadoop Matplotlib
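
A minimal sketch of an emotion-trend aggregation in PySpark; the input path and column names are assumptions about how the mood history is exported:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("moodify-emotion-trends").getOrCreate()

# Hypothetical export of user mood history: one JSON record per detection.
history = spark.read.json("mood_history.json")

daily_trends = (
    history
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("day", "emotion")
    .agg(F.count("*").alias("detections"),
         F.avg("confidence").alias("avg_confidence"))
    .orderBy("day")
)

daily_trends.show()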
📱

Mobile Application

Native mobile experience built with React Native and Expo, providing seamless emotion detection and music recommendations on the go.

React Native Expo
🔐

Enterprise Security

Multi-layer security with JWT authentication, rate limiting, input validation, encryption at rest and in transit, and comprehensive audit logging.

Security Encryption Compliance

Real-time Processing

Sub-second response times with Redis caching, load balancing via NGINX, and horizontal scaling through Kubernetes orchestration.

Redis NGINX K8s
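
A minimal sketch of the cache-aside pattern behind those response times, using redis-py; the key format and five-minute TTL are assumptions:

import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_recommendations(user_id: str, emotion: str, compute) -> dict:
    """Return cached recommendations, falling back to the expensive pipeline."""
    key = f"rec:{user_id}:{emotion}"
    cached = cache.get(key)
    if cached:                                   # cache hit: skip model + Spotify calls
        return json.loads(cached)
    result = compute(user_id, emotion)           # cache miss: run the full pipeline
    cache.setex(key, 300, json.dumps(result))    # keep the result for 5 minutes
    return result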
System Design

System Architecture

Scalable, cloud-native microservices architecture

graph TB
    subgraph "Client Layer"
        A[Web App - React]
        B[Mobile App - React Native]
    end

    subgraph "Load Balancer Layer"
        C[NGINX Load Balancer]
    end

    subgraph "Application Layer"
        D[Django Backend API]
        E[AI/ML Service - Flask]
    end

    subgraph "AI/ML Models"
        F[Text Emotion Model<br/>BERT-based]
        G[Speech Emotion Model<br/>CNN + LSTM]
        H[Facial Emotion Model<br/>ResNet50]
    end

    subgraph "External Services"
        I[Spotify API]
    end

    subgraph "Data Layer"
        J[(MongoDB)]
        K[(Redis Cache)]
        L[(SQLite)]
    end

    subgraph "Analytics Layer"
        M[Apache Spark]
        N[Apache Hadoop]
        O[Data Visualization]
    end

    A -->|HTTPS| C
    B -->|HTTPS| C
    C -->|Load Balance| D
    D -->|REST API| E
    E --> F
    E --> G
    E --> H
    D -->|Fetch Music| I
    D --> J
    D --> K
    D --> L
    J -.->|Analytics| M
    M <--> N
    M --> O

    style A fill:#61DAFB
    style B fill:#61DAFB
    style C fill:#009639
    style D fill:#efefef
    style E fill:#ffffff
    style F fill:#FF6F00
    style G fill:#FF6F00
    style H fill:#FF6F00
    style I fill:#1DB954
    style J fill:#47A248
    style K fill:#DC382D
    style L fill:#FFC0CB,stroke:#D87093,stroke-width:2px
    style M fill:#E25A1C
    style N fill:#66CCFF
    style O fill:#FFC0CB,stroke:#D87093,stroke-width:2px

Layered Architecture

  • Client Layer: React web app and React Native mobile app for user interaction
  • Load Balancer: NGINX for distributing traffic and SSL termination
  • Application Layer: Django REST Framework backend and Flask ML services
  • AI/ML Layer: Three specialized models for emotion detection
  • Data Layer: MongoDB for persistence, Redis for caching, SQLite for development
  • Analytics Layer: Apache Spark and Hadoop for big data processing
sequenceDiagram
    participant U as User
    participant FE as Frontend
    participant LB as Load Balancer
    participant BE as Backend API
    participant AI as AI/ML Service
    participant DB as MongoDB
    participant SP as Spotify API
    participant CH as Redis Cache

    U->>FE: Input (Text/Speech/Image)
    FE->>LB: Send Request
    LB->>BE: Route to Backend
    BE->>CH: Check Cache
    alt Cache Hit
        CH-->>BE: Return Cached Result
    else Cache Miss
        BE->>AI: Forward for Analysis
        AI->>AI: Process with ML Model
        AI-->>BE: Return Emotion
        BE->>SP: Request Music Recommendations
        SP-->>BE: Return Tracks
        BE->>DB: Store User History
        BE->>CH: Cache Result
    end
    BE-->>FE: Return Recommendations
    FE-->>U: Display Results
                    

Request Flow

  1. User submits emotion input through web or mobile interface
  2. Request is load-balanced by NGINX to backend instances
  3. Backend checks Redis cache for existing results
  4. If cache miss, forwards to AI/ML service for emotion detection
  5. ML model processes input and returns emotion with confidence score
  6. Backend queries Spotify API for music recommendations
  7. Results are cached in Redis and stored in MongoDB
  8. Response is returned to user with recommendations (a backend-side sketch follows below)
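
A minimal backend-side sketch of steps 3-8 under stated assumptions: the ML service URL is hypothetical, cache is any dict-like store (Redis in production), db is a PyMongo-style handle, and recommend_tracks refers to the Spotify sketch above:

import requests

ML_SERVICE_URL = "http://ml-service:5000/predict"  # hypothetical internal URL

def handle_emotion_request(user_id: str, text: str, cache, db, spotify_token: str) -> dict:
    key = f"rec:{user_id}:{hash(text)}"
    if key in cache:                               # step 3: return the cached result
        return cache[key]

    # Steps 4-5: forward the input to the Flask ML service for emotion detection.
    ml = requests.post(ML_SERVICE_URL, json={"text": text}, timeout=10).json()

    # Step 6: fetch matching tracks from Spotify.
    tracks = recommend_tracks(ml["emotion"], spotify_token)

    result = {"emotion": ml["emotion"], "confidence": ml["confidence"], "tracks": tracks}
    db.mood_history.insert_one({"user": user_id, **result})  # step 7: persist history
    cache[key] = result                                       # step 7: cache the result
    return result                                             # step 8: back to the client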

Docker Compose Microservices Architecture

Load Balancer
  • 🌐 NGINX: Port 80/443, SSL Termination, Reverse Proxy
Application Layer
  • ⚛️ Frontend (React 18): Port 3000, SPA + PWA
  • 🐍 Backend API (Django 4.2): Port 8000, REST + GraphQL
AI/ML Services
  • 🤖 ML Service (Flask + PyTorch): Port 5000, BERT + CNN-LSTM + ResNet50
Data & Cache Layer
  • 🍃 MongoDB: NoSQL Database, Port 27017, User Data + History
  • Redis: In-Memory Cache, Port 6379, Session + Cache

Containerized Deployment

  • Docker Compose: Orchestrates all services with defined dependencies
  • Kubernetes: Production orchestration with auto-scaling and self-healing
  • CI/CD: Jenkins pipeline with automated testing and deployment
  • Blue-Green Deployment: Zero-downtime deployments with instant rollback
  • Cloud Providers: AWS and GCP with Terraform IaC
  • Monitoring: Prometheus, Grafana, and ELK stack for observability

Multi-Layer Security Architecture

1. Perimeter Security
  • 🛡️ DDoS Protection: CloudFlare + AWS Shield
  • 🔥 Web Application Firewall: AWS WAF + ModSecurity
  • ⏱️ Rate Limiting: NGINX + Redis, 1000 req/min
2. Application Security
  • 🔑 JWT Authentication: RS256 asymmetric tokens
  • 👥 RBAC Authorization: Role-Based Access Control
  • Input Validation: Sanitization + type checking
  • 🚫 XSS Protection: Content Security Policy
  • 🔐 CSRF Protection: Token-based validation
3. Data Security
  • 🔒 Encryption at Rest: AES-256-GCM for all data
  • 🌐 Encryption in Transit: TLS 1.3 + HSTS enabled
  • 🗝️ Key Management: AWS KMS + HashiCorp Vault
  • 📋 Audit Logging: ELK Stack + CloudWatch

Security Layers

  • Perimeter: CloudFlare DDoS protection, AWS WAF, NGINX rate limiting (sketched after this list)
  • Authentication: JWT tokens signed with RS256 asymmetric keys
  • Authorization: Role-based access control (RBAC) with fine-grained permissions
  • Data Protection: AES-256 encryption at rest, TLS 1.3 in transit
  • Compliance: GDPR compliant with data retention policies and audit logs
  • Vulnerability Management: Regular security scans with Snyk and SonarQube
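
A minimal sketch of a Redis-backed, fixed-window version of that rate limit (1000 requests per minute per client); in the deployed stack NGINX enforces this at the edge, and the key naming here is an assumption:

import time
import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(client_id: str, limit: int = 1000, window: int = 60) -> bool:
    """Return True while the client stays under its per-window quota."""
    key = f"ratelimit:{client_id}:{int(time.time()) // window}"
    count = r.incr(key)          # atomically count this request
    if count == 1:
        r.expire(key, window)    # drop the counter when the window closes
    return count <= limit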
Tech Stack

Technologies & Tools

Built with modern, production-ready technologies

⚛️ Frontend

React
Redux
Material UI
Axios
Jest

🐍 Backend

Django
Django REST Framework
Flask
JWT
Gunicorn

🤖 AI/ML

PyTorch
TensorFlow
Keras
HuggingFace
OpenCV
Scikit Learn

💾 Databases

MongoDB
Redis
SQLite

☸️ DevOps

Docker
Kubernetes
Jenkins
Terraform
GitHub Actions

☁️ Cloud & Hosting

AWS
GCP
Vercel
Render
Netlify
Infrastructure

Deployment & Operations

Enterprise-grade deployment with multiple strategies

🔵🟢

Blue-Green Deployment

Zero-downtime deployments with instant rollback capability. Two identical production environments ensure smooth transitions and rapid recovery.

  • ✅ Instant traffic switching
  • ✅ Full environment validation
  • ✅ Easy A/B testing
  • ✅ Zero downtime
🐤

Canary Deployment

Progressive rollout strategy with traffic-based validation. Gradually increases traffic from 10% to 100% with automated health monitoring.

  • ✅ Progressive traffic increase
  • ✅ Early issue detection
  • ✅ Automated rollback
  • ✅ Minimal risk
🔄

CI/CD Pipeline

Comprehensive Jenkins pipeline with automated testing, security scanning, and deployment. Includes quality gates and approval workflows.

  • ✅ Automated testing
  • ✅ Security scanning
  • ✅ Quality gates
  • ✅ Multi-environment
☸️

Kubernetes Orchestration

Container orchestration with auto-scaling, self-healing, and service discovery. Manages complex deployments across multiple cloud providers.

  • ✅ Auto-scaling
  • ✅ Self-healing
  • ✅ Load balancing
  • ✅ Service mesh

Monitoring & Observability

📊

Metrics

Prometheus & Grafana dashboards for real-time system metrics, application performance, and business KPIs.

📝

Logging

ELK Stack (Elasticsearch, Logstash, Kibana) for centralized log aggregation, search, and analysis.

🔍

Tracing

Jaeger distributed tracing for request tracking across microservices and performance bottleneck identification.

🚨

Alerting

PagerDuty and Slack integration for instant notifications on critical issues with automated escalation policies.

Try It Now

Live Deployment

Experience Moodify in action

🌐

Web Application

Full-featured web app with all emotion detection modes and music recommendations.

Launch App →
Live on Vercel
🔧

Backend API

RESTful API with comprehensive documentation via Swagger and Redoc.

View API Docs →
Live on Render
💻

Source Code

Open-source project with comprehensive documentation and contribution guidelines.

View on GitHub →
MIT License

⚠️ Important Notes

  • The backend is hosted on Render's free tier (512MB RAM, 0.1 CPU), which may cause slower initial load times and occasional restarts due to memory constraints.
  • Facial and speech emotion detection models may cause temporary server issues due to heavy processing requirements.
  • For optimal performance, consider cloning the repository and running locally.
  • No guarantee of uptime or performance with current free-tier deployment.
<2s Average Response Time
🎯 95%+ Emotion Detection Accuracy
🔄 99.9% Target Uptime
🌍 24/7 Global Availability