🎭 Speech Emotion Recognition

Python 3.10 · PyTorch · MIT License

A production-ready deep learning system for detecting emotions from speech, trained on the RAVDESS dataset. It reaches 75% validation accuracy through an enhanced CNN architecture with residual connections, attention mechanisms, and comprehensive data augmentation.

🎯 Project Achievements

✅ Primary Goal Met: 75% validation accuracy (66.2% test accuracy)
✅ Enhanced Features: 196-dimensional feature vectors
✅ Advanced Architecture: 11.8M parameter CNN with residual blocks and attention
✅ Production Ready: Complete pipeline from data to deployment

📊 Results Summary

| Metric | Baseline Model | Enhanced Model | Improvement |
|---|---|---|---|
| Validation Accuracy | 38.89% | 75.00% | +36.11% |
| Test Accuracy | 39.81% | 66.20% | +26.39% |
| Parameters | 536K | 11.8M | 22x larger |
| Features | 143 | 196 | +37% richer |

Per-Class Performance (Test Set)

| Emotion | Baseline | Enhanced | Improvement | Status |
|---|---|---|---|---|
| Neutral | 78.57% | 71.43% | -7.14% | ✓ Good |
| Calm | 85.71% | 85.71% | +0.00% | ✓ Excellent |
| Happy | 6.90% | 58.62% | +51.72% | 🚀 Huge gain |
| Sad | 0.00% | 51.72% | +51.72% | 🚀 Huge gain |
| Angry | 31.03% | 68.97% | +37.94% | ✓ Major gain |
| Fearful | 13.79% | 41.38% | +27.59% | ✓ Good gain |
| Disgust | 68.97% | 75.86% | +6.89% | ✓ Improved |
| Surprised | 55.17% | 79.31% | +24.14% | ✓ Major gain |

🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/yourusername/speech-emotion-recognition.git
cd speech-emotion-recognition

# Create conda environment
conda create -n voice_ai python=3.10
conda activate voice_ai

# Install dependencies
pip install -r requirements.txt

Usage

1. Download Dataset

python data/download_dataset.py

2. Prepare Features

python data/prepare_data.py

3. Train Enhanced Model

python models/train_v2.py

4. Evaluate Model

python models/evaluate_v2.py

5. Run Streamlit Demo

streamlit run deployment/app.py

Quick Inference

import torch
from models.emotion_cnn_v2 import ImprovedEmotionCNN
from data.prepare_data import extract_features

# Load model
model = ImprovedEmotionCNN(num_classes=8)
checkpoint = torch.load('results/best_model_v2.pth', map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()

# Extract features from audio
features = extract_features('path/to/audio.wav')
features_tensor = torch.FloatTensor(features).unsqueeze(0).unsqueeze(0)

# Predict
with torch.no_grad():
    output = model(features_tensor)
    probs = torch.softmax(output, dim=1)
    predicted = output.argmax(1).item()

emotions = ['neutral', 'calm', 'happy', 'sad', 'angry', 'fearful', 'disgust', 'surprised']
print(f"Predicted emotion: {emotions[predicted]}")
print(f"Confidence: {probs[0][predicted]:.2%}")

🏗️ Architecture

Enhanced Model (V2) - 75% Accuracy

Features (196 dimensions):

  • Mel-spectrograms: 128 bands
  • MFCCs: 13 coefficients
  • Delta MFCCs: 13 (temporal dynamics)
  • Delta-Delta MFCCs: 13 (acceleration)
  • Chromagram: 12 (pitch content)
  • Spectral Contrast: 7 (texture)
  • Tonnetz: 6 (harmonic content)
  • Additional: 4 (ZCR, centroid, rolloff, bandwidth)
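
For reference, here is a minimal sketch of how a 196-row feature stack like this can be assembled with librosa (128 + 3×13 + 12 + 7 + 6 + 4 = 196 rows). The function name and parameter values are illustrative; data/prepare_data.py may use different settings.

import numpy as np
import librosa

def extract_feature_stack(path, sr=22050, n_fft=2048, hop_length=512):
    """Stack per-frame features into a (196, n_frames) matrix."""
    y, sr = librosa.load(path, sr=sr)

    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
                                       n_fft=n_fft, hop_length=hop_length))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop_length)
    d_mfcc = librosa.feature.delta(mfcc)            # temporal dynamics
    dd_mfcc = librosa.feature.delta(mfcc, order=2)  # acceleration
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr, hop_length=hop_length)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y, hop_length=hop_length)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop_length)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, hop_length=hop_length)
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr, hop_length=hop_length)

    # 128 + 13*3 + 12 + 7 + 6 + 4 = 196 rows
    parts = [mel, mfcc, d_mfcc, dd_mfcc, chroma, contrast, tonnetz,
             zcr, centroid, rolloff, bandwidth]
    n = min(p.shape[1] for p in parts)  # tonnetz frame count can differ slightly
    return np.vstack([p[:, :n] for p in parts])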

Model Architecture:

Input (1, 196, 128)
    ↓
Conv2d 7×7, stride 2 → 64 channels
    ↓
Residual Block × 2 (64 channels) + Channel Attention
    ↓
Residual Block × 2 (128 channels) + Channel Attention
    ↓
Residual Block × 2 (256 channels) + Channel Attention
    ↓
Residual Block × 2 (512 channels) + Channel Attention
    ↓
Dual Global Pooling (Avg + Max) → 1024 features
    ↓
FC 1024 → 512 → 256 → 8 (emotions)

Total Parameters: 11,873,480

Key Improvements:

  • ✅ Residual connections for deeper learning
  • ✅ Channel attention mechanisms
  • ✅ Dual pooling (average + max)
  • ✅ Batch normalization throughout
  • ✅ Dropout (0.4) for regularization
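
For concreteness, here is a minimal PyTorch sketch of the two building blocks named above: a squeeze-and-excitation style channel attention module and a residual block with batch normalization. Class names and hyperparameters are illustrative and need not match emotion_cnn_v2.py exactly.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention: reweight channels by importance."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class ResidualBlock(nn.Module):
    """Two 3x3 convs with batch norm and attention; identity (or 1x1) shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.attn = ChannelAttention(out_ch)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.attn(self.bn2(self.conv2(out)))
        return torch.relu(out + self.shortcut(x))

x = torch.randn(2, 64, 49, 32)                      # (batch, channels, freq, time)
print(ResidualBlock(64, 128, stride=2)(x).shape)    # torch.Size([2, 128, 25, 16])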

Baseline Model (V1) - 39% Accuracy

Features (143 dimensions):

  • Mel-spectrograms: 128
  • MFCCs: 13
  • ZCR: 1
  • Spectral Centroid: 1

Model Architecture:

  • 3 Conv blocks (64 → 128 → 256)
  • Global average pooling
  • FC layers: 256 → 128 → 8
  • Total Parameters: 536,584

📁 Project Structure

speech-emotion-recognition/
├── data/
│   ├── download_dataset.py        # RAVDESS dataset downloader
│   ├── prepare_data.py            # Enhanced feature extraction (196 features)
│   ├── dataset.py                 # PyTorch Dataset with train/val/test splits
│   └── augmentation.py            # Data augmentation (SpecAugment, noise, etc.)
│
├── models/
│   ├── emotion_cnn.py             # Baseline CNN (536K params)
│   ├── emotion_cnn_v2.py          # Enhanced CNN (11.8M params) ⭐
│   ├── train.py                   # Baseline training script
│   ├── train_v2.py                # Enhanced training script ⭐
│   ├── evaluate.py                # Baseline evaluation
│   └── evaluate_v2.py             # Enhanced evaluation ⭐
│
├── deployment/
│   ├── app.py                     # Streamlit demo application
│   └── requirements.txt           # Deployment dependencies
│
├── notebooks/
│   └── emotion_eda.ipynb          # Exploratory analysis + model comparison
│
├── results/
│   ├── best_model.pth             # Baseline model weights
│   ├── best_model_v2.pth          # Enhanced model weights ⭐
│   ├── confusion_matrix_v2.png    # Confusion matrix visualization
│   ├── per_class_accuracy_v2.png  # Per-class performance chart
│   └── model_comparison.png       # Baseline vs Enhanced comparison
│
├── runs/                          # TensorBoard logs
├── README.md                      # This file
├── requirements.txt               # Python dependencies
└── LICENSE                        # MIT License

🔧 Technical Details

Dataset: RAVDESS

Ryerson Audio-Visual Database of Emotional Speech and Song

  • 1,440 speech files
  • 8 emotion classes (neutral, calm, happy, sad, angry, fearful, disgust, surprised)
  • 24 professional actors (12 male, 12 female)
  • Controlled recording environment
  • Download: https://zenodo.org/record/1188976
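
Emotion labels come directly from the RAVDESS filename convention: seven dash-separated fields, with the third field encoding the emotion. A small, self-contained sketch of that mapping (how data/prepare_data.py implements it may differ):

from pathlib import Path

# Filename fields: modality-vocalchannel-emotion-intensity-statement-repetition-actor
# e.g. 03-01-06-01-02-01-12.wav -> emotion code 06 = fearful
EMOTIONS = {1: 'neutral', 2: 'calm', 3: 'happy', 4: 'sad',
            5: 'angry', 6: 'fearful', 7: 'disgust', 8: 'surprised'}

def label_from_filename(path):
    """Return the emotion label encoded in a RAVDESS filename."""
    return EMOTIONS[int(Path(path).stem.split('-')[2])]

print(label_from_filename('03-01-06-01-02-01-12.wav'))  # fearful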

Training Configuration (Enhanced Model)

config = {
    'batch_size': 24,
    'learning_rate': 0.001,
    'epochs': 150,
    'optimizer': 'AdamW',
    'weight_decay': 1e-4,
    'loss': 'CrossEntropyLoss + Label Smoothing (0.1)',
    'lr_scheduler': 'ReduceLROnPlateau (patience=8, factor=0.5)',
    'early_stopping': 'patience=20',
    'mixed_precision': 'FP16',
    'gradient_clipping': 'max_norm=1.0',
    'data_augmentation': True
}
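
As a rough illustration of how these settings fit together in one training step (mixed-precision forward pass, unscale-then-clip gradients, plateau scheduler stepped on validation accuracy); the actual loop in train_v2.py may be organized differently:

import torch
import torch.nn as nn
from models.emotion_cnn_v2 import ImprovedEmotionCNN

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = ImprovedEmotionCNN(num_classes=8).to(device)

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='max', factor=0.5, patience=8)
scaler = torch.cuda.amp.GradScaler(enabled=(device == 'cuda'))

def train_one_epoch(loader):
    model.train()
    for features, labels in loader:
        features, labels = features.to(device), labels.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=(device == 'cuda')):  # FP16 forward/loss
            loss = criterion(model(features), labels)
        scaler.scale(loss).backward()
        scaler.unscale_(optimizer)                  # so clipping sees unscaled gradients
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        scaler.step(optimizer)
        scaler.update()

# After each validation pass: scheduler.step(val_accuracy), plus early stopping (patience=20).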

Data Augmentation

  • SpecAugment: Time and frequency masking
  • Gaussian Noise: Random noise injection
  • Time Shifting: Temporal variations
  • Augmentation Probability: 60%
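
A minimal sketch of this augmentation recipe applied to a 2-D feature array; mask sizes and noise scale here are illustrative, and data/augmentation.py may implement the transforms differently:

import numpy as np

def augment(spec, p=0.6, rng=None):
    """Apply SpecAugment-style masks, noise, and a time shift to a (features x frames) array."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:            # augmentation probability
        return spec
    spec = spec.copy()
    n_freq, n_time = spec.shape

    # Frequency masking: zero out a random band of feature rows
    f = int(rng.integers(0, max(2, n_freq // 10)))
    f0 = int(rng.integers(0, n_freq - f + 1))
    spec[f0:f0 + f, :] = 0.0

    # Time masking: zero out a random span of frames
    t = int(rng.integers(0, max(2, n_time // 10)))
    t0 = int(rng.integers(0, n_time - t + 1))
    spec[:, t0:t0 + t] = 0.0

    # Gaussian noise injection, then a small circular time shift
    spec = spec + rng.normal(0.0, 0.01, size=spec.shape)
    return np.roll(spec, int(rng.integers(-n_time // 10, n_time // 10 + 1)), axis=1)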

Hardware Requirements

  • Recommended: NVIDIA GPU with 8GB+ VRAM
  • Tested on: RTX 5060 Ti
  • Training Time: ~2.5 hours (150 epochs)
  • Inference: <1 second per file

📊 Monitoring & Visualization

TensorBoard

tensorboard --logdir=runs/

View real-time training metrics:

  • Training/validation loss
  • Training/validation accuracy
  • Learning rate schedule
  • Per-class accuracy
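
These curves come from ordinary SummaryWriter scalar logging; a minimal sketch (tag names and values are illustrative, not necessarily those used by train_v2.py):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/enhanced_v2')   # any subfolder of runs/ works

def log_epoch(epoch, train_loss, val_loss, val_acc, lr):
    # One scalar per curve shown in the dashboard
    writer.add_scalar('Loss/train', train_loss, epoch)
    writer.add_scalar('Loss/val', val_loss, epoch)
    writer.add_scalar('Accuracy/val', val_acc, epoch)
    writer.add_scalar('LearningRate', lr, epoch)

log_epoch(0, 1.83, 1.91, 0.42, 1e-3)   # placeholder numbers, for illustration only
writer.close()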

Generated Visualizations

  • Confusion Matrix: Shows emotion confusion patterns
  • Per-Class Accuracy: Bar chart of individual emotion performance
  • Model Comparison: Baseline vs Enhanced side-by-side

🎓 Key Learnings

What Worked

  1. Enhanced Features: Delta MFCCs and Chromagram were crucial for distinguishing similar emotions
  2. Residual Connections: Enabled much deeper learning without degradation
  3. Channel Attention: Helped model focus on important frequency bands
  4. Data Augmentation: SpecAugment significantly improved generalization
  5. Label Smoothing: Prevented overconfidence and improved calibration

Challenges Overcome

  • Happy vs Sad Confusion: Solved with chromagram (pitch) and delta MFCCs (dynamics)
  • Overfitting: Addressed with dropout, weight decay, and augmentation
  • Training Stability: Fixed with gradient clipping and batch normalization

Remaining Challenges

  • Fearful Emotion: Still only 41.38% accuracy (confused with other negative emotions)
  • Test-Val Gap: 75% validation vs 66.2% test suggests some overfitting

🚀 Deployment

Hugging Face Model Hub

The trained model is available on Hugging Face:

from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="yourusername/speech-emotion-recognition",
    filename="best_model_v2.pth"
)
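
Once downloaded, the checkpoint should load the same way as in the Quick Inference example, assuming the published file keeps the same 'model_state_dict' layout:

import torch
from models.emotion_cnn_v2 import ImprovedEmotionCNN

model = ImprovedEmotionCNN(num_classes=8)
checkpoint = torch.load(model_path, map_location='cpu')  # model_path from hf_hub_download above
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()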

Streamlit Cloud

Live demo: [Coming Soon]

Local Demo

streamlit run deployment/app.py

Features:

  • Audio file upload
  • Real-time emotion prediction
  • Confidence scores visualization
  • Top-3 predictions
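
A condensed sketch of the demo's core flow (upload → features → prediction → top-3 confidences); the real deployment/app.py may differ, and the temporary-file handling here is just one simple approach:

import torch
import streamlit as st
from models.emotion_cnn_v2 import ImprovedEmotionCNN
from data.prepare_data import extract_features

EMOTIONS = ['neutral', 'calm', 'happy', 'sad', 'angry', 'fearful', 'disgust', 'surprised']

@st.cache_resource
def load_model():
    model = ImprovedEmotionCNN(num_classes=8)
    ckpt = torch.load('results/best_model_v2.pth', map_location='cpu')
    model.load_state_dict(ckpt['model_state_dict'])
    return model.eval()

st.title('Speech Emotion Recognition')
audio = st.file_uploader('Upload a WAV file', type=['wav'])
if audio is not None:
    st.audio(audio)
    with open('temp.wav', 'wb') as f:               # write to disk so librosa can load it
        f.write(audio.getvalue())
    features = extract_features('temp.wav')
    x = torch.FloatTensor(features).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(load_model()(x), dim=1)[0]
    top3 = torch.topk(probs, 3)
    st.subheader(f'Predicted: {EMOTIONS[int(top3.indices[0])]}')
    for p, i in zip(top3.values.tolist(), top3.indices.tolist()):
        st.write(f'{EMOTIONS[i]}: {p:.1%}')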

📈 Performance Metrics

Classification Report (Enhanced Model)

              precision    recall  f1-score   support

     neutral      0.667     0.714     0.690        14
        calm      0.686     0.857     0.762        28
       happy      0.531     0.586     0.557        29
         sad      0.500     0.517     0.508        29
       angry      0.769     0.690     0.727        29
     fearful      0.706     0.414     0.522        29
     disgust      0.688     0.759     0.721        29
   surprised      0.793     0.793     0.793        29

    accuracy                          0.662       216
   macro avg      0.667     0.666     0.660       216
weighted avg      0.667     0.662     0.658       216
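
This is standard scikit-learn output; a sketch of how an evaluation script can regenerate it from model predictions, assuming a test loader that yields (features, labels) batches:

import torch
from sklearn.metrics import classification_report, confusion_matrix

EMOTIONS = ['neutral', 'calm', 'happy', 'sad', 'angry', 'fearful', 'disgust', 'surprised']

@torch.no_grad()
def report(model, loader, device='cpu'):
    """Collect predictions over a test loader and print per-class metrics."""
    y_true, y_pred = [], []
    model.eval()
    for features, labels in loader:
        y_pred += model(features.to(device)).argmax(1).cpu().tolist()
        y_true += labels.tolist()
    print(classification_report(y_true, y_pred, target_names=EMOTIONS, digits=3))
    print(confusion_matrix(y_true, y_pred))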

🛠️ Development

Running Tests

# Test model architecture
python models/emotion_cnn_v2.py

# Test dataset loading
python data/dataset.py

# Check environment
python quick_start.py

Training from Scratch

# Complete pipeline
./run_pipeline.sh

# Or step by step:
python data/download_dataset.py
python data/prepare_data.py
python models/train_v2.py
python models/evaluate_v2.py

📚 References

  1. RAVDESS Dataset: Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). PLoS ONE 13(5): e0196391.

  2. SpecAugment: Park et al. (2019) "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition"

  3. ResNet: He et al. (2016) "Deep Residual Learning for Image Recognition"

  4. Channel Attention: Hu et al. (2018) "Squeeze-and-Excitation Networks"

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • RAVDESS dataset creators for the high-quality emotion database
  • PyTorch team for the excellent deep learning framework
  • librosa developers for comprehensive audio processing tools

📧 Contact

For questions or feedback, please open an issue on GitHub.


Built with ❤️ using PyTorch, librosa, and Streamlit
