# Hugging Face Spaces GPU Pricing

## Current Pricing (as of 2024)

### GPU Options for Spaces

| Hardware | GPU | VRAM | CPU | RAM | **Hourly Price** |
|----------|-----|------|-----|-----|------------------|
| **T4 - small** | T4 | 16 GB | 4 vCPU | 15 GB | **$0.40/hour** |
| **T4 - medium** | T4 | 16 GB | 8 vCPU | 30 GB | **$0.60/hour** |
| **A10G - small** | A10G | 24 GB | 4 vCPU | 15 GB | **$1.00/hour** |
| **A10G - large** | A10G | 24 GB | 12 vCPU | 46 GB | **$1.50/hour** |

### Cost Estimates for Your Use Case

#### Inference Only (8,510 test files)

- **T4 (small-medium)**: ~30-60 minutes = **$0.20 - $0.60**
- **A10G small**: ~15-30 minutes = **$0.25 - $0.50**
- **A10G large**: ~15-30 minutes = **$0.38 - $0.75**

#### Training + Inference (3 epochs)

- **T4 (small-medium)**: ~6-8 hours training + ~1 hour inference = **$2.80 - $5.40**
- **A10G small**: ~4-6 hours training + ~0.5 hour inference = **$4.50 - $6.50**
- **A10G large**: ~3-4 hours training + ~0.5 hour inference = **$5.25 - $6.75**

### Important Notes

1. **Pay-per-use**: You only pay while the Space is running with the GPU active
2. **Idle time**: If the Space sleeps, you don't pay for GPU time
3. **Current config**: Set to `gpu-t4` ($0.40/hour)
4. **Recommended**: A10G large ($1.50/hour) for faster training

### Cost Comparison

**For a complete training + inference run:**

| Option | Training Time | Inference Time | **Total Cost** |
|--------|---------------|----------------|----------------|
| T4 (current) | 6-8 hours | 30-60 min | **$2.80 - $5.40** |
| A10G small | 4-6 hours | 15-30 min | **$4.50 - $6.50** |
| A10G large | 3-4 hours | 15-30 min | **$5.25 - $6.75** |

**Savings**: T4 is the cheapest option but the slowest. A10G large costs roughly $2-3 more per run but saves 3-5 hours.

### To Change GPU

Edit `README.md`:

```yaml
hardware: gpu-a10g-large  # Change from gpu-t4
```

Or use Space Settings → Hardware.

### Reference

Full pricing: https://huggingface.co/pricing
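The cost arithmetic above is just hourly rate × estimated runtime. A minimal sketch of that calculation, using the rates from the pricing table (`estimate_cost` is a hypothetical helper for this document, not part of any Hugging Face API):

```python
# Hourly GPU rates (USD) from the Spaces pricing table above.
HOURLY_RATES = {
    "t4-small": 0.40,
    "t4-medium": 0.60,
    "a10g-small": 1.00,
    "a10g-large": 1.50,
}


def estimate_cost(hardware: str, hours: float) -> float:
    """Estimated cost in USD for running `hours` of GPU time on `hardware`."""
    return round(HOURLY_RATES[hardware] * hours, 2)


# Example: A10G large, 3-4 hours of training plus ~0.5 hour of inference.
low = estimate_cost("a10g-large", 3.0 + 0.5)
high = estimate_cost("a10g-large", 4.0 + 0.5)
print(f"A10G large run: ${low:.2f} - ${high:.2f}")
```

Note that the T4 ranges in the tables span the small ($0.40/hour) and medium ($0.60/hour) rates, which is why their upper bounds exceed 0.40 × hours.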