TADA: A Generative Framework for Speech Modeling via Text-Acoustic Dual Alignment

A unified speech-language model that synchronizes speech and text into a single, cohesive stream via 1:1 alignment.
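To illustrate the 1:1 alignment idea, each text token is paired with one acoustic token so both modalities advance in lockstep within a single stream. The tokens below are made up for demonstration only; the real text and codec tokenizers differ.

```python
# Illustrative sketch of 1:1 text-acoustic alignment (toy tokens, not
# the real tokenizers): each text token is interleaved with exactly one
# acoustic token, producing one synchronized stream for the model.
text_tokens = ["Hel", "lo", ","]
audio_tokens = [101, 202, 303]  # hypothetical codec codes

stream = [tok for pair in zip(text_tokens, audio_tokens) for tok in pair]
print(stream)  # ['Hel', 101, 'lo', 202, ',', 303]
```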


MLX-TADA-1B

Pre-converted MLX weights for TADA (Text-Acoustic Dual Alignment) speech synthesis on Apple Silicon.

Built on Llama 3.2 1B. English only.

| Component | File | Size |
|---|---|---|
| LLM + VibeVoice head | `model/weights.safetensors` | 3.0 GB |
| Aligner | `aligner/weights.safetensors` | 852 MB |
| Decoder (DAC) | `decoder/weights.safetensors` | 226 MB |
| Encoder | `encoder/weights.safetensors` | 178 MB |
| **Total** | | **~4.3 GB** |

All weights are stored in bfloat16 safetensors format.
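As a quick sanity check, the per-component sizes in the table sum to the stated total (a trivial sketch; the numbers are copied from the table above):

```python
# Component weight sizes from the table above, in GB
sizes_gb = {
    "LLM + VibeVoice head": 3.0,
    "Aligner": 0.852,
    "Decoder (DAC)": 0.226,
    "Encoder": 0.178,
}
total_gb = sum(sizes_gb.values())
print(f"~{total_gb:.1f} GB")  # ~4.3 GB
```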

Prerequisites

TADA models are built on Meta Llama 3.2. You must request access to the Llama 3.2 models on Hugging Face before using TADA.

Quick Start

```shell
pip install mlx-tada
```

Or install from source:

```shell
git clone https://github.com/HumeAI/tada.git
cd tada/apple
uv venv && uv pip install -e .
```

Download a reference audio clip:

```shell
curl -O "https://storage.googleapis.com/hume_reference_speakers/ljspeech.wav"
```

Python

```python
from mlx_tada import TadaForCausalLM, save_wav

model = TadaForCausalLM.from_pretrained("HumeAI/mlx-tada-1b", quantize=4)
ref = model.load_reference("ljspeech.wav")
out = model.generate("Hello, this is a test of TADA speech synthesis.", ref)
save_wav(out.audio, "output.wav")
```

Offline Use

To download the weights locally for offline inference:

```python
from huggingface_hub import snapshot_download

snapshot_download("HumeAI/mlx-tada-1b", local_dir="./weights/1b")
```

Then load from the local path:

```python
model = TadaForCausalLM.from_weights("./weights/1b", quantize=4)
```

CLI

```shell
python -m mlx_tada.generate \
  --weights ./weights/1b \
  --audio ljspeech.wav \
  --text "Hello, this is a test of TADA speech synthesis." \
  --quantize 4 \
  --output output.wav
```

Hardware Requirements

| Precision | Memory |
|---|---|
| bfloat16 (default) | ~8 GB |
| 4-bit quantized | ~3 GB |

Tested on Apple M1 Pro and above. 4-bit quantization is recommended for most devices: it is roughly 10x faster, uses about 60% less memory, and incurs minimal quality loss.
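A back-of-the-envelope sketch of why 4-bit quantization shrinks the weight footprint roughly 4x. The parameter count here is inferred from the ~4.3 GB bfloat16 checkpoint (an assumption), and 4.5 bits/weight reflects 4-bit values plus per-group scale/bias overhead at MLX's default group size of 64. Runtime memory in the table above is higher than these weights-only figures because activations, KV cache, and framework overhead come on top.

```python
def weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Weights-only memory estimate: parameters * bits / 8, in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# ~2.15e9 parameters, inferred from the ~4.3 GB bfloat16 checkpoint
n = 2.15e9
bf16 = weight_gb(n, 16)  # bfloat16: 16 bits per weight
q4 = weight_gb(n, 4.5)   # 4-bit values + per-group scales/biases (group size 64)
print(f"bf16 ~ {bf16:.1f} GiB, 4-bit ~ {q4:.1f} GiB")  # bf16 ~ 4.0 GiB, 4-bit ~ 1.1 GiB
```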

📚 Citation

If you use this project in your research, please cite our paper:

```bibtex
@article{dang2026tada,
  title={TADA: A Generative Framework for Speech Modeling via Text-Acoustic Dual Alignment},
  author={Dang, Trung and Rao, Sharath and Gupta, Ananya and Gagne, Christopher and Tzirakis, Panagiotis and Baird, Alice and CΕ‚apa, Jakub Piotr and Chin, Peter and Cowen, Alan},
  journal={arXiv preprint arXiv:2602.23068},
  year={2026}
}
```

Contact

Hume AI is an empathic AI research company. We develop the datasets, tools, and models needed to bring empathy to AI systems in the service of human wellbeing. If you're interested in our products or in research collaborations, please reach out to us at hello@hume.ai.

Acknowledgements

This project is built using Llama 3.2, which is licensed under the Llama 3.2 Community License.
