Separated LTX2.3 checkpoints, as an alternative way to load the models in ComfyUI


The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls. The models marked input_scaled additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx-series and later Nvidia GPUs).
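Static weight scaling of the kind described above can be sketched roughly like this. This is a simplified per-tensor illustration in NumPy, not the actual quantization code used for these checkpoints; the cast to fp8 itself is only simulated (clamping to the e4m3 range without rounding to fp8 values):

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8 e4m3

def static_weight_scale(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Per-tensor static scale: map the weight's absolute max onto the fp8 range."""
    scale = float(np.abs(w).max()) / FP8_E4M3_MAX
    # Divide by the scale and clamp to the representable fp8 range; a real
    # implementation would then cast to an fp8 dtype.
    w_q = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return w_q, scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    """Recover the original magnitude by multiplying the scale back in."""
    return w_q * scale

w = np.random.randn(256, 256).astype(np.float32)
w_q, s = static_weight_scale(w)
w_back = dequantize(w_q, s)
```

Because the scale is derived from the tensor's own absolute max, the scaled values fit the fp8 range exactly, and dequantizing recovers the original weights up to fp8 rounding error (which the sketch omits).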

As this is the first time I'm attempting to calibrate input scales, these are pretty experimental, but result-wise they seem to work; this was tested on a 4090 with 8 steps using the distill model.
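One common way to calibrate a static input (activation) scale is to run a handful of sample batches through a layer and track the largest activation magnitude seen, then map that onto the fp8 range. The sketch below illustrates that idea with hypothetical names in NumPy; it is not the actual calibration procedure used for these checkpoints:

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8 e4m3

class InputScaleCalibrator:
    """Tracks the running absolute max of activations seen during calibration."""
    def __init__(self) -> None:
        self.amax = 0.0

    def observe(self, x: np.ndarray) -> None:
        # Update the running max over all calibration batches.
        self.amax = max(self.amax, float(np.abs(x).max()))

    def scale(self) -> float:
        # Static input scale mapping the observed activation range onto fp8.
        return self.amax / FP8_E4M3_MAX

calib = InputScaleCalibrator()
for _ in range(8):  # a few calibration batches of fake activations
    calib.observe(np.random.randn(4, 256).astype(np.float32))
s_in = calib.scale()
```

At inference time the activations would be divided by this fixed scale before the fp8 matmul, and the product of the input and weight scales folded back into the output.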

Tiny VAE by madebyollin

It can currently be used as shown in the example workflow image on the model page.
