How to use Metal079/SonicDiffusion with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Metal079/SonicDiffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
UPDATE: V2 has been released and is much better; I would recommend using it instead: https://huggingface.co/Metal079/SonicDiffusionV2
Three Dreambooth models, using AnythingV3 as the base model and training images from Evan Stanley's Twitter.
- evan5400 was trained on ~30 images for 5400 steps; use the keyword 'sonic person' when prompting
- mobianstrimmed6000 was trained on 100 images for 6000 steps; use the keyword 'mobian person' when prompting
- mobianstrimmed12000 was trained on 100 images for 12000 steps; use the keyword 'mobian person' when prompting
Current testing shows that mobianstrimmed6000 and evan5400 produce the best-quality images; I would recommend starting with one of those two and comparing results.
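As a minimal sketch of trying one of these checkpoints, assuming the checkpoint files in this repo are named after the models above (e.g. `mobianstrimmed6000.ckpt` is an assumption; check the repo's file list for the exact names), you could load a single Dreambooth checkpoint with Diffusers' `from_single_file` and prompt with the matching trigger keyword:

```python
# Minimal sketch: load one Dreambooth checkpoint from this repo and prompt
# with its trigger keyword. The filename below is an assumption; check the
# repo's file list for the exact name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/Metal079/SonicDiffusion/blob/main/mobianstrimmed6000.ckpt",
    torch_dtype=torch.float16,
).to("cuda")

# 'mobian person' triggers the mobianstrimmed checkpoints;
# use 'sonic person' instead for evan5400.
prompt = "mobian person, solo, detailed, high quality"
image = pipe(prompt).images[0]
image.save("mobian.png")
```

Swapping in evan5400 works the same way, with 'sonic person' as the keyword.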
Apologies for using different keywords between models. I wanted to fully switch over to 'mobian person', but unfortunately I'm not fully convinced it is better, even though I used more than double the training images, so I included both models. 6000 steps seemed like the sweet spot, as I have noticed that mobianstrimmed12000 doesn't give as consistently good images as the other two.