How to use with vLLM
Install vLLM with pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Vortex5/Red-Synthesis-12B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Vortex5/Red-Synthesis-12B",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
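The same endpoint can also be called from Python. Below is a minimal sketch using the openai client package; the api_key value is a placeholder, since a default local vLLM server does not check it.
from openai import OpenAI

# Point the client at the local vLLM server started above; vLLM exposes
# an OpenAI-compatible API, so the official client works unchanged.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Vortex5/Red-Synthesis-12B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)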
Use Docker
docker model run hf.co/Vortex5/Red-Synthesis-12B
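For local inference without a server, the model can also be loaded directly with transformers. A minimal sketch, assuming a GPU with enough memory for the 12B weights (device_map="auto" requires the accelerate package); the sampling parameters are illustrative, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vortex5/Red-Synthesis-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensors the merge was produced in
    device_map="auto",           # place layers automatically across available devices
)

prompt = "Once upon a time,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))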
Red-Synthesis-12B

Overview

Red-Synthesis-12B was created by merging Scarlet-Seraph-12B, Strawberry_Smoothie-12B-Model_Stock, MN-12B-Mag-Mell-R1, Lunar-Nexus-12B, MN-12b-RP-Ink-RP-Longform, LunaMaid-12B, and Dreamstar-12B using a custom merge method (saef; see the configuration below).

YAML Config
models:
  - model: Vortex5/Scarlet-Seraph-12B
  - model: DreadPoor/Strawberry_Smoothie-12B-Model_Stock
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: Vortex5/Lunar-Nexus-12B
  - model: SuperbEmphasis/MN-12b-RP-Ink-RP-Longform
  - model: Vortex5/LunaMaid-12B
  - model: Vortex5/Dreamstar-12B
merge_method: saef
parameters:
  paradox: 0.40
  strength: 0.88
  boost: 0.28
  modes: 2
dtype: bfloat16
tokenizer:
  source: Vortex5/Scarlet-Seraph-12B

Intended Use

📕 Storytelling
🎭 Roleplay
✨ Creative Writing
