It is highly recommended (if your framework supports it) to use the official Mistral tokenization code instead of Hugging Face's. In vLLM, this is enabled with --tokenizer-mode mistral.
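As a sketch, a vLLM server launch with Mistral tokenization might look like the following. The model ID here is an assumption based on the checkpoints named later in this card; substitute whichever repo you are actually serving.

```shell
# Serve the model with vLLM, using Mistral's official tokenization
# code instead of the Hugging Face tokenizer.
# NOTE: the model ID below is an assumption; point it at the repo
# you actually want to serve.
vllm serve allura-forge/ms32-final-TEXTONLY \
    --tokenizer-mode mistral
```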
Recommended sampler settings (from CURSE, corroborated by me, Fizz) are temperature 1.2, min_p 0.1, and repetition penalty 1.05.
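For convenience, here are those settings as a dict. The key names follow the vLLM/OpenAI-compatible convention; whether your particular frontend uses the same names is an assumption on my part.

```python
# Recommended sampler settings from this card, in vLLM-style naming.
recommended_samplers = {
    "temperature": 1.2,
    "min_p": 0.1,
    "repetition_penalty": 1.05,
}

# Example: vLLM's SamplingParams accepts all three fields directly:
#   from vllm import SamplingParams
#   params = SamplingParams(**recommended_samplers)
```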
We recommend using a system prompt, but its contents only faintly matter (I accidentally had an assistant system prompt active during the entire time I was testing).
The model was then put through an SFT process (using Axolotl) on various sources of general instruct, storytelling, and RP data, which resulted in allura-forge/ms32-sft-merged.
Afterwards, the model was put through a KTO process (using Unsloth) on more focused storywriting and anti-slop data, as well as general instruction-following and human-preference data, which resulted in the final checkpoints at allura-forge/ms32-final-TEXTONLY.
Finally, the vision tower was manually added back to the weights to continue to support multimodality.
Credits
Fizz - training and data wrangling
Artus (by proxy) & Bot - help with funding
CURSE - testing
Mango - testing, data, help with KTO configs
DoctorShotgun - making the original text-only model
Axolotl & Unsloth - creating the training frameworks used for parts of this finetune