Lilitu

This 70B parameter model is my first merge of some of my favorite models for storytelling and RP:

- Sapphira-L3.3-70b-0.2 (the base model)
- Strawberrylemonade-L3-70B-v1.2
- Edens-Fall-L3.3-70b-0.3c

Quantizations

Quantized versions are available from mradermacher.

You are encouraged to try these great models individually as well. Thanks to the creators of these models and merges, and thanks also to TheDrummer for Anubis (part of the Sapphira merge).

Chat and Instruct Template:

Llama3 Instruct
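
The model expects Llama 3 Instruct formatting. A minimal sketch of producing a correctly formatted prompt through transformers (the messages below are illustrative):

```python
# Format a prompt with the Llama 3 Instruct chat template via transformers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ionze/Lilitu-L3.3-70b-0.1")

messages = [
    {"role": "system", "content": "You are a vivid, detail-oriented storyteller."},
    {"role": "user", "content": "Open a scene in a rain-soaked port city."},
]

# apply_chat_template wraps each turn in the Llama 3 header/eot markers,
# e.g. <|start_header_id|>user<|end_header_id|> ... <|eot_id|>, and
# add_generation_prompt opens the assistant turn for the model to complete.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```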

My settings:

Temp: 1
Min P: 0.03
Top P: 0.9
Top K: 0
Typical P: 1
Repetition Penalty: 1.05
Rep Pen Range: 4096
XTC Threshold: 0.1
XTC Probability: 0
DRY Multiplier: 0.8
DRY Base: 1.8
DRY Allowed Length: 3
DRY Penalty Range: 4096

Or use the provided ST presets.
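
If you drive a backend directly rather than through the ST presets, these settings map onto the sampler fields of a KoboldAI-style generate API. A minimal sketch, assuming a local KoboldCpp server on its default port; the field names follow KoboldCpp's /api/v1/generate schema as I understand it (the DRY range field in particular varies by backend), so verify them against your server's docs:

```python
import requests

# An illustrative Llama 3 Instruct-formatted prompt (see the template note above).
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Open a scene in a rain-soaked port city.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

# The sampler settings listed above, expressed as API fields.
payload = {
    "prompt": prompt,
    "max_length": 512,
    "temperature": 1.0,
    "min_p": 0.03,
    "top_p": 0.9,
    "top_k": 0,            # 0 disables top-k filtering
    "typical": 1.0,        # 1 disables typical-p filtering
    "rep_pen": 1.05,
    "rep_pen_range": 4096,
    "xtc_threshold": 0.1,
    "xtc_probability": 0.0,     # probability 0 leaves XTC off
    "dry_multiplier": 0.8,
    "dry_base": 1.8,
    "dry_allowed_length": 3,
    "dry_penalty_range": 4096,  # assumed field name; check your backend
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```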

Merge configuration:

```yaml
models:
  - model: ./Sapphira-L3.3-70b-0.2
    parameters:
      weight: 0.55
      density: 0.6
  - model: ./Strawberrylemonade-L3-70B-v1.2
    parameters:
      weight: 0.25
      density: 0.4
  - model: ./Edens-Fall-L3.3-70b-0.3c
    parameters:
      weight: 0.15
      density: 0.3
merge_method: della
base_model: ./Sapphira-L3.3-70b-0.2
parameters:
  epsilon: 0.08
  lambda: 0.85
  normalize: true
dtype: bfloat16
```
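
To reproduce the merge, save the config above to a YAML file and run it through mergekit; note that merging 70B models takes substantial disk space and RAM (each 70B source model is roughly 140 GB at bfloat16). A minimal sketch using mergekit's Python API, where the filename, output path, and option values are illustrative and the mergekit-yaml CLI does the same job:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the della merge config shown above (hypothetical filename).
with open("lilitu-merge.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Lilitu-L3.3-70b-0.1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use the GPU for tensor math if present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=True,              # stream shards to keep peak RAM down
    ),
)
```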