# qwen3.5-9b-fujin

This model is a QLoRA fine-tune of Qwen/Qwen3.5-9B.

W&B run: https://wandb.ai/cooawoo-personal/Qwen9B/runs/ip8kkxpt

## Training procedure

### Hyperparameters

| Parameter | Value |
|---|---|
| Learning rate | 0.0001 |
| LR scheduler | cosine |
| Per-device batch size | 1 |
| Gradient accumulation | 8 |
| Effective batch size | 8 (1 × 8) |
| Epochs | 1 |
| Max sequence length | 16384 |
| Optimizer | paged_adamw_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.05 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
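
For reference, a minimal sketch of how these settings map onto the `transformers` TrainingArguments API (the authoritative values are in the training config below; the training entry point itself is not shown here):

```python
# Hypothetical sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output-fujin-v2",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size: 1 * 8 = 8
    num_train_epochs=1,
    optim="paged_adamw_8bit",
    weight_decay=0.01,
    warmup_ratio=0.05,
    max_grad_norm=1.0,
    bf16=True,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```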

### LoRA configuration

| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 64 |
| Dropout | 0.1 |
| Target modules | attn.proj, down_proj, gate_proj, in_proj_a, in_proj_b, in_proj_qkv, in_proj_z, k_proj, linear_fc1, linear_fc2, o_proj, out_proj, q_proj, qkv, up_proj, v_proj |
| Quantization | 4-bit (nf4) |
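
The adapter and quantization settings above correspond to a standard QLoRA setup. Here is a minimal sketch using `peft` and `bitsandbytes`; the bf16 compute dtype is an assumption based on the training precision:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization of the base weights, as listed in the table.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: matches bf16 training
)

# LoRA adapter with the rank/alpha/dropout and target modules from the table.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=[
        "attn.proj", "down_proj", "gate_proj", "in_proj_a", "in_proj_b",
        "in_proj_qkv", "in_proj_z", "k_proj", "linear_fc1", "linear_fc2",
        "o_proj", "out_proj", "q_proj", "qkv", "up_proj", "v_proj",
    ],
    task_type="CAUSAL_LM",
)
```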

### Dataset statistics

| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| rpDungeon/some-revised-datasets/rosier_inf_strict_text.parquet | 10,466 | 65,084,382 | 65,084,382 |
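
To inspect the training data, a sketch of loading the same parquet file with the `datasets` library (assuming the file is hosted in the `rpDungeon/some-revised-datasets` dataset repo):

```python
from datasets import load_dataset

# Load the single parquet file used for training.
ds = load_dataset(
    "rpDungeon/some-revised-datasets",
    data_files="rosier_inf_strict_text.parquet",
    split="train",
)
print(len(ds))  # expected: 10466 samples
```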

### Training config

```yaml
model_name_or_path: Qwen/Qwen3.5-9B
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_liger: true
use_cce: true
neftune_noise_alpha: 5
dataloader_num_workers: 4
dataloader_pin_memory: true
max_length: 16384
learning_rate: 0.0001
warmup_ratio: 0.05
weight_decay: 0.01
lr_scheduler_type: cosine
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
optim: paged_adamw_8bit
max_grad_norm: 1.0
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 64
lora_dropout: 0.1
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 500
save_total_limit: 3
report_to: wandb
output_dir: output-fujin-v2
data_config: data.yaml
prepared_dataset: prepared
attn_implementation: flash_attention_2
num_train_epochs: 1
saves_per_epoch: 3
run_name: qwen35-9b-qlora-v2
```

### Data config

```yaml
datasets:
- path: rpDungeon/some-revised-datasets
  data_files: rosier_inf_strict_text.parquet
  type: text
  truncation_strategy: split
shuffle_datasets: true
shuffle_combined: true
shuffle_seed: 42
eval_split: 0.0
split_seed: 42
assistant_only_loss: false
```

## Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.2.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2
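
## Usage

A minimal inference sketch that loads the LoRA adapter on top of the base model with `peft`; the prompt and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in bf16, then attach the fine-tuned adapter.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-9B",
    dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "BirdToast/qwen3.5-9b-fujin")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-9B")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```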