Fine-Tuned Emotion Classification Model

Model Information

  • Model ID: RockyBai/Mirari
  • Base Model: unsloth/Meta-Llama-3.1-8B-Instruct
  • Training Method: LoRA (Low-Rank Adaptation)
  • LoRA Rank: 32
  • Training Samples: 56,400
  • Datasets Used: GoEmotions, Emotion, TweetEval
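With rank-32 LoRA, only a small low-rank update is trained for each adapted weight matrix. A minimal sketch of the parameter arithmetic, using Llama 3.1 8B's hidden size of 4096 and a single square attention projection as the example layer (the projection choice is illustrative, not taken from this card):

```python
# Sketch: why LoRA with rank r=32 is cheap compared to full fine-tuning.
# A LoRA update replaces a dense delta-W with two factors: B (d_out x r) and A (r x d_in).

def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted weight matrix."""
    return d_out * r + r * d_in

full = 4096 * 4096                       # full fine-tuning of one 4096x4096 projection
lora = lora_param_count(4096, 4096, 32)  # rank-32 adapter for the same projection
print(full, lora, round(100 * lora / full, 2))  # -> 16777216 262144 1.56
```

So the adapter trains roughly 1.6% of the parameters of each adapted projection, which is why the shipped weights fit in a small adapter_model.safetensors file.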

How to Load This Model

from unsloth import FastLanguageModel

# Load the fine-tuned model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="emotion_model_finetuned",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Use the model
prompt = """<|im_start|>system
You are a compassionate mental health support assistant.<|im_end|>
<|im_start|>user
I'm feeling anxious about tomorrow.<|im_end|>
<|im_start|>assistant
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
# Note: generate() returns the prompt tokens followed by the new tokens,
# so the decoded string contains the prompt as well as the reply.
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
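Since the decoded output echoes the prompt, it is convenient to build the prompt and strip the reply with small helpers. The functions below are hypothetical (not part of this repo); they assume the `<|im_start|>`/`<|im_end|>` markers survive decoding, which holds when those markers are ordinary text for the tokenizer:

```python
# Hypothetical helpers for the ChatML-style prompt format used above.

def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt matching the format shown in the example."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def extract_reply(decoded: str) -> str:
    """Return only the text after the final assistant marker, trimming the end-of-turn tag."""
    reply = decoded.rsplit("<|im_start|>assistant\n", 1)[-1]
    return reply.split("<|im_end|>", 1)[0].strip()

prompt = build_prompt(
    "You are a compassionate mental health support assistant.",
    "I'm feeling anxious about tomorrow.",
)
# Simulated decode: the echoed prompt followed by a generated reply.
decoded = prompt + "It's natural to feel that way before a big day.<|im_end|>"
print(extract_reply(decoded))  # -> It's natural to feel that way before a big day.
```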

Files Included

  • adapter_config.json - LoRA adapter configuration
  • adapter_model.safetensors - Fine-tuned weights
  • tokenizer.json - Tokenizer definition (vocabulary, merges, special tokens)
  • training_config.json - Training hyperparameters
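For orientation, a PEFT adapter_config.json for this setup would look roughly like the fragment below. Only r and base_model_name_or_path come from this card; lora_alpha, lora_dropout, and target_modules are illustrative placeholders, so consult the shipped file for the actual values:

```json
{
  "peft_type": "LORA",
  "task_type": "CAUSAL_LM",
  "base_model_name_or_path": "unsloth/Meta-Llama-3.1-8B-Instruct",
  "r": 32,
  "lora_alpha": 32,
  "lora_dropout": 0.0,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
  "bias": "none"
}
```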