
iNosh AI v3 - PyTorch LoRA Adapter

  • Model Type: LoRA adapter for Llama-3.2-1B-Instruct
  • Base Model: unsloth/Llama-3.2-1B-Instruct
  • Format: PyTorch (safetensors)
  • Size: 22 MB (5.6M trainable parameters)
  • Training: MLX LoRA fine-tuning (iteration 100, val loss 0.164)
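
Before wiring the adapter into an application, you can sanity-check that it points at the expected base model by reading its PEFT config. A quick sketch (the expected values in the comments mirror the list above):

from peft import PeftConfig

cfg = PeftConfig.from_pretrained("vasu24/inosh-ai-v3-pytorch")
print(cfg.peft_type)                # expected: LORA
print(cfg.base_model_name_or_path)  # expected: unsloth/Llama-3.2-1B-Instruct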

Usage (PyTorch/Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load base model
base_model = "unsloth/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "vasu24/inosh-ai-v3-pytorch")

# Generate
messages = [
    {"role": "system", "content": "You are GROOT, an AI kitchen assistant..."},
    {"role": "user", "content": "Add 500g chicken to pantry"}
]

# Build the prompt; add_generation_prompt=True appends the assistant header
# so the model continues as the assistant rather than extending the user turn
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
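
For interactive use you can stream tokens as they are generated instead of waiting for the full completion. A minimal sketch using transformers' TextStreamer, reusing model, tokenizer, and inputs from above:

from transformers import TextStreamer

# Prints decoded tokens to stdout as they arrive; skip_prompt hides the input echo
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs, max_new_tokens=200, streamer=streamer)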

Deployment

  • Modal: serverless deployment
  • Hugging Face Inference Endpoints: dedicated, managed endpoints
  • Replicate: pay-per-use API
  • On-device: convert to GGUF for mobile (see the sketch below)
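
For the on-device route, the usual flow is to merge the adapter into the base weights and save a standalone checkpoint, which llama.cpp's conversion script can then turn into GGUF. A hedged sketch (output paths are illustrative, not part of this repo):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "vasu24/inosh-ai-v3-pytorch")

# Fold the LoRA deltas into the base weights so no PEFT dependency remains
merged = model.merge_and_unload()
merged.save_pretrained("inosh-merged")  # illustrative output path
AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct").save_pretrained("inosh-merged")

# Then, from a llama.cpp checkout (script name as of recent llama.cpp versions):
#   python convert_hf_to_gguf.py inosh-merged --outfile inosh-ai-v3.gguf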

Training Details

See main documentation at: https://huggingface.co/vasu24/inosh-ai-v3
