Qwen2.5-7B Mewari Translation (MLX LoRA)

A LoRA adapter fine-tuned for English-to-Mewari translation, built on top of Qwen/Qwen2.5-7B-Instruct using MLX on Apple Silicon.

Mewari (मेवाड़ी) is a Rajasthani language spoken in the Mewar region of Rajasthan, India, written in Devanagari script.

Usage

With MLX (Apple Silicon)

from huggingface_hub import snapshot_download
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Download adapter from HuggingFace
adapter_path = snapshot_download(repo_id="viplismism/Qwen2.5-7B-Mewari-MLX-LoRA")

# Load base model with adapter
model, tokenizer = load("Qwen/Qwen2.5-7B-Instruct", adapter_path=adapter_path)

messages = [
    {"role": "system", "content": "You are an expert translator specializing in English to Mewari translation. Provide only the direct Mewari translation in Devanagari script, nothing else."},
    {"role": "user", "content": 'English text to translate: "Hello, how are you?"\n\nProvide the Mewari translation:'},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampler = make_sampler(temp=0.7, top_p=0.9)
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, sampler=sampler)
print(response)
# Output: नमस्ते, थूं कैसै है?
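
When translating many sentences, a small helper keeps the prompt format identical to the one used above (and, presumably, at fine-tuning time). The helper below is illustrative and not part of the adapter itself:

```python
SYSTEM_PROMPT = (
    "You are an expert translator specializing in English to Mewari "
    "translation. Provide only the direct Mewari translation in "
    "Devanagari script, nothing else."
)

def build_messages(english_text: str) -> list:
    """Build the chat messages for one translation request, mirroring
    the prompt format shown in the usage example."""
    user_prompt = (
        f'English text to translate: "{english_text}"\n\n'
        "Provide the Mewari translation:"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Each sentence can then be passed through `tokenizer.apply_chat_template(build_messages(text), tokenize=False, add_generation_prompt=True)` before calling `generate`.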

Example Translations

| English | Mewari |
| --- | --- |
| Hello, how are you? | नमस्ते, थूं कैसै है? |
| The weather is very hot today | आज मौसम घणो गरम है। |
| Please sit down and have some tea | कृपया बैठ जावो अर कुछ चाय खावो। |
| What is your name? | थारो नाव क्या है? |
| Where are you going tomorrow? | काल थें कठै जावां? |
| My children go to school every day | म्हारै बाचेरे हर दिन स्कूल जावै है। |
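
Since the model is instructed to reply in Devanagari only, a quick script check can flag outputs that drifted into Latin script. This heuristic is an assumption added here for convenience, not part of the model card:

```python
def looks_like_devanagari(text: str, threshold: float = 0.5) -> bool:
    """Heuristic check: at least `threshold` of the alphabetic characters
    fall in the Devanagari Unicode block (U+0900 to U+097F)."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return False
    devanagari = [ch for ch in letters if "\u0900" <= ch <= "\u097f"]
    return len(devanagari) / len(letters) >= threshold

print(looks_like_devanagari("नमस्ते, थूं कैसै है?"))   # → True
print(looks_like_devanagari("Hello, how are you?"))  # → False
```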

Training Details

| Parameter | Value |
| --- | --- |
| Base Model | Qwen/Qwen2.5-7B-Instruct |
| Method | LoRA (MLX) |
| Training Data | 2,700 English-Mewari pairs |
| Validation Data | 300 English-Mewari pairs |
| LoRA Rank | 64 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.1 |
| Learning Rate | 1e-5 |
| Batch Size | 1 |
| Iterations | 1000 |
| Max Seq Length | 512 |
| Grad Checkpoint | Yes |
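
The hyperparameters above correspond roughly to an `mlx_lm.lora` YAML config like the one below. This is a sketch, not the exact config used for this adapter: key names follow mlx-lm's LoRA config format and may differ across versions, and mlx-lm expresses LoRA alpha as a `scale` factor (scale = alpha / rank, so alpha 128 at rank 64 gives scale 2.0):

```yaml
# Sketch of an mlx_lm.lora config (key names may vary by mlx-lm version)
model: "Qwen/Qwen2.5-7B-Instruct"
train: true
data: "data"            # directory containing train.jsonl / valid.jsonl
batch_size: 1
iters: 1000
learning_rate: 1e-5
max_seq_length: 512
grad_checkpoint: true
lora_parameters:
  rank: 64
  dropout: 0.1
  scale: 2.0            # alpha 128 / rank 64
```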

Training Results

| Metric | Value |
| --- | --- |
| Train Loss | 2.076 → 0.265 |
| Val Loss | 2.703 → 0.290 |
| Best Val Loss | 0.282 (iter 700) |
| Peak Memory | 18.856 GB |
| Hardware | Apple M4 Max (36GB) |

Limitations

  • Optimized for simple to moderate sentence translation
  • May produce repetition on certain complex or compound sentences
  • Best used with temperature 0.7 and top_p 0.9
  • Adapter weights are in MLX format, intended for inference on Apple Silicon