Instructions for using barandinho/phi4-turkish-instruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use barandinho/phi4-turkish-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="barandinho/phi4-turkish-instruct", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("barandinho/phi4-turkish-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("barandinho/phi4-turkish-instruct", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use barandinho/phi4-turkish-instruct with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "barandinho/phi4-turkish-instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "barandinho/phi4-turkish-instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
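Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the official `openai` client instead of curl. A minimal sketch, assuming the default port (8000); vLLM ignores the API key unless you configured one, but the client requires some value:

```python
# Minimal sketch: query the local vLLM server with the OpenAI Python client.
# Assumes the server started above is listening on the default port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is a placeholder

response = client.chat.completions.create(
    model="barandinho/phi4-turkish-instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```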
Use Docker
```shell
docker model run hf.co/barandinho/phi4-turkish-instruct
```
- SGLang
How to use barandinho/phi4-turkish-instruct with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "barandinho/phi4-turkish-instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "barandinho/phi4-turkish-instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
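SGLang's server is likewise OpenAI-compatible, so the same `openai` client works against it. The sketch below additionally streams tokens as they are generated, which is often what you want for chat; it assumes the server above is on port 30000, and the API key is again a placeholder:

```python
# Minimal sketch: stream a chat completion from the local SGLang server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key is a placeholder

stream = client.chat.completions.create(
    model="barandinho/phi4-turkish-instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,  # yield chunks as tokens arrive instead of one final response
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```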
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "barandinho/phi4-turkish-instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "barandinho/phi4-turkish-instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use barandinho/phi4-turkish-instruct with Docker Model Runner:
```shell
docker model run hf.co/barandinho/phi4-turkish-instruct
```
Phi-4 Turkish Instruction-Tuned Model
This model is a fine-tuned version of Microsoft's Phi-4 model for Turkish instruction-following tasks. It was trained on a 55,000-sample Turkish instruction dataset, making it well-suited for generating helpful and coherent responses in Turkish.
Model Summary
| | |
| --- | --- |
| Developers | Baran Bingöl (Hugging Face: barandinho) |
| Base Model | microsoft/phi-4 |
| Architecture | 14B parameters, dense decoder-only Transformer |
| Training Data | 55K Turkish instruction samples |
| Context Length | 16K tokens |
| License | MIT (License Link) |
Intended Use
Primary Use Cases
- Turkish conversational AI systems
- Chatbots and virtual assistants
- Educational tools for Turkish users
- General-purpose text generation in Turkish
Out-of-Scope Use Cases
- High-risk domains (medical, legal, financial advice) without proper evaluation
- Use in sensitive or safety-critical systems without safeguards
Usage
Input Formats
Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:
```
<|im_start|>system<|im_sep|>
Sen yardımsever bir yapay zekasın.<|im_end|>
<|im_start|>user<|im_sep|>
Kuantum hesaplama neden önemlidir?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
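You normally don't need to assemble this string by hand: the tokenizer's chat template should render it for you. A minimal sketch (the messages mirror the example above; the Turkish system prompt means "You are a helpful AI"):

```python
# Minimal sketch: let the tokenizer render the chat format shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("barandinho/phi4-turkish-instruct")

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir?"},
]

# add_generation_prompt=True appends the opening assistant turn,
# so generation continues from <|im_start|>assistant<|im_sep|>.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```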
With transformers
The code below uses 4-bit quantization (INT4) to run the model more efficiently with lower memory usage, which is especially useful in environments with limited GPU memory such as Google Colab. Keep in mind that the model will take some time to download on first use.
Check this notebook for interactive usage of the model.
```python
import os

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline

model_name = "barandinho/phi4-turkish-instruct"

# 4-bit (INT4) quantization with nested (double) quantization to cut memory use
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_use_double_quant=True)

os.makedirs("offload", exist_ok=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quant_config,
    offload_folder="offload",  # spill layers that don't fit on the GPU to disk
)

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir, basit terimlerle açıklayabilir misin?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,  # greedy decoding, so temperature is effectively ignored
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
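```

For interactive use you may prefer to see tokens as they are produced rather than waiting for the full response. A minimal sketch using transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `messages` defined above:

```python
# Minimal sketch: stream generated tokens to stdout as they are decoded.
# Reuses `model`, `tokenizer`, and `messages` from the snippet above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

model.generate(**inputs, max_new_tokens=500, streamer=streamer)
```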