Tags: text-generation · safetensors · gguf · English · qwen2 · causal-reasoning · fine-tuned · productivity · business-intelligence · tunedai · conversational
How to use from llama.cpp

Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M

Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf tunedailabs/knapsack-causal-7b-merged:Q4_K_M

Use Docker
docker model run hf.co/tunedailabs/knapsack-causal-7b-merged:Q4_K_M
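Once llama-server is running via any of the install paths above, it serves an OpenAI-compatible chat API, by default at http://localhost:8080. A minimal Python client sketch follows — the question text, system prompt, and generation parameters are illustrative assumptions, not part of this model card:

```python
# Minimal sketch of a request to the local llama-server OpenAI-compatible
# endpoint. Assumes the server is running on its default port 8080;
# prompt text and max_tokens below are illustrative.
import json
import urllib.request

def build_chat_request(question: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": "tunedailabs/knapsack-causal-7b-merged",
        "messages": [
            {"role": "system", "content": "You are an expert causal analyst."},
            {"role": "user", "content": question},
        ],
        "max_tokens": 512,
    }

payload = build_chat_request("Why did weekly output drop after the tooling change?")

# Construct the POST request (sending it requires the server to be running):
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request works against any of the server variants above (brew, WinGet, pre-built binary, source build, or Docker), since all expose the same /v1/chat/completions route.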
TunedAI Causal Reasoning Model
Fine-tuned by TunedAI Labs.
What it does
Performs structured causal analysis on business and productivity data. Given a causal question, it reasons through observation, mechanism, projection, and simulation.
Training
- Base model: Qwen/Qwen2.5-14B-Instruct
- Fine-tuned by: TunedAI Labs (tunedailabs.com)
Usage
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then apply the fine-tuned adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
model = PeftModel.from_pretrained(base, "tunedailabs/knapsack-causal-14b")
tokenizer = AutoTokenizer.from_pretrained("tunedailabs/knapsack-causal-14b")

messages = [
    {"role": "system", "content": "You are an expert analyst. When asked causal questions, work through all levels of analysis: patterns in the data, underlying mechanisms, anticipated effects, and counterfactual scenarios."},
    {"role": "user", "content": "Why has this person's email response time increased 40% over the last month?"}
]

# Render the chat template, generate, and decode the reply.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
License
Apache 2.0. Fine-tuned weights by TunedAI Labs. Base model by Alibaba Cloud.
Contact
TunedAI Labs — mark@tunedailabs.com
Note: this is a gated model. Log in with a Hugging Face token that has gated-access permission before downloading: hf auth login