
Qwen3-VisionCaption-2B-it-REDACTED

Qwen3-VisionCaption-2B-it-REDACTED is a strictly censored image-captioning model built on Qwen3-VL-2B-Instruct and optimized for safe, controlled caption generation. It is designed to follow its censorship rules closely while still producing clear, structured, and context-aware visual descriptions. The abliterated image-captioning version is available at https://huggingface.co/prithivMLmods/Qwen3-VisionCaption-2B

Key Highlights

  • Strictly censored, safety-aligned captioning output.
  • Reliable captions for general, artistic, technical, abstract, and synthetic images.
  • Consistent performance across wide, tall, square, panoramic, and irregular formats.
  • Adjustable detail level, from short summary captions to detailed reasoning, while respecting safety constraints.
  • Built on the Qwen3-VL-2B-Instruct architecture, with strong multimodal reasoning and instruction following.
  • Multilingual output capability through prompt-based control (see the prompt sketch after this list).
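Detail level and output language are steered purely through the text prompt rather than any dedicated parameter. A minimal sketch of prompt variants, assuming the same single-turn chat format used in the Quick Start below (the exact wordings are illustrative, not special control strings):

# Hedged sketch: these prompt strings are illustrative examples, not fixed
# control tokens; any instruction phrased this way steers the caption style.
short_prompt = "Give a one-sentence caption for this image."
detailed_prompt = "Describe this image in detail, covering objects, layout, and context."
french_prompt = "Caption this image in French."

def build_messages(image_url: str, prompt: str):
    # Same single-turn chat structure as the Quick Start example.
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]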

Datasets

This model was fine-tuned on the following datasets:

  • prithivMLmods/blip3o-caption-mini-arrow: a high-quality caption dataset focused on descriptive, reasoning-oriented captions.
  • prithivMLmods/Caption3o-Opt-v2: a dataset optimized for precision, contextual understanding, and descriptive generalization.
  • Private, unlisted datasets curated for controlled and safe captioning, intended for restricted visual interpretation.

The training objective focused on improving correctness, clarity, and responsible captioning across diverse visual categories while preventing unsafe or explicit outputs.
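The two public datasets live on the Hugging Face Hub and can be inspected with the datasets library. A minimal sketch; the split name and column schema are assumptions, so check each dataset card before relying on them:

from datasets import load_dataset

# Assumed split name; confirm against the dataset card on the Hub.
ds = load_dataset("prithivMLmods/blip3o-caption-mini-arrow", split="train")
print(ds)      # features and row count
print(ds[0])   # first example (image/caption fields, schema permitting)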

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the model with automatic dtype selection and device placement.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VisionCaption-2B-it-REDACTED", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VisionCaption-2B-it-REDACTED")

# A single-turn chat message pairing an image with a captioning instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a clean, safe caption for this image."},
        ],
    }
]

# Render the chat template and extract the image/video inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then strip the prompt tokens so only the caption is decoded.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
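Caption length and determinism can be adjusted through the standard generate arguments; a small variation of the call above (the values are illustrative, not tuned recommendations):

# Longer, deterministic captions: larger token budget, sampling disabled.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
)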

Intended Use

  • Safe image captioning for general-purpose applications.
  • Moderation-aware visual understanding for research and evaluation.
  • Education, training, and documentation tasks requiring controlled output quality.
  • Captioning of stylized, synthetic, or complex images while respecting safety filters.

Limitations

  • May over-filter content in visually ambiguous or artistic scenarios.
  • Not intended for roles requiring unrestricted analysis or red teaming.
  • Performance may vary on extreme edge-case images or heavily distorted inputs.