
Nsfw_Image_Detection_OSS

Nsfw_Image_Detection_OSS is a vision-language encoder model fine-tuned from facebook/metaclip-2-worldwide-s16 for binary NSFW image classification. It classifies whether an image is Safe For Work (SFW) or Not Safe For Work (NSFW) using the MetaClip2ForImageClassification architecture.

MetaCLIP 2: A Worldwide Scaling Recipe https://huggingface.co/papers/2507.22062

Evaluation Report (Self-Reported)

Classification report:

              precision    recall  f1-score   support

         SFW     0.8736    0.8673    0.8705     11103
        NSFW     0.9047    0.9094    0.9071     15380

    accuracy                         0.8918     26483
   macro avg     0.8892    0.8884    0.8888     26483
weighted avg     0.8917    0.8918    0.8917     26483
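
To make clear how the per-class figures and the averages in this report relate, precision, recall, and F1 can be recomputed directly from a confusion matrix. The sketch below uses made-up toy counts for illustration, not the actual evaluation data above.

```python
# Recompute precision/recall/F1 from raw counts. The confusion-matrix
# numbers below are illustrative toys, not the model's eval data.

def prf(tp, fp, fn):
    """Return (precision, recall, f1) for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy confusion matrix: rows = true class, cols = predicted class.
#               pred SFW  pred NSFW
# true SFW         90        10
# true NSFW         5        95
p_sfw, r_sfw, f_sfw = prf(tp=90, fp=5, fn=10)
p_nsfw, r_nsfw, f_nsfw = prf(tp=95, fp=10, fn=5)

accuracy = (90 + 95) / 200          # correct predictions / total support
macro_f1 = (f_sfw + f_nsfw) / 2     # unweighted mean over the two classes

print(f"SFW  P={p_sfw:.4f} R={r_sfw:.4f} F1={f_sfw:.4f}")
print(f"NSFW P={p_nsfw:.4f} R={r_nsfw:.4f} F1={f_nsfw:.4f}")
print(f"accuracy={accuracy:.4f} macro-F1={macro_f1:.4f}")
```

The weighted average in the report above is the same computation with each class's score weighted by its support (11103 SFW vs. 15380 NSFW images).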


Label Mapping

The model categorizes images into two classes:

  • Class 0: SFW
  • Class 1: NSFW

The corresponding mapping in config.json:
{
  "id2label": {
    "0": "SFW",
    "1": "NSFW"
  },
  "label2id": {
    "SFW": 0,
    "NSFW": 1
  }
}
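
Because this mapping ships in the model config, predictions can be decoded without hard-coding labels. A minimal sketch of turning a raw logits vector into a label and probability with this id2label mapping (the logit values are made-up numbers for illustration):

```python
import math

# id2label mapping from the model's config.json
id2label = {0: "SFW", 1: "NSFW"}

def decode(logits):
    """Softmax the logits and return (label, probability) of the top class."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return id2label[idx], probs[idx]

label, prob = decode([1.2, -0.8])  # made-up logits
print(label, round(prob, 3))
```

With Transformers, the same mapping is available as `model.config.id2label`, so the lookup stays in sync with whatever checkpoint is loaded.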

Run with Transformers

!pip install -q transformers torch pillow gradio

import gradio as gr
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Model name from Hugging Face Hub
model_name = "prithivMLmods/Nsfw_Image_Detection_OSS"

# Load processor and model
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

# Define labels
LABELS = {
    0: "SFW",
    1: "NSFW"
}

def nsfw_detection(image):
    """Predict whether an image is SFW or NSFW."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {LABELS[i]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Build Gradio interface
iface = gr.Interface(
    fn=nsfw_detection,
    inputs=gr.Image(type="numpy", label="Upload Image"),
    outputs=gr.Label(label="NSFW Detection Probabilities"),
    title="NSFW Image Detection (MetaCLIP-2)",
    description="Upload an image to classify whether it is Safe For Work (SFW) or Not Safe For Work (NSFW)."
)

# Launch app
if __name__ == "__main__":
    iface.launch()
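
In a moderation pipeline, the probability dict returned by nsfw_detection() is usually turned into a block/allow decision with a tunable threshold rather than a plain argmax. A small post-processing sketch; the helper name and the 0.5 default threshold are illustrative assumptions, not part of the model:

```python
def moderate(predictions, threshold=0.5):
    """Map a {"SFW": p, "NSFW": q} probability dict to a decision.

    A lower threshold blocks more aggressively; tune it on your own
    validation data. Name and default value here are illustrative.
    """
    nsfw_prob = predictions.get("NSFW", 0.0)
    return "block" if nsfw_prob >= threshold else "allow"

print(moderate({"SFW": 0.91, "NSFW": 0.09}))       # allow
print(moderate({"SFW": 0.30, "NSFW": 0.70}))       # block
print(moderate({"SFW": 0.60, "NSFW": 0.40}, 0.3))  # stricter threshold: block
```

Lowering the threshold trades more false positives (blocked SFW images) for fewer false negatives, which is often the right trade-off for safety-critical filtering.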

Intended Use

The Nsfw_Image_Detection_OSS model is designed to classify images into SFW or NSFW categories.

Potential use cases include:

  • Content Moderation: Automated filtering of unsafe or adult content.
  • Social Media Platforms: Preventing the upload of explicit media.
  • Enterprise Safety: Ensuring workplace-appropriate content in shared environments.
  • Dataset Filtering: Cleaning large-scale image datasets before training.
  • Parental Control Systems: Blocking inappropriate visual material.