Tags: Text Classification · Safetensors · Galician · bert · hate speech

MetaHate-mBERT-GL-pt

Model Description

This is a fine-tuned mBERT model for detecting hate speech in Galician text (Portuguese orthographic variety). It is based on the bert-base-multilingual-cased architecture and was fine-tuned on a custom dataset for binary text classification, with the labels no hate and hate.
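Under the hood, the binary classification head emits one logit per label, which a softmax converts into a probability per class. A minimal, self-contained sketch of that last step (the label order and the logit values below are illustrative assumptions, not taken from the released model):

```python
import math

# The card's two labels; this index order is an assumption for illustration.
LABELS = ["no hate", "hate"]

def logits_to_prediction(logits):
    """Apply a softmax to the two logits and return the most likely label."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return {"label": LABELS[idx], "score": probs[idx]}

# Made-up logits favouring the second ("hate") class.
print(logits_to_prediction([-1.2, 2.3]))
```

The returned dict mirrors the `{'label': ..., 'score': ...}` shape produced by the `transformers` text-classification pipeline shown in the Usage section.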

Intended Uses & Limitations

Intended Uses

  • Hate Speech Detection: This model is intended for detecting hate speech in social media comments, forums, and other text data sources.
  • Content Moderation: Can be used by platforms to automatically flag potentially harmful content.
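For the content-moderation use case, automatic flagging typically amounts to thresholding the classifier's scores. A hypothetical helper (the function name and the 0.8 threshold are our own choices, operating on pipeline-style output dicts):

```python
# Hypothetical moderation helper: names and the default threshold are
# illustrative, not part of the released model.
def flag_for_review(texts, predictions, threshold=0.8):
    """Return the texts whose 'hate' score meets the threshold.

    `predictions` is a list of pipeline-style dicts, one per input text,
    e.g. {'label': 'hate', 'score': 0.91}.
    """
    flagged = []
    for text, pred in zip(texts, predictions):
        if pred["label"] == "hate" and pred["score"] >= threshold:
            flagged.append(text)
    return flagged

# Mocked predictions standing in for real classifier output.
texts = ["comment A", "comment B", "comment C"]
preds = [
    {"label": "no hate", "score": 0.95},
    {"label": "hate", "score": 0.91},
    {"label": "hate", "score": 0.55},  # below threshold: not flagged
]
print(flag_for_review(texts, preds))  # ['comment B']
```

Lowering the threshold trades precision for recall; given the false-positive/false-negative limitation noted below, flagged items should go to human review rather than automated removal.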

Limitations

  • Biases: The model may carry biases present in the training data.
  • False Positives/Negatives: The model is not perfect and may misclassify some instances in either direction.
  • Domain Specificity: Performance may vary across different domains.

Citation

If you use this model, please cite the following reference:

@misc{piot2025bridginggapshatespeech,
      title={Bridging Gaps in Hate Speech Detection: Meta-Collections and Benchmarks for Low-Resource Iberian Languages}, 
      author={Paloma Piot and José Ramom Pichel Campos and Javier Parapar},
      year={2025},
      eprint={2510.11167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.11167}, 
}

Acknowledgements

The authors acknowledge funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101073351. The authors also acknowledge the financial support supplied by the Consellería de Cultura, Educación, Formación Profesional e Universidades (accreditation 2019-2022 ED431G/01, ED431B 2022/33) and the European Regional Development Fund, which acknowledges the CITIC Research Center in ICT of the University of A Coruña as a Research Center of the Galician University System, and the project PID2022-137061OB-C21 (Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Proyectos de Generación de Conocimiento; supported by the European Regional Development Fund). The authors also acknowledge funding from project PLEC2021-007662 (MCIN/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Plan de Recuperación, Transformación y Resiliencia, Unión Europea-Next Generation EU).

Usage

Inference

To use this model, you can load it via the transformers library:

from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
classifier = pipeline("text-classification", model="irlab-udc/MetaHate-mBERT-GL-pt")

# Classify a text; the pipeline returns a list with one dict per input,
# each holding a 'label' ("no hate" or "hate") and a confidence 'score'
result = classifier("Your input text here")
print(result)  # e.g. [{'label': 'no hate', 'score': ...}]