Aitana-7B-S-base-1.0

Aitana-7B-S-base-1.0 is a generative language model from the Aitana family, developed by the Language and Information Systems Group (GPLSI) at the University of Alicante together with the Language Modeling Group at the Barcelona Supercomputing Center. The model is based on BSC-LT/salamandra-7b and has been continually pre-trained on multilingual data (Valencian, Spanish, and English) to improve its representation of Valencian and Catalan.

Model Description

| Property | Value |
|---|---|
| Base Model | BSC-LT/salamandra-7b |
| Architecture | Transformer decoder-only |
| Parameters | ~7.77B |
| Languages | Valencian, Spanish, English |
| License | Apache 2.0 |

Aitana-7B-S-base-1.0 extends the multilingual Salamandra foundation with additional training on domain-specific Valencian, Spanish, and English data. The training emphasizes administrative, legal, and tourism domains.

Training Data

This model was trained on the following ALIA datasets:

| Dataset ID | Name | Language | Source |
|---|---|---|---|
| dc8 | dogv_va_2025 | Valencian | gplsi/alia_dogv |
| dc9 | dogv_es_2025 | Spanish | gplsi/alia_dogv |
| dc10 | corts_es_va_2025 | Spanish/Valencian | gplsi/alia_les_corts |
| dc11 | amic_va_2025 | Valencian | gplsi/alia_amic |
| dc12 | boua_va_2025 | Valencian | gplsi/alia_boua |
| dc13 | boua_es_2025 | Spanish | gplsi/alia_boua |
| dc14 | tourism_va_2025 | Valencian | gplsi/alia_tourism |
| dc15 | tourism_es_2025 | Spanish | gplsi/alia_tourism |
| dc16 | tourism_en_2025 | English | gplsi/alia_tourism |
| - | alia_multilingual_parallel_sentences | Spanish/Valencian/English | gplsi/alia_multilingual_parallel_sentences |

Data Sources

  • DOGV (Diari Oficial de la Generalitat Valenciana): Official communications of the Valencian Community including laws and public sector communications
  • Les Corts Valencianes: Transcripts from the Valencian Parliament plenary sessions and committee meetings
  • AMIC: Valencian language corpus
  • BOUA (Butlletí Oficial de la Universitat d'Alacant): Official University of Alicante documents including grants, regulations, and resolutions
  • Tourism: Multilingual tourism domain content

Intended Uses

This model can be used for:

  • Text generation in Valencian, Spanish, and English
  • Fine-tuning for specific downstream tasks
  • Domain adaptation for administrative, legal, or tourism applications

Note: Due to the formal register of training data (administrative and legal domains), generated text tends toward formal language.

How to Use

Transformers

```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-7B-S-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Valencian example
text = "Les corts valencianes han pres la decisió de"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])

# Spanish example
text = "El turismo en la Comunidad Valenciana"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])
```

Evaluation

The following tables report results on several benchmarks from lm-evaluation-harness, compared against the base model used for continual pre-training. All scores were obtained from the pre-trained model as-is; no instruction tuning or fine-tuning of any kind was performed.

Normalized score per language

| Language | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|
| Spanish | 0.248 | 0.260 |
| Catalan | 0.364 | 0.373 |
| English | 0.319 | 0.349 |
| Valencian | 0.663 | 0.664 |
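
As a quick sanity check, the per-language gains over the base model can be computed directly from the normalized scores above (the values are copied from the table; this is plain arithmetic, not a re-run of the benchmarks):

```python
# Normalized scores copied from the table above: (Salamandra-7B, Aitana-7B-S-base-1.0)
scores = {
    "Spanish":   (0.248, 0.260),
    "Catalan":   (0.364, 0.373),
    "English":   (0.319, 0.349),
    "Valencian": (0.663, 0.664),
}

# Print the absolute improvement per language.
for lang, (base, aitana) in scores.items():
    delta = round(aitana - base, 3)
    print(f"{lang}: {base:.3f} -> {aitana:.3f} (delta {delta:+.3f})")
```

The largest absolute gains appear in English and Spanish, while Valencian, already the strongest language of the base model on this suite, improves only marginally.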

Valencian

Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| XNLI | va | Natural Language Inference | acc | 0.496 | 0.495 |

Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | va | Reading Comprehension | bleu | 12.30 | 16.09 |
| Phrases ca-va | va-ca | Translation - Adaptation | bleu | 86.83 | 86.53 |
| Phrases va-ca | va-ca | Translation - Adaptation | bleu | 94.68 | 82.99 |
| Phrases va-es | va-es | Translation | bleu | 79.83 | 80.76 |
| Phrases es-va | es-va | Translation | bleu | 66.31 | 71.01 |
| TruthfulQA_va | va | Truthfulness | bleu_acc | 0.353 | 0.388 |

Catalan

Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele Cat_latn | ca | Reading Comprehension | acc | 0.51 | 0.546 |
| COPA | ca | Commonsense Reasoning | acc | 0.798 | 0.812 |
| XStoryCloze | ca | Commonsense Reasoning | acc | 0.75 | 0.767 |
| OpenBookQA | ca | Question Answering | acc | 0.366 | 0.376 |
| PAWS | ca | Paraphrasing | acc | 0.626 | 0.613 |
| PiQA | ca | Question Answering | acc | 0.702 | 0.725 |
| SiQA | ca | Question Answering | acc | 0.489 | 0.506 |
| ARC Easy | ca | Question Answering | acc | 0.726 | 0.73 |
| ARC Challenge | ca | Question Answering | acc | 0.47 | 0.459 |
| XNLI | ca | Natural Language Inference | acc | 0.504 | 0.494 |
| Teca | ca | Natural Language Inference | acc | 0.527 | 0.514 |
| WNLI | ca | Natural Language Inference | acc | 0.577 | 0.633 |
| Catcola | ca | Linguistic Acceptability | acc | 0.732 | 0.71 |
| Catalanqa | ca | Question Answering | F1 | 0.832 | 0.829 |
| Catalanqa | ca | Question Answering | exact match | 0.62 | 0.65 |
| MGSM Direct | ca | Math | exact match | 0.068 | 0.096 |
| XQUAD | ca | Question Answering | exact match | 0.498 | 0.497 |
| XQUAD | ca | Question Answering | F1 | 0.717 | 0.724 |

Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cabreu abstractive | ca | Summarization | bleu | 8.46 | 11.34 |
| Cabreu extractive | ca | Summarization | bleu | 44.62 | 41.73 |
| Cabreu extreme | ca | Summarization | bleu | 11.02 | 12.44 |

Spanish

Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele | es | Reading Comprehension | acc | 0.49 | 0.55 |
| PAWS | es | Paraphrasing | acc | 0.616 | 0.591 |
| XNLI | es | Natural Language Inference | acc | 0.462 | 0.447 |
| WNLI | es | Natural Language Inference | acc | 0.45 | 0.45 |
| XStoryCloze | es | Commonsense Reasoning | acc | 0.746 | 0.754 |
| Escola | es | Linguistic Acceptability | acc | - | - |
| Escola | es | Linguistic Acceptability | mcc | - | - |
| OpenBookQA | es | Question Answering | acc | - | - |
| MGSM Direct | es | Math | exact match | 0.064 | 0.084 |
| XQUAD | es | Question Answering | exact match | 0.51 | 0.509 |
| XQUAD | es | Question Answering | F1 | 0.746 | 0.754 |

Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | es | Reading Comprehension | bleu | 14.57 | 17.35 |
| XLSum | es | Summarization | bleu | 3.52 | 5.79 |

English

Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| ARC Challenge | en | Question Answering | acc | 0.53 | 0.529 |
| ARC Easy | en | Question Answering | acc | 0.822 | 0.816 |
| Belebele | en | Reading Comprehension | acc | 0.562 | 0.537 |
| PAWS | en | Paraphrasing | acc | 0.632 | 0.604 |
| XNLI | en | Natural Language Inference | acc | 0.474 | 0.472 |
| XStoryCloze | en | Commonsense Reasoning | acc | 0.796 | 0.79 |
| OpenBookQA | en | Question Answering | acc | 0.352 | 0.356 |
| PiQA | en | Question Answering | acc | 0.793 | 0.796 |
| Social IQa | en | Question Answering | acc | 0.509 | 0.508 |
| WNLI | en | Natural Language Inference | acc | 0.464 | 0.549 |
| MGSM Direct | en | Math | exact match | 0.264 | 0.564 |
| TriviaQA | en | Question Answering | exact match | 0.597 | 0.601 |
| CoLA | en | Linguistic Acceptability | mcc | 0.381 | 0.339 |

Additional Information

Author

The model has been developed by the Language and Information Systems Group (GPLSI), the Centro de Inteligencia Digital (CENID), and the [Language Modeling Group at Barcelona Supercomputing Center](https://www.bsc.es/research-development/research-areas/cognitive-computing/language-modeling), all contributing to cutting-edge research in Natural Language Processing (NLP). GPLSI and CENID are part of the University of Alicante (UA), while the Language Modeling Group operates within the Barcelona Supercomputing Center.

Part of the Aitana Family

This model is part of the Aitana model family developed by the GPLSI research group.

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.

Acknowledgments

We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.

We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Desarrollo de Modelos ALIA, whose contribution has been essential to the completion of this research.

License

Apache License, Version 2.0

Disclaimer

This model is intended for general purposes and is available under a permissive Apache License 2.0. Be aware that the model may have biases and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating risks and complying with applicable AI regulations.

Reference

@misc{gplsi-aitana-7B-S-base-1.0,
  author       = {Sepúlveda-Torres, Robiert and Baucells, Irene and Estevanell-Valladares, Ernesto L. and Galiano, Santiago and Consuegra-Ayala, Juan Pablo and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title        = {Aitana 7B base: Continually pre-trained on Valencian},
  year         = {2026},
  institution  = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-7B-S-base-1.0}},
  note         = {Accessed: 2026-04-08}
}

Copyright © 2026 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.
