# Zoof Tokenizer

This is the custom Byte-Pair Encoding (BPE) tokenizer built for the Zoof-250M language model family. It was trained from scratch to handle English text and code efficiently.

## Model Details

- **Vocabulary Size:** 49,152
- **Type:** Byte-Pair Encoding (BPE)
- **Language:** English
- **Intended Use:** Tokenization for the Zoof model family
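To illustrate what a BPE vocabulary like this one does at encode time, here is a minimal, self-contained sketch of greedy BPE merging. The merge table below is hypothetical and for illustration only; the actual merges used by this tokenizer are learned during training and shipped with the model files.

```python
def bpe_encode(word, merges):
    """Greedily apply BPE merges to a word split into single characters.

    `merges` is a list of symbol pairs in priority order (earlier = higher
    priority), mirroring how a trained BPE merge table is consulted.
    """
    symbols = list(word)
    while len(symbols) > 1:
        # Collect the adjacent pairs currently present in the sequence.
        pairs = {(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)}
        # Rank each present pair by its priority in the merge table.
        ranked = [(merges.index(p), p) for p in pairs if p in merges]
        if not ranked:
            break  # no learned merge applies; stop
        _, (a, b) = min(ranked)
        # Merge every occurrence of the winning pair, left to right.
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols

# Hypothetical merges "learned" from a toy corpus, in priority order.
merges = [("l", "o"), ("lo", "w"), ("e", "r")]
print(bpe_encode("lower", merges))  # → ['low', 'er']
```

Production tokenizers additionally operate on bytes (so any input can be represented) and map the final symbols to integer IDs in the vocabulary, but the merge loop above is the core of the algorithm.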

## Usage

You can load this tokenizer directly with the `transformers` library:

```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained("Jiraya/zoof-tokenizer")

text = "Hello, world!"
tokens = tokenizer.encode(text)
decoded = tokenizer.decode(tokens)

print(f"Tokens: {tokens}")
print(f"Decoded: {decoded}")
```