# Custom Word2Vec Embeddings
This model provides custom word embeddings trained with Gensim's Word2Vec implementation.
## Model details
- Trained using Gensim's Word2Vec
- Includes custom n-grams as tokens
- Vector size: 100
- Context window: 5
- Training algorithm: Skip-gram
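For reference, a training call with these settings in Gensim would look roughly like the sketch below; the corpus, n-gram preprocessing, and output filename are illustrative placeholders, not the exact script used for this model.

```python
# Illustrative sketch only: the corpus, n-gram handling, and filename
# below are placeholders, not the actual data used for this model.
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

corpus = [
    ["machine", "learning", "is", "fun"],
    ["word", "embeddings", "capture", "meaning"],
]  # placeholder corpus of tokenized sentences

# Merge frequent word pairs into single n-gram tokens (e.g. "machine_learning")
bigram = Phraser(Phrases(corpus, min_count=1, threshold=1))
corpus_ngrams = [bigram[sentence] for sentence in corpus]

# Skip-gram (sg=1), 100-dimensional vectors, context window of 5
model = Word2Vec(corpus_ngrams, vector_size=100, window=5, sg=1, min_count=1)

model.save("word2vec.model")  # placeholder output filename
```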
## Usage
If this repository also ships weights exported in the Transformers format, they can be loaded with `AutoModel`; otherwise, load the Gensim Word2Vec vectors directly from the Hub. Note that `gensim.downloader` only serves Gensim's own pre-packaged datasets, so the file is fetched with `huggingface_hub` instead.

```python
# Load with Transformers (only applicable if the repo contains
# Transformers-format weights alongside the Gensim model)
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModel.from_pretrained("your-username/your-model-name")
```

```python
# Or load the Gensim vectors directly from the Hub
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# "word2vec.kv" is a placeholder; use the actual filename stored in the repo
vectors_path = hf_hub_download(
    repo_id="your-username/your-model-name", filename="word2vec.kv"
)
word_vectors = KeyedVectors.load(vectors_path)
```
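Once loaded, the vectors can be queried like any Gensim `KeyedVectors` object; the token below is only an example and may not exist in this model's vocabulary.

```python
# Example queries; the token shown is illustrative and may not be
# in this model's vocabulary.
vector = word_vectors["machine_learning"]  # 100-dimensional numpy array
similar = word_vectors.most_similar("machine_learning", topn=5)
print(similar)
```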