Tags: Sentence Similarity · ONNX · sentence-transformers · light-embed · bert · feature-extraction · text-embeddings-inference
How to use binhcode25/sbert-all-MiniLM-L6-v2-onnx with sentence-transformers:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("binhcode25/sbert-all-MiniLM-L6-v2-onnx")
sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
sbert-all-MiniLM-L6-v2-onnx
This is the ONNX version of the Sentence Transformers model sentence-transformers/all-MiniLM-L6-v2 for sentence embeddings, optimized for speed and a small footprint. By using onnxruntime and tokenizers instead of heavier libraries such as sentence-transformers and transformers, it keeps the dependency size down and runs faster. Model details:
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Embedding dimension: 384
- Max sequence length: 256
- File size on disk: 0.08 GB
- Pooling incorporated: Yes
This ONNX model contains all components of the original Sentence Transformers model: Transformer, Pooling, Normalize.
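As a sketch of what "Pooling incorporated" means, the following numpy snippet reproduces the mean-pooling and normalization steps that are folded into the ONNX graph. It uses randomly generated stand-in token embeddings, not the real Transformer output, so only the shapes and the procedure match the model.

```python
import numpy as np

# Stand-in token-level embeddings: 2 sentences, 6 tokens, 384 dims
# (in the real model these come from the Transformer component).
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(2, 6, 384))

# Attention mask: 1 for real tokens, 0 for padding.
attention_mask = np.array([[1, 1, 1, 1, 0, 0],
                           [1, 1, 1, 1, 1, 1]])

# Pooling: mask-weighted mean over the token axis.
mask = attention_mask[:, :, None].astype(token_embeddings.dtype)
summed = (token_embeddings * mask).sum(axis=1)
counts = np.clip(mask.sum(axis=1), 1e-9, None)
sentence_embeddings = summed / counts

# Normalize: scale each sentence vector to unit L2 norm.
norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
sentence_embeddings = sentence_embeddings / norms

print(sentence_embeddings.shape)  # (2, 384)
```

Because these steps are inside the ONNX model, callers get one fixed-size, unit-length vector per sentence without any post-processing.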
Usage (LightEmbed)
Using this model is straightforward once you have LightEmbed installed:
pip install -U light-embed
Then you can use the model like this:
from light_embed import TextEmbedding
sentences = ["This is an example sentence", "Each sentence is converted"]
model = TextEmbedding('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
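Because the Normalize component makes every embedding unit-length, cosine similarity between sentences reduces to a plain dot product. A minimal numpy sketch, using stand-in unit vectors in place of real model output:

```python
import numpy as np

# Stand-in embeddings: any (n, 384) array of unit-length rows will do;
# real ones would come from model.encode(sentences).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(4, 384))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit vectors, the cosine similarity matrix is a matrix product.
similarities = embeddings @ embeddings.T

print(similarities.shape)  # (4, 4)
```

Each row of `similarities` scores one sentence against all the others; the diagonal is 1.0, since every vector is maximally similar to itself.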
Citing & Authors
Binh Nguyen / binhcode25@gmail.com