Tom Aarsen committed on
Commit e8198f2 · 1 Parent(s): 8861431

Give quick heads up about perf. relative to all-mpnet-base-v2

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -8467,7 +8467,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model trained on the [g
  * **0 Active Parameters:** This model does not use any active parameters, instead consisting exclusively of averaging pre-computed token embeddings.
  * **100x to 400x faster:** On CPU, this model is 100x to 400x faster than common options like [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). On GPU, it's 10x to 25x faster.
  * **Matryoshka:** This model was trained with a [Matryoshka loss](https://huggingface.co/blog/matryoshka), allowing you to truncate the embeddings for faster retrieval at minimal performance costs.
- * **Evaluations:** See [Evaluations](#evaluation) for details on performance on NanoBEIR, embedding speed, and Matryoshka dimensionality truncation.
+ * **Evaluations:** See [Evaluations](#evaluation) for details on performance on NanoBEIR, embedding speed, and Matryoshka dimensionality truncation. In short, this model is **87.4%** as performant as the commonly used [`all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2).
  * **Training Script:** See [train.py](train.py) for the training script used to train this model from scratch.
 
  ## Model Details
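
For context on the Matryoshka bullet edited above, here is a minimal sketch of how the dimensionality truncation could be used through the sentence-transformers API. The model id `tomaarsen/static-embedding-model` and the 256-dimension truncation are illustrative assumptions, not taken from this diff; `truncate_dim`, `encode`, and `similarity` are standard sentence-transformers calls.

```python
from sentence_transformers import SentenceTransformer

# Hypothetical model id standing in for the repository this README belongs to.
# truncate_dim keeps only the first N embedding dimensions, which a
# Matryoshka-trained model is designed to tolerate with minimal quality loss.
model = SentenceTransformer("tomaarsen/static-embedding-model", truncate_dim=256)

sentences = [
    "Matryoshka embeddings can be truncated for faster retrieval.",
    "Static embeddings average pre-computed token vectors, so no transformer forward pass is needed.",
]

# Encode as usual; the returned vectors are already truncated to 256 dimensions.
embeddings = model.encode(sentences)
print(embeddings.shape)  # e.g. (2, 256)

# Similarity is computed on the truncated vectors the same way as on full ones.
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```

Smaller truncation values trade a little retrieval quality for proportionally faster similarity search and a smaller vector index, which is the point of the Matryoshka training mentioned in the README bullet.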