SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2 on the bps-publication-title-pairs dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: bps-publication-title-pairs

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
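
For reference, the same two-module stack can be sketched by hand with the library's models API; this is a minimal illustration only, and loading the published checkpoint as shown under Usage is the intended route:

from sentence_transformers import SentenceTransformer, models

# Module (0): XLM-RoBERTa backbone, inputs truncated to 128 tokens
word_embedding = models.Transformer(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
    max_seq_length=128,
)
# Module (1): mean pooling over token embeddings -> one 768-dim vector per text
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding, pooling])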

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstat-semantic-search-mpnet-base-v2-sts")
# Run inference
sentences = [
    'Laporan keuangan pemerintah provinsi periode 2003-2006',
    'Statistik Keuangan Provinsi 2003-2006',
    'Statistik Perdagangan Luar Negeri Indonesia Ekspor Menurut Kode ISIC 2013-2014',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
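
Because the model targets semantic search over publication titles, a small retrieval sketch may help; it reuses the example strings above and only relies on model.encode and model.similarity:

# Rank candidate publication titles against a free-text query
query = "Laporan keuangan pemerintah provinsi periode 2003-2006"
corpus = [
    "Statistik Keuangan Provinsi 2003-2006",
    "Statistik Perdagangan Luar Negeri Indonesia Ekspor Menurut Kode ISIC 2013-2014",
]
query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

scores = model.similarity(query_emb, corpus_emb)  # shape [1, 2]
best = int(scores[0].argmax())
print(corpus[best])  # the provincial financial statistics title is expected to rank first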

Evaluation

Metrics

Semantic Similarity

Metric          | allstat-semantic-dev | allstat-semantic-test
pearson_cosine  | 0.9709               | 0.9674
spearman_cosine | 0.8819               | 0.8747
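
Scores like these are typically produced with the library's EmbeddingSimilarityEvaluator; below is a sketch, assuming dev is a split exposing the query/doc_title/score columns described under Training Dataset:

from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# dev is assumed to expose "query", "doc_title", and "score" columns
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=dev["query"],
    sentences2=dev["doc_title"],
    scores=dev["score"],
    name="allstat-semantic-dev",
)
print(dev_evaluator(model))  # includes pearson_cosine and spearman_cosine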

Training Details

Training Dataset

bps-publication-title-pairs

  • Dataset: bps-publication-title-pairs at 833f738
  • Size: 42,138 training samples
  • Columns: query, doc_title, and score
  • Approximate statistics based on the first 1000 samples:
    • query: string; min 5 tokens, mean 10.71 tokens, max 60 tokens
    • doc_title: string; min 5 tokens, mean 12.58 tokens, max 52 tokens
    • score: float; min 0.0, mean 0.53, max 1.0
  • Samples:
    query | doc_title | score
    Hasil riset mobilitas Jabodetabek tahun 2023 | Statistik Komuter Jabodetabek Hasil Survei Komuter Jabodetabek 2023 | 0.85
    Indeks harga konsumen di Indonesia tahun 2017 (82 kota) | Harga Konsumen Beberapa Barang dan Jasa Kelompok Sandang di 82 Kota di Indonesia 2017 | 0.15
    Laporan sektor bangunan Indonesia Q4 2009 | Indikator Konstruksi Triwulan IV Tahun 2009 | 0.91
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
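
A sketch of how this dataset and loss could be instantiated; the yahyaabd/ namespace for the dataset id is an assumption inferred from the model id:

from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss

# Hypothetical dataset id; the revision pin matches the card above
train_dataset = load_dataset(
    "yahyaabd/bps-publication-title-pairs",
    split="train",
    revision="833f738",
)
# CosineSimilarityLoss regresses cosine(query, doc_title) onto the gold score,
# using the MSE criterion listed in the parameters above
train_loss = CosineSimilarityLoss(model)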
    

Evaluation Dataset

bps-publication-title-pairs

  • Dataset: bps-publication-title-pairs at 833f738
  • Size: 2,634 evaluation samples
  • Columns: query, doc_title, and score
  • Approximate statistics based on the first 1000 samples:
    • query: string; min 6 tokens, mean 10.71 tokens, max 28 tokens
    • doc_title: string; min 5 tokens, mean 12.57 tokens, max 39 tokens
    • score: float; min 0.0, mean 0.55, max 1.0
  • Samples:
    query | doc_title | score
    Statistik tebu Indonesia tahun 2018 | Direktori Perusahaan Perkebunan Karet Indonesia 2018 | 0.1
    Data industri makanan dan minuman 2017 | Statistik Upah Buruh Tani di Perdesaan 2018 | 0.2
    Biaya hidup di Gorontalo tahun 2018 | Survei Biaya Hidup (SBH) 2018 Gorontalo | 0.9
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
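
As a sketch, these non-default values map onto SentenceTransformerTrainingArguments as follows (output_dir is hypothetical):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="allstat-semantic-search-mpnet-base-v2-sts",  # hypothetical
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
)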

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
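
Tying the sketches above together, a hypothetical end-to-end training call could look like this; eval_dataset is assumed to be loaded analogously to train_dataset:

from sentence_transformers import SentenceTransformerTrainer

# model, args, train_dataset, train_loss, and dev_evaluator come from the
# sketches in the earlier sections; eval_dataset is assumed analogous
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=train_loss,
    evaluator=dev_evaluator,
)
trainer.train()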

Training Logs

Epoch Step Training Loss Validation Loss allstat-semantic-dev_spearman_cosine allstat-semantic-test_spearman_cosine
0.0380 100 0.0498 0.0301 0.7942 -
0.0759 200 0.0274 0.0231 0.8115 -
0.1139 300 0.0238 0.0194 0.8151 -
0.1519 400 0.0203 0.0181 0.8169 -
0.1898 500 0.02 0.0184 0.8188 -
0.2278 600 0.0208 0.0170 0.8229 -
0.2658 700 0.0182 0.0176 0.8209 -
0.3037 800 0.0187 0.0165 0.8260 -
0.3417 900 0.0182 0.0169 0.8237 -
0.3797 1000 0.0187 0.0166 0.8232 -
0.4176 1100 0.019 0.0170 0.8261 -
0.4556 1200 0.0186 0.0178 0.8206 -
0.4935 1300 0.0185 0.0173 0.8190 -
0.5315 1400 0.0188 0.0183 0.8172 -
0.5695 1500 0.018 0.0166 0.8192 -
0.6074 1600 0.0193 0.0168 0.8240 -
0.6454 1700 0.016 0.0152 0.8315 -
0.6834 1800 0.0178 0.0163 0.8263 -
0.7213 1900 0.0174 0.0150 0.8320 -
0.7593 2000 0.0172 0.0152 0.8290 -
0.7973 2100 0.0156 0.0158 0.8284 -
0.8352 2200 0.0164 0.0143 0.8313 -
0.8732 2300 0.0169 0.0165 0.8349 -
0.9112 2400 0.0147 0.0150 0.8368 -
0.9491 2500 0.0163 0.0148 0.8314 -
0.9871 2600 0.0149 0.0137 0.8379 -
1.0251 2700 0.0117 0.0134 0.8415 -
1.0630 2800 0.0124 0.0129 0.8375 -
1.1010 2900 0.0109 0.0124 0.8459 -
1.1390 3000 0.0109 0.0123 0.8445 -
1.1769 3100 0.0107 0.0126 0.8433 -
1.2149 3200 0.0105 0.0131 0.8427 -
1.2528 3300 0.0117 0.0130 0.8434 -
1.2908 3400 0.0107 0.0126 0.8448 -
1.3288 3500 0.0116 0.0119 0.8490 -
1.3667 3600 0.0114 0.0124 0.8394 -
1.4047 3700 0.011 0.0127 0.8408 -
1.4427 3800 0.0116 0.0128 0.8400 -
1.4806 3900 0.0117 0.0121 0.8451 -
1.5186 4000 0.0129 0.0125 0.8443 -
1.5566 4100 0.0117 0.0122 0.8464 -
1.5945 4200 0.012 0.0117 0.8468 -
1.6325 4300 0.011 0.0122 0.8485 -
1.6705 4400 0.0121 0.0112 0.8557 -
1.7084 4500 0.0119 0.0110 0.8570 -
1.7464 4600 0.0105 0.0113 0.8519 -
1.7844 4700 0.0101 0.0113 0.8479 -
1.8223 4800 0.0111 0.0116 0.8499 -
1.8603 4900 0.0108 0.0117 0.8520 -
1.8983 5000 0.0111 0.0111 0.8509 -
1.9362 5100 0.0112 0.0111 0.8546 -
1.9742 5200 0.0104 0.0115 0.8507 -
2.0121 5300 0.0095 0.0105 0.8553 -
2.0501 5400 0.0077 0.0106 0.8562 -
2.0881 5500 0.007 0.0104 0.8575 -
2.1260 5600 0.0075 0.0101 0.8619 -
2.1640 5700 0.0077 0.0104 0.8568 -
2.2020 5800 0.0073 0.0103 0.8588 -
2.2399 5900 0.0076 0.0101 0.8598 -
2.2779 6000 0.0072 0.0101 0.8602 -
2.3159 6100 0.0076 0.0104 0.8589 -
2.3538 6200 0.007 0.0101 0.8592 -
2.3918 6300 0.0084 0.0104 0.8547 -
2.4298 6400 0.0077 0.0102 0.8594 -
2.4677 6500 0.008 0.0102 0.8606 -
2.5057 6600 0.0075 0.0101 0.8596 -
2.5437 6700 0.0072 0.0105 0.8587 -
2.5816 6800 0.0079 0.0105 0.8588 -
2.6196 6900 0.0078 0.0098 0.8605 -
2.6576 7000 0.0075 0.0100 0.8593 -
2.6955 7100 0.008 0.0097 0.8649 -
2.7335 7200 0.0074 0.0100 0.8602 -
2.7715 7300 0.0069 0.0098 0.8628 -
2.8094 7400 0.008 0.0097 0.8615 -
2.8474 7500 0.007 0.0097 0.8639 -
2.8853 7600 0.0071 0.0093 0.8642 -
2.9233 7700 0.0077 0.0102 0.8605 -
2.9613 7800 0.008 0.0094 0.8623 -
2.9992 7900 0.0076 0.0094 0.8658 -
3.0372 8000 0.005 0.0091 0.8673 -
3.0752 8100 0.005 0.0088 0.8688 -
3.1131 8200 0.0051 0.0088 0.8705 -
3.1511 8300 0.0052 0.0089 0.8701 -
3.1891 8400 0.0047 0.0088 0.8711 -
3.2270 8500 0.0046 0.0086 0.8723 -
3.2650 8600 0.0051 0.0086 0.8733 -
3.3030 8700 0.0053 0.0088 0.8736 -
3.3409 8800 0.0049 0.0086 0.8733 -
3.3789 8900 0.0051 0.0087 0.8721 -
3.4169 9000 0.0051 0.0086 0.8716 -
3.4548 9100 0.005 0.0087 0.8717 -
3.4928 9200 0.0055 0.0088 0.8709 -
3.5308 9300 0.0046 0.0085 0.8738 -
3.5687 9400 0.0052 0.0085 0.8738 -
3.6067 9500 0.0052 0.0089 0.8706 -
3.6446 9600 0.0049 0.0085 0.8722 -
3.6826 9700 0.0051 0.0088 0.8720 -
3.7206 9800 0.0046 0.0088 0.8721 -
3.7585 9900 0.0051 0.0083 0.8757 -
3.7965 10000 0.005 0.0084 0.8744 -
3.8345 10100 0.005 0.0084 0.8754 -
3.8724 10200 0.0054 0.0087 0.8737 -
3.9104 10300 0.0054 0.0083 0.8757 -
3.9484 10400 0.005 0.0082 0.8754 -
3.9863 10500 0.0049 0.0083 0.8746 -
4.0243 10600 0.0041 0.0081 0.8757 -
4.0623 10700 0.0034 0.0082 0.8760 -
4.1002 10800 0.003 0.0083 0.8751 -
4.1382 10900 0.0033 0.0082 0.8770 -
4.1762 11000 0.0034 0.0083 0.8772 -
4.2141 11100 0.0033 0.0082 0.8773 -
4.2521 11200 0.0031 0.0082 0.8787 -
4.2901 11300 0.0033 0.0080 0.8805 -
4.3280 11400 0.0029 0.0082 0.8787 -
4.3660 11500 0.0035 0.0079 0.8796 -
4.4039 11600 0.0034 0.0079 0.8799 -
4.4419 11700 0.0032 0.0079 0.8794 -
4.4799 11800 0.0035 0.0079 0.8807 -
4.5178 11900 0.0035 0.0080 0.8798 -
4.5558 12000 0.0031 0.0079 0.8806 -
4.5938 12100 0.0034 0.0078 0.8812 -
4.6317 12200 0.0031 0.0078 0.8811 -
4.6697 12300 0.0032 0.0078 0.8813 -
4.7077 12400 0.0032 0.0079 0.8809 -
4.7456 12500 0.0032 0.0078 0.8815 -
4.7836 12600 0.0034 0.0077 0.8818 -
4.8216 12700 0.0035 0.0078 0.8817 -
4.8595 12800 0.0032 0.0078 0.8818 -
4.8975 12900 0.0032 0.0078 0.8818 -
4.9355 13000 0.0032 0.0078 0.8820 -
4.9734 13100 0.0031 0.0078 0.8819 -
5.0 13170 - - - 0.8747

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.2.2+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}