speecht5_dhivehi_tts_v1

This model is a fine-tuned SpeechT5 text-to-speech model for Dhivehi. The base checkpoint and training dataset were not filled in by the card generator. It achieves the following results on the evaluation set:

  • Loss: 0.3982

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_steps: 500
  • training_steps: 30000
  • mixed_precision_training: Native AMP
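
The warmup and cosine_with_restarts settings above can be sketched in pure Python. This mirrors the shape of the Transformers cosine-with-hard-restarts schedule; note that the number of restart cycles is not recorded in the card (the Transformers default is 1, which reduces to a single cosine decay), so `num_cycles` below is an assumption.

```python
import math

def lr_at_step(step, base_lr=1e-4, warmup_steps=500,
               total_steps=30_000, num_cycles=1):
    """Learning rate under linear warmup + cosine-with-hard-restarts decay.

    base_lr, warmup_steps, and total_steps come from the card above;
    num_cycles is NOT stated in the card and is assumed here.
    """
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr over the first 500 steps.
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    if progress >= 1.0:
        return 0.0
    # Each cycle restarts the cosine decay from base_lr.
    cycle_pos = (num_cycles * progress) % 1.0
    return base_lr * max(0.0, 0.5 * (1.0 + math.cos(math.pi * cycle_pos)))
```

For example, `lr_at_step(250)` is halfway through warmup (5e-5), `lr_at_step(500)` returns the peak rate of 1e-4, and the rate decays toward 0 by step 30 000.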

Training results

| Training Loss | Epoch    | Step  | Validation Loss |
|:-------------:|:--------:|:-----:|:---------------:|
| 1.1121        | 6.5789   | 500   | 1.0624          |
| 1.0902        | 13.1579  | 1000  | 1.0063          |
| 1.0398        | 19.7368  | 1500  | 0.9727          |
| 0.9906        | 26.3158  | 2000  | 0.9334          |
| 0.9517        | 32.8947  | 2500  | 0.8930          |
| 0.9112        | 39.4737  | 3000  | 0.8556          |
| 0.8754        | 46.0526  | 3500  | 0.8299          |
| 0.8455        | 52.6316  | 4000  | 0.7964          |
| 0.8212        | 59.2105  | 4500  | 0.7761          |
| 0.7956        | 65.7895  | 5000  | 0.7534          |
| 0.782         | 72.3684  | 5500  | 0.7660          |
| 0.7518        | 78.9474  | 6000  | 0.7337          |
| 0.7403        | 85.5263  | 6500  | 0.7190          |
| 0.7262        | 92.1053  | 7000  | 0.7095          |
| 0.7112        | 98.6842  | 7500  | 0.6916          |
| 0.7086        | 105.2632 | 8000  | 0.6946          |
| 0.69          | 111.8421 | 8500  | 0.6727          |
| 0.6711        | 118.4211 | 9000  | 0.6532          |
| 0.6614        | 125.0    | 9500  | 0.6574          |
| 0.6538        | 131.5789 | 10000 | 0.6357          |
| 0.6307        | 138.1579 | 10500 | 0.6166          |
| 0.622         | 144.7368 | 11000 | 0.6139          |
| 0.6064        | 151.3158 | 11500 | 0.5978          |
| 0.5955        | 157.8947 | 12000 | 0.5843          |
| 0.5761        | 164.4737 | 12500 | 0.5681          |
| 0.5633        | 171.0526 | 13000 | 0.5688          |
| 0.5546        | 177.6316 | 13500 | 0.5483          |
| 0.5468        | 184.2105 | 14000 | 0.5336          |
| 0.535         | 190.7895 | 14500 | 0.5205          |
| 0.5279        | 197.3684 | 15000 | 0.5134          |
| 0.5172        | 203.9474 | 15500 | 0.5072          |
| 0.5094        | 210.5263 | 16000 | 0.4902          |
| 0.4969        | 217.1053 | 16500 | 0.4830          |
| 0.4893        | 223.6842 | 17000 | 0.4691          |
| 0.4821        | 230.2632 | 17500 | 0.4686          |
| 0.4727        | 236.8421 | 18000 | 0.4618          |
| 0.4696        | 243.4211 | 18500 | 0.4529          |
| 0.4659        | 250.0    | 19000 | 0.4477          |
| 0.4609        | 256.5789 | 19500 | 0.4421          |
| 0.456         | 263.1579 | 20000 | 0.4383          |
| 0.441         | 269.7368 | 20500 | 0.4305          |
| 0.4393        | 276.3158 | 21000 | 0.4277          |
| 0.435         | 282.8947 | 21500 | 0.4229          |
| 0.4326        | 289.4737 | 22000 | 0.4189          |
| 0.4283        | 296.0526 | 22500 | 0.4157          |
| 0.4277        | 302.6316 | 23000 | 0.4117          |
| 0.425         | 309.2105 | 23500 | 0.4113          |
| 0.4273        | 315.7895 | 24000 | 0.4076          |
| 0.4234        | 322.3684 | 24500 | 0.4034          |
| 0.4221        | 328.9474 | 25000 | 0.4052          |
| 0.4208        | 335.5263 | 25500 | 0.4034          |
| 0.4173        | 342.1053 | 26000 | 0.4000          |
| 0.42          | 348.6842 | 26500 | 0.4008          |
| 0.4188        | 355.2632 | 27000 | 0.3995          |
| 0.4224        | 361.8421 | 27500 | 0.4002          |
| 0.4144        | 368.4211 | 28000 | 0.3986          |
| 0.4182        | 375.0    | 28500 | 0.3979          |
| 0.4157        | 381.5789 | 29000 | 0.3984          |
| 0.4166        | 388.1579 | 29500 | 0.3987          |
| 0.4127        | 394.7368 | 30000 | 0.3982          |
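
A sanity check on the log above: at step 500 the trainer reports epoch 6.5789, implying about 76 optimizer steps per epoch; with train_batch_size=128 that suggests a training set of roughly 9,700 examples. The derived dataset size is an inference from the logged numbers, not a figure stated in the card:

```python
# Infer the approximate dataset size from the trainer's step/epoch log.
# step and epoch come from the first row of the table above; the
# derived dataset size is an estimate, not stated in the model card.
step, epoch = 500, 6.5789
train_batch_size = 128
total_steps = 30_000

steps_per_epoch = step / epoch                        # ~76 steps per epoch
approx_dataset_size = steps_per_epoch * train_batch_size
approx_total_epochs = total_steps / steps_per_epoch

print(round(steps_per_epoch))       # ~76
print(round(approx_dataset_size))   # ~9728 training examples
print(round(approx_total_epochs))   # ~395, matching the final logged epoch (394.7368)
```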

Framework versions

  • Transformers 4.48.0.dev0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
Model size: 144M parameters (Safetensors, F32 tensors)

Model tree for ahmedhassan7030/speecht5_dhivehi_tts_v1

Finetunes: 1 model