Whisper Tiny Nepali - Kiran Pantha

This model is a fine-tuned version of openai/whisper-tiny on the OpenSLR54 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2933
  • WER: 53.7269
  • CER: 16.1186

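For quick inference, the checkpoint can be loaded through the Transformers ASR pipeline. The snippet below is a minimal sketch: it assumes the model is published as kiranpantha/whisper-tiny-ne (the repository this card belongs to) and uses a hypothetical 16 kHz Nepali recording named sample.wav.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
# "kiranpantha/whisper-tiny-ne" is the repository this card describes;
# "sample.wav" is a hypothetical 16 kHz Nepali recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="kiranpantha/whisper-tiny-ne",
)

# Force Nepali transcription instead of Whisper's automatic language detection.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "nepali", "task": "transcribe"},
)
print(result["text"])
```

Passing language and task through generate_kwargs pins decoding to Nepali transcription rather than relying on Whisper's language detection.
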
Model description

This model is openai/whisper-tiny fine-tuned for Nepali automatic speech recognition on the OpenSLR54 Nepali speech corpus.

Intended uses & limitations

The model is intended for transcribing Nepali speech. Limitations beyond the reported WER/CER have not been documented.

Training and evaluation data

The model was fine-tuned and evaluated on the OpenSLR54 Nepali speech corpus. Details of the train/evaluation split are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP

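These values map onto the Transformers Seq2SeqTrainingArguments roughly as in the sketch below; it assumes a standard Seq2SeqTrainer setup and uses a hypothetical output directory, with data loading and model wiring omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from this card mapped onto Transformers training arguments.
# output_dir is a hypothetical placeholder; dataset preparation, model setup,
# and the Seq2SeqTrainer itself are omitted.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-ne",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # mixed_precision_training: Native AMP
)
```
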
Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     | CER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| 0.8115        | 0.3597 | 300  | 0.7467          | 92.9167 | 34.9897 |
| 0.4976        | 0.7194 | 600  | 0.4963          | 79.2130 | 26.2625 |
| 0.3874        | 1.0791 | 900  | 0.4198          | 71.5046 | 22.6696 |
| 0.3422        | 1.4388 | 1200 | 0.3797          | 67.5926 | 20.8896 |
| 0.3179        | 1.7986 | 1500 | 0.3467          | 63.9120 | 19.3959 |
| 0.2451        | 2.1583 | 1800 | 0.3299          | 62.1528 | 18.6950 |
| 0.2167        | 2.5180 | 2100 | 0.3224          | 60.6713 | 18.3977 |
| 0.2428        | 2.8777 | 2400 | 0.3085          | 59.6528 | 17.6196 |
| 0.1862        | 3.2374 | 2700 | 0.3057          | 57.6620 | 16.9113 |
| 0.1795        | 3.5971 | 3000 | 0.3007          | 57.5231 | 16.7792 |
| 0.1758        | 3.9568 | 3300 | 0.2935          | 55.8565 | 16.5297 |
| 0.1496        | 4.3165 | 3600 | 0.2960          | 55.8796 | 16.3792 |
| 0.1560        | 4.6763 | 3900 | 0.2940          | 55.4398 | 16.4819 |
| 0.1235        | 5.0360 | 4200 | 0.2915          | 54.4444 | 16.0085 |
| 0.1311        | 5.3957 | 4500 | 0.2936          | 54.4676 | 16.2801 |
| 0.1136        | 5.7554 | 4800 | 0.2933          | 53.7269 | 16.1186 |
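
The WER and CER columns appear to be percentage-scaled scores. They can be computed with the Hugging Face evaluate library roughly as follows; this is a sketch with placeholder strings, not data from this run.

```python
import evaluate

# WER and CER as typically computed for Whisper fine-tuning runs.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["placeholder hypothesis"]  # hypothetical model outputs
references = ["placeholder reference"]    # hypothetical ground-truth transcripts

# Scores are scaled by 100 to match the percentage-style values in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```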

Framework versions

  • Transformers 4.46.3
  • PyTorch 2.5.1+cxx11.abi
  • Datasets 3.2.0
  • Tokenizers 0.20.3