chungnam_firestation3Kfiles_WER_model

This model is a fine-tuned version of openai/whisper-medium on the Marcusxx/chungnamFireStation3Kfiles dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2164
  • WER: 54.3478
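WER (word error rate) is reported above as a percentage. As context (not the exact evaluation script used for this model), a minimal sketch of how WER is typically computed: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length, as a percentage."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which explains early-epoch values such as 170.39 in the training results below.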

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
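The linear scheduler with warmup listed above ramps the learning rate up over the warmup steps and then decays it linearly to zero. A small sketch of that schedule using the hyperparameters from this card (this mirrors the behavior of a standard linear warmup/decay scheduler; it is an illustration, not the training code itself):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 2000, total_steps: int = 20000) -> float:
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        # Warmup phase: scale linearly with the step count.
        return base_lr * step / warmup_steps
    # Decay phase: base_lr at warmup_steps, 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```

For example, the learning rate is 5e-6 halfway through warmup (step 1000), peaks at 1e-5 at step 2000, and returns to 0 at step 20000.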

Training results

| Training Loss | Epoch    | Step  | Validation Loss | WER      |
|:-------------:|:--------:|:-----:|:---------------:|:--------:|
| 0.0826        | 6.6667   | 1000  | 0.6837          | 170.3934 |
| 0.0377        | 13.3333  | 2000  | 0.7952          | 108.2816 |
| 0.0131        | 20.0     | 3000  | 0.8730          | 56.7288  |
| 0.0066        | 26.6667  | 4000  | 0.8840          | 54.6584  |
| 0.0015        | 33.3333  | 5000  | 0.9585          | 55.2795  |
| 0.0123        | 40.0     | 6000  | 0.9682          | 55.9006  |
| 0.0006        | 46.6667  | 7000  | 1.0101          | 53.7267  |
| 0.0022        | 53.3333  | 8000  | 1.0249          | 55.7971  |
| 0.002         | 60.0     | 9000  | 1.0083          | 57.4534  |
| 0.0           | 66.6667  | 10000 | 1.0515          | 55.3830  |
| 0.0           | 73.3333  | 11000 | 1.0877          | 54.7619  |
| 0.0           | 80.0     | 12000 | 1.1063          | 54.9689  |
| 0.0           | 86.6667  | 13000 | 1.1240          | 55.3830  |
| 0.0           | 93.3333  | 14000 | 1.1405          | 55.1760  |
| 0.0           | 100.0    | 15000 | 1.1579          | 54.7619  |
| 0.0           | 106.6667 | 16000 | 1.1736          | 54.9689  |
| 0.0           | 113.3333 | 17000 | 1.1890          | 54.9689  |
| 0.0           | 120.0    | 18000 | 1.2021          | 54.4513  |
| 0.0           | 126.6667 | 19000 | 1.2120          | 54.3478  |
| 0.0           | 133.3333 | 20000 | 1.2164          | 54.3478  |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.2+cu121
  • Datasets 3.2.0
  • Tokenizers 0.19.1