Whisper Tiny Fine-Tuning Experiment
My experiment on fine-tuning an ASR model (Whisper Tiny).
This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. On the evaluation set it reaches a validation loss of 0.6998 and a WER of 38.85 after 4000 training steps (see the table below).
This fine-tuned model is part of my school project. Due to limited compute, I scaled down the dataset.
Additional information and demo code can be found on GitHub: HanCreation/Whisper-Tiny-German
More information needed
The table below shows training progress, with loss and WER measured on the evaluation set:
| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6833        | 0.16  | 1000 | 0.8090          | 43.6285 |
| 0.6272        | 0.32  | 2000 | 0.7441          | 41.3900 |
| 0.5671        | 0.48  | 3000 | 0.7124          | 40.0427 |
| 0.5593        | 0.64  | 4000 | 0.6998          | 38.8453 |
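The WER column above is the word error rate: the word-level edit distance (substitutions, insertions, deletions) between the model transcript and the reference, divided by the number of reference words, expressed as a percentage. A minimal sketch of computing it (the evaluation itself likely used a library such as `jiwer`; the example sentences are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent, via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # match or substitution
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)


# Hypothetical example: the model drops one word out of four.
print(wer("das ist ein test", "das ist test"))  # → 25.0
```

A WER of 38.85 therefore means roughly 39 word errors per 100 reference words, so the values in the table decreasing over training steps indicate the transcripts are getting closer to the references.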
Base model: openai/whisper-tiny