| model | wer | cer | timestamp |
|---|---|---|---|
| BounharAbdelaziz/Morocco-Darija-STT-tiny | 0.151625 | 0.049208 | 2025-01-04T16:19:23.660000 |
| BounharAbdelaziz/Morocco-Darija-STT-small | 0.227437 | 0.062119 | 2025-01-04T16:20:23.166000 |
| BounharAbdelaziz/Morocco-Darija-STT-large-v1.2 | 0.040915 | 0.012667 | 2025-01-04T16:25:43.513000 |
| openai/whisper-large-v3-turbo | 1.359807 | 0.837759 | 2025-01-04T16:26:59.585000 |
| openai/whisper-large-v3 | 0.921781 | 0.500122 | 2025-01-04T16:28:26.395000 |
| boumehdi/wav2vec2-large-xlsr-moroccan-darija | 0.641396 | 0.218514 | 2025-01-04T16:29:04.530000 |
| abdelkader12/whisper-small-ar | 0.755716 | 0.312546 | 2025-01-04T16:29:39.679000 |
| ychafiqui/whisper-medium-darija | 0.749699 | 0.319854 | 2025-01-04T16:32:07.948000 |
| ychafiqui/whisper-small-darija | 0.783394 | 0.318636 | 2025-01-04T16:33:05.513000 |
| BounharAbdelaziz/Morocco-Darija-STT-tiny-v1.3 | 0.746089 | 0.318879 | 2025-01-04T16:33:17.603000 |
| BounharAbdelaziz/Morocco-Darija-STT-small-v1.3 | 0.604091 | 0.217783 | 2025-01-04T16:33:42.557000 |
| openai/whisper-large-v3-turbo-forced-ar | 1.275572 | 0.812424 | 2025-01-04T16:44:02.538000 |
| openai/whisper-large-v3-forced-ar | 1.340554 | 0.703532 | 2025-01-04T16:45:23.024000 |
| BounharAbdelaziz/Morocco-Darija-STT-large-turbo-v1.3-forced-ar | 0.483755 | 0.159074 | 2025-01-04T17:49:44.765000 |
| BounharAbdelaziz/Morocco-Darija-STT-large-turbo-v1.3 | 0.483755 | 0.159074 | 2025-01-04T17:55:44.814000 |
| BounharAbdelaziz/Morocco-Darija-STT-tiny | 0.151625 | 0.049208 | 2025-01-04T18:00:46.567000 |
| BounharAbdelaziz/Morocco-Darija-STT-large-turbo-v1.3 | 0.483755 | 0.159074 | 2025-01-04T18:01:28.861000 |
| BounharAbdelaziz/Morocco-Darija-STT-small | 0.227437 | 0.062119 | 2025-01-04T18:03:44.593000 |
| BounharAbdelaziz/Morocco-Darija-STT-tiny-v1.3 | 0.746089 | 0.318879 | 2025-01-04T18:08:42.999000 |
## Overview

This dataset tracks evaluation metrics for Automatic Speech Recognition (ASR) models on Moroccan Darija. Each row records the Word Error Rate (WER) and Character Error Rate (CER) a model obtained on a common evaluation set; both are standard measures of speech recognition accuracy (a short example of computing them follows the definitions below).
- WER (Word Error Rate): Measures the percentage of words that were incorrectly predicted. Lower values indicate better performance.
- CER (Character Error Rate): Measures the percentage of characters that were incorrectly predicted. Lower values indicate better performance.
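As a quick illustration of the two metrics, the snippet below scores a single reference/hypothesis pair with the `jiwer` library used for this leaderboard (see Computation Method below); the strings are made up for the example.

```python
# Minimal sketch: computing WER and CER for one illustrative pair of strings.
import jiwer

# Made-up ground-truth transcription and model output.
reference = "صباح الخير"
hypothesis = "صباح لخير"

print("WER:", jiwer.wer(reference, hypothesis))  # word-level edit rate, lower is better
print("CER:", jiwer.cer(reference, hypothesis))  # character-level edit rate, lower is better
```

Note that WER is normalized by the reference length, so a hypothesis with many insertions can score above 1.0, as seen for some of the general-purpose Whisper checkpoints in the table above.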
## Evaluation Details

### Test Set
- The models were evaluated on the validation split of the Moroccan-Darija-Youtube-Commons-Eval dataset
- Total number of test samples: 105
- Audio format: 16kHz mono PCM
- Language: Moroccan Darija
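For reference, a split like this can be pulled with the `datasets` library. The snippet below is only a sketch: the hub repo id and the `audio` column name are assumptions, so adjust them to the actual namespace.

```python
# Sketch of loading the evaluation split at 16 kHz mono.
# "<namespace>/Moroccan-Darija-Youtube-Commons-Eval" is a placeholder repo id.
from datasets import Audio, load_dataset

eval_set = load_dataset("<namespace>/Moroccan-Darija-Youtube-Commons-Eval", split="validation")
eval_set = eval_set.cast_column("audio", Audio(sampling_rate=16000, mono=True))  # decode as 16 kHz mono
print(len(eval_set))  # 105 samples were used for the numbers above
```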
### Computation Method

- Metrics are computed using the `jiwer` library
- All audio samples are normalized and resampled to 16kHz before transcription
- Ground truth transcriptions are compared with model predictions using space-separated word comparison
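Putting the pieces together, a scoring loop along these lines follows the procedure described above. It is a sketch rather than the exact script behind the leaderboard, and it assumes the `eval_set` from the previous snippet with `audio` and `text` columns.

```python
# Illustrative WER/CER evaluation loop (a sketch, not the leaderboard's exact script).
import jiwer
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="BounharAbdelaziz/Morocco-Darija-STT-large-v1.2",  # any model from the table works here
)

references, predictions = [], []
for sample in eval_set:  # eval_set as loaded in the Test Set sketch above
    audio = sample["audio"]
    out = asr({"array": audio["array"], "sampling_rate": audio["sampling_rate"]})
    references.append(sample["text"])  # the "text" column name is an assumption
    predictions.append(out["text"])

print("WER:", jiwer.wer(references, predictions))
print("CER:", jiwer.cer(references, predictions))
```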
## Currently evaluated models
- "BounharAbdelaziz/Morocco-Darija-STT-tiny"
- "BounharAbdelaziz/Morocco-Darija-STT-small"
- "BounharAbdelaziz/Morocco-Darija-STT-large-v1.2"
- "openai/whisper-large-v3-turbo"
- "openai/whisper-large-v3"
- "boumehdi/wav2vec2-large-xlsr-moroccan-darija"
- "abdelkader12/whisper-small-ar"
- "ychafiqui/whisper-medium-darija"
- "ychafiqui/whisper-small-darija"
- ...please add yours after eval...
## Data Format

Each row in the dataset contains:

```python
{
    'model': str,           # model identifier (Hugging Face repo id)
    'wer': float,           # Word Error Rate (lower is better; can exceed 1.0 when insertions dominate)
    'cer': float,           # Character Error Rate (lower is better)
    'timestamp': datetime,  # when the evaluation result was logged
}
```
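To turn the rows into a leaderboard, one option is to load them into pandas and keep the best (lowest-WER) entry per model. The snippet below inlines two rows from the table above purely for illustration.

```python
# Minimal sketch: ranking result rows by WER with pandas.
import pandas as pd

rows = [  # two rows copied from the table above, for illustration
    {"model": "BounharAbdelaziz/Morocco-Darija-STT-large-v1.2", "wer": 0.040915, "cer": 0.012667},
    {"model": "openai/whisper-large-v3", "wer": 0.921781, "cer": 0.500122},
]
df = pd.DataFrame(rows)
leaderboard = (
    df.sort_values("wer")        # best WER first
      .drop_duplicates("model")  # keep one entry per model
      .reset_index(drop=True)
)
print(leaderboard)
```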