Review (#2), opened by sos12

My review of the paper and its training methodology, together with the provided translation-quality example, leads me to believe the translation wasn't successful. I suspect the 200-token context limit hindered the translation model's ability to determine context accurately and connect paragraphs cohesively. I fear that training an SLM on this translated data might inflict "linguistic damage" on the resulting model.
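To make the concern concrete, here is a minimal sketch of a generic chunk-and-translate pattern, assuming a 200-token window and using `Helsinki-NLP/opus-mt-en-ar` purely for illustration; the paper's actual translation model and pipeline may differ. Each window is translated in isolation, so pronouns and paragraph transitions lose their surrounding context.

```python
from transformers import AutoTokenizer, pipeline

# Illustrative model choice only; not the model used to build FineWeb-Edu-Ar.
MODEL = "Helsinki-NLP/opus-mt-en-ar"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
translator = pipeline("translation", model=MODEL, tokenizer=tokenizer)

def translate_in_windows(text: str, max_tokens: int = 200) -> str:
    """Translate a document as independent ~200-token windows.

    Each window is decoded and translated without seeing its neighbours,
    which is exactly the loss of cross-paragraph context described above.
    """
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    windows = [ids[i:i + max_tokens] for i in range(0, len(ids), max_tokens)]
    translated = []
    for window in windows:
        chunk = tokenizer.decode(window, skip_special_tokens=True)
        translated.append(translator(chunk, max_length=512)[0]["translation_text"])
    return " ".join(translated)
```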

Reply from the KAUST Center of Excellence in Generative AI org:

Dear @sos12, thank you for your interest in our dataset and your valuable findings. We realize that FineWeb-Edu-Ar alone is not enough to train a good SLM. We encourage users to rely on it primarily in the first stages of pretraining, followed by a stage of native Arabic corpora to improve alignment with native Arabic speakers.
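As an illustration of that staged setup, here is a minimal sketch using the Hugging Face `datasets` library; the repository ID, the native Arabic corpus name, the `text` column, and the 80/20 mixture weights are all assumptions for illustration, not a prescribed recipe.

```python
from datasets import load_dataset, interleave_datasets

# Assumed repository ID for FineWeb-Edu-Ar and a placeholder native corpus.
FINEWEB_EDU_AR = "kaust-generative-ai/fineweb-edu-ar"
NATIVE_AR_CORPUS = "your-org/native-arabic-corpus"

# Stage 1: pretrain primarily on the translated corpus (streamed to avoid
# downloading the full dump up front).
stage1 = load_dataset(FINEWEB_EDU_AR, split="train", streaming=True)

# Stage 2: continue pretraining on a mixture weighted toward native Arabic
# text to improve alignment with native speakers.
native = load_dataset(NATIVE_AR_CORPUS, split="train", streaming=True)
stage2 = interleave_datasets([native, stage1], probabilities=[0.8, 0.2], seed=42)

# Peek at a few stage-2 examples (assumes a "text" column).
for example in stage2.take(3):
    print(example["text"][:200])
```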
