Fine-tuning Gemma-2b-it for the precision task of transliteration
#113 opened by grishi911991
I'm facing issues fine-tuning the gemma-2b-it model for a transliteration task.
I want to create a LoRA for transliterating from Roman to Devanagari script and vice versa, but I have had no success even after trying multiple combinations of the following (a sketch of my setup follows the list):
- LoRA rank
- weight decay
- target modules
- LoRA dropout rate
- etc.
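
For reference, this is roughly the setup I am varying (a minimal sketch using the PEFT library; the model ID is real, but the specific values shown are just one of the combinations I tried):

```python
# Minimal sketch of my training setup; exact hyperparameter values
# differ between runs. Assumes the transformers and peft libraries.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")

lora_config = LoraConfig(
    r=16,                # LoRA rank (one of the values I swept)
    lora_alpha=32,       # LoRA scaling factor
    lora_dropout=0.05,   # LoRA dropout rate (also swept)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
# Weight decay is set separately, in the trainer's TrainingArguments.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```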
When I run inference with the resulting LoRA in oobabooga, it does not generate the desired result; it just repeats the user's content. With similar training parameters, a LoRA trained for the same task on the gemma-9b model works fine.
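
For context, this is the prompt shape I believe gemma-2b-it expects at inference time (a sketch built with the model's chat template in transformers; I am not sure oobabooga wraps the input the same way, which could be related to the repetition):

```python
# Sketch: render the Gemma chat template for a sample transliteration
# prompt. The example input text is hypothetical.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
messages = [{"role": "user", "content": "Transliterate to Devanagari: namaste"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Roughly:
# <bos><start_of_turn>user
# Transliterate to Devanagari: namaste<end_of_turn>
# <start_of_turn>model
```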
Can someone help here or share some thoughts?