Tokenization issues with the initial configs provided.
Reverted the config files back to the original bmo's, as I was seeing endless generation with the ChatML-modified configs. The way I swapped the embeddings out in the ChatML-ification was kind of hacky and still very experimental (token surgery using violet_twighlight-v2 as the donor model to approximate ChatML embeddings, replacing the embeddings inside the original bmo). [Quants will need to be redone.] @bartowski @Lewdiculous (It should still work in ChatML during inference with the reverted changes, since the embeddings themselves should reflect the ChatML approximations due to the 'surgery' and the merge itself.)
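For anyone curious, the token surgery described above can be sketched roughly as follows. This is a minimal illustration, not the actual script used: it assumes both models share the same tokenizer vocabulary, and the embedding matrices, token ids, and the `transplant_embeddings` helper are all hypothetical stand-ins (toy NumPy arrays here; a real run would operate on the models' `embed_tokens` weights).

```python
import numpy as np

def transplant_embeddings(base_emb, donor_emb, token_ids):
    """Copy the embedding rows for the given token ids from the donor
    model's embedding matrix into a copy of the base model's matrix."""
    patched = base_emb.copy()
    for tid in token_ids:
        patched[tid] = donor_emb[tid]
    return patched

# Toy example: an 8-token vocab with 4-dim embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 4))   # stand-in for the original model's embeddings
donor = rng.normal(size=(8, 4))  # stand-in for the donor's embeddings

# Hypothetical ids for the ChatML control tokens (<|im_start|>, <|im_end|>).
chatml_ids = [6, 7]
patched = transplant_embeddings(base, donor, chatml_ids)
```

The rest of the base embeddings are left untouched, so only the ChatML control tokens pick up the donor's representations.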
That explains it, yeah... It was way too late and I was way too sleepy already to really test much or say anything, haha. Thanks for the heads up.
Sorry for all the trouble to both of you. I'll make sure to mark the models with an experimental tag until they're done with the full suite of testing going forward.
All good!
v2 already finishing uploads on my end.