Incorrect links to the models.
All the links to the XXL models in this README point to the Large model, and vice versa.
Also, I'd love to see memory footprint data in the README so I can better allocate my scarce resources :)
Thanks! Just fixed the links.
As for the memory footprint, please take a look at this GitHub issue, where it's discussed for the MetricX-24 models (which I recommend you use instead of the 23 models): https://github.com/google-research/metricx/issues/6. If you do require the 23 models, note that the standard variants of the 24 models (not the bfloat16 ones) have the same memory footprint as the 23 models.
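For a rough sense of the footprint before the linked issue is consulted, the weight-only memory of a model can be approximated from its parameter count and dtype width (float32 = 4 bytes, bfloat16 = 2 bytes). This is only a sketch: the 13B parameter count below is a hypothetical XXL-sized example, and the estimate ignores activations, the KV/attention workspace, and framework overhead, so real usage will be higher.

```python
def estimate_model_memory_gb(num_params: int, bytes_per_param: int) -> float:
    """Weight-only memory estimate in GiB (excludes activations and overhead)."""
    return num_params * bytes_per_param / 1024**3

# Hypothetical XXL-sized model with 13B parameters:
fp32_gb = estimate_model_memory_gb(13_000_000_000, 4)   # float32 weights
bf16_gb = estimate_model_memory_gb(13_000_000_000, 2)   # bfloat16 weights
print(f"float32: ~{fp32_gb:.1f} GiB, bfloat16: ~{bf16_gb:.1f} GiB")
```

The point of the arithmetic is simply that a bfloat16 checkpoint halves the weight memory relative to float32, which is why the bfloat16 variants of the 24 models fit on smaller GPUs.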
Thanks. I will give it a try. Anything over 48GB is a bit hard to obtain. :( I'm evaluating all the unique pairs from the OPUS parallel corpora for en-uk using different QE models, and 55M parallel sentences took me around 10 days on 2xA6000 GPUs (using the cometkiwi-xxl models).
Yes, I understand. I think the XL or Large models should have a significantly lower memory footprint, especially the bfloat16 Large variants of the 24 models. Good luck!