Update README.md
README.md
```diff
@@ -70,7 +70,7 @@ The below model scripts can be used for any of the above TTM models. Please upda
 TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, LLMTime in zero/fewshot forecasting while reducing computational requirements significantly.
 Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
 adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) TTM-Q referred in the paper maps to the `512-96` model
-uploaded in the main branch.
+uploaded in the main branch. For other variants (TTM-B, TTM-E and TTM-A) please refer [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). For more details, refer to the paper.
 
 ## Recommended Use
 1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
```
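To make the "Recommended Use" item in the diff above concrete, below is a minimal sketch of per-channel standard scaling applied before feeding data to TTM. The CSV file, column names, and 70/30 split are illustrative assumptions, not part of the repository; in practice the linked TimeSeriesPreprocessor (TSP) utility in `tsfm_public` is the supported way to do this.

```python
# Minimal sketch (assumptions): standard-scale every channel independently
# using statistics computed on the training split only, then feed the scaled
# values to the TTM model. File name, columns, and split ratio are illustrative.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ETTh1.csv", parse_dates=["date"])   # hypothetical multivariate dataset
channels = [c for c in df.columns if c != "date"]      # every forecasting channel

train_df = df.iloc[: int(0.7 * len(df))]               # fit the scaler on training data only

scaler = StandardScaler()                              # scales each column independently
scaler.fit(train_df[channels])

df[channels] = scaler.transform(df[channels])          # scaled values are what TTM should see
# After forecasting, use scaler.inverse_transform(...) to map predictions back
# to the original units.
```

Fitting the scaler on the training split only avoids leaking test-set statistics into the model input, which is the same behavior the TSP utility provides when its scaling option is enabled.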