- **512-96:** Given the last 512 time-points (i.e., the context length), this model can forecast up to the next 96 time-points (i.e., the forecast length). It is targeted at a forecasting setting of context length 512 and forecast length 96, and is recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). This model refers to the TTM-Q variant used in the paper. (branch name: `main`) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm-r1_benchmarking_512_96.ipynb)
- **1024-96:** Given the last 1024 time-points (i.e., the context length), this model can forecast up to the next 96 time-points (i.e., the forecast length). It is targeted at a long forecasting setting of context length 1024 and forecast length 96, and is recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: `1024-96-v1`) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm-r1_benchmarking_1024_96.ipynb)
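Concretely, a "512-96" setting means inference consumes exactly the last 512 observations and emits 96 predictions. A minimal sketch of that windowing on a toy series (shapes only — the repeat-last-value "forecast" is a stand-in, not the model):

```python
import numpy as np

CONTEXT_LEN, FORECAST_LEN = 512, 96

series = np.arange(1000, dtype=np.float32)  # toy univariate series

# The model only sees the most recent CONTEXT_LEN points of history...
context = series[-CONTEXT_LEN:]
assert context.shape == (CONTEXT_LEN,)

# ...and emits FORECAST_LEN future points. Stub "forecast": repeat last value.
forecast = np.full(FORECAST_LEN, context[-1])
assert forecast.shape == (FORECAST_LEN,)
```

If your history is shorter than the context length, pad or choose the smaller-context variant; extra history beyond the context length is simply not used.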
The model scripts below can be used with any of the above TTM models. Update the HF model URL and branch name in the `from_pretrained` call to pick the model of your choice.
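For example, branch selection might be organized like the sketch below. The variant table mirrors the branch names listed above; the commented-out loading call is illustrative — the exact model class and repository id come from the `tsfm_public` package and this model card, and loading requires that package plus network access:

```python
# Map each TTM variant to its geometry and HF branch (from the model card above).
TTM_VARIANTS = {
    "512-96": {"context_length": 512, "forecast_length": 96, "branch": "main"},
    "1024-96": {"context_length": 1024, "forecast_length": 96, "branch": "1024-96-v1"},
}

def pick_variant(history_length: int) -> str:
    """Pick the variant with the largest context length the history can fill."""
    usable = [
        name for name, cfg in TTM_VARIANTS.items()
        if cfg["context_length"] <= history_length
    ]
    if not usable:
        raise ValueError("Need at least 512 historical time-points for TTM.")
    return max(usable, key=lambda n: TTM_VARIANTS[n]["context_length"])

# With 800 points of history, only the 512-96 variant fits entirely.
variant = pick_variant(800)
branch = TTM_VARIANTS[variant]["branch"]

# Illustrative loading call (class name and repo id are assumptions here;
# see the notebooks linked below for the exact invocation):
# model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM", revision=branch)
```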
- Getting Started [[colab]](https://colab.research.google.com/github/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
- Zeroshot Multivariate Forecasting [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
- Finetuned Multivariate Forecasting:
  - Channel-Independent Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb) [[Finetuning]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
  - Channel-Mix Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_channel_mix_finetuning.ipynb)
- **New Releases (extended features released in October 2024)**
  - Finetuning and Forecasting with Exogenous/Control Variables [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
  - Finetuning and Forecasting with Static Categorical Features [Example: To be added soon]
  - Rolling Forecasts: extend forecast lengths beyond 96 via the rolling capability [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_rolling_prediction_getting_started.ipynb)
  - Helper scripts for optimal learning-rate suggestions for finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
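The rolling-forecast idea above can be sketched in a few lines: repeatedly forecast one native horizon (96 points), then feed those predictions back in as pseudo-history to cover a longer horizon. This is a generic illustration with a stand-in forecaster, not the library's API — see the linked notebook for the actual TTM rolling utilities:

```python
from typing import Callable, List

def rolling_forecast(
    history: List[float],
    forecast_fn: Callable[[List[float]], List[float]],
    horizon: int,
    step: int = 96,
) -> List[float]:
    """Extend a fixed-horizon forecaster beyond its native horizon by
    repeatedly forecasting `step` points and appending them to the context."""
    context = list(history)
    out: List[float] = []
    while len(out) < horizon:
        chunk = forecast_fn(context)[:step]  # one native-horizon forecast
        out.extend(chunk)
        context.extend(chunk)  # predictions become pseudo-history
    return out[:horizon]

# Stand-in for a TTM call: a naive forecaster repeating the last value.
naive = lambda ctx: [ctx[-1]] * 96

# 192 = two rolls of the native 96-step horizon.
preds = rolling_forecast([1.0] * 512, naive, horizon=192)
```

Note that errors compound across rolls, since later windows condition on earlier predictions rather than observed data, so accuracy degrades as the effective horizon grows.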
## Benchmarks

TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, and LLMTime in zero-shot/few-shot forecasting while significantly reducing computational requirements.