lixinhao committed (verified)
Commit 34f1e61 · 1 Parent(s): 129bf29

Update README.md


update transformers==4.40.1

Files changed (1)
  1. README.md +1 -1
README.md CHANGED

@@ -95,7 +95,7 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 
 First, you need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) and some other modules. We provide a simple installation example below:
 ```
-pip install transformers==4.39.2
+pip install transformers==4.40.1
 pip install av
 pip install imageio
 pip install decord
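
Below is a minimal, hypothetical post-install sanity check (not part of the README) for the dependencies named in the install example; it only verifies that the pinned transformers version is in place and that the other modules import. The actual model-loading code is in the rest of the README, which this diff does not show.

```python
# Hypothetical sanity check; assumes the packages from the README's install
# example (transformers==4.40.1, av, imageio, decord) plus flash-attn are
# already installed in the current environment.
import importlib

import transformers

# The commit pins transformers to 4.40.1; warn if a different version is installed.
expected = "4.40.1"
if transformers.__version__ != expected:
    print(f"Warning: transformers=={transformers.__version__}, README expects {expected}")

# Confirm the video/IO dependencies and flash attention are importable.
for module in ("av", "imageio", "decord", "flash_attn"):
    importlib.import_module(module)
    print(f"{module} imported OK")
```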