lixinhao committed · verified
Commit 9c4dc80 · 1 Parent(s): af0ebd1

Update README.md

Files changed (1): README.md +7 -3
README.md CHANGED
@@ -94,10 +94,14 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 ## 🚀 How to use the model
 
 
-### Generation
-
-We provide the simple generation process for using our model. For more details, you could refer to [Github](https://github.com/LLaVA-VL/LLaVA-NeXT).
 
+We provide the simple conversation process for using our model. You need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) to use our visual encoder.
+```
+pip install transformers==4.39.2
+pip install timm
+pip install flash-attn --no-build-isolation
+```
+Then you could use our model:
 ```python
 from transformers import AutoModel, AutoTokenizer
 