Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023
This model is fine-tuned from LLaMA-2-7B using LoRA for document reranking. It accepts inputs of up to 4096 tokens.
## Training Data
The model is fine-tuned on the training split of the [MS MARCO Document Ranking](https://microsoft.github.io/msmarco/Datasets) dataset for 1 epoch.
Please check our paper for details.
## Usage
Below is an example of computing the similarity score of a query-document pair:
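A minimal sketch of one way to do this with Hugging Face `transformers` and `peft`, assuming the checkpoint is a LoRA adapter over a sequence-classification head and that inputs use `query:` / `document:` prefixes; the checkpoint id, prefixes, and helper names here are illustrative assumptions, so check the model card and paper for the exact values.

```python
def format_pair(query, title, body):
    # Assumed input convention: role prefixes on both sides of the pair.
    return f'query: {query}', f'document: {title} {body}'

def get_model(peft_model_name, num_labels=1):
    # Heavy dependencies are imported lazily so the formatting helper
    # above stays usable without torch/transformers/peft installed.
    from peft import PeftConfig, PeftModel
    from transformers import AutoModelForSequenceClassification

    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModelForSequenceClassification.from_pretrained(
        config.base_model_name_or_path, num_labels=num_labels)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()  # fold the LoRA adapter into the base weights
    model.eval()
    return model

def score(model, tokenizer, query, title, body, max_length=4096):
    import torch

    q, d = format_pair(query, title, body)
    inputs = tokenizer(q, d, truncation=True, max_length=max_length,
                       return_tensors='pt')
    with torch.no_grad():
        logits = model(**inputs).logits
    # A single relevance logit; higher means more relevant.
    return logits[0][0].item()
```

To use it, load the tokenizer from the LLaMA-2 base model with `AutoTokenizer.from_pretrained`, build the reranker with `get_model(<this checkpoint's repo id>)`, and call `score(...)` on each query-document pair.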