pdsdpo committed · verified
Commit 87c64e8 · Parent(s): be03089

Update README.md

Files changed (1): README.md (+7 -5)
README.md CHANGED
@@ -35,10 +35,12 @@ PDS-DPO-7B is a vision-language model built upon LLaVA 1.5 7B and trained using
 
 ## Citation
 ```bibtex
-@article{2024pdsdpo
-title={Multimodal Preference Data Synthetic Alignment with Reward Model},
-author={},
-journal={},
-year={}
+@misc{wijaya2024multimodalpreferencedatasynthetic,
+  title={Multimodal Preference Data Synthetic Alignment with Reward Model},
+  author={Robert Wijaya and Ngoc-Bao Nguyen and Ngai-Man Cheung},
+  year={2024},
+  eprint={2412.17417},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV}
 }
 ```