pdsdpo committed · verified
Commit 8c4e4b6 · 1 Parent(s): 87c64e8

Update README.md

Files changed (1): README.md (+5 −7)
````diff
@@ -35,12 +35,10 @@ PDS-DPO-7B is a vision-language model built upon LLaVA 1.5 7B and trained using
 
 ## Citation
 ```bibtex
-@misc{wijaya2024multimodalpreferencedatasynthetic,
-  title={Multimodal Preference Data Synthetic Alignment with Reward Model},
-  author={Robert Wijaya and Ngoc-Bao Nguyen and Ngai-Man Cheung},
-  year={2024},
-  eprint={2412.17417},
-  archivePrefix={arXiv},
-  primaryClass={cs.CV}
+@article{wijaya2024multimodal,
+  title={Multimodal Preference Data Synthetic Alignment with Reward Model},
+  author={Wijaya, Robert and Nguyen, Ngoc-Bao and Cheung, Ngai-Man},
+  journal={arXiv preprint arXiv:2412.17417},
+  year={2024}
 }
 ```
````