pdsdpo committed · verified
Commit bc4fe82 · 1 Parent(s): c84df76

Update README.md

Files changed (1): README.md (+5 −7)
README.md CHANGED
@@ -36,12 +36,10 @@ PDS-DPO-7B is a vision-language model built upon LLaVA 1.5 7B and trained using

 ## Citation
 ```bibtex
-@misc{wijaya2024multimodalpreferencedatasynthetic,
-  title={Multimodal Preference Data Synthetic Alignment with Reward Model},
-  author={Robert Wijaya and Ngoc-Bao Nguyen and Ngai-Man Cheung},
-  year={2024},
-  eprint={2412.17417},
-  archivePrefix={arXiv},
-  primaryClass={cs.CV}
+@article{wijaya2024multimodal,
+  title={Multimodal Preference Data Synthetic Alignment with Reward Model},
+  author={Wijaya, Robert and Nguyen, Ngoc-Bao and Cheung, Ngai-Man},
+  journal={arXiv preprint arXiv:2412.17417},
+  year={2024}
 }
 ```