PDS DPO (pdsdpo)
Website: https://pds-dpo.github.io/
AI & ML interests: None yet
Recent Activity
- Upvoted a paper 19 days ago: Multimodal Preference Data Synthetic Alignment with Reward Model
- Updated a dataset 19 days ago: pdsdpo/pdsdpo-v1_0-data
- Updated a model 23 days ago: pdsdpo/PDS-DPO-7B
Organizations: None yet
Models (2)
pdsdpo/PDS-DPO-7B · Image-Text-to-Text · Updated 23 days ago · 29 downloads · 1 like
pdsdpo/PDS-DPO-7B-LoRA · Image-Text-to-Text · Updated 23 days ago · 24 downloads · 1 like
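Both checkpoints are tagged for the image-text-to-text task, so they should be reachable through the standard Transformers tooling. Below is a minimal sketch, assuming pdsdpo/PDS-DPO-7B is compatible with the generic image-text-to-text pipeline available in recent Transformers releases; the image URL and prompt are placeholders, and the model card may document a different, LLaVA-style loading recipe.

```python
from transformers import pipeline

# Sketch only: assumes the checkpoint works with the generic
# image-text-to-text pipeline; check the model card for the exact recipe.
pipe = pipeline("image-text-to-text", model="pdsdpo/PDS-DPO-7B")

# Chat-style input: one user turn carrying an image plus a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=64)
print(outputs[0]["generated_text"])
```

The -LoRA repository presumably holds adapter weights rather than a full checkpoint; those would typically be attached to a base model with the peft library instead of being loaded directly as above.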
Datasets (1)
pdsdpo/pdsdpo-v1_0-data · Dataset Viewer available · Updated 19 days ago · 23k downloads · 48 likes
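Since the repository has the Dataset Viewer enabled, it can be pulled with the standard datasets library. A minimal sketch follows; the "train" split name is an assumption, so verify the actual configuration in the dataset viewer.

```python
from datasets import load_dataset

# Sketch only: "train" is an assumed split name; confirm in the dataset viewer.
ds = load_dataset("pdsdpo/pdsdpo-v1_0-data", split="train")

print(ds)      # column names and row count
print(ds[0])   # inspect one example record
```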