PDS DPO (pdsdpo)
https://pds-dpo.github.io/
AI & ML interests: None yet
Recent Activity
• Upvoted a paper 20 days ago: Multimodal Preference Data Synthetic Alignment with Reward Model
• Updated a dataset 20 days ago: pdsdpo/pdsdpo-v1_0-data
• Updated a model 23 days ago: pdsdpo/PDS-DPO-7B
Organizations: None yet
pdsdpo's activity
Liked 2 models about 1 month ago:
pdsdpo/PDS-DPO-7B: Image-Text-to-Text • Updated 23 days ago • 29 downloads • 1 like
pdsdpo/PDS-DPO-7B-LoRA: Image-Text-to-Text • Updated 23 days ago • 24 downloads • 1 like
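
The model and dataset repos listed above can be fetched with standard Hugging Face tooling. Below is a minimal sketch, assuming the repos are public and that pdsdpo/pdsdpo-v1_0-data ships in a layout the datasets library can parse; only the repo IDs come from this page, everything else is illustrative.

# A minimal sketch: download the PDS-DPO-7B model files and load the
# preference dataset. Repo IDs are taken from the profile listings above.
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download the full model repository into the local Hugging Face cache
# and get the path to the downloaded snapshot.
model_path = snapshot_download(repo_id="pdsdpo/PDS-DPO-7B")
print(f"Model files downloaded to: {model_path}")

# Load the dataset (assumes it is in a datasets-loadable format such as
# Parquet or JSON; inspect the repo's dataset card to confirm splits/fields).
ds = load_dataset("pdsdpo/pdsdpo-v1_0-data")
print(ds)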