Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings. arXiv:2308.00862. Published Aug 1, 2023.
D2PO: Discriminator-Guided DPO with Response Evaluation Models. arXiv:2405.01511. Published May 2, 2024.
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback. arXiv:2406.09279. Published Jun 13, 2024.
WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs. arXiv:2406.18495. Published Jun 26, 2024.
Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence. arXiv:2405.15802. Published May 17, 2024.
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models. arXiv:2409.17146. Published Sep 25, 2024.
Bridging the Data Provenance Gap Across Text, Speech and Video. arXiv:2412.17847. Published Dec 2024.
The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization. arXiv:2403.17031. Published Mar 24, 2024.
Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models. arXiv:2410.18252. Published Oct 23, 2024.
TÜLU 3: Pushing Frontiers in Open Language Model Post-Training. arXiv:2411.15124. Published Nov 22, 2024.
RouterRetriever: Exploring the Benefits of Routing over Multiple Expert Embedding Models. arXiv:2409.02685. Published Sep 4, 2024.
Establishing Task Scaling Laws via Compute-Efficient Model Ladders. arXiv:2412.04403. Published Dec 5, 2024.