Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? — Paper 2404.03411 — Published Apr 4, 2024
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities — Paper 2406.01637 — Published Jun 2, 2024