- SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models. arXiv:2406.12274, published Jun 18, 2024.
- Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations. arXiv:2406.11801, published Jun 17, 2024.
- How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries. arXiv:2402.15302, published Feb 23, 2024.
- Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models. arXiv:2401.10647, published Jan 19, 2024.
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models. arXiv:2410.12880, published Oct 15, 2024.