Mitigating Hallucinations of Large Language Models via Knowledge Consistent Alignment (arXiv:2401.10768, published Jan 19, 2024)