Handling Hallucination in LLMs: Reducing Factually Incorrect Outputs in Sensitive Domains
Keywords:
LLM, Hallucination, Fact-Checking, RLHF, Knowledge Graph
Abstract
Hallucination, or the generation of factually incorrect information, is a prevalent issue in Large Language Models (LLMs), especially in fact-sensitive tasks such as news generation and scientific writing. This paper explores various strategies to mitigate hallucination in LLM outputs, focusing on the integration of fact-checking modules, reinforcement learning from human feedback (RLHF), and knowledge graph grounding.


