Handling Hallucination in LLMs: Reducing Factually Incorrect Outputs in Sensitive Domains

Authors

  • Vinay Kumar Maginam

Keywords:

LLM, Hallucination, Fact-Checking, RLHF, Knowledge Graph

Abstract

Hallucination, or the generation of factually incorrect information, is a prevalent issue in Large Language Models (LLMs), especially in fact-sensitive tasks such as news generation and scientific writing. This paper explores various strategies to mitigate hallucination in LLM outputs, focusing on the integration of fact-checking modules, reinforcement learning from human feedback (RLHF), and knowledge graph grounding.
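The abstract names fact-checking modules and knowledge graph grounding among the mitigation strategies. As a rough illustration only, and not the paper's actual implementation, the Python sketch below shows how a generated claim might be checked against a small set of knowledge-graph triples; the toy triples and the naive extract_triple heuristic are assumptions made for this example.

    # Hypothetical sketch: verifying an LLM-generated claim against a toy knowledge graph.
    # The triples, claim format, and extract_triple() heuristic are illustrative assumptions.
    from typing import Optional, Tuple

    # Toy knowledge graph stored as (subject, relation, object) triples.
    KNOWLEDGE_GRAPH = {
        ("insulin", "treats", "diabetes"),
        ("aspirin", "treats", "headache"),
    }

    def extract_triple(claim: str) -> Optional[Tuple[str, str, str]]:
        """Very naive triple extraction: expects claims of the form 'X treats Y'."""
        words = claim.lower().rstrip(".").split()
        if "treats" in words:
            i = words.index("treats")
            return (" ".join(words[:i]), "treats", " ".join(words[i + 1:]))
        return None

    def check_claim(claim: str) -> str:
        """Label a generated claim as supported, contradicted, or unverifiable."""
        triple = extract_triple(claim)
        if triple is None:
            return "unverifiable"
        return "supported" if triple in KNOWLEDGE_GRAPH else "contradicted"

    if __name__ == "__main__":
        for claim in ["Insulin treats diabetes.", "Aspirin treats diabetes."]:
            print(claim, "->", check_claim(claim))

In a real pipeline, the triple extraction and the knowledge source would be far richer (e.g., a curated domain knowledge graph), but the post-generation verification step follows the same pattern: extract a checkable claim, look it up, and flag anything unsupported.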

References

Brown, T., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33, 1877–1901.

Gururangan, S., et al. (2020). Don't stop pretraining: Adapt language models to domains and tasks. Proceedings of the 58th Annual Meeting of the ACL, 8342–8355.

Published

2024-05-20

How to Cite

Vinay Kumar Maginam. (2024). Handling Hallucination in LLMs: Reducing Factually Incorrect Outputs in Sensitive Domains. Journal of Computational Analysis and Applications (JoCAAA), 33(05), 1926–1936. Retrieved from https://www.eudoxuspress.com/index.php/pub/article/view/3086

Section

Articles