LightHouse: A Survey of AGI Hallucination

Kavli Affiliate: Feng Wang

| First 5 Authors: Feng Wang

| Summary:

With the development of artificial intelligence, large-scale models have
become increasingly intelligent. However, numerous studies indicate that
hallucinations in these large models are a bottleneck hindering the progress
of AI research. In the pursuit of strong artificial intelligence, substantial
research effort is being invested in AGI (Artificial General Intelligence)
hallucination research. Prior work has explored hallucinations within LLMs
(Large Language Models), but for multimodal AGI, hallucination research is
still at an early stage. To advance research on hallucinatory phenomena, we
present a bird’s eye view of hallucinations in AGI, summarizing current work
on AGI hallucinations and proposing directions for future research.

| Search Query: ArXiv Query: search_query=au:”Feng Wang”&id_list=&start=0&max_results=3
