New AI chatbots such as ChatGPT show a disturbing tendency to produce false statements. This poses a significant challenge for businesses, organizations, and students who rely on generative AI for tasks as consequential as medicine, psychotherapy, legal writing, and news reporting.
Developers of large language models, including Anthropic and OpenAI, acknowledge the problem of hallucination, or fabrication, in AI-generated content and are actively working to make their models more truthful. However, the timeline for those improvements remains uncertain, as does whether the models will ever be suitable for critical tasks such as providing medical advice.
Linguistics experts, including Emily Bender, emphasize an inherent mismatch between the technology and its intended applications, suggesting that the challenge of generative AI's reliability may never be fully solved. The implications are significant, because AI is projected to have a substantial economic and social impact.
Sam Altman, CEO of OpenAI, acknowledges the challenge of striking a balance between creativity and accuracy within AI models. He remains optimistic about addressing the issue of hallucination. However, experts like Emily Bender express doubt that these improvements will fully resolve the problem.
Language models like ChatGPT are designed to predict word sequences based on their training data: they generate new passages by repeatedly selecting the most plausible next word. Because they optimize for plausibility rather than truth, they can produce accurate-sounding text whose errors go unnoticed, especially on obscure topics. For this reason, hallucination cannot be entirely eliminated. If humans lie, then AI, having learned from human text, will also lie.
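The next-word mechanism described above can be illustrated with a toy bigram model. This is a drastic simplification, not how ChatGPT or any real large language model is built; the corpus and function names are invented for illustration. It shows the core idea: the model picks whichever word most often followed the current one in training data, with no notion of truth.

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data (illustrative only).
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count bigram frequencies: how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Return the most plausible (most frequent) word after `prev`."""
    return counts[prev].most_common(1)[0][0]

def generate(start, n=5):
    """Greedily extend `start` by `n` most-plausible next words."""
    words = [start]
    for _ in range(n):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the"))
```

Note that every generated word is locally plausible, yet the model can happily wander into sentences that state nothing true about the world, which is the essence of hallucination.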
Although some optimists believe that AI models can learn to distinguish fact from fiction, research efforts continue to focus on identifying and eliminating hallucinated content. Notably, even Sam Altman himself admits that ChatGPT is not a trustworthy source of facts.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…