Navigating the Risks and Rewards of Generative AI Hallucinations


As the technology advances, marketers are incorporating ChatGPT, Google's Bard, Microsoft's Bing Chat, Meta AI, and their own large language models (LLMs) into their strategies. As these tools become more prevalent, however, the problem of "hallucinations" must be understood and mitigated.

IBM defines AI hallucination as a phenomenon in which an AI system, such as a large language model chatbot or a computer vision tool, perceives patterns or objects that do not exist, producing nonsensical or inaccurate outputs. This can happen when the model generates responses that are not grounded in its training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In simpler terms, the model "hallucinates" the response.

Suresh Venkatasubramanian, a professor at Brown University and co-author of the White House's Blueprint for an AI Bill of Rights, compares the issue to his son's storytelling at age four: just as the boy would keep spinning new stories when prompted, the model keeps generating responses, regardless of their accuracy.

The Frequency of Hallucinations

If hallucinations were rare, they might not be cause for significant concern. However, studies have shown that chatbots fabricate details in anywhere from 3% to 27% of interactions, despite efforts to prevent such errors.

Amr Awadallah, CEO of Vectara and a former Google executive, explains that even when asked to summarize as few as 10 to 20 supplied facts, a model can still introduce errors. This is a fundamental problem that must be addressed.

Furthermore, the rates of hallucinations may be even higher when the model is performing tasks beyond mere summarization.

What Marketers Can Do

Despite the potential challenges posed by hallucinations, generative AI offers many advantages. To reduce the likelihood of hallucinations occurring, we recommend the following:

  • Use generative AI as a starting point, not a substitute: It is a tool, not a replacement for human marketers. Use it to kick-start drafts and develop prompts that help complete your work, and make sure the final content aligns with your brand voice.
  • Cross-check LLM-generated content: Peer review and teamwork are crucial in catching any errors or inconsistencies.
  • Verify sources: While LLMs are designed to work with large volumes of information, some sources may not be credible. It is important for marketers to fact-check the information provided by the model.
  • Use LLMs tactically: Run drafts through generative AI to identify missing information, but always double-check any suggestions the model makes (see the sketch after this list).
  • Monitor developments: Stay updated on the latest advancements in AI to continuously improve the quality of outputs and be aware of any emerging issues, including hallucinations.
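
To make the cross-checking and tactical points concrete, here is a minimal sketch of a second pass that feeds a finished draft back to a model and asks it to flag claims a human editor should verify. It assumes the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, prompt wording, and helper function are illustrative choices, not a prescribed workflow.

```python
# Minimal sketch: run a finished draft back through a model to flag claims
# that a human editor should verify. Assumes the OpenAI Python SDK
# (openai>=1.0) and an OPENAI_API_KEY in the environment; the model name
# and prompt wording are illustrative assumptions, not from the article.
from openai import OpenAI

client = OpenAI()

REVIEW_INSTRUCTIONS = (
    "You are an editorial assistant. List every factual claim in the draft "
    "as a bullet point, and tag each statistic, date, name, or citation "
    "with VERIFY so a human editor knows to confirm it against a source."
)

def flag_claims_for_review(draft: str, model: str = "gpt-4o-mini") -> str:
    """Return a bullet-point checklist of claims in the draft to fact-check."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the review pass as repeatable as possible
        messages=[
            {"role": "system", "content": REVIEW_INSTRUCTIONS},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_claims_for_review(
        "Chatbots fabricate details in 3% to 27% of interactions."
    ))
```

Note that the review pass can itself hallucinate, so treat its output as a checklist for the human editor, not a verdict.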

The Potential Benefits of Hallucinations

While hallucinations pose real risks, they can also have value in marketing. According to Tim Hwang, CEO of FiscalNote, "LLMs are bad at everything we expect computers to be good at, and good at everything we expect computers to be bad at." In other words, while AI may be unreliable as a search tool, it excels at storytelling, creativity, and aesthetics.

This is where the concept of an LLM "hallucinating its own interface" comes into play. Marketers can hand the model a set of objects and ask it to evaluate them along dimensions that are not typically measurable, or that would be costly to measure by other means. The model "hallucinates" an answer, and those creative responses can help marketers enrich their brand identity.
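
As an illustration of this pattern, the snippet below asks a model to "score" taglines on a quality no analytics tool measures directly. The taglines, rating scale, and model name are all invented for the example, not taken from the article.

```python
# Minimal sketch of a model "hallucinating an interface": we hand it a set
# of objects and ask for a judgment no analytics tool measures directly.
# The taglines, scale, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

taglines = [
    "Taste the sunshine.",
    "Engineered for the everyday.",
    "Your story, bottled.",
]

prompt = (
    "Rate each tagline from 1 to 10 on how nostalgic it would feel to a "
    "thirty-something urban shopper, with one sentence of reasoning each:\n"
    + "\n".join(f"- {t}" for t in taglines)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The scores that come back are not measurements; they are fabrications. That is precisely the point: the output is creative stimulus to react to, not data to report.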

Emulating Consumer Perspectives

A recent application of hallucinations can be seen in the "Insights Machine", a platform that allows brands to create AI personas based on detailed target audience demographics. These AI personas interact as genuine individuals, offering diverse responses and viewpoints. While they may occasionally provide unexpected or hallucinatory responses, they primarily serve as catalysts for creativity and inspiration among marketers. Ultimately, it is the responsibility of humans to interpret and utilize these responses, highlighting the crucial role of hallucinations in these transformative technologies.
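
The Insights Machine's internals are not public, so the sketch below is only a generic approximation of how an AI persona can be driven by a system prompt; the persona details, question, and model name are invented for illustration.

```python
# Minimal sketch of an AI persona driven by a system prompt. Everything
# here (persona, question, model name) is an illustrative assumption; it
# is not how the Insights Machine is actually implemented.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are 'Maya', a 34-year-old working parent in Manchester who shops "
    "online weekly, compares prices carefully, and distrusts flashy "
    "advertising. Stay in character and answer in the first person."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What would make you try our new loyalty program?"},
    ],
)
print(response.choices[0].message.content)
```

Whatever "Maya" says, hallucinated or not, it remains the human marketer's job to interpret it and decide what to act on.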

Pini Yakuel, co-founder and CEO of Optimove, wrote this article.

Originally reported by Martech: https://martech.org/how-to-protect-against-and-benefit-from-generative-ai-hallucinations/