AI for Advisors newsletter
When I first started working with AI, one thing quickly became clear: Sometimes it confidently gets things wrong.
You’ve probably seen the stories: Lawyers submitting court briefs with fake citations, commencement speakers delivering invented quotes, and government officials presenting reports with non-existent data.
These so-called “hallucinations” are a real concern for any advisor using AI. If you’re not aware of them, or if you blindly trust AI outputs, you could end up passing along inaccurate information—something no professional wants to do.
The good news is that hallucinations are manageable. And with the right approach, you can dramatically reduce—or almost eliminate—them in your practice.
What are AI hallucinations?
Large language models like ChatGPT don’t “know” facts the way humans do. They predict the most likely next words based on patterns in their training data.
As a result, sometimes AI will fabricate information—citing non-existent laws, making up data, or giving polished but inaccurate answers. That’s what’s known as a hallucination.
It doesn’t happen because the AI is being “dishonest”; it happens because the model is doing what it was designed to do: predict language.
Why it matters for advisors
As advisors, we are held to a high standard of accuracy and professionalism. A wrong number, an incorrect interpretation of a tax rule, or a fabricated case study could damage client trust and potentially create compliance issues.
That’s why we must treat AI outputs with the same scrutiny we would apply to any source of information: review, verify, and refine.
Practical strategies to manage factuality risks
Always fact-check critical information: If you’re generating content that includes numbers, regulations, statistics, or anything technical, double-check it against reliable, independent sources before using it with clients.
Use AI for ideation, not final answers: Think of AI as a brainstorming partner or a first-draft assistant. Let it help you generate ideas, frameworks, or outlines—but reserve the role of fact-checker and final authority for yourself.
Be specific in your prompts: Vague prompts invite vague (and often inaccurate) answers. Clear, detailed prompts guide the AI toward more accurate responses.
For example, instead of asking, “Tell me about Roth IRAs,” you might ask, “List three commonly cited benefits of Roth IRAs according to IRS guidelines.”
Follow a structured prompting framework: One of the best ways to minimize hallucinations is to follow a disciplined prompting method. I use a structure called Role-Task-Format-Context-Questions-Examples (RTF-CQE); you'll see an illustrative example below. This is the process we teach advisors in our AI Powered Financial Advisor and AI Marketing for Advisors programs.
Recognize AI’s limits: No matter how polished the output looks, always remember: AI is a tool, not a truth-teller. Human judgment must always be the final filter.
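To make the framework concrete, here is a purely illustrative prompt. The scenario and details are hypothetical, not a template from the programs mentioned above:
"You are an experienced financial advisor writing for clients in their late 50s (role). Draft a short email explaining the difference between a traditional IRA and a Roth IRA (task). Keep it under 200 words, in plain language, and end with a three-bullet summary (format). My clients are conservative savers who already contribute to workplace retirement plans, and anything technical must be easy to verify against IRS guidance (context). Before you write, ask me any clarifying questions you have about tone or compliance language (questions). Match the style of the sample client email pasted below (examples)."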
By giving the AI a clear role to assume, a specific task to complete, a format for the response, relevant background context, clarifying questions, and real examples, you dramatically reduce the risk of hallucination. In fact, when I follow this structured approach, I almost never encounter hallucinations. Why?
The Creative-Accuracy Paradox
Well, here’s something interesting I’ve learned: The same underlying mechanism that can produce hallucinations is also what makes AI so powerful for creative and strategic thinking.
AI’s ability to make unexpected connections and generate novel combinations of ideas comes from that same pattern-prediction process that occasionally fabricates facts.
This trade-off is especially important when using AI for research. While AI excels at helping you explore ideas, it requires extra scrutiny when you're looking for specific data, citations, or regulatory details.
The model might confidently point you toward a "study" that doesn't exist or misstate a regulation based on patterns in its training data.
That said, the combination of the RTF-CQE framework and appropriate verification has made factual hallucinations a non-issue in my day-to-day practice.
How reframing the risk helps
Instead of fearing hallucinations, I learned to treat AI outputs like the work of a smart but inexperienced junior employee. There’s a lot of value there—but it needs oversight, refinement, and fact-checking.
Once I made that mental shift, I stopped worrying about AI “getting it wrong” and focused instead on how I could use its strengths without being trapped by its weaknesses.
So, yes, hallucinations can happen. But you can dramatically reduce their likelihood when you:
- Fact-check critical information,
- Use AI for ideation, not final answers,
- Craft specific prompts,
- Follow a structured prompting framework,
- And apply human judgment.
By navigating AI’s strengths and limitations effectively, you protect your professional credibility and unlock greater creativity, efficiency, and client value.