AI for Advisors newsletter
Many financial advisors who try AI and walk away disappointed say some version of the same thing: “It just doesn’t give me what I want.”
But spend a few minutes looking at how they’re using it, and a different picture emerges. The AI isn’t failing them. Their instructions are failing the AI. This is the quiet problem in AI adoption right now, and it’s entirely fixable.
AI is trained, not programmed
Before we get to solutions, it helps to understand why thin instructions produce thin results. AI language models don’t execute commands the way your CRM or Excel does. They don’t follow rigid logic trees. They predict responses based on patterns in language, which means the quality of what you get is directly shaped by the quality of what you give.
Consider the difference between these two prompts:
Version 1: “Explain Roth conversions.”
You’ll get a passable textbook summary. Accurate, probably. Useful for a specific client situation? Probably not.
Version 2: “Explain Roth conversions to a 62-year-old pre-retiree with $1.2M in assets. Compare three tax bracket scenarios over a five-year horizon. Include IRMAA implications, highlight risks of bracket creep, and flag items that require CPA coordination. Present in client-friendly language.”
Now you get intelligence. The model hasn’t changed. The precision has. This is the core shift in thinking that separates advisors who find AI genuinely useful from those who remain skeptical. Prompting is not about clever tricks or memorized formulas. It’s about structured thinking—the same kind of structured thinking that makes a good advisor good.
The 7 prompting best practices for advisors
These seven practices can be applied to virtually any AI task in your practice. You don’t need all seven every time. But when something isn’t working, you can almost always trace the failure back to a missing element or two.
1. Assign a role: Start by telling the model what perspective to work from. “Act as a CFP professional specializing in retirement income planning” sets the depth, vocabulary, and frame of reference for everything that follows. Without a role assignment, the model defaults to a general audience. Role stabilizes the response at the level you need.
2. Define the task or objective: Are you preparing internal analysis? Drafting client communication? Running a compliance review? Building a meeting agenda? The same topic generates very different outputs depending on what you’re trying to accomplish. Be explicit about the purpose before you describe the task. Good “task” statements typically begin with command verbs. Here are five strong command verbs with quick examples of how they’d appear in an advisor prompt:
Analyze: “Analyze this client’s asset allocation for sequence-of-returns risk given a 2027 retirement date.”
Draft: “Draft a client-facing email explaining why we’re reducing equity exposure in their portfolio.”
Compare: “Compare a Roth conversion strategy versus a qualified charitable distribution strategy for a 70-year-old with a $2M IRA.”
Summarize: “Summarize the key changes in the SECURE 2.0 Act that affect clients still in the accumulation phase.”
Identify: “Identify the top three planning opportunities for a married couple with a $4M estate and a closely held business.”
Note how the pattern is the same in each case: The verb tells the model exactly what cognitive operation to perform, which keeps the output focused and prevents the model from deciding on its own what kind of response to give.
3. Add client context: Age, asset level, income sources, goals, constraints, timeline—these details are what transform a generic answer into a relevant one. Without context, the model fills in blanks with assumptions, and those assumptions may not match your client’s reality. The more specific the context, the more specific the output. I like to think of context in terms of the five W’s: Who, what, when, where, and why.
4. Impose constraints: This is the practice many people skip, yet it often matters most in compliance-sensitive work. Constraints shape what the model will and won’t do. “No product recommendations.” “Flag uncertainty.” “Differentiate fact from interpretation.” “Note where CPA or attorney review is required.” These aren’t just stylistic preferences—they’re risk management. Constraints reduce the AI’s tendency toward overconfidence and scope creep.
5. Specify the format: Do you want a bullet list or a memo? A comparison table or a pros-and-cons framework? A client-facing agenda or an internal briefing? Format shapes usability. A well-structured output you can immediately use is worth far more than a thorough answer you have to spend 20 minutes reorganizing.
6. Provide an example: If you want the output to sound like your voice, your framing, or your level of formality, show the model how you write. Paste in an article or blog post, or copy text from a previous client email or agenda. This technique, sometimes called “few-shot prompting,” anchors voice and structure more reliably than trying to describe them in the abstract.
7. Prompt, then re-prompt: Your first response is a starting point, not a finished draft. The advisors getting the most value from AI treat it as a thinking partner, not a vending machine. After the initial output, ask for elaboration on weak points, edge cases you hadn’t considered, real-world applications, weighted trade-offs, or simply: “What is the most nonobvious insight in this situation?” Iteration is what separates professional use from casual use.
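For advisors (or team members) who interact with AI through an API or scripted workflow rather than a chat window, the first six practices above can be captured as a reusable template. Here is a minimal Python sketch; the function name, field names, and sample values are illustrative, not part of any particular platform:

```python
# Illustrative sketch: practices 1-6 assembled into one structured prompt.
# Field names and sample values are hypothetical examples, not a real API.

def build_prompt(role, task, context, constraints, output_format, sample=None):
    """Assemble a structured prompt from role, task, context,
    constraints, format, and an optional writing sample."""
    sections = [
        f"Act as {role}.",                           # 1. Assign a role
        f"Task: {task}",                             # 2. Define the task
        f"Client context: {context}",                # 3. Add client context
        "Constraints: " + " ".join(constraints),     # 4. Impose constraints
        f"Format: {output_format}",                  # 5. Specify the format
    ]
    if sample:                                       # 6. Provide an example
        sections.append(f"Match the voice of this sample:\n{sample}")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="a CFP professional specializing in retirement income planning",
    task="Compare three Roth conversion tax-bracket scenarios over a "
         "five-year horizon.",
    context="62-year-old pre-retiree, $1.2M in assets, retiring in 2027.",
    constraints=["No product recommendations.", "Flag uncertainty.",
                 "Note where CPA review is required."],
    output_format="A client-friendly memo with a comparison table.",
)
print(prompt)
```

Practice 7, iteration, happens in the conversation itself rather than in the template: the template gets you a strong first draft, and follow-up prompts refine it.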
Before and after: A full example
Nothing makes this more concrete than seeing it applied directly.
Here’s a weak prompt:
“Create a meeting agenda about Social Security.”
The output will be generic. It may cover the right topics in the abstract, but it won’t be calibrated to your client, your meeting length, or the specific decision they’re facing.
Here’s a stronger prompt:
“Act as a retirement income specialist and CFP. I am meeting with a 60-year-old married couple with $1.4M in investable assets and no pension. One spouse plans to claim Social Security at 62; the other is undecided.
“Create: (1) a 45-minute meeting agenda, (2) three key tax inflection points to discuss, (3) two behavioral objections I should anticipate, and (4) a short draft follow-up email summarizing next steps. Flag compliance sensitivities and note where assumptions are being made.”
That prompt produces a usable, specific, structured output: the kind you might have spent an hour assembling from scratch.
Now go one step further—a follow-up prompt:
“What is the most nonobvious risk or opportunity in this situation?”
That follow-up question often surfaces the thing worth bringing into the meeting that you wouldn’t have thought to include. That’s the difference between AI as a shortcut and AI as genuine thinking support.
A word on compliance and overconfidence
Language models are fluent. They’re also confident, and confidence is not the same as accuracy. This matters enormously in a compliance-sensitive profession.
Building protective language directly into your prompts is a good practice. Some language worth adding as a default:
- “Cite relevant IRS or SSA guidance where applicable.”
- “State uncertainty clearly where it exists.”
- “Differentiate hypothetical modeling from guaranteed outcomes.”
- “Identify where CPA or legal review is required before client presentation.”
Think of constraints not just as quality controls but as compliance infrastructure. Good prompting is risk management.
The 3-minute prompt diagnostic
When AI gives you a disappointing output, resist the urge to blame the model. Run this diagnostic instead:
- Step 1: Did I define the role?
- Step 2: Did I define the task clearly?
- Step 3: Did I add real client context?
- Step 4: Did I specify the output format?
- Step 5: Did I ask it to critique or stress-test the answer?
If you’re missing three or more of these, the problem is in your instructions, and the fix takes about two minutes.
Prompting is executive leverage
Here’s the frame that matters most for experienced advisors who might be tempted to dismiss AI and developing AI skills as administrative work for their assistants and staff. Prompting fluency is not a clerical skill. It is executive leverage. The advisor who can structure thinking clearly for an AI system will move faster, see more angles, catch more risks, and deliver a more consistent client experience. The advisor who teaches their team to prompt well is effectively raising the analytical floor of their entire practice.
This is not about becoming “technical.” You don’t need to understand how a language model works any more than you need to understand how a Bloomberg terminal processes data.
You need to understand what the tool requires from you to perform at its best. That’s precision, and precision is exactly what good advisors already bring to client work.
Ready to make the leap? Horsesmouth’s AI for Advisors Pro training programs provide the structured, advisor-specific approach that transforms occasional users into confident practitioners. Learn more at www.horsesmouth.com/aipro.