Stop Asking AI, ‘What Do You Think?’

Feb 18, 2026 / By Sean Bailey, Horsesmouth Editor in Chief

AI for Advisors: Every time you ask AI, “What do you think?” instead of “What’s wrong with my thinking?” you’re training it to be your yes-man rather than your thinking partner. Stop outsourcing judgment and start using artificial intelligence to interrogate your assumptions.

You just asked ChatGPT to review your client’s estate plan. It came back with five recommendations. They all sound good. They’re well-reasoned, clearly written, backed by what appears to be solid logic. You copy them into your notes. Maybe you adjust the language slightly. Then you send them to the client.

What just happened? You didn’t make five good decisions. You made one passive decision to accept AI’s judgment without testing it.

That single moment of deferred thinking seems harmless. But it’s not free. Every time you outsource reasoning without engaging your own judgment, you take out a small loan. Do it repeatedly, and the interest compounds.

Writer Charlie Hills calls this problem out directly: “Repeatedly asking AI open-ended opinion questions like ‘What do you think?’ can quietly erode independent reasoning. Instead of sharpening judgment, it offloads it.”

It’s not just that specific phrase, of course. It’s the pattern underneath it. Similar questions include:

  • “What’s your recommendation?”
  • “How does this look?”
  • “What should I do here?”
  • “Is this a good approach?”

All are variations of the same passive move: asking AI to be the decider instead of the challenger. That accumulated offloading is called cognitive debt.

What is cognitive debt?

Cognitive debt is what happens when you stop exercising judgment. Each time you hand thinking, synthesis, or critical judgment to AI without engaging your own reasoning, the “muscle” weakens. At first, you don’t notice. But judgment is like any muscle: Use it or lose it.

Over time, that liability compounds. You become faster but thinner. More productive but less original. More efficient but less sharp. For financial advisors, this matters deeply. Your superpower isn’t your ability to retrieve information. It’s your ability to exercise judgment. Ultimately, that is what you are paid for.

Thinking with AI is the highest-order use of AI for a financial advisor, and one I believe has huge benefits. But the moment AI becomes your primary generator of conclusions rather than your thinking partner, your decision muscles begin to atrophy.

Where cognitive debt shows up

Let me show you three places where advisors may unknowingly accumulate cognitive debt and what to do instead.

Scenario 1: Portfolio review (with cognitive debt)

You enter a client’s portfolio into ChatGPT (removing or anonymizing the PII) and ask:

“What do you think about this allocation for a 62-year-old couple planning to retire in three years?”

AI responds with a thoughtful analysis. It flags concentration risk. It suggests rebalancing. It sounds confident. You agree.

What you just deferred:

  • Testing whether AI understands this couple’s actual risk tolerance.
  • Questioning whether the timeframe matters differently for different parts of the portfolio.
  • Checking if the model is applying generic wisdom versus client-specific constraints.

Try this instead:

Prompt:

“I’m reviewing this allocation for a 62-year-old couple retiring in three years. Before you give me recommendations, tell me: What critical information about this couple am I NOT showing you that would change your analysis? Then stress-test my current thinking: What are the three strongest reasons this allocation might be exactly right for them?”

See the difference? You’re forcing AI to interrogate missing context and challenge your assumptions. You’re making AI do the intellectual work that sharpens YOUR judgment.

Scenario 2: Client meeting preparation (with cognitive debt)

You’re meeting with a business owner about succession planning. You ask ChatGPT:

“What should I cover in a succession planning meeting with a business owner?”

AI generates a comprehensive agenda. You use it. The meeting goes fine.

What you just deferred:

  • Your pattern recognition from past succession planning clients.
  • Your intuition about what THIS client needs versus what a generic client needs.
  • Your ability to spot what’s NOT on the standard checklist.

Try this instead:

Prompt:

“I’m meeting with a business owner about succession planning. Here’s my initial agenda: [paste your draft agenda]. Now tell me: What am I overweighting that doesn’t matter for most owners? What’s conspicuously absent that often becomes the real issue six months later? If this meeting fails to move them forward, what will I have missed?”

You’re not asking AI what to do. You’re asking it to stress-test what you’re already planning to do.

Scenario 3: Client communication during market volatility (with cognitive debt)

The market drops 3% in a day. A client emails: “Should I be worried?” You paste their question into ChatGPT and ask:

“How should I respond to this client who’s nervous about market volatility?”

AI generates a calm, professional response about long-term investing and staying the course. It sounds reassuring. You send it with minor edits.

What you just deferred:

  • Your knowledge of THIS client’s actual risk tolerance versus stated risk tolerance.
  • Your sense of whether they’re asking for reassurance or genuinely reconsidering their strategy.
  • Your judgment about what’s behind the question (Are they testing you? Panicking? Just venting?).

Try this instead:

Prompt:

“A client just asked if they should be worried about today’s market drop. Here’s my draft response: [paste it]. Now tell me: What am I assuming about this client that might be wrong? What would make a ‘reassuring’ response backfire? If they write back still anxious, what did my message fail to address?”

You’re not outsourcing the response. You’re using AI to stress-test whether you really understand what the client is asking for.

Why AI’s default tone encourages cognitive debt

Here’s what makes this tricky: AI models are trained to be helpful and cooperative. That usually means:

  • Affirming your framing.
  • Reinforcing your confidence.
  • Offering polished reasoning that feels complete.

The danger is psychological. You feel validated. You feel smart. The model sounds certain. But confidence is not correctness. In financial advice, that distinction is critical.

Try this now

Pick something you’re working on this week—a client recommendation, a blog post outline, a meeting agenda. Before you ask AI for its opinion, try this prompt:

“Before you agree with me, list the three strongest objections to my position. Then tell me whether those objections change your recommendation.”

That one prompt forces friction, and friction preserves judgment.
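
If you or your team ever script these reviews through the API rather than the chat window, the same challenge-first posture can be baked in once so you never have to remember to ask for it. Here’s a minimal sketch, assuming the official OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY set in your environment; the model name and the challenge() helper are illustrative, not prescribed:

# Minimal sketch: bake the challenge-first instruction into every request
# via a system message, so the model critiques before it agrees.
# Assumes the official OpenAI Python SDK (openai>=1.0) is installed and
# OPENAI_API_KEY is set; "gpt-4o" and challenge() are illustrative names.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHALLENGER_INSTRUCTIONS = (
    "You are an adversarial thinking partner for a financial advisor. "
    "Before agreeing with anything, list the three strongest objections "
    "to the advisor's position, then say whether those objections change "
    "your recommendation."
)

def challenge(draft: str) -> str:
    """Run a draft recommendation through the challenge-first prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CHALLENGER_INSTRUCTIONS},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Example: stress-test a draft before it goes anywhere near a client.
print(challenge("Move this couple to a 60/40 allocation three years before retirement."))

The design point is small but important: because the objection-first instruction lives in the system message, every request starts adversarial by default instead of depending on you to type the magic words each time.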

The real competitive divide

AI is already dividing advisors into those who use it effectively and those who don’t. But among the budding daily “power users,” the real divide will be between advisors who use AI to amplify their thinking and those who use it to replace it. The first group compounds intellectual capital. The second accumulates cognitive debt.

Which side of that divide are you on?

Genuine power users understand that AI should not flatter you. It should interrogate you. So stop asking it quick opinion questions such as “What do you think?” and start directing it with prompts such as:

  • “Critique this reasoning.”
  • “What assumptions am I making that could be wrong?”
  • “What’s the strongest counterargument?”
  • “If this fails, why would it fail?”

Train AI to be your adversarial thinking partner, not your yes-man. Your judgment is your competitive advantage, so don’t outsource it. Sharpen it.

What do you think?

What’s your experience? Are you noticing places where AI is doing your thinking instead of challenging it? Tell us in the comments section below.

Ready to make the leap? Horsesmouth’s AI for Advisors Pro training programs provide the structured, advisor-specific approach that transforms occasional users into confident practitioners. Learn more at www.horsesmouth.com/aipro.

Sean Bailey is editor in chief at Horsesmouth, where he has led editorial strategy for over 25 years. He is the co-author of Hack Proof Your Life Now! and has spent over 3,000 hours researching how AI can transform the way financial advisors work. Through his AI-Powered Financial Advisor and AI Marketing for Advisors programs, he helps advisors save time, deliver better client experiences, and market their services with unprecedented speed, quality, and confidence.

IMPORTANT NOTICE
This material is provided exclusively for use by Horsesmouth members and is subject to Horsesmouth Terms & Conditions and applicable copyright laws. Unauthorized use, reproduction or distribution of this material is a violation of federal law and punishable by civil and criminal penalty. This material is furnished “as is” without warranty of any kind. Its accuracy and completeness are not guaranteed and all warranties, express or implied, are hereby excluded.

© 2026 Horsesmouth, LLC. All Rights Reserved.