The AI Risk Advisors Aren’t Modeling Yet

Apr 1, 2026 / By Sean Bailey, Horsesmouth Editor in Chief
AI for Advisors: What happens if one of the deepest assumptions in the modern economy—that valuable intelligence is scarce and human—starts to weaken as artificial intelligence expands faster than our systems can adjust?


A post on X caught my eye last week, and I haven’t been able to shake its implications. You need to start wrestling with them, too.

The post came from an account called Tech Layoff Tracker, citing internal placement data from a mid-tier state university’s computer science program. If the numbers are even directionally correct, they are hard to ignore.

In Fall 2023, 89% of graduates reportedly had job offers by graduation, with an average starting salary of $94,000. By Fall 2024, placement had dropped to 43%, and average salaries had fallen to $61,000. This semester, the reported placement rate was just 19% and still falling.

Source: Tech Layoff Tracker, X.com

Now, one post is not a dataset, and one school is not the labor market. This could be an outlier or a temporary dislocation in tech hiring. But sometimes a single data point shows up before the broader explanation.

That is also the topic of a recent provocative essay from Citrini Research titled “The 2028 Global Intelligence Crisis.” It is not a forecast. It is a thought experiment about what might happen if artificial intelligence becomes economically powerful faster than labor markets, institutions, and capital markets can adapt.

For financial advisors, it is worth your serious attention because, whether or not the scenario unfolds exactly as described, it points to a question more clients will eventually ask in one form or another:

“I keep hearing that AI is getting smarter, faster, and more powerful. Does that change any of the assumptions behind our financial plan or how we should be thinking about investing going forward?”

That is no longer just a technology question.

The assumption underneath the modern economy

Modern economies are built on a basic assumption so familiar we rarely stop to examine it: Human intelligence is scarce. That scarcity is what gives professional labor its value. It is why businesses hire analysts, marketers, consultants, software developers, attorneys, planners, and managers. It is why expertise commands higher income and why high-skill labor has long been one of the most reliable paths to financial security and wealth accumulation.

In practical terms, much of the modern economy assumes that productive thinking must be purchased one human at a time. Now, AI challenges that assumption.

At first, generative AI looked like a productivity layer: A better writing assistant, a faster research partner, or a tool that helps knowledge workers do more in less time.

That framing is still true, but only partially. The more disruptive possibility is that AI does not merely assist certain forms of knowledge work. It increasingly performs substantial pieces of it directly.

That includes the kinds of tasks many professionals are paid well to do every day: interpreting information, writing and communicating clearly, conducting research, preparing analyses, coordinating projects, documenting decisions, and keeping complex workflows moving for your clients and your firm.

If those emerging AI capabilities keep improving and become broadly deployable inside businesses, the white-collar economic equation begins to shift. Instead of simply making employees more productive, AI may allow firms to generate more output with materially fewer people.

And that is where the AI story stops being just about technology and starts becoming a story about jobs, incomes, profits, lifestyles, and the broader economy.

The Intelligence Displacement Spiral

One of the most useful ideas in the Citrini scenario is what we might call the “Intelligence Displacement Spiral.”

We tend to assume that rising productivity is automatically good for the broader economy. Historically, that has often been true because productivity gains eventually translated into rising wages, broader employment, and stronger household consumption.

But what if this time some of the gains are not broadly distributed? What if AI-driven productivity improves corporate output while reducing the number of people needed to generate that output?

The spiral might work like this:

  1. AI systems dramatically increase productivity.
  2. Firms reduce the need for certain knowledge workers.
  3. Some displaced workers struggle to find comparable new roles.
  4. Others re-enter the workforce at lower pay or with less stability.
  5. Even many who remain employed face weaker salary, bonus, or advancement prospects.
  6. Household income and financial confidence begin to weaken.
  7. Consumer demand softens as households become more cautious.
  8. Firms feel pressure from slower demand and look for more efficiency.
  9. That leads to further automation and labor reduction.
  10. Corporate margins improve, at least temporarily.
  11. Profits are reinvested into additional automation.
  12. The cycle repeats, even as household financial resilience erodes.
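
The 12 steps above can be sketched as a toy feedback-loop simulation. To be clear, everything in this sketch is a made-up illustration: the growth rate, automation rate, and demand sensitivity are placeholder assumptions, not forecasts of any real economy.

```python
# Toy simulation of the "Intelligence Displacement Spiral".
# All parameter values are illustrative assumptions, not forecasts.

def run_spiral(rounds=5, employment=100.0, output=100.0,
               automation_rate=0.08, demand_sensitivity=0.5):
    """Each round: productivity lifts output, automation trims headcount,
    softer household demand pressures margins, and firms respond with
    still more automation (steps 1-12 above)."""
    history = []
    for _ in range(rounds):
        output *= 1.05                       # steps 1-2: productivity rises
        employment *= (1 - automation_rate)  # steps 2-5: fewer knowledge workers
        demand = 100 * (employment / 100) ** demand_sensitivity  # steps 6-7
        automation_rate *= 1.10              # steps 8-11: efficiency push deepens
        history.append((round(output, 1), round(employment, 1), round(demand, 1)))
    return history

for step, (out, emp, dem) in enumerate(run_spiral(), start=1):
    print(f"Round {step}: output={out}, employment={emp}, demand={dem}")
```

Even with these invented numbers, the qualitative pattern the essay describes appears: output climbs every round while employment and household demand slide.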

That leads to an uncomfortable possibility: companies and the economy become more efficient while families and individuals become less secure as they confront an AI-driven reshaping of our economic reality.

That is a very different kind of disruption than what we’ve seen before. And for advisors, it matters because many of your client plans are built on assumptions of career continuity, predictable income growth, and stable professional relevance during their peak earning years.

If those assumptions weaken, a lot of your downstream client planning assumptions weaken with them.

The ‘Ghost GDP’ problem

Another concept from the scenario worth understanding is what Citrini calls “Ghost GDP.”

It is a provocative term, but it captures a real possibility. Imagine an economy where AI systems are producing enormous amounts of output. Software gets built faster. Services are delivered more cheaply. Reports, code, content, and digital products are generated at scale. GDP, at least in headline terms, continues to grow.

But fewer humans are earning wages from that output. In that world, the topline economic data may still look healthy while household financial reality becomes increasingly precarious. That is the essence of the Ghost GDP problem.

Output remains strong while consumer purchasing power weakens. The economy looks productive on paper but fragile in your clients’ lived experience. That gives you a useful lens for the years ahead. It suggests we may eventually enter periods where:

  • Market optimism and household anxiety coexist.
  • GDP growth and career insecurity coexist.
  • Technological progress and financial stress coexist.
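
A back-of-the-envelope sketch can make the Ghost GDP idea concrete: headline output keeps compounding while the wage share of that output shrinks. The figures below are invented purely for illustration.

```python
# Hypothetical illustration of "Ghost GDP": headline GDP grows while
# the share of output flowing to wages declines. Numbers are invented.

years = [2026, 2027, 2028]
gdp   = [100.0, 106.0, 112.4]   # output compounds roughly 6%/yr
wages = [ 58.0,  55.0,  51.0]   # aggregate wage income stalls, then falls

for y, g, w in zip(years, gdp, wages):
    share = 100 * w / g
    print(f"{y}: GDP={g:.1f}, wages={w:.1f}, labor share={share:.1f}%")
```

On paper, every year looks like growth. From a household's point of view, the slice of that growth arriving as a paycheck gets smaller each year.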

That would not be a contradiction as much as it would be a structural shift.

Why white-collar households may be more exposed

One of the biggest mistakes people make when they think about any technology disruption is assuming it is mostly about manual labor or lower-skill roles.

That is not the center of gravity here. The more immediate exposure is often in white-collar work. That includes roles built around interpreting information, making decisions, communicating clearly, and coordinating complex work. In other words, the kinds of roles many professionals, including your clients, are well paid to perform.

That matters for advisors because many affluent households are deeply dependent on precisely this kind of income. These are the clients who often:

  • Earn high salaries late in their careers.
  • Receive bonuses or stock compensation.
  • Save aggressively during peak earning years.
  • Assume continued professional demand.
  • Build retirement projections around those assumptions.

If AI introduces more volatility into those careers, the disruption is not just theoretical. It hits directly at the core of financial planning.

If labor loses, capital may gain share

If this scenario unfolds even partially, one of the clearest implications is that value may increasingly concentrate around the ownership of AI infrastructure. In plain English, that means the biggest winners are less likely to be the people using AI casually and more likely to be the companies controlling the systems that make large-scale AI possible.

The deeper point is structural. If AI makes certain forms of human labor less economically scarce while making computing and infrastructure more valuable, we could see a meaningful shift in how income and wealth are distributed. Labor’s share may weaken. Capital’s share may strengthen. That creates a difficult environment where:

  • Parts of the market may see significant upside.
  • Many households may experience rising uncertainty.

That is exactly the kind of environment where you become more important.

Where financial stress could show up first

The most useful way to think about this is not whether AI will disrupt everything at once. Instead, it’s more useful to ask: Where would stress show up first? A few areas stand out, such as:

  • Career stability assumptions may weaken.
  • Mortgage and debt assumptions may shift.
  • Certain sectors may face valuation pressure.
  • Concentration risk may increase.
  • And client anxiety will likely rise before the financial consequences fully materialize.

You will increasingly be asked to interpret all of this in real time.

What you should consider

The right response is to treat AI as an emerging planning variable. That means building it into your thinking alongside:

  • Inflation
  • Longevity
  • Tax policy
  • Market volatility

Reassess labor income assumptions where appropriate. Stress-test plans for career volatility. Watch concentration risk on both sides of the balance sheet. Prepare for emotionally loaded client conversations.
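
As one way to operationalize "stress-test plans for career volatility," here is a minimal Monte Carlo sketch. The disruption probability, income haircut, and savings math are all placeholder assumptions chosen for illustration, not calibrated planning inputs.

```python
import random

# Minimal Monte Carlo stress test of a savings plan under career
# volatility. All parameters are placeholder assumptions.

def simulate_savings(years=10, income=200_000, save_rate=0.25,
                     p_disruption=0.08, income_haircut=0.35, seed=None):
    """Each year carries a p_disruption chance that income resets lower
    (displacement, then re-entry at reduced pay). Returns total saved."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        if rng.random() < p_disruption:
            income *= (1 - income_haircut)
        total += income * save_rate
    return total

trials = [simulate_savings(seed=s) for s in range(2_000)]
baseline = 10 * 200_000 * 0.25  # savings if no disruption ever occurs
shortfall = 1 - (sum(trials) / len(trials)) / baseline
print(f"Average shortfall vs. no-disruption plan: {shortfall:.1%}")
```

The point is not the specific number it prints. It is that a plan built on uninterrupted peak-years income can be materially short once even a modest annual disruption probability is assumed.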

And most importantly, begin developing a clear framework for thinking about AI that is calm, grounded, and analytically sound.

How to use AI to think this through

You do not need AI to tell you whether this scenario will happen. But you can use AI to think more rigorously about what it might mean. The goal is not to let AI form your opinion but to use AI to help you think through the implications more systematically. Here are three practical ways to do that.

1. Prepare for the client questions that are likely coming

Prompt: Act as a thoughtful financial advisor and client communication strategist. I want to prepare for future client questions about how artificial intelligence may affect careers, income stability, and long-term financial planning. Generate 10 realistic client questions affluent professionals might ask. Then draft calm, intelligent talking points I could use to answer each one.

2. Pressure-test planning assumptions

Prompt: Act as a strategic planning analyst for financial advisors. Help me think through how accelerated AI adoption could affect high-income professional households over the next five to 10 years. Identify pressure points related to employment stability, compensation, savings rates, housing decisions, retirement timing, and concentration risk. Present this as a concise planning memo.

3. Build a client-facing educational piece

Prompt: Act as a financial educator. Create a seven-slide outline for a client presentation titled “How AI May Affect Careers, Markets, and Financial Planning.” Keep the tone measured, intelligent, and reassuring.

The real question is not whether AI gets better

For decades, advisors have helped clients navigate risks that were easy to name but difficult to time, such as inflation, longevity, taxes, and market volatility. AI may be joining that list because we can already see that it has the potential to reshape the relationship between work, income, productivity, and economic value.

That is why this scenario is useful. It does not need to be right in every detail. It only needs to force the right question: What happens if one of the deepest assumptions in the modern economy—that valuable intelligence is scarce and human—starts to weaken faster than the system can adjust?

If that possibility becomes more real, you will need more than market commentary. You will need a framework. And the sooner you begin building one, the better prepared you will be to guide your clients through whatever comes next.

Sean Bailey is the creator of The AI-Powered Financial Advisor training program and AI for Advisors Pro, where he teaches financial advisors how to apply artificial intelligence in their practices. He has spent thousands of hours studying generative AI and has trained hundreds of advisors.

Join the free AI for Advisors newsletter and podcast for weekly insights and practical AI use cases.

IMPORTANT NOTICE
This material is provided exclusively for use by Horsesmouth members and is subject to Horsesmouth Terms & Conditions and applicable copyright laws. Unauthorized use, reproduction or distribution of this material is a violation of federal law and punishable by civil and criminal penalty. This material is furnished “as is” without warranty of any kind. Its accuracy and completeness is not guaranteed and all warranties express or implied are hereby excluded.

© 2026 Horsesmouth, LLC. All Rights Reserved.