GPT-5 and AI for advisors

Humble victory lap
I was invited to speak on AI at the US SIF conference in DC last month. All of us on the panel praised AI and advisor applications for a full hour—and rightfully so, because we have the greatest technological revolution ever in front of us. But then our moderator and my friend Iyassu Essayas passed me the mic for my concluding thoughts. I said that we were nowhere near singularity, that the bulk of scientific innovation was behind us, that foundation model progress was slowing down, that AI would remain a helpful tool for RIA firms and never become a competitor, ever. (Please notice my emphases: scientific and foundation—I'm not talking about engineered AI applications, only about the raw AI models like ChatGPT here.)
The audience seemed genuinely surprised that an AI builder wouldn’t be a techno-maximalist—or a singularitarian, as we call the Valley techies who believe AI model capabilities will keep growing exponentially for the next decade. I explained that I don’t believe scaling up pretraining of foundation large language models (or speeding up reinforcement learning, for that matter) will ever produce a superintelligent entity.
Silicon Valley’s maximalists were especially loud during two moments: the release of ChatGPT in November 2022, and the arrival of the so-called “reasoning models” (o1, o3, DeepSeek-R1, etc.) in late 2024 and early 2025.
I’m writing this blog post partly as a victory lap, because last week’s release of GPT-5 by OpenAI proved me right. Was that due to skill? Luck? Who knows. Even a broken clock is right twice a day. Either way, writing tech analyses and forecasts twice a month, as I do, is risky business. You’ve got to take the wins when they come, however small they might be.
OpenAI releases a new generation of GPTs roughly every two years. The jump from GPT-2 to the GPT-3 series was massive. So was the leap from GPT-3 to GPT-4. But GPT-5 was largely an industrial fiasco by comparison. It has barely extended the boundaries of GPT-4 despite costing 100× more in money, training data, computing power, and human resources. And almost no serious AI researcher is still losing sleep over superintelligence, singularity, or whatever doomy scenario was bouncing around their noggins. I’ve always been an incrementalist. If you’ve been reading these blog posts since November 2022, you know I never believed in a fast takeoff for AI but only in gradual, non-revolutionary improvements over time. And today, I’m closer than ever to calling myself a steadyist.
It's great news!!!

So, bottom line, I've got good news: the core skills of financial advisors serving households remain largely unthreatened by these technologies. Advisors hold a rather AI-proof position compared to most knowledge workers. Their value primarily lies in:
- Relationship building and emotional management: Listening, guiding, coaching clients through life's ups and downs—these activities require human empathy or a sentient AI, which LLMs aren't designed to achieve anytime soon (good read here).
- Designing global financial strategies for clients: We're not discussing the minutiae of financial analyses or projections—tasks pre-AI software already handled well—but rather an intuitive understanding of what brings comfort and confidence to another human being.
- Calling the shots: Sometimes clients hesitate to listen even to trusted human advisors out of fear; they're even less likely to entrust critical decisions to software alone.
- Being the face of the company, being the brand: Self-evidently, this is a human activity.

For firms with limited resources, what are some affordable ways to incorporate AI?
I've got more good news. AI is special in at least two ways compared to the previous generations of software that advisors have grappled with.
First, AI is cheap. With a ChatGPT license for $20/month, you can save 10 hours. Not only is AI already affordable, but it's also going to become even cheaper. AI is extremely deflationary—it makes writing software easier. Karl Marx had a point with his "socially necessary labor time" concept: lower production costs inevitably lead to lower prices. Software will keep getting cheaper. Big Tech is trying hard to drive prices to zero. You can thank Google, Facebook, and Alibaba for being late in the LLM race and losing to early movers OpenAI and Anthropic. Our three laggards are now purposely open-sourcing their LLMs to drive prices down, commoditize AI, and squeeze the margins of OpenAI and Anthropic.
Second, AI delivers results from day one. It’s a vastly different experience compared to a CRM or financial planning software, for example. Back in the day, you would slowly and painfully build data assets and processes for months before seeing any positive ROI from your CRM, or you would patiently train your team before benefiting from your financial planning tool. Now, you get immediate results from your AI tool, sometimes without even needing to manage the input yourself.
Stage 0: If you want a cheap strategy that gets you 80% of the benefits AI has to offer advisors today, while doing 0% of the research, just get a license for a notetaking app, and get ChatGPT or Claude. There you go—you're spending about $100 a month and reclaiming 10–20 hours of your time.
Stage 1: If you're willing to invest a bit more time to capture that remaining 20%, start by identifying your problems, not the technology. Name them, define them, then Google or ChatGPT them. Chances are there's already a provider out there to help you. If not, set up a burner email account and subscribe to newsletters from a dozen or so industry-specific AI tools already available. Open that inbox once a month and skim through their (our) latest announcements and updates.
Stage 2: If you're serious about being more efficient (you should be) and want to invest a little bit of money (for a high ROI), you should build custom AI internally.
How do I see AI evolving in financial planning over the next 5–15 years?
In 5 years, I believe the average practice will operate in a way not very different from today's. I believe AI will expand the Total Addressable Market of financial advisors in the US, not reduce it, for two reasons. First, AI will create leverage at an unprecedented pace—and thus wealth, and thus financial planning complexity—for entrepreneurs and executives in most industries. Second, AI will allow many RIAs to lower their minimums thanks to new operational efficiencies. A few jobs might be lost on the ops side, at the margin and in aggregate, but only under a zero-growth scenario. I don't believe in that scenario at all, and I think jobs will still be created in our industry, especially by mature, successful, and growing independent RIAs now using AI to solve a few of their last bottlenecks and grow faster.
In 10–15 years, I can imagine an individual advisor being able to serve twice as many families thanks to supervising many AI agents on top of support staff. One agent to update cash flow projections based on voice instructions. Another to rebalance the portfolio after a withdrawal request. Another dedicated entirely to client reporting.
Why not sooner? Because achieving this vision requires rewriting all of wealthtech specifically for LLMs instead of humans. And that work hasn’t even begun yet. Today, everything is still buttons for humans to click.