In an earlier article, we discussed the possibilities for using ChatGPT in your advisory firm. At Taylor Financial Group (TFG), we have found it can be used in many ways: to streamline tasks, create processes, summarize and edit articles, and generate content. And since AI and chatbot technology is still in its infancy, there will only be bigger and better ways to use it in the future.
It is important to note that while AI models can be valuable tools for financial firms, they should be used in conjunction with human expertise, domain knowledge, and regulatory compliance to ensure the best outcomes for clients.
ChatGPT is a large language model, which means it can understand and generate human-like text by drawing on the vast amounts of text it was fed during training. It is essentially a powerful pattern-recognition system that relies solely on statistical patterns in data and language, without true comprehension of meaning or intent. That being the case, there are certain limitations to consider when using ChatGPT (or any other large language model) in a financial firm. We discuss these limitations below.
1. Lack of real-time information
ChatGPT was trained on data available only through September 2021, so it has a knowledge cut-off. It does not have access to the most recent financial data, market trends, or news that could affect the current state of the market. Of course, not having real-time information defeats the purpose of using an AI tool to make data analysis more efficient.
Practice pointer: We’ve tried writing articles about Roth IRAs using various chatbots for an initial draft. Unfortunately, since ChatGPT’s knowledge cut-off is September 2021, important provisions from the SECURE Act 2.0 are missing from any summary it generates. Furthermore, we received very generic responses when we prompted it to define a Roth IRA and explain why Roths are useful, along with spotty information on the SECURE Act legislation.
We’ve tried the same writing prompt with OpenAI, Bard, and even Microsoft Bing’s chatbot, and we tested several iterations of the prompt. None of those AI programs could generate a response anywhere near the information we were looking for. They weren’t even close. Even worse, there are no disclosures telling you that the generated information is lacking; you wouldn’t know unless you already had prior knowledge of the topic you were researching. Beware!
2. No personalized financial advice
While ChatGPT can offer general information and explanations about financial concepts, it cannot provide personalized financial advice tailored to an individual’s specific circumstances. As you know, financial decisions often require a deep understanding of an individual’s goals, risk tolerance, financial situation, and other personal factors, which ChatGPT simply does not have access to.
Practice pointer: The AI chatbots we’ve been testing are large language model chatbots; they work within their own vast training data, but they cannot pull data on clients to generate personalized financial reports. And because of ChatGPT’s inability to access real-time information, we cannot even prompt it to answer the simple question, “How much is the stock market up, year-to-date?”
Google Bard, another chatbot, will give you an answer to that question because it is connected to Google and thus has access to live information. But we’ve found that even though Bard has more up-to-date information readily available, it still won’t commit to giving financial advice. It can, though, be prompted to give information about making decisions based on personal risk tolerances.
ChatGPT will not give you a straight answer about which stocks are up or which stocks you should invest in; instead, it will suggest getting advice from a financial professional. Because it is a Google product, if you ask Bard how the stock market is doing, it will give you real-time results for the NASDAQ and S&P 500 and a summary of what might be affecting the markets. However, it will not generate a response on which investments you should make.
3. Potential biases and errors
As we noted, large language models like ChatGPT learn from vast amounts of data, including Internet sources, which can contain biases or inaccuracies. ChatGPT does not evaluate the quality of these sources, so it may inadvertently provide inaccurate, outdated, or biased information. In fact, it has done so in several of our inquiries.
The most frustrating part is that the chatbot gives no disclosure that it is providing faulty information. Sometimes it provides totally incorrect information, euphemistically referred to by its creators as “hallucinations.” So, again, it’s crucial to verify the information the model provides against reliable sources and to consult financial professionals when necessary.
Practice pointer: For example, while ChatGPT is good at providing summaries of articles, it doesn’t always provide the most accurate summary. It takes keywords and phrases from the article to build a summary, but sometimes important details are lost in the process. Always fact-check any information that ChatGPT and other AI platforms generate.
4. Possible regulatory and compliance issues
Be careful not to go too far with ChatGPT. Financial firms must adhere to specific regulations and compliance standards. While ChatGPT can provide general information, any given chatbot may not have been exposed to the latest regulatory changes or to the specific requirements of a particular jurisdiction or financial product.
Practice pointer: Because AI is still a young and rapidly developing industry, there is still a lot of room for improvement. For this reason, at TFG we make sure that content generated by AI is either used internally or does not give financial advice to clients. For instance, we use the summaries ChatGPT generates as a starting point for creating client-facing content, with staff making sure we remain compliant. We double-check everything.
5. Security and privacy concerns
Handling financial data requires strict security measures to protect sensitive information. Using an AI language model involves sharing data with a third party. Financial firms must carefully consider the security and privacy implications and implement appropriate measures to protect client data.
Practice pointer: AI chatbots such as ChatGPT, Bard, and Bing are not private servers, and this has been a big concern for us at TFG. We’ve read many articles stating that AI can help financial advisors with tasks such as client meeting recaps, client portfolio analysis, and more. But since these chatbots are public platforms, entering any client information to generate a response tailored to that client would be a big security risk. We’ve been very careful not to use client information for this reason, and compliance would never allow us to do so with AI in its current state.
In our research at TFG, we’ve come across a CRM program similar to Salesforce that is integrated with an AI chatbot like ChatGPT; it can pull client data entered by the firm to create client communications, analyses, and risk-tolerance reports. As CRM programs continue integrating AI into their platforms, it should become both easy and safe to pull stored client data to run analysis reports.
While it is clear that AI, and specifically ChatGPT, has vast potential in the financial advisory sector, it’s equally important to understand its limitations. By integrating this powerful tool into your operations judiciously, and with full awareness of its strengths and weaknesses, your firm can stay ahead of the curve while offering efficient, personalized service to your clients. Just be careful and know what you are getting—and what you are not getting.