As I onboard and train new team members, I'm always looking for new ways to promote independence while maintaining our firm's core values. To help with this task, we turned to ChatGPT, but quickly abandoned it.
As you may know from the recent flurry of news reports, this artificial intelligence chatbot is trained on a large dataset of text from the Internet, books, and articles. It can do many useful things, like helping outline an upcoming speech I'm giving. But here's an example of ChatGPT spitting out an answer that's flat-out wrong. Asking "How long can I hold an inherited IRA?" produced an answer based on outdated law that did not reflect the SECURE Act of 2019 or the SECURE 2.0 Act of 2022, whose latest provisions took effect earlier this year.
ChatGPT incorrectly stated that required minimum distributions (RMDs) from an inherited IRA may be stretched over the life expectancy of a non-spouse beneficiary. Under current law, most non-spouse beneficiaries must instead withdraw the full balance within 10 years of the original account owner's death.
Another query turned up the wrong required beginning age for RMDs. (The correct answer is 73, up from 72 last year.) Digging in further, I discovered why: ChatGPT's knowledge cutoff date is September 2021, and the chatbot received no new information after that.
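For readers who like to see the rules laid out explicitly, the two points above can be sketched in a few lines of Python. This is a simplified illustration only, not tax advice: the function and variable names are my own, and real rules carry exceptions (eligible designated beneficiaries, birth-year phase-ins, and so on) that this sketch ignores.

```python
# Illustrative sketch of the two rules discussed above. Not tax advice;
# real rules have exceptions this simplified version does not capture.

RMD_START_AGE = 73  # SECURE 2.0 raised the required beginning age to 73 (was 72)

def inherited_ira_deadline_year(owner_death_year: int) -> int:
    """10-year rule: most non-spouse beneficiaries must empty an
    inherited IRA within 10 years of the original owner's death."""
    return owner_death_year + 10

def rmd_start_year(birth_year: int) -> int:
    """Year an account owner reaches the required beginning age."""
    return birth_year + RMD_START_AGE

print(inherited_ira_deadline_year(2023))  # -> 2033
print(rmd_start_year(1951))               # -> 2024
```

The point of the exercise: these are simple, mechanical rules, yet ChatGPT still got them wrong because its training data predates the law.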
ChatGPT is an exciting technology with potential, but it can't yet replace the judgment and wisdom of seasoned experts, especially in fields like ours, where the rules change often. For now, our team-in-training will rely on sites that publish current, fact-checked articles curated by humans. These include Schwab's Insights & Education, Investopedia, the IRS, Bloomberg, The New York Times, and The Wall Street Journal's Personal Finance section. And our Lead Advisors will ensure that all correspondence coming from our firm is factually correct and compliant with all laws and regulations.
My conclusion: ChatGPT makes a poor wealth manager.