Artificial intelligence (AI) has been a buzzword with an ambiguous meaning for some time. As the number of financial professionals considering using AI within their practices has steadily increased, especially since the introduction of ChatGPT in November 2022, we are seeing a surge of inquiries about AI in the financial services industry.
Applications of generative AI in the industry are diverse, with financial professionals primarily using it to support supplemental tasks such as creating marketing materials or analyzing client lists. As its capabilities continue to evolve, we are seeing new ways to leverage the OpenAI framework, including streamlining analyses, transcribing notes, and facilitating compliance reviews of video and audio material.
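To make the transcription use case concrete, here is a minimal Python sketch, assuming the official openai package and an illustrative recording file name; it is a sketch of the general approach, not a description of any particular vendor's product.

```python
# Minimal sketch: transcribing a recorded client meeting with the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; "client_meeting.mp3" is a hypothetical file name.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("client_meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # plain-text transcript, ready for notes or review
```

The resulting transcript could then feed a downstream compliance review, which is where the data-handling cautions discussed below become critical.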
As this technology continues to gain popularity across the industry, it’s increasingly important to have a comprehensive understanding of what AI is and the risks it poses. Below are key compliance challenges to be aware of when leveraging generative AI to support activities within a financial advisory firm.
1. Tracking a Moving Target
It’s nearly impossible to pin down something that is constantly evolving, which is exactly what financial professionals are experiencing with generative AI. Consider the industry events we regularly attend. Last year at a conference, there were very few, if any, AI vendors in attendance. This year, nearly 10% of all vendors are offering products built on OpenAI frameworks designed to streamline daily back-office tasks for financial professionals.
In the realm of compliance, AI is proving most useful for streamlining tasks through various plug-ins, for example, monitoring written communications, video, and audio. Within financial services more broadly, we’re seeing increased interest in adaptations of the technology as millions of individuals imagine new ways to leverage AI, from modeling portfolios to running them. This can create compliance problems when it comes to protecting client information.
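As a rough illustration of the written-communications piece, the sketch below flags outgoing messages containing restricted promissory language; the phrase list is a made-up example, and real surveillance plug-ins use far broader, firm-specific lexicons.

```python
# Minimal sketch: pre-send screen for restricted promissory language in
# advisor emails. The phrase list is illustrative only; production
# communications-surveillance tools use much broader, firm-specific lexicons.
import re

RESTRICTED_PHRASES = [
    r"guaranteed return",
    r"risk[- ]free",
    r"can't lose",
]

def flag_message(body: str) -> list[str]:
    """Return any restricted phrases found in an outgoing message."""
    return [p for p in RESTRICTED_PHRASES if re.search(p, body, re.IGNORECASE)]

draft = "This strategy has historically delivered a guaranteed return of 8%."
hits = flag_message(draft)
if hits:
    print(f"Hold for compliance review; flagged phrases: {hits}")
```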
Key takeaway: The best way to stay on top of this changing landscape is to dedicate time to tracking the newest advancements in AI and how they can fit into different areas of work. This can help firms establish utilization policies, and keep them current, so users leverage the technology appropriately.
2. Adhering to Regulation S-P
Known as the “safeguards rule,” Regulation S-P requires financial advisory firms to create and adhere to strict policies protecting client records and information, including safeguards against potential breaches and any anticipated threats or hazards. In its present state, generative AI operates much like an advanced search engine, but the uncertainty of its future calls for caution before fully trusting it with sensitive data.
Key takeaway: Incorporating AI into everyday tasks can ease time-intensive projects, but to remain compliant with Regulation S-P, financial professionals should refrain from sharing sensitive client information with any platform built on the OpenAI framework.
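One way to put this takeaway into practice is to strip obvious identifiers from any text before it reaches an AI-powered tool. The sketch below shows the idea with a few illustrative patterns; it is not a substitute for a firm's own data-protection controls, and it will not catch every form of sensitive data.

```python
# Minimal sketch: redacting obvious client identifiers before text is sent to
# any tool built on the OpenAI framework. Patterns are illustrative only and
# do not cover every form of sensitive or personally identifiable information.
import re

REDACTION_PATTERNS = {
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",          # U.S. Social Security numbers
    "ACCOUNT": r"\b\d{8,12}\b",               # bare 8-12 digit account numbers
    "EMAIL": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",  # email addresses
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

note = "Client (SSN 123-45-6789, acct 0012345678) asked about required distributions."
print(redact(note))
# Client (SSN [SSN REDACTED], acct [ACCOUNT REDACTED]) asked about required distributions.
```

Names and free-text details still pass through a filter like this, which is why redaction complements, rather than replaces, the broader Regulation S-P policies a firm maintains.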
3. Safeguarding Business Information
Preserving the confidentiality of client information is a top fiduciary obligation, but safeguarding a business’s proprietary interests is equally important. Given the unknown nature of generative AI, it’s important to avoid sharing any proprietary business information when using this new technology. Should an outsider or competitor gain access to inside information, it could give them an edge, allowing them to leverage that firm’s strategies and insights on their own platforms.
Key takeaway: Financial professionals should hold business information to the same standards of confidentiality as client information. In other words, avoid sharing proprietary business information when leveraging AI.