By: Amy Cheetham and Angeline Rao
In 1992, FICO launched its Falcon Fraud Manager, an early real-time fraud detection system based on neural nets. Four years later, it filed a patent for its AI explainability product, Reason Reporter. So, while AI clearly isn’t new to financial services, the advent of foundational models and generative AI unlocks a completely new class of internal use cases.
Where does the financial services industry go from here? While traditional problems like fraud may seem ripe for generative AI, they have existing effective solutions and are rooted in structured data. Instead, fintech AI leaders agree that the latest wave of AI breakthroughs will be most impactful on unsolved challenges in streamlining business operations and processing unstructured data.
In our conversations with fintech AI leaders worldwide, we focused on where, specifically, AI can help improve accuracy and efficiency to better serve end customers. All confirmed that new foundational AI can change how their industries do business. We’ll dive more into the massive opportunity we see for founders below.
At the same time, we’re well aware that introducing new technologies into a highly regulated industry like financial services can present significant challenges, as we’ve seen when helping our fintech portfolio navigate these often murky waters. FDIC and OCC scrutiny have increased in recent months and, with the AI boom, we expect this to continue, especially after the blows to the crypto ecosystem in the past year.
Where Gen AI will make the most impact in fintech
It’s easy to say “we’re bullish about the impact of AI across financial services” (in which case this article could have been a tweet…). All jokes aside, we’re excited to go deep on categories where we see that change is imminent or already in progress. In the current macroeconomic environment, fintech leaders are focused on cost savings and efficiency. These considerations, plus data privacy concerns, lead us to expect that most of the initial AI-based innovation in financial services will target internal use cases.
We spoke with leading fintech companies to understand where the industry is headed. The challenges they spoke about are widespread in the industry. Only the largest companies have the AI expertise and funding to solve them in-house, leaving a huge opportunity for new startups to tackle these challenges.
1. Compliance
Compliance is a near-universal pain point for financial services companies of all sizes.
Mandatory reports
Large financial institutions often have large and expensive teams focused on manual work required to stay accountable to regulators, like filing compliance reports. New foundational models have proved promising in distilling company data into the proper templates and formats of these reports. One of the largest financial services companies is experimenting with using LLMs to generate Suspicious Activity Reports for regulators. If successful, they expect to save 15-25% of their compliance team’s time on this use case alone.
Data privacy and security
Complying with security requirements is another huge operational effort for financial services companies today. Because foundational models can process vast amounts of unstructured data, we think they can be instrumental in helping companies enforce and audit data privacy and security. And we’re not alone. One large fintech is already testing LLMs’ ability to self-audit their data and code base in a few critical areas:
1. Can someone reverse engineer how we anonymize personal data?
2. Are we in line with our data retention policies?
3. To make sure we’re complying with GDPR, did we successfully delete every instance of this data from every ledger entry and data warehouse we use?
Today, without AI, answering these types of questions requires a coordinated effort among legal and compliance teams, product engineers, data engineers, and customer-facing teams. Just imagine how streamlined the process could be without having to coordinate all of these people.
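To make the self-audit idea concrete, here is a minimal Python sketch of the prompt-assembly step such a system might start with. The question list, the `build_audit_prompt` helper, and the policy excerpt are illustrative assumptions, not a real compliance pipeline; in practice the prompts would be sent to an LLM along with much larger excerpts of internal policies and code.

```python
# Illustrative sketch: framing recurring privacy-audit questions as LLM
# prompts. All names and content here are hypothetical examples.

AUDIT_QUESTIONS = [
    "Can someone reverse engineer how we anonymize personal data?",
    "Are we in line with our data retention policies?",
    "Did we delete every instance of this data from every ledger entry "
    "and data warehouse we use?",
]

def build_audit_prompt(question: str, context_docs: list[str]) -> str:
    """Combine one audit question with internal policy/code excerpts."""
    context = "\n---\n".join(context_docs)
    return (
        "You are auditing a fintech's data practices.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer with a finding and cite the relevant excerpt."
    )

# One prompt per recurring audit question, each grounded in policy text.
prompts = [
    build_audit_prompt(q, ["retention policy: delete PII after 90 days"])
    for q in AUDIT_QUESTIONS
]
print(len(prompts))  # 3
```

The design point is that the hard coordination work (gathering policies, code, and data-lineage context into one place) happens before the model is ever called; the LLM then answers each question against that shared context instead of against a dozen teams’ institutional memory.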
Manual review
Manual review and monitoring of transactions is another key compliance use case. To stay compliant with anti-money laundering (AML) rules, providers of peer-to-peer (P2P) financial services require large teams that constantly review transactions in real time. Google is already in the process of applying generative AI to its recently released AML AI product, which counts HSBC and Bradesco as customers. AML AI correctly detects risk 2–4x faster than current processes and reduces the volume of transaction leads that analysts need to review. HSBC can now process, in just a few days, billions of transactions that used to take several weeks. Google sets a strong precedent for AI products that tackle AML or other types of review like KYB.
2. Insurance
Come on – who doesn’t love insurance?! Think what you will, the insurance industry – with its copious unstructured data and manual processes – is ripe for AI-based disruption.
Collecting and organizing claim data
Introducing AI to organize claims data has made a huge impact on one Fortune 500 insurance company, given its high volume of physical correspondence. Each claim is built around full conversations with rich context and multiple points of contact, often made over the phone. By using new foundational natural language processing (NLP) models, this insurer ensures that its claims teams all have access to key points from previous conversations about a given claim. This saves their teams from sifting through past conversations or missing context entirely. Extending this insurer’s use case, we see an opportunity in insurtech analogous to what Infinitus Systems has done automating phone calls for healthcare.
Commercial claim evidence
Several large commercial insurers noted how much better recent AI is than traditional Optical Character Recognition (OCR). As a result, they can now use foundational models to more quickly process the claim evidence they receive, from photos of invoices to balance sheets and PDFs. While we see great potential in this use case, insurers need to make underwriting decisions based on the full complexity and nuance of the claims data, even when using generative AI. Processing data with foundational models risks watering that nuance down.
The power of LLMs might tempt the insurance industry to incorporate all types of data into their underwriting models. However, the type of data permitted to be used in underwriting decisions is highly regulated. As an example, insurance companies cannot use data from a news headline to influence auto policy rates.
Crafting arbitration arguments
The now notorious case of the “ChatGPT lawyer” aside, we expect that LLMs will soon be able to generate case arguments, read the other side’s discovery, and write long-form legal arguments for insurance lawyers. Since insurance companies can deal with millions of legal cases a year, this is another potentially huge savings in both time and money.
3. Processing complex financial data
We at Costanoa have previously published our thesis around the importance of data in generative AI. In financial services, a wealth of data exists in unstructured forms, and large language models are a Swiss Army knife for handling all types of data.
Accounting and finance
We have long been advocates of AI applications for finance teams. Costanoa portfolio company Vic.ai is pioneering LLM-based agents to automate accounts payable and receivable as an initial product. They also have plans for AI agents to perform tasks such as understanding financial data, forecasting, and risk management. This category has long been underserved and still relies heavily on humans and antiquated Excel and paper processes. We are very confident it will see significant change over the coming years as AI agents become better equipped to tackle more complex tasks.
Ingesting unstructured financial documents
Loan applications, including the 20+ million personal loans that Americans take out each year, put a heavy lift on financial institutions to process documents of all shapes and formats, a challenge that traditional OCR is not built to tackle. All of these documents need to be entered into a bank’s systems to be used for loan decisions. Loan officers also need to read and comprehend relevant paperwork, such as a business plan.
Enter generative AI, and this challenge becomes much simpler. A large US bank’s use of LLMs for this exact use case has improved data accuracy and yielded richer data for decisioning. The end-to-end data process has been shortened from multiple days to three hours. New innovative startups can ensure this type of incredible efficiency proliferates across the financial services industry.
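As a toy illustration of what “entering documents into a bank’s systems” means, here is a minimal Python sketch that pulls structured fields out of application text. The sample document, field names, and `extract_fields` helper are assumptions for illustration, and a regex stands in for the extraction step; the appeal of LLMs is precisely that they can handle documents too varied and messy for hand-written rules like these.

```python
import re

# Toy sketch: turning unstructured loan-application text into structured
# fields. The sample text and field schema are hypothetical.

SAMPLE = """
Applicant: Jane Doe
Requested amount: $25,000
Stated annual income: $88,500
"""

def extract_fields(text: str) -> dict:
    """Pull the applicant name and dollar amounts into a dict."""
    name = re.search(r"Applicant:\s*(.+)", text)
    amount = re.search(r"Requested amount:\s*\$([\d,]+)", text)
    income = re.search(r"annual income:\s*\$([\d,]+)", text)
    return {
        "applicant": name.group(1).strip() if name else None,
        "amount_usd": int(amount.group(1).replace(",", "")) if amount else None,
        "income_usd": int(income.group(1).replace(",", "")) if income else None,
    }

record = extract_fields(SAMPLE)
print(record["amount_usd"])  # 25000
```

A rules-based extractor like this breaks the moment a document deviates from the expected layout, which is why banks historically needed large teams to key in this data; an LLM-based extractor can target the same structured schema while tolerating free-form input.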
Extracting insights from large real-time data streams
Large language models are highly capable of presenting existing data in consumable forms, such as distilling it into human-readable summaries or insights. Continuing with the loans use case, one large lender has begun to use LLMs to summarize business proposals in commercial loan applications. Trading desks and other financial firms have begun to use large language models to process large amounts of real-time data, such as Dow Jones feeds, and flag insights relevant to existing portfolios or news that teams may need to act on.
AI’s future impact on financial services is certainly exciting, even in areas beyond what we have described. At the same time, we cannot forget that highly regulated industries face additional challenges.
AI regulatory landmines to be aware of
(This is not legal advice. We are not lawyers.)
Regulators do not like seeing compliance take a backseat to innovation. We have seen this intensely over the last few years as sponsor banks loosened their requirements and then regulators subsequently cracked down. Historically, regulators tend to let things play out initially as they form their perspective, and fintech companies need to be prepared for when they do.
These are the challenges that we expect AI applications within financial services to face:
Financial inclusion and fairness
In this new AI age, many people are eager to introduce unstructured data and new generative models immediately, but that strategy is not tractable in a highly regulated ecosystem like financial services. While improved model performance is alluring, the industry should be wary about these types of changes for use cases such as lending and customer onboarding. New data has the potential to introduce new unintended model biases. Financial services businesses are ultimately held accountable for all decisioning outcomes. They must be prepared to prove that any data used is both not discriminatory toward any group of people and does not have a discriminatory effect on model outputs.
AI Explainability
Explainability is a core part of Regulation B within the Equal Credit Opportunity Act. Before generative AI, in a traditional AI stack, models were custom trained in-house, giving fintech companies full rein over the end-to-end process. In the new AI stack, companies use powerful pretrained models that are open source or from a provider (e.g., OpenAI) and then fine-tune them on the company’s specific data. Before choosing which models or model providers to go with, consider whether they can work with you to provide documentation and data to regulators at the depth and speed required. This should be a key requirement before considering productization.
Who’s ready to build a fintech AI company?
The financial services industry has always been highly regulated, and that did not stop billion-dollar software companies from emerging and disrupting the sector in the last decade. With our deep experience in both fintech and AI, we are excited to support the next generation of startups who will create AI’s prolific impact on financial services. Over time, as generative AI is better understood and the technology matures, we will see value creation expand from internal use cases to include the application layer as well. We have all joked that every company is going to be a fintech one day, and that is still playing out. We are even more certain that every fintech will be an AI company, sooner rather than later.
Special thanks to our summer fellow Angeline Rao (Stanford GSB Class of ‘25) for her collaboration and research on this piece.