The five federal agencies tasked with regulating the financial industry are, for the first time ever, jointly seeking comments on financial institutions’ use of artificial intelligence (AI) and machine learning (ML). This joint request for comments by the Board of Governors of the Federal Reserve, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration and Office of the Comptroller of the Currency highlights the growing trend of enhanced cybersecurity and privacy enforcement across all industry sectors and increased federal regulation of emerging technologies. Both financial institutions and fintech companies that supply the tools that power AI/ML applications should consider responding to this request for comments, as responses are likely to shape future federal regulations and enforcement actions.
What Are AI and ML and How Do They Relate to the Financial Industry?
The terms artificial intelligence and machine learning are often used interchangeably, but they refer to distinct concepts with distinct applications. AI has existed since the 1950s, while ML gained prominence as a subfield of AI in the 1990s. As a well-known futurist succinctly explained a few years ago, AI is the “concept of machines being able to carry out tasks in a way that we would consider smart,” while ML applies AI “around the idea that we should really just be able to give machines access to data and let them learn for themselves.” Both have gained widespread adoption in the financial sector in recent years.
The request for comments identifies six key applications of AI and ML:
- Flagging unusual transactions by “employing AI to identify potentially suspicious, anomalous, or outlier transactions” and “identifying transactions for Bank Secrecy Act/anti-money laundering investigations, monitoring employees for improper practices, and detecting data anomalies.”
- Improving and personalizing customer service, for example, through “the use of chatbots to automate routine customer interactions, such as account opening activities and general customer inquiries” or “to process and triage customer calls to provide customized service.”
- Informing credit decisions by supplementing existing processes with analyses of “cash flow transactional information from a bank account.”
- Serving as a risk management tool “to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting” and to “assist internal audit and independent risk management to increase sample size (such as for testing), evaluate risk, and refer higher-risk issues to human analysts.”
- Performing textual analysis of unstructured data such as “regulations, news flow, earnings reports, consumer complaints, analyst ratings changes, and legal documents.”
- Enhancing cybersecurity through “real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.”
Despite these beneficial uses of AI and ML, the request for comments acknowledges a range of risks and uncertainties around the use of these technologies:
[T]he use of AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and model risk, all of which could affect a financial institution’s safety and soundness. The use of AI can also create or heighten consumer protection risks, such as risks of unlawful discrimination, unfair, deceptive, or abusive acts or practices… or privacy concerns.
Failures in these areas can lead to enforcement actions by the above agencies, as well as the Federal Trade Commission, the Department of Justice, the Securities and Exchange Commission and/or state Attorneys General.
Why Is the Federal Government Seeking Comments, and How Will They Be Used?
Unlike traditional notice-and-comment rulemaking, where federal agencies request comments on proposed rules, this request seeks comments for informational purposes only. However, these comments are likely to inform future proposed rulemakings and to guide enforcement actions, including through updated agency guidance documents.
The request for comments offers two potential clues about the government’s regulatory and enforcement intentions around the use of AI and ML by financial institutions.
First and foremost, cybersecurity and privacy take center stage in the request for comments. In addition to traditional cybersecurity concerns such as hacking and data breaches, the request notes that “AI can be vulnerable to ‘data poisoning attacks,’ which attempt to corrupt and contaminate training data to compromise the system’s performance.” These concerns build on the report issued in March 2020 by the National Science & Technology Council, which found that AI “is often vulnerable to corrupt inputs that produce inaccurate responses from the learning, reasoning, or planning systems,” often to the point that “deep learning methods can be fooled by small amounts of input noise crafted by an adversary,” allowing “adversaries to control the systems with little fear of detection.” Accordingly, commentators may wish to address whether and how the above agencies (or other parts of the federal government) should prescribe specific steps financial institutions must take to safeguard their systems against such attacks, and how institutions should respond when attacked.
Second, the request for comments suggests potential regulations and enforcement actions based on what financial institutions know—or do not know—about how these technologies are designed and deployed. The request notes that many of today’s regulatory frameworks are based on the premise that “[u]nderstanding the conceptual soundness of any model, tool, application, or system aids in managing its risk,” but “[i]n the case of certain less transparent AI approaches, however, evaluations of conceptual soundness can be complicated,” due to concerns over explainability, overfitting and dynamic updating. Here again, the request for comments echoes the 2020 NSTC report, which noted “[a]s AI systems are deployed in high-value environments, the issue of ensuring that the decision process is trustworthy… is paramount.” The report then suggests trust can be achieved by “defining performance metrics… making AI systems explainable and accountable, improving domain-specific training and reasoning, and managing training data.” Commentators thus may wish to address whether trust parameters should be expressly mandated, including whether financial institutions should be required to take those considerations into account when selecting AI and ML products.
The request for comments is also notable, particularly for fintech vendors and community banks, because it acknowledges that “[f]inancial institutions may opt to use AI developed by third parties, rather than develop the approach internally.” Thus, a risk-based approach, rather than a one-size-fits-all approach, may be more appropriate: “A financial institution’s AI strategy, use of AI, and associated risk management practices could vary substantially based on the financial institution’s size, complexity of operations, business model, staffing, and risk profile, and this could affect the corresponding risks that arise.” Commentators could also address how such risk-based approaches should be developed.
When and How to Submit Comments
The deadline to submit comments is June 1, 2021. Commentators may file comments jointly with all five requesting agencies or may elect to file with a subset of the agencies. Considerations that may inform where to file include the issues addressed in the comments and the identity of the commentator (large financial institution, fintech company, community bank, credit union, etc.) vis-à-vis the jurisdiction of each agency.
For More Information
If you have any questions about this Alert, please contact Brian H. Pandya, Sandra A. Jeskie, Cindy Yang, any of the attorneys in our Banking and Finance Industry Group, any of the attorneys in our Financial Technology Industry Group, any of the attorneys in our Technology, Media and Telecom Industry Group or the attorney in the firm with whom you are regularly in contact.
Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.