On September 28th, the OCC released its Committee on Bank Supervision Operating Plan for fiscal year (FY) 2024. The plan guides the OCC's policy initiatives, supervisory priorities and planning for the coming year, and highlights several key areas for examiners to focus on.
Our Take
While the OCC’s FY24 supervisory plan covers topics similar to last year’s, it includes a call to action on financial risk topics such as liquidity stress testing scenarios that reflect volatile economic conditions and shifting depositor preferences. As such, bank management should prepare to present to examiners analyses of their uninsured deposits and balance sheet profiles as well as explanations of how they have (a) updated their internal liquidity stress testing assumptions to account for behaviors exhibited during the recent bank failures and (b) tested their capacity to execute contingency funding plans.
Another thread across the priorities is the focus on the ability of banks to evaluate and manage risks posed by novel products, many of which use innovative technology and partnerships with fintechs. The focus on growing relationships with fintechs, which was also highlighted in the June 6th interagency guidance on Third Party Risk Management, means that banks must be able to demonstrate adequate due diligence, on-going monitoring and sufficient oversight capabilities. This may involve reviewing agreements with third parties to provide transparency and auditability of their systems as well as having the talent necessary to adequately provide oversight and understand the associated risks.
These examination priorities should be considered with the understanding that examiners are under pressure to escalate concerns more quickly following the recent bank failures. Banks should therefore be prepared to (a) identify and correct issues before examiners find deficiencies; (b) act with urgency to remediate findings in a timely manner; (c) equip risk functions with sufficient resources and authority to oversee and address issues; and (d) enhance reporting of remediation efforts to support board and senior management oversight.
As economic conditions as well as geopolitical risk factors continue to evolve, regulators will expect that banks have the ability to adapt to the evolving conditions and regulatory priorities.
On September 19th, the CFPB released guidance on adverse actions impacting credit, such as denials or lowered credit limits, by firms that use artificial intelligence (AI), machine learning (ML) or other complex models to reach their decisions. The guidance reminds lenders that they must give “specific and accurate” reasons for taking adverse actions as opposed to “vague and overly broad reasons that obscure...the reasons relied upon.” It further clarifies that firms may not rely upon the CFPB’s sample adverse action forms, which provide lists of reasons such as “limited credit experience” and “poor credit performance,” if doing so obscures the specific reason for the denial. The guidance notes that specific reasons are especially important for firms that use AI, as such models often draw from broad sources or analyze types of data beyond the customer’s expectations. It also states that the guidance applies equally to firms using opaque “black box” models that they may not understand sufficiently to meet their obligations.
Our Take
Explainability has been a key component of the Administration’s expectations around AI, with previous CFPB guidance and President Biden’s AI Bill of Rights acknowledging the need for consumers to be able to understand how algorithmic models are being used to make decisions impacting them. The updated guidance, while not introducing any new expectations, puts firms on notice that meeting the CFPB’s explainability expectation may involve a greater level of detail than originally thought. In response, firms relying on AI/ML models will need to examine their model development and governance frameworks to ensure that they leverage industry-leading explainability practices such as (a) the imposition of monotonicity constraints1 on the relationship between risk drivers and model outputs as part of model training and (b) the application of explainability analysis methods such as partial dependence plots,2 Shapley Additive Explanations3 and feature importance charts. The burden of setting standards and defining what is “specific and accurate” will fall on institutions themselves, so it will be important to update policies, procedures, model development frameworks and model validation standards accordingly. These explainability analysis methods are not themselves fool-proof: they may produce unstable results and offer a false sense of security, so firms will need to continually audit them to make sure they remain up-to-date and accurate.
1Monotonicity constraints ensure that variables have a consistent impact on an outcome in a specific direction (e.g., as the variable grows, the model output only ever moves in one direction).
2Partial dependence plots examine how changes in one variable impact the overall result of the model.
3Shapley Additive Explanations (SHAP) are a method for determining the contribution of each variable to the ultimate result.
These notable developments hit our radar this week: