AI Risks in Financial Services
Fair-lending cases, CFPB actions, model-risk failures, and AML enforcement — scored from public records.
Industry overview
Financial services has the deepest model-risk regime in the economy, and AI deployments are now squarely inside it. Fair-lending enforcement has expanded from traditional underwriting into algorithmic decisioning. The CFPB has signaled that adverse-action notices generated by black-box models are not, by themselves, compliant. Model-risk frameworks like SR 11-7 — written for statistical models — are being interpreted to cover LLM-based systems, and institutions are discovering that "vendor said it works" is not a validation.
Key risks for Finance
Disparate impact in algorithmic underwriting
Models trained on historical lending data can reproduce historical bias. Disparate-impact claims under ECOA and the Fair Housing Act do not require intent — pattern alone is sufficient. Vendor explanations of "our model does not use protected class" do not survive proxy analysis.
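One common first-pass proxy for that pattern analysis is the adverse impact ratio (the "four-fifths rule" heuristic from employment-testing practice, often borrowed in fair-lending screens). A minimal sketch, with hypothetical approval counts and the conventional 0.8 threshold as an assumption, not a legal standard:

```python
# Sketch: approval-rate disparity check (four-fifths rule heuristic).
# Group sizes and the 0.8 threshold are illustrative, not regulatory advice.

def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of bools."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical portfolio outcomes (True = approved).
group_a = [True] * 62 + [False] * 38   # reference group: 62% approved
group_b = [True] * 41 + [False] * 59   # protected group: 41% approved

air = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {air:.2f}")   # 0.41 / 0.62 ≈ 0.66
if air < 0.8:
    print("Below the four-fifths threshold — flag for disparate-impact review")
```

A ratio this far below 0.8 does not prove a violation, but it is exactly the kind of pattern that survives "we don't use protected class" and triggers proxy analysis.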
Inadequate adverse-action explanations
Regulation B requires specific, accurate reasons for credit denials. Generic LLM-generated explanations or post-hoc rationalizations of black-box scores have been challenged by the CFPB as non-compliant.
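One way institutions make reasons "specific and accurate" rather than narrative is to derive them from per-feature score contributions mapped to pre-approved reason statements. The sketch below assumes a hypothetical contribution dictionary and reason-code table; neither is a CFPB-endorsed method, just one compliance pathway:

```python
# Sketch of a reason-code pathway: derive adverse-action reasons from
# per-feature score contributions instead of asking an LLM to narrate
# a black-box score. Feature names and contributions are hypothetical.

def top_adverse_reasons(contributions, reason_codes, n=4):
    """Return up to n features that pushed the score down the most,
    mapped to their pre-approved adverse-action reason statements."""
    negative = [(name, c) for name, c in contributions.items() if c < 0]
    negative.sort(key=lambda item: item[1])  # most negative contribution first
    return [reason_codes[name] for name, _ in negative[:n]]

contributions = {            # signed contribution to this applicant's score
    "utilization": -0.31,
    "delinquencies": -0.18,
    "income": +0.12,
    "file_age": -0.05,
}
reason_codes = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Delinquent past or present credit obligations",
    "file_age": "Length of credit history",
}
print(top_adverse_reasons(contributions, reason_codes))
```

The design point: the stated reasons are the actual drivers of the score, so the notice reflects the model rather than rationalizing it after the fact.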
Model-risk and SR 11-7 gaps
OCC, Fed, and FDIC examiners increasingly expect LLM and ML systems to sit inside the institution's model-risk framework — with documented validation, ongoing monitoring, and challenger models. Many AI vendors do not provide the artifacts that framework requires.
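"Ongoing monitoring" in that framework usually means, at minimum, tracking input-distribution drift against the development sample. A minimal sketch using the Population Stability Index; the bin shares are hypothetical and the 0.10 / 0.25 thresholds are common industry conventions, not supervisory mandates:

```python
import math

# Sketch: population-stability monitoring, one artifact an SR 11-7-style
# ongoing-monitoring program typically produces. Score-band shares are
# hypothetical; thresholds are industry convention, not regulation.

def psi(expected_pcts, actual_pcts):
    """Population Stability Index across pre-binned score distributions."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected_pcts, actual_pcts)
    )

# Hypothetical score-band shares: development sample vs. current month.
dev   = [0.10, 0.20, 0.40, 0.20, 0.10]
month = [0.06, 0.16, 0.38, 0.26, 0.14]

drift = psi(dev, month)
print(f"PSI: {drift:.3f}")
if drift > 0.25:
    print("Significant drift — escalate to model-risk management")
elif drift > 0.10:
    print("Moderate drift — monitor closely")
```

A vendor that cannot supply the binned development distribution cannot even support this basic check, which is the "missing artifacts" problem in concrete form.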
AML / sanctions screening failures
AI-augmented transaction monitoring and KYC tools that miss true positives or generate unmanageable false-positive rates can produce both BSA violations and operational meltdowns. Both have happened.
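The two failure modes map directly onto two metrics: recall gaps are missed true positives (BSA exposure), and low alert precision is the operational meltdown. A sketch with hypothetical monthly counts:

```python
# Sketch: alert-quality metrics for a transaction-monitoring tool.
# Counts are hypothetical. Low precision means investigators drown in
# false positives; low recall means missed suspicious activity.

def alert_metrics(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # share of alerts worth filing
    recall = true_pos / (true_pos + false_neg)     # share of real activity caught
    return precision, recall

# Hypothetical month: 40 genuine hits, 9,960 noise alerts, 12 missed cases.
precision, recall = alert_metrics(true_pos=40, false_pos=9_960, false_neg=12)
print(f"Alert precision: {precision:.1%}")  # 0.4% — ~249 dead-end reviews per hit
print(f"Recall: {recall:.1%}")
```

At 0.4% precision, a 10,000-alert month consumes an investigation team; at 77% recall, one in four real cases goes unfiled. Either number alone can be an enforcement story.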
Regulatory surface
Key regimes: ECOA / Reg B, Fair Housing Act, BSA / FinCEN, OCC and Fed model-risk guidance (SR 11-7), CFPB unfairness and UDAAP authority, NYDFS Part 500, EU AI Act high-risk credit decisioning.
AI services tagged for Finance
7 services
Buyer checklist
1. Documented model-risk artifacts: validation report, ongoing performance monitoring, challenger model, conceptual soundness review.
2. Disparate-impact testing across protected classes using actual portfolio data.
3. Adverse-action explanation pathway that satisfies Reg B at the level of specific, accurate reasons.
4. Vendor contract that grants the institution the audit and explainability rights examiners will ask for.
5. Incident classification: a hallucination in customer-facing output is potentially a UDAAP issue, not a customer-service issue.
Frequently asked
Does SR 11-7 apply to large language models?
OCC and Fed examiners have applied SR 11-7 principles to LLM-based systems used in regulated functions. The conceptual-soundness, validation, and ongoing-monitoring requirements are treated as interpretations of the existing guidance, not exemptions from it.
Can AI-generated adverse-action notices be compliant?
They can, if the underlying model produces specific and accurate reasons that the explanation reflects. They are non-compliant if the explanation is post-hoc narrative wrapped around a black-box score.