Goodbye disparate impact. Hello uncertainty.
Fintech Takes
Alex Johnson
May 8th, 2026


Happy Friday, Fintech Takers!

I hope you had a good week.

I have relocated to a new office and have spent a good chunk of my week getting the new Fintech Takes HQ up and running. 

We’re working out of a gloriously old and ramshackle building where, at any given time, it’s difficult to say which outlets will be functioning. It’s 6 to 5 and pick 'em as to whether the building will fall down before our one-year lease expires. 

But, for now, it’s home.

- Alex 

P.S. — Uncertainty is a market condition, but we can’t let it paralyze us. 

Clarity isn’t coming first. Action can. Bring all of the questions keeping you up at night and join us on May 14th (our rescheduled date!) as we dig into the problems your team is actually facing.

This one is an AMA, so you tell me what you want to talk about! Save your spot.



Sponsored by TruStage

When borrowers miss loan payments, it's usually not because they forgot.

TruStage's new guide maps how borrowers prioritize competing obligations when money gets tight, and what those choices reveal about repayment behavior, trust, and portfolio risk.

It's a concise behavioral deep dive into why missed payments are often a rational trade-off (and how smart lenders can respond).

With delinquencies climbing across consumer credit, rethinking how you read borrower behavior is the best path to stronger credit performance.

A stronger portfolio is only a few clicks away.

Follow the money (literally).

LPS-8601922.1-1125-1227


DEEP DIVE

10 Questions About the Future of Fair Lending

A couple of weeks ago, I wrote about the news that the CFPB had finalized a rule to eliminate the use of disparate impact analysis in fair lending compliance under the Equal Credit Opportunity Act and its implementing regulation (Reg B).

In my analysis, I was critical of the CFPB’s new rule, arguing that disparate impact analysis and remediation is often helpful for lenders in identifying and correcting for subtle flaws in their underwriting models.

This write-up engendered an unusual amount of thoughtful feedback from readers, many of whom took issue with my defense of disparate impact as a mechanism for supervising and enforcing fair lending.

I genuinely appreciate this feedback and the time that folks spent engaging with me, privately, over the past couple of weeks on this topic. In fact, it spurred me to reach out to many of the smartest folks I know in lending and credit risk, to ask their opinions and to get some of my dumb questions about fair lending and disparate impact answered.

I still have many, many questions, but they’re now a little less dumb, and I thought it might be useful to share them in today’s newsletter.

So, here they are: 10 questions about the future of fair lending, organized into three parts.

Part 1: What Actually Happened?

Before we get into whether the CFPB's rule was a good idea, it's worth being clear about what it does and doesn't do, and what, from a regulatory perspective, might happen next.

1. How much will this really change anything?

Take a step back and look at the actual constraints on lender behavior.

The Fair Housing Act still recognizes disparate impact, so anything that touches residential real estate — mortgage, home equity, refinance — is unchanged. ECOA's private right of action still exists, so plaintiffs' lawyers can still bring effects-based claims (and ECOA has a five-year statute of limitations, which means everything lenders do today is exposed to a future CFPB that may bring the doctrine back). State AGs in California, New York, Massachusetts, New Jersey, and Illinois have already signaled they'll continue pursuing effects-based theories under state law.

So what's actually different?

From what I'm hearing, the largest banks might be taking their foot off the gas, just a little — shrinking fair lending teams at the margin, spending less time and resources on proactive analysis and remediation — but they're not stopping. And critically, they're not dramatically altering their credit underwriting in response to this, because the rest of the legal and reputational architecture hasn't changed.

Mid-tier banks and small fintech companies are more likely to scale back substantively. The practical effect of the rule, in other words, may be that the floor drops while the ceiling stays roughly where it is, and the gap between the responsible and irresponsible ends of the industry widens.

2. Will states formally codify disparate impact in lending law?

If federal enforcement isn't really going away, where does it actually go?

We've seen this movie before. When the CFPB stepped back from open banking, New York moved to write rules into state law. Not just to fill the federal vacuum, but to lock in standards that survive future administrations.

Fair lending is set up for the same dynamic, only larger. Several states already recognize disparate impact under their general civil rights or anti-discrimination statutes. New Jersey's Division on Civil Rights adopted comprehensive disparate impact regulations under the Law Against Discrimination that explicitly extend disparate impact to lending and reference algorithmic decision-making.

The next step is formal codification — statutes or regulations that mandate disparate impact testing, set bright-line statistical thresholds, or require model audits as a condition of operating in the state.

If California, New York, New Jersey, and Massachusetts go that direction over the next 18 months, the federal rollback becomes symbolic and lenders end up with a fragmented 50-state rulebook instead of a single federal one.

3. Did the CFPB just kill the most market-friendly fair lending tool lenders had?

There's another thing the rule did that's gotten less attention than the headline change, and it cuts in a direction that's hard to reconcile with the rest of the rule.

Special Purpose Credit Programs (SPCPs) are a carve-out in Reg B that lets lenders extend credit to a defined "economically disadvantaged class" — for example, a program targeting Black entrepreneurs, Native borrowers on tribal land, or first-generation homebuyers — using criteria that would otherwise look like illegal discrimination. They were created exactly because Reg B's general prohibition on using protected characteristics in credit decisions made it hard for lenders to intentionally serve underserved populations.

Over the past 5-7 years, lenders figured out something interesting about SPCPs: they often perform quite well! JPMorgan Chase, Bank of America, and others have publicly reported that their SPCP portfolios match or outperform their general books. JPMC's small-business SPCP, for example, reduced the gap in loan-approval rates between majority-white and majority-minority areas from 11% to 2-3% during its pilot.

The reason is that traditional underwriting models are well-calibrated for known risks — populations the historical training data represents heavily — but poorly calibrated for unknown risks, where the model variance is large and the lender genuinely doesn't know how the loans will perform. SPCPs were a permission structure for taking unknown risk inside a defined wrapper, and the data shows that those bets frequently pay off.

The new rule restricts SPCPs significantly. For-profit lenders can no longer use race, sex, or national origin as eligibility criteria.

So the CFPB has simultaneously eliminated the legal pressure to address disparate outcomes AND restricted the main tool lenders had to voluntarily address them. If markets are supposed to close credit gaps without regulatory pressure, why narrow the most market-friendly fair lending tool — the one lenders actually liked — at the same time?

Part 2: Can Fair Lending Be Fixed Rather Than Gutted?

So if the CFPB's rule didn't really eliminate fair lending (and arguably made it more fragmented and more dependent on state-level enforcement), the obvious question is whether there was a better path. The disparate impact doctrine has real, defensible critiques, but those critiques don't necessarily mean that we should delete it entirely.

4. What counts as a "business necessity"?

Let's start with a concept at the heart of the doctrine that I'm not sure many people fully understand.

Disparate impact analysis works as a three-step burden-shifting framework. First, a regulator or plaintiff shows that a lender's neutral policy produces a disparity across a protected class. Second, the lender rebuts by showing the policy serves a business necessity, some legitimate reason it improves the lending business. Third, even if the lender clears that bar, the regulator can still win by showing a less discriminatory alternative (LDA), a different policy that would serve the same business purpose with less disparate effect.

Both "business necessity" and "LDA" have always been mushy. If a model improves loss rates by 30 bps but widens the approval gap by 5 points, who decides if that's necessary? Who's responsible for finding the LDA? The lender, the regulator, or some third party? On what evidence? 

Forty years into this doctrine, we've never crisply answered any of these questions.
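To make step one of that framework concrete, here's a minimal sketch of a disparity test in Python. The 80% ("four-fifths") threshold is borrowed from employment law's Uniform Guidelines purely as an illustration; fair lending practice draws on a wider range of statistical tests, and nothing below reflects any regulator's actual methodology.

```python
# Hedged sketch: step one of the burden-shifting framework, quantifying the
# disparity a facially neutral policy produces. The 0.8 threshold is the
# illustrative "four-fifths" rule, not a fair lending bright line.

def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    """Ratio of the protected group's approval rate to the control group's."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

def flags_disparity(ratio: float, threshold: float = 0.8) -> bool:
    """Flag a prima facie disparity when the ratio falls below the threshold."""
    return ratio < threshold

# Toy portfolio: 300/600 protected-group approvals vs. 500/700 control.
ratio = adverse_impact_ratio(300, 600, 500, 700)
print(round(ratio, 3), flags_disparity(ratio))
```

The sketch covers only the first step; the hard parts of the doctrine (business necessity, the LDA search) are exactly the parts that resist being reduced to a function.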

5. Is it possible to prevent regulators from weaponizing fair lending?

The ambiguity above wouldn't matter as much if the doctrine had been applied consistently and predictably over the past 40 years. It hasn’t.

There are two cases, in particular, that many of the folks I spoke with cited:

  • Townstone Financial (2020-2024). The CFPB sued a small Chicago mortgage lender based on radio show comments the owner made. The comments were racist, but arguably they weren't directed at any specific applicant in the way that the law had traditionally prohibited.

  • Ally Financial (2013). A $98M settlement based on statistically derived racial data showing that auto dealers marked up loans more for minority borrowers. The problem was that Ally couldn't see the race of borrowers (the dealers did the markups), and the way that race was statistically derived was, by the CFPB's own internal estimates, far from accurate at the individual level.

The overwhelming feedback I got when I asked people about this is that these cases are exceptions, not the rule. Most fair lending enforcement is straightforward and evidence-driven. But the exceptions have done real damage to the disparate impact doctrine's legitimacy, and that damage is part of what made the CFPB's new rule politically possible.

So, the question is this: Is there a version of disparate impact that's structurally protected against this kind of overreach (bright-line statistical thresholds, mandatory regulator pre-clearance for novel theories, formal safe harbors for good-faith model testing)? Or is enforcement discretion an unavoidable part of the framework?

6. Can we even measure disparate impact accurately right now?

Even if we built the cleaner, more predictable doctrine described above, we'd still have a problem at the heart of the entire fair lending debate that almost nobody talks about: the data used to enforce it isn't very good.

For mortgage lending, lenders collect race and ethnicity directly via HMDA. That data is highly reliable. For small business lending, the 1071 rule (such as it is) will eventually require similar collection. But for credit cards, auto loans, personal loans, BNPL, and basically everything else, lenders are prohibited by Reg B from collecting demographic information at application.

So when regulators or compliance teams want to test for disparate outcomes in those products, they have to infer race using BISG (Bayesian Improved Surname Geocoding), which combines surname distributions from Census data with the racial composition of the applicant's census tract.

BISG works okay in the aggregate but is much less accurate at the individual level, particularly for Asian/Pacific Islander applicants and for multiracial applicants, where the CFPB's own methodology paper acknowledges weaker performance. The Ally case — where the lender paid $98M based partly on BISG-derived racial data — is the canonical example of what can go wrong with this methodology.
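For readers who haven't seen the mechanics, here's a minimal sketch of the BISG update, under the standard assumption that surname and geography are conditionally independent given race. Every probability below is an invented toy number, not a real Census figure.

```python
# Hedged sketch of the BISG combination step:
# P(race | surname, tract) is proportional to
# P(race | surname) * P(race | tract) / P(race).
# All inputs are made-up toy numbers for illustration only.

def bisg_posterior(p_race_given_surname, p_race_given_tract, p_race_marginal):
    """Combine a surname-based prior with tract composition via Bayes' rule."""
    unnormalized = {
        race: p_race_given_surname[race] * p_race_given_tract[race]
              / p_race_marginal[race]
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    return {race: v / total for race, v in unnormalized.items()}

surname = {"white": 0.70, "black": 0.20, "asian": 0.10}  # toy P(race|surname)
tract   = {"white": 0.30, "black": 0.60, "asian": 0.10}  # toy P(race|tract)
margin  = {"white": 0.60, "black": 0.13, "asian": 0.06}  # toy P(race) overall

posterior = bisg_posterior(surname, tract, margin)
print({r: round(p, 3) for r, p in posterior.items()})
```

Notice what the math does: a surname that leans one way and a tract that leans another can produce a confident posterior that's simply wrong for the individual in front of you, even when the method looks fine in aggregate. That gap is the accuracy complaint.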

7. Should we just revise Reg B and collect demographic data at application?

Reg B's prohibition on collecting demographic data at application was designed for a world where human underwriters made decisions, and demographic information would inevitably bias those decisions. That world is gone. Underwriting decisions are now mostly made by statistical models and rule-driven decisioning systems. We can architecturally control what data flows into those models and systems.

Collecting race and gender at application — encrypted, walled off, available only for fair lending testing and explicitly excluded from the underwriting decision — is now a solvable engineering problem, not a policy contradiction. It would also fix the BISG accuracy problem at the root, which would address one of the lenders' most legitimate complaints about disparate impact testing and enforcement.
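To show why this is an engineering problem rather than a policy contradiction, here's a hedged sketch of the "collect but wall off" pattern. The field names, the feature whitelist, and the audit-only accessor are all illustrative assumptions, not anyone's actual architecture.

```python
# Hedged sketch: demographics are captured at application, kept out of the
# feature set the underwriting model sees, and released only to the fair
# lending testing pipeline. Names and the access check are illustrative.

from dataclasses import dataclass, field

UNDERWRITING_FEATURES = {"income", "dti", "cash_flow_score"}

@dataclass
class Application:
    features: dict                    # visible to the underwriting model
    _demographics: dict = field(default_factory=dict, repr=False)  # walled off

    def model_inputs(self) -> dict:
        # Only whitelisted features ever reach the model.
        return {k: v for k, v in self.features.items()
                if k in UNDERWRITING_FEATURES}

    def demographics_for_testing(self, caller: str) -> dict:
        # Only the fair lending testing pipeline may read demographics.
        if caller != "fair_lending_audit":
            raise PermissionError("demographics are excluded from underwriting")
        return dict(self._demographics)

app = Application(
    features={"income": 85_000, "dti": 0.31, "cash_flow_score": 712,
              "zip": "78721"},
    _demographics={"race": "self-reported", "sex": "self-reported"},
)
print(sorted(app.model_inputs()))  # zip is filtered out with the demographics
```

In a real system the wall would be enforced with encryption and access controls rather than an honor-system string check, but the shape is the same: the data exists for testing and is architecturally unreachable by the decision.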

8. Can we make fair lending compliance testing dramatically more efficient?

A lot of the institutional anger at disparate impact compliance isn't really about the doctrine. It's about the cost.

The biggest banks have entire teams running fair lending testing. A community bank with thirty people in compliance, or a Series B fintech with three, has to choose between paying a vendor a lot of money, doing it badly, or skipping it.

If the actual goal is more lenders running better fair lending tests, the path forward isn't deleting the test. It's making the test cheaper, faster, and more standardized.

Where are the open-source fair lending testing tools? Where's the regulator-blessed reference implementation that a community bank or fintech lender can plug a portfolio into and get a defensible answer in a week? Why does this remain a $500/hour consulting engagement in 2026?

Part 3: Is Fair Lending About to Become Much Harder?

Even if we did everything in Part 2 — clarified the doctrine, fixed the data, built better infrastructure — there's a question lurking underneath all of it about whether the entire framework is technologically sustainable. Underwriting is changing fast, and fair lending laws were designed for a world that's about to look very different.

9. Does cash flow underwriting break the blind audition?

In the 1970s and 80s, major orchestras started using blind auditions — performers behind a screen — and women's representation in those orchestras went from less than 5% in 1970 to roughly 25% a generation later. The screen worked because it forced decisions onto the variable that mattered (musicianship) and away from the variable that didn't (gender).

Traditional credit reports do something similar: they're studiously stripped of demographic information by design. Combined with automated underwriting, that's a technological implementation of the blind audition principle. The model literally cannot see the borrower's race or gender.

Cash flow underwriting fundamentally breaks that screen. Transaction data, merchant names, payment timing, and geographic locations all give off fairly obvious demographic signals, which any sufficiently capable model will pick up on.

And LLM-driven underwriting using high-dimensional embeddings collapses the entire concept of a "proxy" that disparate treatment law (which covers intentional discrimination against applicants based on protected characteristics … which the CFPB has left in place) depends on. Disparate treatment assumes you can identify the variable a lender used in place of a protected characteristic. If a model uses a 4,000-dimensional embedding of an applicant's transaction history, browsing metadata, and language patterns, and the embedding produces racial disparities, what's the proxy? Is there one in any meaningful legal sense?
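Here's a toy example (entirely invented data) of why "proxy" gets slippery. The rule below never sees group membership, yet a single correlated behavioral feature reproduces a perfect group split; scale that up to thousands of weakly correlated dimensions in an embedding and there's no single variable left to point at.

```python
# Toy illustration with invented data: a facially neutral rule that never
# sees `group` still sorts perfectly along group lines, because the
# behavioral feature it uses happens to be correlated with group.

# Each row: (behavioral_feature, group). The group label is never shown
# to the decision rule.
rows = [(0.90, "a"), (0.80, "a"), (0.85, "a"),
        (0.20, "b"), (0.30, "b"), (0.25, "b")]

def approve(feature: float, threshold: float = 0.5) -> bool:
    return feature > threshold      # neutral rule: no demographics anywhere

rates = {}
for group in ("a", "b"):
    decisions = [approve(f) for f, g in rows if g == group]
    rates[group] = sum(decisions) / len(decisions)

print(rates)   # the neutral rule produces a perfect group split
```

With one feature, you can name the proxy. With a 4,000-dimensional embedding, the "proxy" is smeared across the whole representation, which is the legal problem the question is pointing at.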

10. Are agentic AI and disparate impact inherently compatible?

The previous question is about what models can see. This one is about what they'll do once they can optimize against a fairness constraint at full speed. 

Almost nobody in fair lending compliance is thinking in AI-safety terms yet, but they should be.

Imagine a fully agentic lending system with a fair lending compliance layer that requires demographic ratios to stay within X%. 

The system will find the cheapest way to satisfy that constraint.

That might mean approving a small batch of high-confidence applications from underrepresented groups to satisfy the ratio, while making the marginal applicant in those groups — the borrower fair lending was actually designed to protect — harder to approve than ever. It might mean trading applications with another lender whose constraint is easier to satisfy at the margin. Or it might mean selling portfolios to balance the books at quarter-end (a more extreme version of what big banks already do when they buy CDFI loan pools to improve their HMDA ratios).

The constraint gets satisfied, but the underlying goal gets ignored.

In AI safety research, this is known as reward hacking, the tendency of an AI system to optimize against a measurable proxy for an unmeasurable goal. And it's not a hypothetical 2030 problem. It's a 2027 problem.
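The dynamic above can be simulated in a few lines. In this toy (all numbers invented), an agent must keep the protected group's approval rate within 10 points of the control group's, and it finds the cheapest satisfying policy: approve only its highest-confidence protected-group applicants and never reach down to the marginal borrower.

```python
# Toy reward-hacking simulation with invented numbers: the constraint is
# satisfied at exactly the minimum cost, and the marginal applicants the
# rule was meant to protect are the ones left out.

import math

def cheapest_constraint_satisfier(control_rate, protected_scores, cutoff=0.1):
    """Approve the fewest, highest-confidence protected-group applications
    that keep the approval-rate gap within `cutoff` of the control group."""
    target_rate = control_rate - cutoff              # minimum acceptable rate
    needed = math.ceil(len(protected_scores) * target_rate)
    ranked = sorted(protected_scores, reverse=True)  # safest applicants first
    approved = ranked[:needed]
    return approved, len(approved) / len(protected_scores)

control_approval_rate = 0.70
protected_scores = [0.95, 0.91, 0.88, 0.74, 0.61,
                    0.55, 0.52, 0.40, 0.33, 0.21]

approved, rate = cheapest_constraint_satisfier(control_approval_rate,
                                               protected_scores)
print(len(approved), rate)   # constraint met with the minimum approvals
print(min(approved))         # the agent never reaches below its safest cut
```

The ratio comes out exactly at the floor the constraint allows, and every applicant below the agent's safest cut is excluded: the metric is green while the goal it proxies for is ignored.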


Sponsored by Lithic

Bad issuer infrastructure hides until you scale. Then come the fees (mysterious, naturally), the outages (always at volume), and the features that were supported until they weren't (funny how that works).

Lithic runs direct physical connections to Mastercard before any of that becomes your problem.

That means real-time authorization control and transaction data that other processors can't access.

Your processor's ceiling has a way of becoming your product's ceiling.

Companies in disbursements, spend management, and digital banking that can't afford a weak link build on Lithic.


MORE QUESTIONS TO PONDER TOGETHER

Big news for the endlessly curious (yes, you): I’m collecting your fintech questions on a rolling basis. 

What’s keeping you up at night? What great mysteries in financial services beg to be unraveled? Think of it this way: if a stranger is a friend you just haven't met yet, your question is a Fintech Takes conversation waiting to happen.

One that could headline a Friday newsletter or be answered in an upcoming Fintech Office Hours event.

Drop your question here, whenever inspiration strikes!


WHERE I'LL BE

There are many fun events — virtual and in-person — coming up in the next few months. Here’s where I’ll be!

🖥️ Lending in the Fog: How Lenders and Borrowers Are Adapting to Uncertainty | May 14

Our rescheduled AMA on all things lending. Come ask me and Bjoern your toughest questions!

✈️ Emerge | May 19 - 21

I can’t believe this will be my first time attending Emerge. This will be a big one to cross off my fintech bucket list!

🖥️ Embedded at Scale: When the Stack Gets Put to Work | May 27

Join us for a fun and informative conversation about the future of embedded payments!

🏔️ Fintech Frontier Summit | May 31 - June 3

Talk about a great topic and a great venue. We’ll be talking about bank-fintech partnerships at a ranch in the mountains in northern Montana. In June. Fuck yes. Space is very limited, but let me know if you apply, and I’ll put in a good word!

✈️ Open Banker Salon | June 5

OK, I’m cheating a bit on this one. I won’t be attending this event (too close to my son’s birthday) but I really wish I was. John, Ashwin, Casey, and the rest of the Open Banker team have done a great job pulling together a compelling group of speakers and topics, and the format (a salon!) sounds exactly like what we need more of in D.C.


CORRECTING THE RECORD

My wife — Mrs. Fintech Takes — subscribes to this newsletter. While she doesn’t care about fintech, she does enjoy reading the personal anecdotes, observations, and bits of trivia that I routinely sprinkle throughout. She has requested a space within the newsletter for her to correct the record when she feels that I have shared something that is false or misleadingly characterized. I have reluctantly acquiesced to her request. Her first correction is below.


Dear Readers,

I believe it falls within my purview as Mrs. Fintech Takes to inform you that Alex did not stop thinking about the NBA for “the foreseeable future” (as he claimed). He has been found furtively watching the playoffs despite the grief it brings him. Make of this what you will. 

Yours in truth,

Mrs. Fintech Takes


Thanks for the read! Let me know what you thought by replying back to this email.

— Alex

LinkedIn · Twitter · Instagram · Podcast

Get your brand in front of leaders

Workweek Media Inc.

1023 Springdale Road, STE 9E

Austin, TX 78721

Takes too hot?

Unsubscribe