Happy Monday, Fintech Takers! And happy (belated) Mother’s Day to all the fintech moms out there. It’s the hardest job there is and one of the most underappreciated. I hope you had a wonderful day.

— Alex

P.S. — Join me on May 27th. I'm sitting down with Jay Dearborn of WEX to talk about what it takes to scale embedded payments: API-first architecture, card issuance at scale, and the compliance layers you can't fake. Should be a fun conversation!

Sponsored by WEX

Every product team eventually runs into the same question: do we want to offer payments, or do we want to become a payments company? They aren't the same thing. Building it yourself pulls in compliance, KYC, infrastructure, and an operating model that doesn’t stay contained. It spreads. Embedded payments change the shape of that decision. That shift is moving faster than most roadmaps have accounted for.

3 FINTECH NEWS STORIES

#1: Anthropic is Building a New Type of Financial Services Vendor

What happened?

In one 48-hour push centered around May 5, Anthropic made it abundantly clear that financial services is one of its top priorities. The company announced a set of 10 AI agents, built for specific jobs-to-be-done in financial services:
A partnership with FIS:
And an integration with Moody’s:
So what?

Financial services is Anthropic’s second largest vertical, making up roughly 40% of its top-50 customers. Clearly, the company believes that it has a lot of room to grow in this industry, and these partnerships and product launches are designed to accelerate that growth.

But how, exactly, will that work?

If you take a step back and look holistically across all of these announcements, a pattern starts to become clear. Financial services — particularly investment banking and commercial banking for large corporates — is, at its core, a combination of two things: human labor and the software tools that labor uses. The whole industry organizes itself around that pairing. And for decades, there have been exactly two ways to sell into it: sell the labor expertise (the consulting firms) or sell the tools (the tech and data vendors).
Anthropic is building a business that delivers both at once: the embedded, workflow-aware, bespoke intelligence of a consulting engagement, and the zero-marginal-cost reusability of a software product. Anthropic's forward-deployed AI engineers learn the workflows from FIS’s customers and Blackstone’s portfolio companies, but the deliverable isn't a slide deck and a six-month consulting engagement. The deliverable is a productized AI agent that ships in Claude Cowork on Monday and is immediately available to every other bank, asset manager, and PE-owned portfolio company on the platform, at zero marginal cost.

That's a dangerous combination for both incumbent vendor archetypes. The consulting firms can't match it. They have no productization layer because their deliverable is a slide deck, not an infinitely reusable AI agent. The tech and data vendors can't match it. Their products are designed to be generic enough to resell, not customized to the exact specifications of each client’s individual workflows.

Anthropic's position is the inverse of both. The consulting work funds the productization. The productization funds the next round of consulting. Every embedded engagement teaches the platform a new workflow, and the platform then makes that workflow available to every customer who didn't hire them. It’s the McKinsey model with Salesforce unit economics.

Anthropic is building a fundamentally new type of financial services vendor. I wonder if FIS, Moody’s, and the rest of the incumbents understand what they’re helping to unleash.

#2: Model Risk Management

What happened?

Federal regulators have published updated model risk management guidance for banks:
And the Federal Reserve's Vice Chair for Supervision, Michelle Bowman, gave a speech articulating some of the reasons for the change:
So what?

Let’s start by being very clear. The Federal Reserve, the OCC, and the FDIC replaced a comprehensive, highly prescriptive, battle-tested 21-page supervisory guidance document on model risk management with a principles-based supervisory guidance document on model risk management that:

A.) is half as long as the prior guidance,
B.) is applicable primarily to banks over $30 billion in assets (which represents roughly 30 banks total),
C.) will not, by itself, lead to supervisory criticism if it is not followed, and
D.) purposefully excludes generative AI and agentic AI models.

And we are doing this — at the exact same time that Anthropic and OpenAI are embedding their AI engineers inside of every bank and bank vendor they can find and Anthropic is building models that create staggeringly massive cybersecurity risks — because, as Bowman said, “innovation is a necessary component of financial services, and supervisory guidance should not be a barrier for banks to engage with new and evolving tools and technologies.”

Truly, what the hell?

The old model risk management guidance (SR 11-7) was prescriptive and weighty, but its prescriptions were mostly common sense. Risk folks I've spoken with liked it. The guidance gave them cover to do their jobs well, and if they got pushback from the business side, all they had to do was point to it.

That is no longer the case. The new guidance (SR 26-2) no longer requires model risk management, as an organizational function, to be structurally separate from the business. The same groups developing the models can be the ones validating them. That validation work no longer carries specific requirements for documentation, and the intensity and frequency of that validation work can now vary depending on the material risks posed by the models.

This new materiality standard (which is one of the biggest changes from SR 11-7 to SR 26-2) might seem reasonable, except that the guidance only applies to the largest banks in the country (as opposed to all banks over $1 billion, which was the functional standard before) and no longer poses any risk of supervisory criticism or enforcement action.

And the new guidance explicitly excludes generative AI and agentic AI! In 2026! That’s insane!

I know the old guidance had some specific, prescriptive requirements that were, on the surface, difficult to square with how LLMs work. For example, SR 11-7’s validation framework rested on the idea that an independent validator could rerun the model with the same inputs and verify the outputs match. LLMs are probabilistic in nature, rather than deterministic, which makes this type of simple replication test much more difficult. (A toy sketch of this problem appears at the end of this story.)

But, like, isn’t that a good thing? Shouldn’t we be worried about mashing together these probabilistic models with the deterministic systems that we trust our financial services system to run on? Shouldn’t we be putting pressure on banks that want to use LLMs and similar attention-based deep learning models to solve this replicability problem before they put them into production?

These problems are solvable. I know for a fact that there are some sophisticated banks and non-bank lenders working hard, right now, to solve them. And thank goodness they are! Because regulators aren’t acting with nearly the same urgency.
Instead, they’re using deregulation to encourage innovation in an area that is already moving too fast for anyone to keep up with, and promising an RFI on model risk management and agentic AI “in the near future.”
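To make that replication problem concrete, here’s a minimal, purely illustrative sketch in Python. Everything in it is invented for illustration: `deterministic_score` is a toy credit-score formula, and `llm_like_score` is a hypothetical stub standing in for an LLM call (the noise term simulates sampling variability; it is not a real API). The point is just that SR 11-7’s “rerun it and compare” replication test works cleanly for a deterministic model, breaks for a probabilistic one, and pushes validators toward statistical acceptance criteria instead.

```python
import random
import statistics

# Toy deterministic model: a made-up credit-score formula.
# Same inputs always produce the same output, so an SR 11-7-style
# replication test (rerun and compare exactly) is trivially satisfiable.
def deterministic_score(income: float, debt: float) -> float:
    return round(300 + 550 * (income / (income + debt)), 2)

# Hypothetical stand-in for an LLM-based scorer (NOT a real API call).
# The Gaussian noise term simulates sampling variability: identical
# inputs can yield different outputs on every run.
def llm_like_score(income: float, debt: float, rng: random.Random) -> float:
    base = 300 + 550 * (income / (income + debt))
    return round(base + rng.gauss(0, 15), 2)

def replicates_exactly(output_a: float, output_b: float) -> bool:
    """The classic replication test: two runs must match exactly."""
    return output_a == output_b

def replicates_statistically(outputs: list[float], reference: float,
                             tolerance: float) -> bool:
    """One possible LLM-era alternative: accept the model if the mean
    of repeated runs stays within a tolerance band of a reference value."""
    return abs(statistics.mean(outputs) - reference) <= tolerance

income, debt = 85_000.0, 20_000.0

# The deterministic model passes exact replication every time.
print(replicates_exactly(deterministic_score(income, debt),
                         deterministic_score(income, debt)))   # True

# The probabilistic model almost always fails it.
rng = random.Random()  # unseeded, like production sampling
print(replicates_exactly(llm_like_score(income, debt, rng),
                         llm_like_score(income, debt, rng)))   # ~always False

# So validation has to become statistical: run the model many times
# and check that the distribution of outputs behaves acceptably.
runs = [llm_like_score(income, debt, rng) for _ in range(200)]
print(replicates_statistically(runs, deterministic_score(income, debt),
                               tolerance=5.0))                 # True, w.h.p.
```

(Pinning a seed or decoding at temperature zero can restore bit-for-bit replication in some setups, but then you’re validating a configuration the model may never actually run with in production.)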
#3: If you're first out the door, that's not called panicking.

What happened?

Coinbase laid off approximately 14% of its employees:

PayPal's cut will be more gradual, but larger:
And BILL’s cut will be larger still:
So what?

The "AI restructuring" announcements coming out of PayPal, Coinbase, and BILL share the same rhetorical shape as Jack Dorsey’s announcement earlier this year: the cuts are framed as a fearless and forward-leaning strategy ("lean, AI-native," "accelerate AI adoption and automation," etc.).

That framing isn't necessarily wrong, but it isn't necessarily right, either. We're in a strange transition stage where multiple things are probably true at the same time, and untangling them takes work. Most of the takes I've seen — both the "AI is eating jobs" version and the "this is just cover for cost-cutting" version — are missing the nuance.

Here are the five things I think are simultaneously true:

1.) Software companies, including the ones operating in financial services, employ way more people than they need to operate effectively.

SaaS economics are absurdly good. 80% gross margins on mature products give a company enormous latitude to hire ahead of need, build redundant functions, run experimental teams indefinitely, and accumulate organizational layers. Add the 2020-2021 COVID effect — the mistaken idea that the pandemic had “pulled forward” a more digitally-native future — and you get a software workforce that was sized for a world that never quite arrived. The 2022-2024 layoff waves at Meta, Google, Microsoft, and Salesforce were the first leg of correcting this. The current cuts are the second. AI is the new rationale, but the underlying overcapacity was there before AI showed up.

2.) AI is weakening the moats that many software companies assumed would protect them.

Classic competitive moats — switching costs, integration depth, brand, partner networks — were built for a world where the marginal cost of building competing software was high. AI collapses that cost. An AI-native competitor can now (in theory) ship in months what a non-AI-native incumbent took five years to build. This, I think, is what makes the cost cutting and organizational restructuring that we are seeing from executives like Jack Dorsey and Brian Armstrong such an urgent priority. They’re worried about their moats.

3.) AI is making white-collar workers materially more productive, which means software companies will (eventually) need fewer of them.

This is obvious, but it bears stating clearly: AI likely means that we will need fewer people in almost every job. This is most clearly true, at the present moment, in software engineering (though the short-term effects of this on the job market are not what you might expect), but it will eventually permeate into other, less deterministic job categories.

Of course, we don’t know how long that will take or how the gains from that increased efficiency will be realized. Productivity gains tend to get captured first by the company, then by workers, then by customers. Right now we're squarely in the "captured by the company" phase. Whether the next phases actually arrive — better products, lower prices, higher wages for the workers who remain — depends on how competitive the labor and product markets get from here.

4.) Software companies have no idea how to reorganize themselves to actually take advantage of AI.

The conventional wisdom is that "AI-native organizations will be smaller and faster." Cool. What does an AI-native org chart actually look like? Who reports to whom? What's the role of a middle manager when half their direct reports are agents? Should there be middle managers at all? Are functional silos still the right structure, or do you reorganize around workflows? Do you centralize AI engineering or push it out to product teams? Nobody has answered any of these questions yet. The companies announcing AI-driven cuts in May 2026 are guessing. Some will guess right. Some won't. Some will message it right. Some won’t.

5.) Investors and policymakers are at a complete loss for how to evaluate the cost structures of existing companies and their potential to harness AI.

Public-market analysts can't decide whether they like these AI-driven cuts or not, which is why nearly identical layoff announcements move BILL (+8%) and PayPal (-10%) in opposite directions in the same week. The unit economics aren't settled. Compute costs aren't settled. Token efficiency isn't settled. Investors are trading on vibes because the underlying data doesn't exist yet.

(Editor’s Note — PayPal appointed a “Chief AI Transformation & Simplification Officer” as a part of its restructuring, which probably helps explain why its stock went down. Talk about a “tell me you have a bloated and dysfunctional organization without telling me” move.)

Policymakers are in the same fog, except their confusion is producing regulatory uncertainty. As we just discussed, banking supervisors haven't decided whether to treat agentic AI as a model risk, a third-party risk, or a cyber risk. The fintech CEOs cutting today are doing so inside a regulatory environment that is, itself, improvising, which means some of these layoffs are bets on a regulatory regime that hasn't actually been built yet. Some of those bets are going to lose.

If you're first out the door and you guess right, that's called strategy. If you're first out the door and you guess wrong, that's called panicking. Right now, no one — not the CEOs writing the memos, not the investors buying and selling stock, not the regulators trying to keep up — can reliably tell the difference.

2 READING RECOMMENDATIONS

#1: Build vs. Buy for Banks in the Age of AI (by Team8) 📚

My friends at Team8 wrote a great report on how AI alters banks’ build, buy, and partnership decisions. It’s excellent, and highly relevant to the topics discussed in today’s newsletter.

#2: AI Is Becoming The Operating System For Financial Life. We Need To Build It Right. (by Jennifer Tescher) 📚

Also highly relevant to the topics discussed in today’s newsletter! I can’t tell you how excited I am to learn more about AI, from a financial health perspective, at next week’s Emerge conference.

1 QUESTION FROM THE FINTECH TAKES NETWORK

There are a TON of interesting questions being asked in the Fintech Takes Network. I’ll share one question, sourced from the Network, each week. However, if you’d like to join the conversation, please apply to join the Fintech Takes Network.

The Fed, OCC, and FDIC are going to be publishing an RFI on model risk management, with a focus on LLMs and agentic AI. What do you hope that they ask about? What are the most important considerations for them getting new, LLM-specific guidance right?

If you have any thoughts on this question, reply to this email or DM me in the Fintech Takes Network!

Thanks for the read! Let me know what you thought by replying to this email.

— Alex