How AI Will Make The Next Banking Crisis Worse, Not Better
Do we need "social contract" AI models?
Happy Tuesday and welcome to Investing in AI. I’m Rob May, CEO of Nova and an active angel investor in AI startups (most recently FeatureForm and MuseTax). I also run the AI Innovator’s Podcast. Reach out if you’d like to be a guest. I had a post geared up for Sunday, which I’ll now publish later, because I was watching the whole SVB debacle over the weekend, and it made me think about the topic I want to discuss today.
Several people wrote about how digital banking made the SVB run possible in ways that wouldn’t have been possible before. I believe the stats I read were that when IndyMac failed during the financial crisis, the bank run withdrew $10B in 16 days. With SVB, it was $41B in 48 hours. That’s almost 25% of their deposit base. I’m not sure any bank in the world could withstand that.
In parallel, there was some online chatter about tools that can fix this in the future. I’m sure we will see more services that automatically split deposits among many banks, and other creative ideas. But I want to approach this from the AI angle. What if one of the tools that emerges is an AI agent that monitors bank health? By looking at online chatter, financials, and understanding the bank’s investment portfolio, it can give you a score on bank strength, and also predict the odds of a bank run.
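To make the idea concrete, here is a toy sketch of the kind of composite score such an agent might compute. Every signal name, weight, and formula below is a made-up illustration of the concept, not a real risk model:

```python
def bank_health_score(chatter_sentiment, unrealized_loss_ratio,
                      uninsured_deposit_share, liquid_asset_share):
    """Combine a few hypothetical signals into a 0..1 health score (1 = healthy).

    chatter_sentiment:       -1 (panic) .. +1 (calm), from online chatter
    unrealized_loss_ratio:   unrealized portfolio losses relative to equity
    uninsured_deposit_share: fraction of deposits above the insurance cap
    liquid_asset_share:      fraction of assets that can be sold quickly
    """
    score = (0.25 * (chatter_sentiment + 1) / 2          # rescale to 0..1
             + 0.30 * max(0.0, 1 - unrealized_loss_ratio)
             + 0.20 * (1 - uninsured_deposit_share)
             + 0.25 * liquid_asset_share)
    return max(0.0, min(1.0, score))

def run_probability(score):
    """Crude illustrative mapping from health score to odds of a run."""
    return round(1 - score, 2)
```

A bank with calm chatter, small unrealized losses, and plenty of liquid assets scores near 1; a bank with panicked chatter, a deeply underwater bond portfolio, and mostly uninsured deposits scores near 0. The point is not the weights, it’s that every company running the same scorer reacts to the same number at the same time.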
Sounds like a great tool for your company, right? Yet think through the implications if we all use that tool.
As an investor in over 100 companies, I saw a lot of email traffic. Some founders were sending emails about how they were moving from SVB to First Republic or Mercury or some other bank. Other founders who didn’t bank with SVB were sending notes that they were banking with First Republic or Mercury but had heard those banks would be next to fail, so they were moving somewhere else. So some of the companies leaving SVB were just moving to banks that were about to experience their own run.
The point is - in a world where all of this is automated and accelerated, these AI agents could inadvertently trigger a run on one bank, which could lead to runs on others, which could cause a lot of instability in the system. AI looking out for our individual corporate needs could collectively cause a bunch of problems.
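That feedback loop can be sketched in a few lines. In this toy model, "speed" stands in for how fast automated agents react, each failure stokes more panic, and panic cools off between rounds if nothing fails. Every number is purely illustrative:

```python
def simulate_cascade(resilience, speed, panic=0.3, rounds=10):
    """Toy contagion model (illustrative only, not calibrated to anything).

    resilience: for each bank, the fraction of deposits it can pay out
                before failing.
    speed:      how aggressively agents act on panic (1.0 = instant,
                lower = more friction in the system).
    Returns the number of banks that fail.
    """
    failures = 0
    banks = list(resilience)
    for _ in range(rounds):
        survivors = []
        for r in banks:
            outflow = panic * speed                 # withdrawal pressure
            if outflow > r:
                failures += 1
                panic = min(1.0, panic + 0.2)       # each failure stokes panic
            else:
                survivors.append(r)
        panic *= 0.8                                # panic decays with calm
        banks = survivors
    return failures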
The best way to fix this is friction. Tech is OBSESSED with lowering the friction on everything, and I don’t believe that’s always a good idea. It was a good thing that SVB was closed over the weekend. It gave regulators a chance to pause and work through options. Introducing more friction back into the system is probably the best solution, but it won’t happen. There is no way. Friction just isn’t how we think about the world anymore.
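As a thought experiment, mechanical friction could look like a circuit breaker on aggregate outflows: once withdrawals in a rolling window exceed some fraction of the deposit base, further transfers are held for review. The window length and trip threshold below are invented for illustration, not real regulatory parameters:

```python
from collections import deque

class WithdrawalCircuitBreaker:
    """Holds transfers once aggregate outflow in a rolling window gets too
    large. All parameters are hypothetical illustrations."""

    def __init__(self, deposit_base, window_hours=48, trip_fraction=0.10):
        self.deposit_base = deposit_base
        self.window_hours = window_hours
        self.trip_fraction = trip_fraction
        self.cleared = deque()  # (hour, amount) of transfers that went through

    def request(self, hour, amount):
        """Return True if the transfer clears now, False if it is held."""
        # Drop cleared transfers that have aged out of the rolling window.
        while self.cleared and self.cleared[0][0] <= hour - self.window_hours:
            self.cleared.popleft()
        outflow = sum(amt for _, amt in self.cleared)
        if outflow + amount > self.trip_fraction * self.deposit_base:
            return False  # held: buys time, the way the weekend closure did
        self.cleared.append((hour, amount))
        return True
```

With a roughly SVB-scale deposit base, a 10% trip threshold over 48 hours would have paused outflows long before $41B left, forcing exactly the kind of pause regulators got only by closing the bank.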
The only other solution that I see is that we need some sort of collective action AI models for these situations. I don’t know how that would work. It would have to be an AI version of Rousseau’s “social contract.” Rousseau starts with the idea that we all have rights that we voluntarily give up in order to impose duties on others. I think Rousseau would say that it’s entirely my right to stab you in the neck, and it’s entirely your right to do that back to me. But instead of running around the world in fear of constantly being stabbed, we all agree to give up our rights to stab each other and to punish stabbers, so that we might have a more peaceful society that functions much better.
What’s the right AI oversight system to put in place that looks like a social contract? What will make sure all our individual AI agents don’t act so fast that they trigger a cascade of devastation when calm, patience, and a little more friction would have helped keep things stable? Who should own, train, and run these models? It’s not something I’ve thought about before so, if you have thoughts, or are working on this, I’d love to hear from you.
Thanks for reading.
Rob, your point on friction is spot on and is in fact what happens in the public markets when a stock falls too suddenly. There are regulations that impose a “time out” when certain conditions are met.
And I think these sorts of regulations and policies were crafted after some automated systems went haywire a while back, so this is a problem space where our regulators could be proactive. Assuming there is now a universal backstop on deposits (implied or not), maybe the government can regulate halts to funds transfers for a period of time, etc.