Just-In-Case AI: Does Downside Risk Limit AI ROI?
Lessons from Covid supply chain issues could apply to AI
Happy Sunday! I’m Rob May, CEO at BrandGuard, the world’s only AI-powered brand governance platform. I’m also an active angel investor and run the AI Innovator’s Podcast. Our 2024 season is kicking off in two weeks, so if you know a good guest, please reach out.
Today I want to write about what happens when AI moves beyond human capacity, and whether or not we should have any concerns or backup plans. I’ll use BrandGuard as an example and talk a bit about how supply chain lessons from Covid are relevant.
At BrandGuard, we ingest brand guidelines and sample content and then use it to automatically approve or disapprove customer-facing brand assets. So far we haven’t seen any customers roll GenAI out at scale, so all of our use cases are for companies that simply have sprawling marketing groups and produce enough content with humans that ensuring brand consistency is already a problem. But everyone we talk to is considering GenAI and figuring out how to use it.
In that world, BrandGuard becomes even more valuable, because if you are making 1,000 personalized emails, 5,000 landing pages, or 10,000 custom videos with GenAI, using humans to ensure they are all brand compliant is not possible. You have to lean on AI. That means many assets with customer-facing messaging will go out without human approval.
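To make that concrete, here is a back-of-the-envelope sketch of what human review would cost at those volumes. Every number in it (minutes per review, hours per day) is an assumption for illustration, not customer data:

```python
# Back-of-the-envelope sketch of the human review burden for one GenAI campaign.
# All per-asset review times and the 8-hour day are illustrative assumptions.

ASSET_COUNTS = {"personalized emails": 1_000, "landing pages": 5_000, "custom videos": 10_000}
MINUTES_PER_REVIEW = {"personalized emails": 2, "landing pages": 10, "custom videos": 15}

total_minutes = sum(count * MINUTES_PER_REVIEW[name] for name, count in ASSET_COUNTS.items())
reviewer_hours = total_minutes / 60
reviewer_days = reviewer_hours / 8  # assuming an 8-hour review day

print(f"{reviewer_hours:,.0f} reviewer-hours (~{reviewer_days:,.0f} person-days) for one campaign")
# -> 3,367 reviewer-hours (~421 person-days) for one campaign
```

At hundreds of person-days per campaign, the “human approves everything” model simply doesn’t scale.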
As society rolls AI out across more and more processes, this “no human checked this” issue is going to grow. That might be fine in most cases and will definitely make most workflows more efficient. But we know there are always exceptions, surprises, and unexpected issues in tech. What happens when, for whatever reason, the machines fail? And fail at scale?
For brand assets, that may be a problem, but it won’t crash the world. In other areas where AI may be applied, it could cause major problems.
I’m reminded of just-in-time supply chains. When I was in business school, it was one of the big things we talked about - how Japan crushed the U.S. in the 70s and 80s because they adopted JIT. By the time 2019 rolled around, just-in-time supply chains were the norm pretty much everywhere.
Then Covid hit and messed all that up. Everyone wished they had a second source of supply, or had more inventory on hand, or whatever. Supply chain people stopped talking about “just in time” supply chains and started talking about “just in case” supply chains.
My question is - how do we apply that to these AI workflows we are building? What does a “just in case” AI workflow look like, and what does that mean for how we set it up? Should humans be sampling and auditing some AI decisions? Will other AIs monitor the first set of AIs? Will we need human backups for everything?
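I don’t have the answer, but one minimal version of a “just in case” workflow is spot-check sampling: let the AI approve or reject everything, route a small random slice to a human, and watch the disagreement rate. Here is a rough sketch in Python; the function names, the Asset type, and the 5% audit rate are illustrative assumptions, not a description of any real product:

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Asset:
    asset_id: str
    content: str

def just_in_case_pipeline(
    assets: list[Asset],
    ai_check: Callable[[Asset], bool],     # hypothetical AI brand-compliance check
    human_check: Callable[[Asset], bool],  # hypothetical human reviewer
    audit_rate: float = 0.05,              # fraction of AI decisions spot-checked
) -> dict:
    """Let the AI decide on every asset, but send a random sample to a human
    auditor and collect disagreements so failures are caught before they scale."""
    approved, rejected, disagreements = [], [], []
    for asset in assets:
        ai_ok = ai_check(asset)
        (approved if ai_ok else rejected).append(asset)
        if random.random() < audit_rate:  # the "just in case" sample
            if human_check(asset) != ai_ok:
                disagreements.append(asset)
    return {"approved": approved, "rejected": rejected, "disagreements": disagreements}
```

If the disagreement rate climbs, you raise the audit rate or pull humans back into the loop; if it stays near zero, you let the machines run.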
The economics of rolling AI out to many business and government workflows will sometimes hinge on the economics of the “just in case” piece. When AI lowers the cost to perform a task by 90% because you offload it to machines, having a human backup may be a significant add-back, maybe enough of an add-back that the initial rollout isn’t worth the cost.
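Here is a toy version of that math (every figure is an assumption for illustration): if a task costs $10 done by hand and $1 with AI, each slice of output a human still has to review adds part of that $10 back per task, and the savings erode quickly:

```python
# Toy ROI math (all figures are illustrative assumptions): AI cuts task cost by 90%,
# but a "just in case" human backup adds some of that cost back per task.

human_cost_per_task = 10.00                      # fully loaded manual cost
ai_cost_per_task = human_cost_per_task * 0.10    # the "90% cheaper" case

for audit_rate in (0.0, 0.10, 0.25, 0.50):       # share of AI output a human still reviews
    backed_up_cost = ai_cost_per_task + audit_rate * human_cost_per_task
    savings = 1 - backed_up_cost / human_cost_per_task
    print(f"audit {audit_rate:4.0%}: ${backed_up_cost:.2f}/task, savings {savings:.0%}")
# audit   0%: $1.00/task, savings 90%
# audit  10%: $2.00/task, savings 80%
# audit  25%: $3.50/task, savings 65%
# audit  50%: $6.00/task, savings 40%
```

A rollout that pencils out at 90% savings may not pencil out at 40%.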
This world will be upon us very soon: AI will break in surprising ways, in places we didn’t expect it to, people will start talking about “just in case” AI backups, and we will be stuck with those backups until AI gets to human level or better at a given task. That could be a while. Think about self-driving cars. About 8 years ago we were hearing that fully autonomous driving was just around the corner. Now I don’t think very many people believe it will be here in the next few years. The expected economics of full automotive autonomy have changed.
I wonder what fields will most need a “just in case” AI backup, and how that will make for less attractive AI economics in some surprising places.
Thanks for reading.