The Biggest Mistake AI Investors Are Making: Too Much Extrapolation
Or, why OpenAI may be a short
Happy Sunday and welcome to Investing in AI! I’m Rob May, CEO at Nova. We make BrandGuard which is an ensemble of AI models that help with brand protection and governance in an AI world. I also run the AI Innovator’s Podcast. Let me know if you have a good suggested guest.
I know many of you like to see it when people take on my ideas publicly so I have two things for you this week. My 5 Contrarian AI Theses post generated a lot of comments, and two people were brave enough to challenge some of them publicly. Parasvil Patel from Radical Ventures came on my podcast to discuss his views on those points. And then Eric Koziol wrote a great piece analyzing and disagreeing with some of my points. Go check them both out.
This week I want to talk about what I believe is one of the biggest challenges in investing in AI. To understand it, I need to make an important point first. That point is - knowledge and intelligence, and particularly the way they intertwine, are discontinuous. This becomes evident when working on data science problems.
For example, sometimes in data science and AI you are trying to solve a particular problem and you start with one technical approach. Through more data and slight tweaks, that approach gets you closer and closer to your goal, but then you hit a wall. Maybe you need a model that is 94% accurate: you start at 60%, slowly climb through 70% and 80%, and around 86% you stall out. The model just doesn’t get better with more data. The approach may be at its limit. To make progress beyond 86%, you have to start over with a new approach using a new data science technique.
There are two common types of discontinuity in AI. One is when performance jumps much more than expected - for example, you double the amount of data and the quality of the output triples. The other is when you max out one technique and need an entirely new approach to keep making progress.
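The second kind of discontinuity - stalling out no matter how much data you add - can be sketched with a toy saturating learning curve. This is a purely illustrative model, not real benchmark data; the base, ceiling, and scale numbers are hypothetical and just mirror the 60%/86%/94% example above:

```python
import math

def plateau_accuracy(n_samples: float, base: float = 0.60,
                     ceiling: float = 0.86, scale: float = 50_000) -> float:
    """Toy learning curve: accuracy rises from `base` toward `ceiling`
    as training data grows, but never exceeds the technique's ceiling."""
    return ceiling - (ceiling - base) * math.exp(-n_samples / scale)

target = 0.94  # the accuracy the business actually needs
for n in [10_000, 50_000, 200_000, 1_000_000]:
    acc = plateau_accuracy(n)
    status = "target met" if acc >= target else "below target"
    print(f"{n:>9,} samples -> {acc:.1%} accuracy ({status})")
```

Every added order of magnitude of data moves accuracy closer to the 86% ceiling, yet the 94% target is never reached - the only way past it is a different technique with a higher ceiling, which is exactly the discontinuity.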
The difficult thing about investing against these discontinuities is that we, as early-stage investors, aren’t used to them. They don’t exist in most other realms of investment. If a company is building computer chips and is the leader at the 28nm process node, it probably has the technical prowess, customer visibility, and capital to also lead at the 22nm node as technology nodes shrink. If a company is building B2B SaaS, say a CRM, the story is similar. Any challenger has to replicate many of the old features, and the scale of customer acquisition channels, that the original company has. So assuming the original company will maintain its leadership is a solid bet.
Disrupting incumbents is hard because historically it has required something in the environment to change to make the incumbent vulnerable. That could be a major technology change, a regulatory change, or a change in consumer behavior. Now, in an AI world, these tech discontinuities mean you can’t make the bets you made previously.
If you are in AI, think back to CNNs, LSTMs, and GANs and how revolutionary they all seemed. Today those technologies barely matter and everything is Transformer architectures. But even things built on Transformers aren’t the end-all-be-all. AI still lacks something. Humans are still way better at most things, and we do it with less training data.
That tells me another jump is coming - a tech discontinuity. Something new will take us from LLMs to more logic and reasoning. I expect a new tech architecture breakthrough in AI.
If/when that happens, what does it mean for companies on the LLM path? Will the same companies be dominant in this new wave, or will the new technology be sufficiently different to rapidly destroy the value of all the old tech? Will companies building LLM-related infrastructure be able to adapt to the new thing, whatever it is?
If AI were like other technologies, I would expect LLMs to have a 15 year life cycle of leadership. But AI is different. LLMs may be gone in 3-5 years. What does that mean for investing in companies in and around the LLM space?
I think to invest in the space, you have to look for companies that have one of three things:

1. Stable customer relationships that give them time to switch out the underlying technology as it advances. Customers should see the company as a sherpa guiding them through an AI journey, rather than choosing it purely on technical prowess.

2. A creative and flexible team. In a world where the tech could change with one new published paper or code release, teams can’t fall in love with one way of doing things.

3. Applications that use many different types of AI and machine learning solutions, so that the company is well versed in solving a plethora of problems and not tied to any one solution that could go away overnight.
The broader point here is - in the world of foundation models, market leadership that is based on technical leadership could be fragile. Be careful investing against that. It makes me wonder if OpenAI should be a short. The company that leads now may have too big a stake in the status quo to see the next thing. Time will tell.
Thanks for reading.