Conflicting AI Opinions And FSD As An Investing Roadmap
How to cut through the hype and insanity
Welcome to Investing in AI! The newsletter is free, but you can support us by sending us customers for Neurometric (anyone who is considering GPU-alternative hardware) or AI deals at HalfCourt Ventures. Also check out our AI Innovator’s Podcast if you want to hear more stories about AI applications.
Investing in AI is difficult for a bunch of reasons. Today I want to talk about how to navigate some of the hype. Let’s look at some statements on AI from the past few months by leaders who should be in a strong position to see where it is going.
Satya Nadella said recently that AI was generating no value.
Sundar Pichai from Google said AI investments were paying off and that the risk of overinvesting is small compared to the risk of missing out.
Sequoia has said there is a bubble and called it the $600 billion question.
Vinod Khosla has said most AI investments will lose money.
Then there is the graph below from the Wait But Why blog about how we are on the cusp of an AGI explosion.
The funny thing is that this last item, the Wait But Why graph, comes from a post published in 2015.
This leads us to ask - is AI progressing at an exponential rate or not? How do we invest when AGI could be just around the corner, or 30 years away?
The simple way to invest around this is to look at traction, via usage or revenue, but that can be tricky too. Just look at these recent public statements:
Jake Saper from Emergence wrote about “mirage” product-market fit in AI.
Greg Isenberg tweeted about “curiosity revenue,” which matches a recent VC narrative I’ve heard about companies growing revenue very fast and then collapsing just as quickly.
We can’t even agree on the benefits of AI: Andrej Karpathy touts vibe coding with these new tools, while others post about how junior developers who rely on them can’t really code.
Then Klarna, the poster child for applied AI progress after announcing they had replaced Salesforce and Workday with tools built using AI, surprised the industry by saying they will move back to human customer support.
It’s insanity. How do you navigate it?
One good way to evaluate AI progress is to look at one of the areas that has been working on it the longest: self-driving cars. A recent article looked at Elon Musk’s predictions about self-driving and how far off they were. He initially promised it in 2020, then again in 2023, and we still aren’t there. But we have made progress.
If you look at where self-driving has been successful, some aspects of it are largely solved. Automated cruise control on standard highways is pretty robust. Navigating common city roads in good weather and light traffic is also pretty strong. Those two ideas give us a starting point for evaluating AI adoption and success. But how do you know when you are listening to Elon-like overhype versus real breakthrough progress that can be applied to real problems?
The first thing to understand is that full self-driving - or in our case, full AI automation of most tasks - is the kind of problem where making 80% of the progress is only 20% of the work. The last 20% of progress (really the last 5%) is the most difficult. As an investor, you handle this by evaluating use cases along the spectrum from that 80% to 100%. Which use cases work well at 80% AI success?
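To make that concrete, here is a back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions, not data - the point is that the value of partial automation depends on both the AI’s success rate and the cost of catching and fixing its mistakes:

```python
# Back-of-the-envelope model of partial AI automation.
# All numbers below are illustrative assumptions, not data.

def net_value_per_task(success_rate, value_per_task, review_cost, error_cost):
    """Expected net value of handing one task to the AI.

    success_rate: fraction of tasks the AI completes correctly (0 to 1)
    value_per_task: value of a correctly completed task
    review_cost: cost of a human reviewing the AI's output
    error_cost: cost of catching and fixing a failed task
    """
    expected_value = success_rate * value_per_task
    expected_error_cost = (1 - success_rate) * error_cost
    return expected_value - review_cost - expected_error_cost

# Low-stakes task (say, drafting marketing copy): mistakes are cheap to fix.
for rate in (0.80, 0.95, 0.99):
    print(f"low stakes  @ {rate:.0%}: {net_value_per_task(rate, 10, 1, 5):+.2f}")

# High-stakes task (say, filing a legal document): mistakes are expensive.
for rate in (0.80, 0.95, 0.99):
    print(f"high stakes @ {rate:.0%}: {net_value_per_task(rate, 10, 1, 200):+.2f}")
```

With cheap mistakes, 80% success is already profitable; with expensive mistakes, the use case only pencils out near the very top of the spectrum - exactly where progress is hardest.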
The second thing to understand is that AI’s interaction with a given task has its own characteristics (highway versus city driving, for example). What makes AI investing so much more work than other types of tech investing is putting in the time to understand which tasks fit where AI will be in the near future.
Here are some high level questions to ask:
How valuable is partial AI automation to this task?
If partial AI automation happens, does the human component of the task become more valuable or less valuable?
What is the TAM of this use case at various automation levels? Where in the spectrum is the biggest TAM unlock - between which two levels of automation?
What are the consequences if the AI makes a mistake?
What are the cultural and workforce adoption issues in rolling out the AI automation if it works?
These questions should give you a good high-level starting point for evaluating possible AI automation tasks and matching them to near-term AI capabilities.
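If you want to answer those questions consistently across deals, a simple scorecard helps. Here is a minimal sketch in Python - the weights, question names, and example scores are all hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical scorecard for the questions above. Weights and scores
# are made up; the point is consistent, comparable evaluation.

QUESTION_WEIGHTS = {
    "partial_automation_value": 0.30,  # value of partial AI automation
    "human_component_upside":   0.15,  # does the human part gain value?
    "tam_unlock":               0.25,  # TAM unlock at reachable automation levels
    "mistake_tolerance":        0.20,  # low consequences for mistakes scores high
    "adoption_friction":        0.10,  # easy cultural/workforce rollout scores high
}

def score_use_case(name, answers):
    """answers: dict mapping each question to a score from 0 (bad) to 5 (great)."""
    total = sum(weight * answers[q] for q, weight in QUESTION_WEIGHTS.items())
    print(f"{name}: {total:.2f} / 5.00")
    return total

# Example scores for two made-up use cases.
score_use_case("coding assistant", {
    "partial_automation_value": 5, "human_component_upside": 4,
    "tam_unlock": 4, "mistake_tolerance": 4, "adoption_friction": 4,
})
score_use_case("radiology triage", {
    "partial_automation_value": 3, "human_component_upside": 3,
    "tam_unlock": 4, "mistake_tolerance": 1, "adoption_friction": 2,
})
```

The exact weights matter less than forcing every deal through the same questions.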
Then from a portfolio perspective, I like to spread my investing across the capability spectrum. Think about an allocation like 60/30/10: 60% to AI that seems to have a proven match to a task and is going somewhere, 30% to stuff that shows promise but isn’t there yet, and 10% to speculative AI that could be game changing but could also blow up entirely and never make it. That 10% is often the most overhyped stuff; you need exposure to it, but it’s very unlikely to be as close as the pundits would have us believe.
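Here is a minimal sketch of what checking a portfolio against those buckets might look like - the company names, bucket assignments, and check sizes are all hypothetical:

```python
# Hypothetical check of a portfolio against the 60/30/10 targets.
# Company names, buckets, and check sizes are made up.

TARGETS = {"proven": 0.60, "promising": 0.30, "speculative": 0.10}

portfolio = [
    ("DocTriageCo", "proven",      2_500_000),
    ("CopilotForX", "proven",      2_000_000),
    ("AgentOpsInc", "promising",   1_500_000),
    ("VoiceStack",  "promising",   1_000_000),
    ("AGILabs",     "speculative",   500_000),
]

total = sum(amount for _, _, amount in portfolio)
for bucket, target in TARGETS.items():
    actual = sum(a for _, b, a in portfolio if b == bucket) / total
    print(f"{bucket:>11}: target {target:.0%}, actual {actual:.0%}")
```

Run periodically, a check like this tells you when winners in one bucket have drifted your exposure away from the allocation you actually intended.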
AI has some characteristics that make its future uniquely difficult to predict. I expect we are still several major disruptive moments away from settling into a more stable AI future. I think following the self-driving car market is a good gauge of progress, partly for the technical learnings, but also because cars demand so much safety and security that they will lead the development of frameworks for how we think about AI safety in other domains.
Thanks for reading.