Why I Don't Invest in AGI - It's Not The Biggest Economic Opportunity
Value will accrue to the use cases of AGI - not the platform that creates it first.
Happy Sunday and welcome to the Investing In AI newsletter. I’m Rob May, and I run the AI Innovator’s AngelList syndicate; reach out if you are interested in early-stage AI deals. I’m also looking for some applied AI people to interview for the AI Innovator’s Podcast, so if you work on implementing AI tools into corporate workflows, please reach out. I’d love to discuss it on the podcast.
If you make AI investments, you inevitably see a regular flow of companies that list their corporate mission as “solving AGI” (artificial general intelligence) or something like that. I usually don’t take a second look at these businesses - not because it isn’t an important goal, but because I don’t think it’s a good economic opportunity. That sounds crazy, right? What could be more valuable than building the world’s first machine that is as intelligent as a human?
— The Singularity —
Let’s first look at the core theory behind why AGI is perceived to be so powerful. It’s often called “the singularity.” There are many perspectives on this, but they generally look something like this:
The first machine to reach human intelligence will then start improving itself on its own, without human intervention.
Even a brief head start of a few hours or days will make it a winner-take-all outcome, because the first truly intelligent machine will take off on a self-improvement curve that no one can catch up to.
An AGI machine will be capable of so many things that it will dominate many areas of economic activity.
— AGI Economic Skepticism —
I have to admit that years ago, in older versions of this newsletter, I was a believer in the singularity and I thought companies and academic labs might be in a race that mattered immensely to the future of the world. But after almost a decade now of working at the intersection of AI and business in various forms, I no longer feel this way. Here are my main arguments for AGI economic skepticism.
The gap between the top AI groups is small, so I’m skeptical that the first group to solve AGI will do something that others can’t figure out and copy quickly.
It’s not obvious or guaranteed that the first machine to reach AGI will also be the best at self improvement. Humans obviously don’t know exactly what to do to build AGI, and so building a machine with human levels of intelligence doesn’t guarantee that machine will quickly find the next breakthrough for the fastest form of self-improvement. In fact, you can imagine a scenario where the second company to build AGI targets a better self-improvement algorithm and quickly passes the first.
Even if a company gets to AGI with a significant lead, it may be gated by compute or power resources. For example, if OpenAI reaches it first, then both Microsoft and Google, with vastly more compute, may be able to catch up and pass OpenAI’s AGI, because its growth could be constrained by compute.
It’s unclear how dominant a digital AI could be in the world. We don’t have a model of a disembodied intelligent being to draw on, and we don’t know what motivations it would have. It could just as easily be that we end up with an AGI oligopoly because the first AGI is limited in its influence and growth. Interacting in the physical world is still slow and difficult.
If you do have an AGI, that doesn’t mean everyone will buy it and you will be economically dominant. Large enterprises are particularly sensitive to data and privacy issues with generative AI, and they will be just as sensitive, probably even more stringent, with AGI. So it’s unlikely that every large company suddenly adopts it just because you have an AGI. The adoption will take time, and that’s time for others to catch up and compete.
The market will want different AGI options. Some companies won’t want to use certain providers, depending on who they are. Walmart won’t use AWS, and when we are talking about workflow intelligence rather than just compute, that issue will be even more serious.
We don’t know the theoretical upper limit on the value of AGI because we don’t know anything about the upper limit of intelligence in a fundamentally probabilistic world. The theoretical maximum might not be far beyond where humans are today.
As you can see, there are a lot of practical reasons to believe that the achievement of AGI isn’t some magical point of singularity where the winner dominates the world going forward. I think it’s an important goal, but not necessarily one that will produce a massive economic return.
In fact, while many AI investors spend cycles focused on what capabilities GPT-5 may have, or whichever model comes to dominate next, I’m much more focused on the billions or trillions of daily workflows around the world that don’t yet use any AI technology. I believe pushing GPT-4 level AI deeper into these workflows is more valuable than building the next GPT-X. AI will make so many workflows more efficient and drive so many productivity increases that I think the economic gains could go more to the companies that adopt AGI than to the providers of AGI themselves.
The mental model I use is this - as intelligence becomes more abundant and everyone has more access to it, what goes up in value? I think it’s industry knowledge, customer relationships, unique data, and experience with the nuances of specific industry workflows that aren’t easily available to an AGI.
I’m sure many of you have counterpoints I haven’t thought of. I’m always happy to hear them so please reach out if you disagree.
Thanks for reading.
Great read. People naively say AGI, but there are a lot of things to consider and overcome, and we aren't sure that AGI is the thing we're expecting.
Cogent observations as usual, Rob. How do we even identify machine intelligence beyond what we already understand? It seems that intelligence in machines may be difficult to identify and even measure, since we know so little even of human intelligence. Back in the early days of evolutionary algorithms we spoke of goals. As models become increasingly complex, how do we analyse these beyond what we've programmed? Are there emergent properties? Do these contain intelligence? What are their implications?
Your perspective on economic returns is refreshing. Keep it up!