How The Fundamental Limits of Intelligence May Shape AI Business Models
Most physical phenomena have tradeoffs - does superintelligence?
Happy Sunday and welcome to Investing in AI. I’m Rob May, CEO at BrandGuard. I also run the AI Innovators Community in Boston and New York, and we have a startup showcase coming up November 30th in Boston. There will be speed dating between big companies and AI startups, and tables for startups to demo. If you are a tech executive wanting to meet startups, a startup wanting to present, or just an AI practitioner who wants to attend, sign up here.
This week I want to talk about a topic that I don’t hear many people discussing. The question I want to ask is - are there fundamental limits to intelligence? And if there are, what does that mean for investing in AI companies?
This is important because there is a lot of talk about superintelligences and what they will mean. Is it a race between the big tech companies to be first? Is it a race between governments and countries to win at designing the first superintelligence? But a bigger question is - why do we think this is even possible?
Humans tend to extrapolate linearly in almost every field of knowledge. That is our intuitive sense about how the world works and how systems scale. Thus our baseline assumption is that as compute gets more powerful and data sets grow, so will our ability to build smarter and smarter machines.
On top of that, all the data so far shows that foundation models perform better as they get bigger, and they show emergent skills: a model that doubles in size can perform more than twice as well as the smaller model on some tasks. This is a reason to believe a superintelligence is possible.
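To make that intuition concrete, here is a toy sketch of the emergence pattern. The curve and every constant in it are invented for illustration, not taken from any real benchmark: accuracy on a hard task sits near chance below some scale, then jumps sharply once the model crosses it.

```python
import math

def toy_task_accuracy(params_billions: float, threshold_b: float = 10.0) -> float:
    """Hypothetical emergence curve: near-chance accuracy below a parameter
    threshold, then a sharp rise above it. A logistic in log(parameter count);
    every constant here is made up for illustration."""
    x = math.log10(params_billions) - math.log10(threshold_b)
    return 0.05 + 0.90 / (1.0 + math.exp(-6.0 * x))  # ~5% floor, ~95% ceiling

for size_b in [1, 2, 5, 10, 20, 40, 80]:
    print(f"{size_b:>3}B params -> {toy_task_accuracy(size_b):.0%} on the toy task")
```

On a curve shaped like this, doubling from 5B to 10B parameters nearly triples accuracy on the toy task (about 18% to 50%), which is the kind of better-than-linear jump that fuels the superintelligence extrapolation.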
But.
Building a superintelligent machine will ultimately hit the limits of physics in some areas. Circuits can only get so small before a wire is down to just a few copper atoms carrying the current. Transmission and computation can only happen so fast before they bump up against the speed of light and the other limits of physical materials (ignoring quantum computing for now).
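A quick back-of-envelope calculation shows how concrete these limits are. The clock speed below is just an illustrative assumption, and real signals in copper or fiber travel slower than light in a vacuum, so the true constraint is even tighter:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458   # speed of light in a vacuum
CLOCK_HZ = 3e9                         # assume a ~3 GHz clock for illustration

cycle_time_s = 1.0 / CLOCK_HZ
max_distance_cm = SPEED_OF_LIGHT_M_PER_S * cycle_time_s * 100

print(f"One clock cycle lasts {cycle_time_s * 1e9:.2f} ns")
print(f"Light covers at most {max_distance_cm:.0f} cm in that time")
```

At 3 GHz, light covers roughly 10 cm per clock cycle, so the parts of a machine that must stay tightly synchronized cannot be arbitrarily far apart, no matter how clever the design.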
To take a side detour as an analogy, look at what Bryan Johnson is doing at Blueprint. He is trying to optimize his health using all the latest research to halt and reverse aging. What is most interesting to me is that he seems to be hitting limits where he has to make tradeoffs. The way bodies are designed, optimizing for endurance running and optimizing for powerlifting are in conflict. I don’t think it’s possible to be world class at both, even if you could genetically engineer a human to attempt it. Johnson is working through these tradeoffs by figuring out what he wants to optimize for. Doing everything possible to minimize your risk of one type of disease might open you up to increased risk of others.
So the question I want to ask is - if there is no way to make a human body perfectly healthy across every dimension, because some optimizations conflict with other optimizations - will the same be true of building an intelligent machine?
We know there are biological limitations to human intelligence. Are there theoretical limits to generalized artificial intelligence as well? Anecdotally, it seems to me that the more tasks Amazon Alexa takes on, the worse it performs on any given one. Is this the future of intelligent machines in general?
These limits show up in most places in the world, not just biology. Corporations, for example, enjoy economies of scale, but beyond a certain size they also suffer diseconomies of scale that stem from complexity.
Intelligence is composed of many different components. Can we really build a machine that is the best at all of them?
To sum it all up, I believe there is a possibility threshold of intelligence beyond which you must optimize for some components of intelligence (or maybe for specific types of knowledge), because past that point you can no longer optimize for generalized overall intelligence and push all of its major dimensions forward at once.
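Here is one toy way to picture that claim: a fixed “capability budget” split between two skills, each with diminishing returns. The budget, the square-root returns, and the numbers are all invented assumptions, chosen only to show the shape of the tradeoff, not to model any real system.

```python
# Toy model: two capabilities drawing on one fixed resource budget.
# With diminishing returns, a generalist split is decent at both,
# but being world class at either one means starving the other.
BUDGET = 100.0

def skill_levels(share_for_a: float) -> tuple[float, float]:
    """Split the budget between skills A and B; each skill grows with the
    square root of its share (an assumed diminishing-returns curve)."""
    spend_a = BUDGET * share_for_a
    spend_b = BUDGET - spend_a
    return spend_a ** 0.5, spend_b ** 0.5

for share in (0.1, 0.3, 0.5, 0.7, 0.9):
    a, b = skill_levels(share)
    print(f"{share:.0%} of budget on A -> skill A = {a:.1f}, skill B = {b:.1f}")
```

Once the budget is fully spent, every gain on one axis is paid for somewhere else, which is all the possibility threshold really asserts.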
What will this mean for investing?
It means that if this begins to happen, investors, entrepreneurs, and executives will need to understand the tradeoff boundaries. In other words, where is it best to make tradeoffs, to what end, and how? How do the fundamental limits that drive these tradeoffs map to use cases, technologies, data, and other aspects of the intelligence supply chain?
If this is true, it means building a superintelligence may not be a winner-take-all game.
I don’t know if this will happen. But I wanted to bring it up because it’s something I have discussed with friends but haven’t really heard much public discussion about. They say chance favors the prepared mind, so I believe giving some thought to issues like this before they arise might make us more prepared as investors if they do.
If you have opinions or ideas on this, I’d love to hear them, as always. Thanks for reading.