The Mental Model Most AI Investors Are Missing
Reflexivity Tied To Technical Innovation Will Have A Big Impact
I haven't written in a while. What prompted me to start back up (other than finally having the time) was my frustration with how few investors, particularly in AI, have a good mental model for technical reflexivity.
The idea behind reflexivity has been around forever. You could call it a form of feedback loop, or a piece of cybernetics, but George Soros was the first person to give it a name and point out its relevance to investing. The premise is this:
Reflexivity in economics is the theory that a feedback loop exists in which investors' perceptions affect economic fundamentals, which in turn changes investor perception.
Easy enough to understand. But now let's add a level. There is a feedback loop between some types of new technologies and the ecosystem that technology sits in, or creates. This isn't a case of someone building a faster processor, more people wanting it, and the processor growing fast by getting built into more systems. That analysis is linear.
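If it helps to see the loop in numbers, here is a tiny toy sketch. This is my own illustration, not anything from Soros or from the chip companies mentioned below, and every parameter in it is made up. It just contrasts a purely linear growth path with a reflexive one in which perception and fundamentals amplify each other.

```python
# Toy sketch: linear growth vs. a Soros-style reflexive feedback loop.
# All coefficients are invented purely for illustration.

def linear_path(fundamentals: float, growth: float, years: int) -> list[float]:
    """Fundamentals compound at a fixed rate; perception plays no role."""
    path = [fundamentals]
    for _ in range(years):
        fundamentals *= 1 + growth
        path.append(fundamentals)
    return path

def reflexive_path(fundamentals: float, perception: float, growth: float,
                   coupling: float, years: int) -> list[float]:
    """Optimistic perception pulls in capital and talent, boosting fundamentals;
    improved fundamentals then lift perception further."""
    path = [fundamentals]
    for _ in range(years):
        fundamentals *= 1 + growth + coupling * perception
        perception += coupling * (fundamentals - path[-1]) / path[-1]
        path.append(fundamentals)
    return path

if __name__ == "__main__":
    print("linear:   ", [round(x, 1) for x in linear_path(100, 0.10, 5)])
    print("reflexive:", [round(x, 1) for x in reflexive_path(100, 0.10, 0.10, 0.5, 5)])
```

The only point of the toy is that once the feedback kicks in, the reflexive path compounds faster than the linear one, and that is the shape of the argument that follows.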
This is about a process of creativity that stems from changing the status quo and realizing what is possible. People get used to things. People think about the world through the lens of the tools they already use. Then, when the world changes, whole new ideas sometimes become possible. The strongest example of this is probably the Web. By connecting computers together, it enabled all kinds of ideas that people hadn't thought of before. The network of interconnected computers provided a new mental model from which to invent new things.
Social media didn’t immediately come with the web. Why not? My theory is that it takes time for the new reflexive part of an innovation to arrive. To understand what is fully possible under the new technology paradigm, some people need to have worked in it natively for a few years so that they begin to break down the status quo way of thinking.
The best example I have of this is investing in AI computer chips. Two of my best investments have been Mythic, an analog crossbar array chip, and Rain, a neuromorphic chip. In 2016, when I started looking at AI chip investments, most investors I showed them to said "NVIDIA has this solved, you can't compete," which was entirely wrong.
Later, when AI chips started to get traction, the investment reasoning was, in my opinion, still too shallow and linear. Most people thought there would be X number of AI workloads and these chips would address the TAM of those workloads. But they missed the reflexive part.
When developers spend a couple of years programming on a chip like Mythic or Rain or one of the many other non-von Neumann architectures rising in popularity, they will begin to think differently about what is possible. The reflexivity arises because engineers won't simply port workloads to these chips; the chips will in turn influence how they think about possible workloads. That will lead to new ideas, and I suspect a new ecosystem of innovation centered on what is possible with new chip architectures. I don't know what those innovations will be, but I can predict that they will exist in some form.
Bringing this back around to my initial point on reflexivity: the reason I'm so excited about the AI chip market is not the linear TAM of mapping existing AI workloads onto these chips. It's the reflexive TAM of what becomes possible when these chips instigate a whole new computational paradigm. That's the mental model to use to invest in a market like this.
Thanks for reading.
@robmay
Very interesting! It seems to me that this concept of reflexivity applies to graphics cards and machine learning as well. When graphics cards first became a thing in the '90s, nobody envisioned the massive market for machine learning workloads that would arise decades later. That market became available partly because fast graphics chips were there to do massively parallel computations.
Great post and highly relevant. It's akin to Nassim Taleb's "second-order thinking": the major consequences of any new technology (AI chips) or socioeconomic event (the Ukraine invasion) are largely unpredictable because they are masked by non-linear relationships.