The Forces Driving The Economics Of Generative Media
How will the forces shape the eventual market equilibrium of generative AI?
Hi Everyone,
We’ve had a lot of new signups recently, so for those who don’t know me, I’m Rob May, CEO at Nova and an active angel investor in the AI space. I also run the AI Innovator’s Podcast (formerly called the Investing in AI podcast). This newsletter is primarily a way for me to think through what is happening in the AI space. I find that writing helps me clarify my thoughts, and I hope you can benefit from that.
Today I want to talk about Generative AI and the forces driving its underlying economics. I wrote previously about why I think it’s mostly a bad bet, but I still think there are pockets that will be really valuable. And what happens with generative AI will impact other pieces of the market, creating opportunities for economic complements and tangential technologies to shine.
What is both exciting and nerve-wracking about investing in this space is the chaos. There are many trends shaping how this will end up when it reaches a stable equilibrium as an ecosystem, and weighing the factors and how they might pan out is difficult. I believe that making bets in markets like this, while many people sit out and wait, can lead to great outcomes if you are correct, and is thus worth the risk.
What follows is a list of random thoughts on forces shaping the eventual outcome of generative AI, particularly as it applies to media.
Legislation - At the moment, I think politicians are not concerned enough about the possible impacts of generative AI. I hear some speak on occasion about the risks of deepfakes and fake news, but they don’t grasp the full scale of what could come. As these technologies spread and the problems become more concrete, I expect legislation in the next few years. What form that will take is hard to predict because few politicians have a deep understanding of the potential and risks of this technology. To me, this is one of the highest areas of uncertainty.
Better AI computing chips - It currently costs millions, often tens of millions, to train these massive generative AI models. But all kinds of AI chips are coming that promise to lower the cost dramatically. (I’m personally an investor in Rain and Mythic, but there are dozens of interesting competitors and approaches.) If a new chip takes a model that cost $20M to train on GPUs and drops the cost to $200K, that suddenly makes training your own model available to most businesses, provided your use case doesn’t require constant re-training. Inference can also be expensive for some use cases, and these chips promise to lower the price of that as well. If this breaks the perceived oligopoly in foundation models, what does that mean for those businesses, and for others? Maybe the oligopoly will continue to exist, and hold for reasons other than compute costs for model training.
Foundation model oligopoly - At the moment, foundation models are expensive to train and therefore limited to big tech companies and really well funded startups. If this oligopoly holds, and there are just 5 or 6 companies that train and run large generative AI models at scale, is that enough competition to hold down prices for customers? Do they differentiate enough on use cases to give themselves high margins and charge customers high rates? Do they eat into the upstream and more vertically targeted use cases to improve their own margins, or not? And if they do, does the FTC (back to point 1) step in? If models become cheap to train, does the oligopoly still hold for historical path-dependent reasons, or perhaps because of branding around output quality and guaranteed levels of service? These are all important issues to watch as this plays out.
Limitations on further training data - I’ve heard that GPT-4 was trained on so much public text data that OpenAI isn’t sure where to go next to get an order of magnitude more. Whether that is true or not, it definitely will become an issue at some point. Will it matter for the economics of generative AI? Maybe the OpenAI-Microsoft partnership will help OpenAI get access to more private data (corporate Word docs?) that help continually push the training data set to a larger scale. But at some point it starts getting hard to find more data. Maybe the models will be so good that we don’t care and pushing them forward isn’t an imperative. Maybe we have humans label certain data types at mass scale to help the models, in which case the time to do so could be the bottleneck to the next level of breakthrough.
Small data AI algorithms - There are many ways to do AI other than training large neural networks on massive data sets. Fewer people are working in this area, but it does have believers. And there are people constantly looking for new algorithms for intelligence. One of these could come out of left field and make it easy to train and run LLMs and image diffusion models on your phone. It’s doubtful that’s around the corner, but it’s a possibility.
Social confusion about machine vs human content - Regardless of what happens at the technical level, if an explosion of content of all types confuses and irritates humans to the point that it becomes a major problem, something will emerge to help. That could be something else on this list (Legislation, New Tech, etc.) or it could just be changes in social behavior, like assuming by default that things were machine generated. That could lower the economic value of content of all types, which will shape the eventual market equilibrium.
Poisoning the well problem - This is related to the point above, but in general, there could be so much content created that, for the content types supported by generative AI, people just tune out because there is too much junk. In this case, nothing new emerges to solve the problem; we just deal with the content wasteland.
Declining marginal value of a better model - One thing we already see is that, when a new model comes out, various smaller, cheaper, and sometimes open source versions of it quickly proliferate. These versions often have limitations compared to the original but are appropriate for certain use cases that don’t need the full-blown version. By the time GPT-5 comes out, the market perception could very well be “awesome, but it’s really not worth it, as GPT-4 is good enough for 98% of our use cases.” I don’t know if or when that will happen. But as an investor, it’s something to consider.
Unexpected benefits from generative AI use cases - With the success of Stable Diffusion, people are asking where else we can apply diffusion models. There is no doubt that some applications will be surprising, and that some existing technologies will be improved indirectly from these generative AI advances. Predicting when and where is difficult, and requires experimentation.
New layers of the tech stack emerging - When you look at some of the issues caused by generative AI, particularly numbers 1, 3, 6, and 7 in this list, you can see opportunities for new products and services that emerge to help you navigate generative AI. When the Internet came along and content exploded online, Search became valuable. What’s the equivalent thing that will happen in a generative AI world? (At Nova, we’ve been working on an orchestration and automation layer that helps solve some of these issues, so this is at the forefront of my mind.)
With all this chaos, what’s an investor to do? First of all, make bets. You want exposure to this type of uncertainty because the upside in these areas if you bet correctly could be massive. Secondly, keep your bets smaller than normal when the uncertainty is high. This is a time to reserve more capital for your winners once it is clear who they are (if you do follow on) or make more investments (if you don’t do follow on). And finally, keep a close eye on how these forces play out, and when you see something that might be a tipping point, something that will dramatically increase the likelihood of a specific outcome in the future, bet big.
This is a really exciting time for AI, and how the economics of these new technologies align, where value is created, and who benefits the most in the long run are all still open questions in my mind. If you have an opinion, something to add, or a different point of view, I’d love to hear it.
Thanks for reading.
No disagreement here. Just sharing thoughts on points 6, 7, 9, and 10:
ChatGPT is creating a commodity, so it’s going to be a race to the bottom. There is no value once you wipe out the rarity of it. If you can make a $10,000 painting in 60 seconds, it’s no longer a $10,000 painting, for example.
It’s also going to be used by zealots for nefarious purposes. I can now write 1,000 articles attacking a politician and post them on the internet. So it will devalue communication among humans. This content wasteland could drive people to ignorance, because they will conclude the content is “probably just AI.” You could have a video of a politician speaking, and it could be pure fiction. But then the real content would be buried in the fiction and discounted. The only way to get trusted information would be to physically go to a rally or speech.
I would be looking for investments in tools that allow you to validate content. Tools to evaluate accuracy or to spot generative AI.