Happy Sunday and welcome to Investing in AI. If you read my Tetris Business Models post last week, be sure to read my friend Bob Mason’s rebuttal to it when you have time. I’ll add that if you are a reader and want to write a contrasting opinion to one of my pieces, I’m happy to post it if it’s well thought out. We’ve done it a few times here, with both Ben Vigoda and Eric Koziol writing rebuttals published here, and Parasvil Patel came onto our podcast a while back to rebut a post. So I’m always happy to give you an audience if you have a good idea.
Today I want to write about a fear I have - the coming explosion of clickbait beyond text articles. I’ve thought about this a lot because I was a very early blogger, and in the mid 2000s several other bloggers tried to convince me to start a media company with my blog as the base. But with ad-driven models on blogs, I kept thinking clickbait would matter more than good content, because it’s human nature to click on stuff that is wildly novel or conveys FOMO.
That’s why you see all these posts like “These 4 facts about how peanut butter is made will shock you!!!” Then the post is a bunch of generic drivel for the first 500 words followed by 4 facts that aren’t that shocking or interesting. If you eat peanut butter, you click and read because you want to know. And when you click through, it’s a waste of time. This is the internet we live in now.
It goes beyond just media though. If you are a member of any early stage investment syndicates, you’ve seen the email subject lines like “200% ARR growth, 3x founder, Sequoia backed, 40% oversubscribed, closing today!!!” Then you go to evaluate the company and find the 200% ARR growth is from 10K to 30K, the founder was a small part of a founding team twice before but had no real impact, and the round is 40% oversubscribed only because they posted an artificially low limit while actually hoping to raise twice as much.
So far, we’ve been spared from too much clickbait in other formats. Video clickbait has mostly been stupid stunts and funny accidents, because creating it was expensive and time consuming. Generative AI is changing that, not just in video but in many areas beyond basic text generation.
For example, a sci-fi magazine that accepts submissions is overwhelmed by AI generated stories. And some community colleges are overwhelmed by bot based submissions for admission.
My question today is - when this happens to every corner of the internet, and every type of content, what does it mean for investing in AI?
The knee-jerk reaction is to say we need AI to fight it, but I’m not convinced. I think the initial response will probably be a change in process flow to make things more difficult for AI, and to involve humans a bit more. I say that because, looking at the long history of warring technologies, the bad guys always seem to be a little bit ahead of the good guys, so a pure tech solution is rarely enough to stop them.
Some good guys are making progress - I know Reality Defender is making waves, but that’s more enterprise focused and not solving the problems I see coming for the average consumer.
It seems to me like we will soon be awash in a world of linkbait junk everywhere. I would love to make some investments in areas that help fight it, but most of the solutions I see seem pretty naive. If you have ideas, or are innovating in that space, I’d love to chat with you.
Thanks for reading.
The good guys vs bad guys argument always reminds me of the generator vs discriminator balance in a GAN. If done properly, the GAN will reach a Nash equilibrium where the generator (in this case, the spammer) produces data that is indistinguishable from real data, reducing the discriminator (in this case the spam filter) to making random guesses. This will probably not assuage your fear. ;-)
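The equilibrium claim above can be sketched in a few lines. This is a minimal, hypothetical illustration (not from the post or any particular library): the Bayes-optimal discriminator assigns probability p_real(x) / (p_real(x) + p_gen(x)) to "real", so once the generator (the spammer) matches the real-data distribution exactly, that ratio is 0.5 everywhere and the discriminator (the spam filter) can do no better than a coin flip.

```python
def optimal_discriminator(p_real: float, p_gen: float) -> float:
    """Probability the Bayes-optimal discriminator assigns to 'real'
    at a point where real data has density p_real and generated
    (spam) data has density p_gen. Densities here are toy numbers."""
    return p_real / (p_real + p_gen)

# Early on: spam looks nothing like real content, so the filter wins easily.
assert optimal_discriminator(p_real=0.9, p_gen=0.1) == 0.9

# At the Nash equilibrium: generated content matches the real distribution,
# so the best possible filter is reduced to guessing (0.5 everywhere).
assert optimal_discriminator(p_real=0.4, p_gen=0.4) == 0.5
```

The second assertion is the worrying case for spam filtering: at equilibrium, no amount of cleverness on the discriminator side recovers an edge, which is why process changes (rather than pure detection) may matter more.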
Agree. Fan of Reality Defender. Eerily surprised when, for an exercise, they demonstrated with my LinkedIn photo how easy it is to simulate a version of me. Different, but not too different from my likeness.
I wouldn't be surprised if there is a crop of firms who try to solve the problem of authentication at the individual level. My concern is that a grandmother may not be able to determine that a synthesized voice of a grandchild in need of help is simulated.