Lessons From Filterworld And A Warning For AI
Tech continues to get it wrong and abuse its power
Happy Sunday and welcome to Investing in AI. Today I want to talk about the excellent book Filterworld: How Algorithms Flattened Culture. It’s the kind of book techies hate because it implies that tech can make things worse, and for most of us, techno-optimism is our religion. (For the record, I wrote a few years ago about why tech keeps getting expected outcomes wrong; my own religion is best described as “everything is a tradeoff.”)
But let me grab your attention with a stat from the book that you may find shocking. In 2023, Netflix streamed 4,000 titles. In the video store days, a Blockbuster “super store” would have stocked over 6,000. This goes against everything the internet was supposed to be. It was supposed to expand diversity, options, the long tail. Remember? Yet in his book, Kyle Chayka presents ample evidence that companies like Netflix are actively pushing the content they want on us under the guise of “personalization.”
Another story from the book describes a research project in which multiple Netflix accounts were created and each was fed different viewing to match a particular personality stereotype. While the recommendations diverged in many ways, every account was pushed “The Fast and the Furious” series, which, it turns out, Netflix had paid a very high fee to license. This is not how we expect these things to work.
The internet was supposed to increase democracy in every area. We would have better politics because of more informed voters. We would have better journalism because we removed the gatekeepers and let people write what they wanted. We would have a more generally educated population, more appreciative of diversity, and with more exposure to the long tail of all types of content. But let me ask you something: do you think the average person you know is smarter, more engaged in democracy, and exposed to more diversity than 20 years ago? I don’t. And Filterworld shows how the same forces that homogenized thought are homogenizing culture. (For what it’s worth, I wrote about a related issue in VentureBeat in 2013. It was easy to see this coming.)
I’m really worried AI is going to make all of this worse. I hear all the same things about AI that I heard at the beginning of other waves of tech about how it will make things better. Yet I don’t hear any acknowledgment that we made mistakes last time around, or any ideas about how to do better with this next wave of innovation.
If you are working in the space, I hope you will read Filterworld, and I hope you will consider the implications of your work. Everyone seems worried about AI safety and AGI but, as Filterworld shows, we are doing a lot of harm to ourselves and our society with algorithms even now, well before we solve AGI. Let’s not lose sight of the more pragmatic issues our AI work can cause.
Thanks for reading.