Happy Sunday and welcome to Investing in AI. If you get a chance, listen to our latest podcast about predicting the future using AI, with Dan Schwarz from FutureSearch.
I recently finished the book AI Snake Oil, which I highly recommend. The authors (who write a newsletter of the same name) are a computer science professor and a PhD candidate at Princeton, so they're very credible in the space. To be clear, they are not Luddites or AI naysayers. The book is about the limits of AI, and the limits of intelligence and prediction in general. Throughout the book they distinguish between use cases where AI makes sense and areas where it doesn't.
Early on, they pinpoint the central problem with debates about AI:
Imagine an alternate universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.
Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.
The book is packed with good insights, but today I want to focus on one in particular: the fundamental limits of prediction. This topic matters because it sets the upper bound on what a superintelligence could do even if it had all the compute power and knowledge possible. When we extrapolate charts of AI progress forward, we often ignore that there are fundamental limits to what models can predict.
The book gives a great example in the Fragile Families Study, which followed 4,000 families and recorded many aspects of their lives at multiple intervals between birth and age 15. The data set was made public, and data scientists were invited to submit models predicting outcomes for 15-year-olds based on earlier data points. The models didn't do much better than chance. Why not?
The authors believe it's because these models look for large shocks that alter someone's life, but "much more common than large shocks are small initial advantages that are compounded over time," and "the difficulty of measuring these small differences leads to higher irreducible errors in predictions."
The world is a complex system with many feedback loops at many levels. Very small changes, the kind that are hard to measure, can have huge long-term impacts in many areas.
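To make the compounding point concrete, here's a toy simulation (my own illustration, not from the book, with made-up numbers): each simulated "life" compounds a small per-year edge that is far too small to detect in any single year of noisy data. After fifteen years the edge produces a clearly higher average outcome, yet the year-to-year randomness still swamps it for any one individual, which is the irreducible error the authors describe.

```python
import random
import statistics

random.seed(0)

def life_outcome(edge: float, years: int = 15) -> float:
    """Compound a tiny per-year advantage with random yearly shocks."""
    value = 1.0
    for _ in range(years):
        # In any single year, the noise (stdev 0.10) dwarfs a 1% edge.
        value *= 1.0 + edge + random.gauss(0.0, 0.10)
    return value

no_edge    = [life_outcome(0.00) for _ in range(10_000)]
small_edge = [life_outcome(0.01) for _ in range(10_000)]

print(f"mean, no edge: {statistics.mean(no_edge):.2f}")     # ~1.00
print(f"mean, 1% edge: {statistics.mean(small_edge):.2f}")  # ~1.16 after 15 years
print(f"spread within no-edge group: {statistics.stdev(no_edge):.2f}")  # ~0.40
# The gap between group means (~0.16) is smaller than the spread within a
# single group (~0.40), so even a model that somehow knew the hidden edge
# would still predict poorly for individuals.
```

Shrink the edge down to something a survey instrument can't even measure, and the prediction problem only gets harder.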
This difficulty won't necessarily stop companies or governments from trying to use AI in places where it fails. After all, the world hates uncertainty. From the book:
“It’s true that companies and governments have many misguided commercial or bureaucratic reasons for deploying faulty predictive AI. But part of the reason surely is that decision-makers are people—people who dread randomness like everyone else. This means they can’t stand the thought of the alternative to this way of decision-making—that is, acknowledging that the future cannot be predicted. They would have to accept that they have no control over, say, picking good job performers, and that it’s not possible to do better than a process that is mostly random.”
You get the picture of what this book is all about. It's well written and well researched, and if you want to take advantage of the opportunities AI presents while understanding its limitations and drawbacks, I highly recommend AI Snake Oil.
Thanks for reading.
Sounds like a great read and definitely a must. A lot of purported AI thought leaders out there with an agenda. I just want to see real wins. I hope we can all detect vaporware. Thanks Rob.
I had the fortunate opportunity to sneak in and actually listen to the professor discuss the ideas of the book live. Didn't get a chance to ask a question, though.
Agree that the lack of evolved language helps create some of this 'snake oil' that we see in the ecosystem.
The language we use reflects how we perceive the world, and the specific words we use in everyday parlance reflect both individual and cultural values. This brings up my concerns about individuals who use AI to completely think and write for them. (That's the extreme case; I recognize most individuals are not completely deferring, but still, we're all cognitive misers to some extent.) I'm curious about the feedback loops of how we shape the machines we build and how those machines shape us.
Waiting for my copy, but a book I'm interested in for exploring the ideas of creativity and the tools we use to create: The Uncanny Muse: Music, Art, and Machines from Automata to AI