AI and Adaptive Culture - Would an AGI Support Trump?
What are the second order cultural effects of machine intelligence?
Happy Sunday and welcome to Investing in AI! First, a quick announcement: we’ve re-launched the AI Innovator’s podcast (note: send me guests!), and the latest episode features Margaret Quigley, who buys data sets for a living.
Now for today, I want to ask a few questions about how we might adapt to AI technology. The technology arc I want to tie this back to is online content, and how we ended up with some unexpected outcomes there. I saw it firsthand.
A few of you reading this may know that I was a very, very early blogger. I started writing in March of 2003 on Movable Type, when you still had to rebuild your blog after each post. There was an energy at the time that blogging was going to change online content forever. The feeling is best described as “down with the gatekeepers.” Most of us were proponents of the idea that opening up content creation to everyone would lead to better discussions, more points of view, thoughtful discourse, and a better society.
I never thought that. I have a misanthropic streak, and so my general point of view was that letting everyone have a say would mean a lot of people who shouldn’t be talking would talk.
Fast forward 20 years, and I don’t think online content culture had the impact that was anticipated. We went from “down with the gatekeepers” to “we are drowning in crap, please bring back the gatekeepers.” It’s really easy to start a “news” site, make it look serious and professional, and write whatever hogwash you want; it’s really hard to monitor all the sites and point out all the hogwash. The idea that we would be more educated became the reality that we are more fragmented.
Now my question is… what will be the similar adaptive effects of AI? We have this idea that it will solve all our problems, but the truth is, as more of our lives are run by AI, we are going to adapt to the algorithms. We already phrase things differently to get the answer we want from Siri or Alexa. That’s a small adaptation. How will we adapt, in good ways and bad, to what is coming?
When I pose this question in person, people often argue that AI won’t have the negative impacts of previous technology waves because AI is “smart,” as if smart equated to “always correct.” But I’d point out that smart people often disagree. Vinod Khosla and Elon Musk have been debating politics on X, and they are both pretty bright, successful guys, yet they entirely disagree. It makes me wonder: if we had an AGI, would it support Trump, or not? No matter how smart the machine is, it still has to have a target goal that it is optimizing for. Two smart machines optimizing for two different things, gauging the same probabilities of outcomes, might totally disagree on who to vote for, as the toy sketch below illustrates.
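To make that concrete, here is a minimal sketch, with entirely made-up candidates, outcomes, probabilities, and utility weights, of two expected-value optimizers. They share the exact same probability estimates, yet they pick different candidates because each is optimizing for a different goal.

```python
# Toy illustration (all names and numbers are hypothetical): two agents
# with identical probability estimates but different optimization targets
# can each rationally pick a different candidate.

# Shared estimate: the probability that each outcome actually materializes.
outcome_probs = {"faster_growth": 0.6, "more_stability": 0.4}

# Each agent weights the outcomes differently; this is its target goal.
agent_utilities = {
    "agent_a": {"faster_growth": 10, "more_stability": 2},
    "agent_b": {"faster_growth": 3, "more_stability": 9},
}

# How strongly each (made-up) candidate is expected to deliver each outcome.
candidate_effects = {
    "candidate_x": {"faster_growth": 1.0, "more_stability": 0.2},
    "candidate_y": {"faster_growth": 0.3, "more_stability": 1.0},
}

def expected_utility(utilities, effects):
    """Probability-weighted value of a candidate's promised outcomes."""
    return sum(
        outcome_probs[o] * utilities[o] * effects[o] for o in outcome_probs
    )

# Each agent picks the candidate that maximizes its own expected utility.
for agent, utilities in agent_utilities.items():
    pick = max(
        candidate_effects,
        key=lambda c: expected_utility(utilities, candidate_effects[c]),
    )
    print(agent, "votes for", pick)  # agent_a and agent_b disagree
```

Note that nothing about being “smarter” (better probability estimates, more compute) changes the result here; the disagreement comes entirely from the utility weights, which are the target goal.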
Will that make our problems better or worse? I don’t know, but I think we spend too much time talking about how machines might kill us and not enough time talking about the myriad other dangers they pose as our culture finds ways to adapt to AI everywhere.
Thanks for reading.
The flood of AI-generated garbage (and a few gems) that started in earnest in late 2022 will create the AI equivalent of a K-T boundary, where everything after that point is suspect, both in origin and intent.
This means we will use AI to learn (or relearn) from the information that came from trusted, vetted sources before the boundary: clay tablets, handwritten scrolls, and books printed on paper. What can Gilgamesh teach us about the nature of society and human existence? Plenty.
“Humans are born, they live, then they die, this is the order that the gods have decreed. But until the end comes, enjoy your life, spend it in happiness, not despair. Savor your food, make each of your days a delight, bathe and anoint yourself, wear bright clothes that are sparkling clean, let music and dancing fill your house, love the child who holds you by the hand, and give your partner pleasure in your embrace. That is the best way for a human to live.”