Note: Not a single word of this newsletter was written by AI
Happy Sunday and welcome to Investing in AI. I’m Rob May, CEO at Nova. In the past I’ve written about how chatbots could be problematic for society as their quality improves. Someday every chatbot will be as good as the best salesperson at understanding your behavior and responding in ways that get you to buy. What does that mean for sales, and where is the line between helpfulness and manipulation?
Today, though, I want to look at chatbots through the lens of dating. I recently had a conversation with an AI ethicist who has studied this kind of issue, and it struck me as a complicated one worth highlighting.
Imagine a future where, before Joe has a first date with Susan, he can take a chatbot built from all the videos of Susan, things she has written, and so on, and practice his conversation skills on the bot. This will make his first date go much better. If Joe is shy, introverted, or has poor social skills, this can be a very helpful aid to his dating life. But what if Joe is manipulative? What if he doesn’t really like Susan but wants Susan to like him, and the bot has taught him exactly what to say to win her over in service of some ulterior motive, whatever it may be?
I don’t know if it still exists, but at one point Harvard offered a class on optimizing your online dating profile. A friend of mine who took it gave me some hints for men. For example, always include a picture of yourself with other people so you look social. Say “my friends say I’m…” rather than describing yourself directly. There were key themes a guy was supposed to mention because women really like to hear them. I tried these tips on my own profile. While they helped me get more matches, I ended up going on dates with people who wanted someone like my optimized profile rather than the person I really am. That wasn’t very helpful.
As a result of that experience, I question the value of these NLP tools that help people smooth over certain types of conversations. They are definitely needed and can be helpful in many areas. But they need to be built in a way that retains authenticity. In fact, much of the downside of using more AI could be a loss of authenticity in many areas of life.
Think about what the Internet did to media. It spoon-fed us mass-market linkbait far more than it highlighted the long tail of great and unusual stuff. I think this could be a model for what happens in an AI world as well. AI tools that teach people how to communicate more effectively at work, on dates, in relationships, with children, and in public could do the same thing. They could collapse the diversity of conversation around the most generically popular things to say. Both tails of the bell curve of communication skills in society could be pushed toward the middle as expectations shift because of what we get used to with AI.
You could say technology is neutral and how you use it is up to you. Or you could say that technology is never quite neutral, that it naturally lends itself to some use cases more than others. For all the concern (rightly so) about problems in AI like bias, fairness, and explainability, I think societal impacts of the type I’ve described here aren’t given enough consideration.
AI can change our behavior in good and bad ways. We have to be thoughtful about the type of society we want as we build these tools. Whether something is “right” or “wrong” will often depend on what we are trying to accomplish. If you are working in this area, I’d love to hear about it.
Thanks for reading.