Technically Sentient: Will Algorithms Absolve Humans of Responsibility?
What happens when humans hide behind the objectivity of algorithms?
Happy Sunday and welcome to Technically Sentient. I’m Rob May, a Partner at PJC investing in AI and Robotics. If you have an early stage AI company and are raising capital, please send me a note.
— Best Links of the Week —
Fake Data Could Help Solve Machine Learning’s Bias Problem. Slate.
LSTMs are dying. What is replacing them? Towards Data Science.
Two Senators are proposing the U.S. seek a national artificial intelligence strategy. Homeland Prep News.
Algorithms to Live By. Nature.
YouTube Uses AI To Squash Conspiracy Theories. Wired.
— Research —
Job2Vec: Job Title Benchmarking With Collective Multi-View Representation Learning. Link.
Large Scale Intelligent Microservices. Link.
Decoupling Representation Learning From Reinforcement Learning. Link.
— Commentary —
The Wall Street Journal ran an interesting article about how companies are deciding who goes back to the office, and when. What struck me is that some companies are using an algorithm to make the decision, and there is a perception that the algorithm is somehow unbiased because it is driven by data.
It made me wonder: in a world where algorithms run more and more of our lives, do we use them to absolve humans of any responsibility for the results, or to fix them? The whole reason for using an algorithm in this case is to pass the buck a bit and avoid criticism. Given that humans don’t really like criticism, is this a canary in a coal mine for things to come?
Imagine a company that certifies algorithms as free of bias. You have a hiring algorithm that obtains that certification. Now when you reject a candidate, you just blame the algorithm, and they can’t sue for bias because the algorithm has been certified unbiased. Of course, this reminds me of the credit rating agencies giving CDOs AAA ratings, and being very wrong. The same could happen here.
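To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of statistical audit such a certifier might run. The four-fifths rule is a real disparate-impact heuristic, but everything else here is illustrative, not a real certification standard. Note what the check actually proves: that one dataset, at one point in time, passed one test. That is the sense in which a certification can be as fragile as a AAA rating on a CDO.

    # Hypothetical sketch of a bias "certification" check.
    # All names are illustrative; this is not a real standard.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: list of (group, hired) pairs, hired is True/False."""
        counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
        for group, hired in decisions:
            counts[group][1] += 1
            if hired:
                counts[group][0] += 1
        return {g: h / total for g, (h, total) in counts.items()}

    def passes_four_fifths_rule(decisions):
        """Crude disparate-impact test: the lowest group's selection
        rate must be at least 80% of the highest group's."""
        rates = selection_rates(decisions)
        return min(rates.values()) >= 0.8 * max(rates.values())

    # The audit passes on this sample, but it says nothing about how
    # the model behaves on next year's applicant pool.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", True)]
    print(selection_rates(sample))         # {'A': 0.667, 'B': 0.667}
    print(passes_four_fifths_rule(sample)) # True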
But I am less worried about that, and more worried about the implications of humans being increasingly out of the loop of responsibility. A lack of responsibility changes behavior, and that could take the whole AI industry in a bad direction, particularly if we believe systems are less biased than they are simply because they meet some standard.
AI is too nascent for us to turn over final control of important decisions, even when it appears to work well. Until these algorithms have been vetted across many years of results, changing data sets, and different environments, they should be advisors to humans, not final arbiters. It sucks to get criticized, but better to have small errors from human mistakes than to risk large-scale errors from algorithmic ones.
Thanks for reading.
@robmay