Why AI Could Lead to Higher Costs and Less Productivity
The law of diminishing specialization could hurt the bottom line
Happy Sunday and welcome to Investing in AI! Today I want to talk about an unusual idea from an old book I picked up recently. I often browse the basement of the Strand Bookstore in New York because it has the best selection of used technology and mathematics books of any place I’ve seen. Recently I found Edward Tenner’s book “Why Things Bite Back: Technology and the Revenge of Unintended Consequences.” I bought it because I’ve been a longtime believer in the idea that the tech industry is too naive about the tradeoffs, unintended consequences, and second-order effects of new technology adoption. But what is most interesting about this book is… it was written in 1996. I wanted to read it and see whether we are still making the same naive mistakes in our assumptions about technology.
One of the book’s later chapters covers the ways office workers became less productive in the age of the computer. The argument works like this: executives used to have secretaries to type, print, and copy things, but with computers, software, and other innovations, executives can now do these things quickly themselves and no longer need secretarial help. So most of the support staff can be let go, saving a lot of money.
But the research cited in the book, a study by economist Peter Sassone, showed that it actually ends up costing more. Cal Newport came across the same book several years ago and wrote about this study here.
Here is Newport’s analysis of it all:
This reduction in the typical deep-to-shallow work ratio (see Rule #1 in Deep Work) became so pronounced as computer technology invaded the front office that Sassone gave it a downright Newportian name: The Law of Diminishing Specialization.
What makes Sassone’s study particularly fascinating is that he used rigorous data collection and analysis methods to answer the question of whether or not this diminishing specialization was a good trade-off from a financial perspective.
His conclusion: no.
Reducing administrative positions saves some money. But the losses due to the corresponding reduction in high-level employees’ ability to perform deep work — a diminishment of “intellectual specialization” — outweighs these savings.
And here are two interesting quotes on the results from the original study:
“The results of a comparison of a ‘typical’ department with a department with a reasonably high level of intellectual specialization were startling. The typical office could save over 15 percent of its payroll costs by restructuring its staff and increasing the intellectual specialization of its workers.”
“The typical office can save about $7,400 [around $13,200 in 2018 dollars] per employee per year by restructuring its office staffs and improving its levels of intellectual specialization.”
It’s easy to bah-humbug these results and make a lot of excuses for why they are no longer true. And maybe doing so would be accurate. In fact, the Consensus research tool shows that most papers say yes, new tech has increased productivity. But skimming these papers, it seems they analyzed per-employee, per-task performance rather than system-level performance. What Sassone seems to be saying is that, while individual employees might be more productive on certain tasks, the system overall may not be, because the benefits of intellectual specialization are lost.
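To make that distinction concrete, here is a minimal toy model with numbers I made up purely for illustration (this is not Sassone’s actual methodology): specialists produce value during deep-work hours, and when support staff are cut, the administrative work lands on them, even if software makes each individual task faster.

```python
# Toy model: per-task productivity vs. system-level output per payroll dollar.
# All numbers are invented for illustration; this is not Sassone's model.

def value_per_payroll_dollar(n_specialists, n_admins, admin_hours_per_specialist,
                             specialist_rate, specialist_salary, admin_salary,
                             hours_per_week=40):
    """Value produced per payroll dollar for one week of work."""
    # Support staff absorb as much of the administrative load as they can;
    # whatever is left over comes out of the specialists' deep-work hours.
    admin_load = admin_hours_per_specialist * n_specialists
    absorbed = min(admin_load, n_admins * hours_per_week)
    deep_hours = n_specialists * hours_per_week - (admin_load - absorbed)
    value = deep_hours * specialist_rate
    payroll = n_specialists * specialist_salary + n_admins * admin_salary
    return value / payroll

# With support staff: specialists spend nearly all their time on specialist work.
with_support = value_per_payroll_dollar(
    n_specialists=10, n_admins=2, admin_hours_per_specialist=8,
    specialist_rate=150, specialist_salary=3000, admin_salary=1000)

# Without support staff: software makes the admin work faster per task
# (6 hours per specialist instead of 8), but it now lands on the specialists.
without_support = value_per_payroll_dollar(
    n_specialists=10, n_admins=0, admin_hours_per_specialist=6,
    specialist_rate=150, specialist_salary=3000, admin_salary=1000)

print(f"with support:    {with_support:.2f} in value per payroll dollar")
print(f"without support: {without_support:.2f} in value per payroll dollar")
```

In this made-up example, weekly payroll drops from $32,000 to $30,000, but value per payroll dollar falls from about 1.88 to 1.70: each administrative task got faster, yet the system as a whole got less productive.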
Whether or not this is true of the web 1.0 era is not my concern here. My concern is whether some of it could apply to the AI revolution. To understand that, we need to think about it on a case-by-case basis and understand how good AI is for particular use cases.
For example, consider automated support systems. They are more efficient and much cheaper than humans for routine requests, but in my anecdotal experience, complex requests that used to be handled quickly by a human are now more difficult. You have to go through all the basic menus and validate a bunch of information just to get transferred to a human who can actually help, even when you know from the start that your issue will require one.
I wrote a few months ago about how evaluating the downside risk of an AI project can give you insight into its ROI. That’s the case I’m worried about here. If too many AI projects are “pretty good” but have to be constantly re-checked by humans, need more human supervision, or create more complexity elsewhere, then the gains of AI could be lost.
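For a sense of how those “pretty good” gains can evaporate, here is a back-of-the-envelope sketch with hypothetical numbers (not figures from the piece I mentioned): automation is cheap per task, but a share of outputs has to be re-checked by people and some escalate into expensive cleanup.

```python
# Back-of-the-envelope ROI sketch: automation savings vs. human oversight costs.
# Every figure here is hypothetical, chosen only to show the shape of the trade-off.

def monthly_net_savings(tasks, human_cost, ai_cost,
                        review_rate, review_cost,
                        escalation_rate, escalation_cost):
    """Monthly savings from automating a task, net of re-checking and escalations."""
    gross_savings = tasks * (human_cost - ai_cost)
    review_overhead = tasks * review_rate * review_cost
    escalation_overhead = tasks * escalation_rate * escalation_cost
    return gross_savings - review_overhead - escalation_overhead

# A "pretty good" system: 10x cheaper per task, but 40% of outputs get re-checked
# by a person and 10% escalate into cleanup that costs more than the original task.
print(monthly_net_savings(tasks=10_000, human_cost=5.00, ai_cost=0.50,
                          review_rate=0.40, review_cost=6.00,
                          escalation_rate=0.10, escalation_cost=25.00))
# -> -4000.0: the per-task savings are real, but the system loses money.
```

Shrink the review and escalation rates and the same project turns solidly positive; the whole question is where those oversight numbers actually land in practice.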
We could also find ourselves back at the diminishing specialization problem. I’m not a designer, but if I feel I can get 90% of what I want from a foundation model on my own in 20 minutes, I’ll probably do that more often. To me, that seems better than having a designer spend a few hours on the same thing. But then, that’s time I’m not spending doing what I do best.
I know most of you who are techies are pretty strongly against this point of view. You believe, almost religiously, that tech improvements are always good, should always be adopted, and will always be beneficial if we just do them the right way. Yet second-order effects in tech are powerful. For example:
Uber makes traffic worse, despite early tech predictions it would make everything better.
Social media makes us less connected, despite assumptions it would help us maintain connections.
AI adoption in many areas of business is slow. Why? Because many of the experiments companies are running don’t show enough value when you look at them holistically.
What story will we write about AI in a decade? Will it have transformed everything we do? Led to increased productivity? Created a bunch of negative second-order effects? My guess is that it will look a lot like most other areas of technology: AI will have pockets of huge benefit and productivity, while net ROI may be negative and we will be reluctant to acknowledge it. But I’ve been wrong plenty of times before, so perhaps this won’t come true.
If you have opinions on this, as always, I’d love to hear them.
Thanks for reading.
Absolutely true in medicine. Tech has “replaced” 90% of the work that transcriptionists, scribes, medical coders/billers, and after-hours answering services did 20 years ago. Guess who is usually expected to pick up the 10% of those jobs that tech can’t quite do yet… the docs. Despite our highly specialized training, this workflow often makes sense to the employer because docs are not paid hourly wages. The loss of productivity to the healthcare system is significant. Not to mention the epidemic of burnout among physicians who now spend a significant part of their day, usually after normal office hours, editing notes, tidying up coding/billing, and triaging after-hours calls that AI can’t quite handle yet.
Transitional hybrid workflows will be needed until the tech gets closer to 99% human replacement performance. The hidden “cost” of early automation needs to be taken more seriously, at least in my field.
Solow pointed out that he saw computers everywhere but in the productivity statistics.