Very interesting. In enterprise settings, there are a million edge cases and processes that will likely need exception handling. I suspect there could be a niche market for this kind of concierge service, especially as executives plan for AI to run across the enterprise. There may be a need for an upgraded version of the helpdesk.
Perhaps this also creates opportunities to offer more products/experiences that are purpose-built to be exceptional in a way that excludes AI agents. In essence an extension of the artisanal or organic movements to more and more domains, as some parts of the customer base are willing to pay for the extra friction that comes with human labor.
Your first example, knowing that deliveries to your building require a floor number, is probably one of the better use cases for AI. It can learn from past examples (yours is likely not the first) and infer that it should ask for a floor number up front.
This is brilliant. Not sure if I'm suffering from Frequency Illusion, but I've seen a few content creators mention stories similar to yours.
YouTube’s copyright takedown system shows why exception handling could become a real competitive advantage. AI flags possible violations and automation enforces takedowns, but exceptions like valid licenses often get stuck in slow, opaque loops.
Where I get more concerned for end users is with agentic AI. These challenges could worsen as the AI takes multi-step, adaptive actions on its own. Unwinding such layered decisions is harder, raising the risk of compounding errors and unclear accountability.
I suspect there will be sectors and firms that run their own risk matrix and decide that the fallout from the machine being very, very wrong in exception cases isn't worth the implementation.
My observation from coding agents is that common tasks and outcomes are easy, while unique ones are where AI struggles. Hardest of all are tasks that sound close to something common but differ with a twist, because the bot wants to snap back to the common pattern and call it done.
Human bureaucracy is already pretty bad at exception handling; we've all hit some "not our policy" wall when chasing a refund. It will be interesting to see whether bots can stay flexible enough to keep a higher goal in mind (prevent churn) rather than defaulting to the pedantic answer (sorry, our return policy is 30 days and it's been 31 days).
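To make the contrast concrete, here is a minimal sketch of the two approaches. Everything here is hypothetical (the grace window, the lifetime-value threshold, the function and constant names); it only illustrates the idea of letting a churn-avoidance goal override a hard policy rule at the margin:

```python
from datetime import date, timedelta

# Hypothetical sketch: a refund bot that weighs the hard policy rule
# against a higher-level goal (keeping the customer) instead of
# rejecting pedantically on day 31.

POLICY_WINDOW = timedelta(days=30)
GRACE_WINDOW = timedelta(days=7)      # assumed discretionary buffer
VALUE_THRESHOLD = 500.0               # assumed lifetime-value cutoff

def refund_decision(purchase: date, today: date, lifetime_value: float) -> str:
    age = today - purchase
    if age <= POLICY_WINDOW:
        return "approve"              # within policy: no judgment needed
    if age <= POLICY_WINDOW + GRACE_WINDOW and lifetime_value > VALUE_THRESHOLD:
        return "approve_exception"    # slightly late, but churn risk outweighs the rule
    return "escalate_to_human"        # genuine exception: hand off rather than deny

# The day-31 case from above: a purely pedantic bot would say no.
print(refund_decision(date(2024, 1, 1), date(2024, 2, 1), lifetime_value=800.0))
```

The design choice worth noting is the third branch: when neither the rule nor the override clearly applies, the bot escalates rather than denies, which is exactly the exception-handling behavior the thread is asking for.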