This new HBR article suggests that managers will increasingly rely on AI in the future. Here’s my take:
The authors point to tasks like handing staff scheduling over to "AI." But that's more of an app within an ERP system than AI management per se, and in retail it's already a full-blown existing application.
What if the AI schedules routine stuff 90% great, but screws up the meeting with your biggest client or most important boss? AI algorithms can have a great batting average, but they fail differently than humans, and that can be a big problem.
Focus on Judgment Work
The authors claim that "automated status reporting via AI" will be a great time saver. But writing a status report, I've found, is one of the best ways to focus your mind on the issues where your judgment is needed.
Your judgment is needed even more to decide whether or not an AI algorithm is actually giving you valid and actionable information (see prior post here).
Treat Intelligent Machines as Colleagues
Admittedly, I'm only a lowly English major, but we used to call it "personification" when a writer pretends the inanimate is a person, and the "pathetic fallacy" when the writer gives nature human traits. I don't treat books or Google as colleagues; doing so would be just as weird as this suggestion.
Work like a Designer
Sure. But the literature on Management as Design is awfully similar to the literature on Management as Orchestration. And arguably, neither has anything to do with AI. (A great book by a real conductor is "The Art of Possibility.")
Develop Social Skills and Networks
Yes, absolutely, develop those social skills. But I'm pretty sure you won't get them by talking to machines like Alexa, Siri, Cortana, or Google. At least Google doesn't try to pretend it is a person.
My experience with AI applications is not that they cannot help, but that once an algorithm proves itself, it becomes embedded in an application like scheduling or translation or search. Managers are already overloaded with such applications helping them be more productive, and I do expect the trend to continue.
AI algorithms fail differently than humans do, and only a real-world context can catch those failures. Direct use of an algorithm without the benefit of human context can be quite risky. For example, machine recognition of characters and voice has approached or exceeded human levels for a while, but that hasn't made it usable without a lot of application context.
The authors sensibly advise you to work on your people skills, because that is the essence of management. Otherwise, this is just another exercise in AI hype.