Emboldening Fools

Steven Bleistein | Today's Manager
June 1, 2019

Outsourcing human empathy and judgement to machines is not without danger. Trust in machine judgement will lead to increasingly unforeseen failures of the very kind machines were ostensibly meant to prevent.

Expectations of artificial intelligence (AI) and automation are excessive, and many managers incorrectly believe these technologies can replace humans. However, there is no substitute for human empathy and judgement, and outsourcing them to a machine is not without danger.

Not long ago, Amazon developed a system to identify, early in their careers, the candidates with the strongest leadership capabilities, in order to eliminate human bias from the selection process. The system, however, unexpectedly displayed an aggressive bias toward men, never recommending women candidates. The machine merely mimicked the company's male-dominated leadership biases with a perfection that even humans could not match.

Unbiased machines do not exist. Machine learning systems merely perfect and exacerbate the biases of whoever programmed them, or the biases present in the human decision-making examples from which they learn. Amazon ultimately abandoned its system.

Imagine what would have happened at Amazon had the company kept the system in place. Would managers have ignored or gone against the machine? I have my doubts. The human resources (HR) director of a major company in Japan once told me that she relies on an AI system to identify the best candidates for employment and promotion, in the mistaken belief that a machine is less biased than she or any other human would be.

At an event in Tokyo on AI and automation, a panel of experts agreed that AI is not intended to replace human judgement, but rather to improve upon it. Just because a manager gets advice from a machine does not mean he/she has to take it; he/she merely has to consider it.

Yet how likely do you think a human manager is to decide against the advice of a machine? After all, if a manager follows the machine's advice and the advice turns out to be wrong, he/she can always blame the machine, whereas if the manager decides against the machine and the machine turns out to be right, he/she might be faulted for arrogantly ignoring the expert system.

Do you think it is far-fetched to presume that a manager would abdicate decision-making authority against his/her better judgement to protect his/her career? Have you ever heard the expression, “No one ever got fired for hiring IBM”? Will no one get fired for deferring to a machine, or is that at least what managers will think?

At what point does it become so much easier to rely on technology by default that we forget how to do for ourselves what technology does for us? If you think such a scenario is unlikely, consider this: many Japanese adults have lost the ability to write Chinese characters by hand because of their reliance on Japanese word processors. When it comes to automated driving, how long do you think it will be before people forget how to parallel park a car, if not how to drive altogether?

Trust in the judgement of machines will lead to increasingly unforeseen failures of the very kind the machines were ostensibly intended to prevent. Not long ago, I read about a self-driving car that drove off the edge of a bridge under repair because the driver relied too heavily on the car's navigation system.

In Los Angeles, an inebriated man got behind the wheel of his Tesla and passed out on the highway with automatic driving engaged. The car duly stayed in its lane and maintained a constant distance from the car ahead. All is fine as long as conditions do not change, which is unlikely on any highway in a major city, to say nothing of Los Angeles. The California Highway Patrol had to manoeuvre a patrol car in front of the Tesla and gradually reduce speed until the car's autopilot brought it to a stop. I have to ask myself whether this man's confidence in his ability to drive home safely was not bolstered by the automated features of his Tesla.

AI and automation have the unfortunate side effect of emboldening fools. Do not let them make a fool of you.

IMAGE: 123RF

Mr Steven Bleistein is CEO of Tokyo-based consulting firm Relansa, Inc., and a sought-after expert on rapid business growth and change. He is the author of Rapid Organizational Change (Wiley, 2017), and writes for The Straits Times.

 

Copyright © 2019 Singapore Institute of Management
