Humans Are Underrated
28th August 2015

As technology becomes more dominant in the workplace, here are the three job skills that you need to thrive.
As the Pepper robot from SoftBank scurries about your home or office, it reads your emotions from your words, tone of voice, facial expressions, and body language. It then responds in all those ways; its hands and posture in particular are remarkably expressive. If you thought emotions were beyond the competencies of robots, you were right for a long time. But no more.
Maybe you believe that humans uniquely will always have to perform the highest-stakes, most delicate and demanding tasks in our lives, such as surgery. But researchers at the University of California at Berkeley are training a robot to identify and cut away cancerous tissue—not like today’s surgical robots, which are actually tools used by human surgeons, but entirely on its own.
Or perhaps you figure technology, for all its wonders, is just nibbling away at the edges of human employment. There aren’t that many surgeons, after all. But in May, Daimler began testing the first self-driving semitruck on the roads of Nevada. The No. 1 job among American men, held by 2.9 million of them, is truck driver. Not that women are safe. Technology will continue to devour clerical and office tasks, and the No. 1 job among U.S. women, held for now by 3 million of them, is administrative assistant.

The greatest anxiety troubling workers today is embodied in a simple question: How will we humans add value? Popular culture is obsessed by it. Humans, a new series on the AMC network, spins a story from the promise and perils of eerily humanoid robots called synths. That seems to be Hollywood’s 2015 theme of the year. Think of Ex Machina (humanoid robot outsmarts people, kills a man, enters society as a person) or Terminator Genisys (Arnold Schwarzenegger’s humanoid robot must again save the world) or Avengers: Age of Ultron (humanoid robot tries to eradicate humanity) or Chappie (bad guys try to destroy humanoid robot police officer who is reprogrammed to think and feel). The big idea is always the same: For good or ill, machines become just like people—only better.

We humans have good reason to be uneasy. Strange things are happening in the economy. Ever fewer men of prime working age—the group that historically has been the most thoroughly employed—are working (see chart), and while several factors are feeding the trend, most economists believe that advancing technology is one of them. In factories and offices, on construction sites and behind counters, technology keeps doing more jobs better than people.


Fear of technological unemployment is as old as technology, and it has always been unfounded. Over time and across economies, technology has multiplied jobs and raised living standards more spectacularly than any other force in history, by far. But now growing numbers of economists and technologists wonder if just maybe that trend has run its course. That’s why former Treasury Secretary Lawrence H. Summers says these issues will be “the defining economic feature of our era.”
How will we humans add value? There is an answer, but so far we’ve mostly been looking for it in the wrong way. The conventional approach has been to ask what kind of work a computer will never be able to do. While it seems like common sense that the skills computers can’t acquire will be valuable, the lesson of history is that it’s dangerous to claim there are any skills computers cannot eventually acquire. The trail of embarrassing predictions goes way back. Early researchers in computer translation of languages were highly pessimistic that the field could ever progress beyond its nearly useless state as of the mid-1960s; now Google translates written language for free, better all the time thanks to feedback from human users, and Skype translates spoken language in real time, for free. The philosopher Hubert Dreyfus, in a 1972 book called What Computers Can’t Do, saw little hope that computers could make significant further progress in playing chess beyond the mediocre level then achieved, but IBM’s Deep Blue beat world champion Garry Kasparov in 1997.

Economists Frank Levy and Richard J. Murnane, in an excellent 2004 book called The New Division of Labor, explained how driving a vehicle requires such complex split-second judgments that it would be extremely difficult for a computer ever to handle the job; Google introduced its autonomous car six years later. Harvard psychologist Steven Pinker observed in 2007 that “assessing the layout of the world and guiding a body through it are staggeringly complex engineering tasks, as we see by the absence of dishwashers that can empty themselves or vacuum cleaners that can climb stairs.” Yet iRobot soon thereafter was making vacuum cleaners and floor scrubbers that find their way around the house without harming furniture, pets, or children, and was also making robots that climb stairs; it could obviously make machines that do both if it believed demand were sufficient. And the Armar IIIa robot, developed at the Karlsruhe Institute of Technology in Germany, can unload (and load) the dishwasher.

The pattern is clear. Extremely smart people note the overwhelming complexity of various tasks, including some, like driving a car, that people handle almost effortlessly, and conclude that computers will find mastering them terribly tough. Yet over and over it’s just a matter of time until the feat is accomplished, often less time than anyone expects. We just can’t get our heads around the notion of computer processing power doubling every two years. At that rate, infotech power increases by a factor of a million in 40 years. The computing visionary Bill Joy likes to point out that jet travel is faster than walking by a factor of 100, and that changed the world. Nothing in our experience prepares us to grasp a factor of a million. At the same time, increasingly sophisticated algorithms let computers handle complex tasks using less computing power. So year after year we reliably commit the same blunder of underestimating what machines will do.
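The arithmetic behind that factor of a million is easy to verify: doubling every two years over 40 years means 20 doublings, and 2 raised to the 20th power is just over a million. A minimal sketch in Python (illustrative only; the two-year doubling period is the article's stated assumption):

```python
# Processing power doubling every two years, over a 40-year span.
years = 40
doubling_period = 2
doublings = years // doubling_period   # 20 doublings in 40 years
growth_factor = 2 ** doublings         # 2^20 = 1,048,576 -- "a factor of a million"
print(f"{doublings} doublings -> growth factor of {growth_factor:,}")
```

By comparison, the jet-travel speedup Bill Joy cites is a factor of roughly 100; the same compounding rule reaches that factor in under 14 years.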

Yes, figuring out what computers will never do is an exceedingly perilous route to determining how humans can remain valuable. A better strategy is to ask, What are the activities that we humans, driven by our deepest nature or by the realities of daily life, will simply insist be performed by other humans, even if computers could do them?

Humans will remain in charge

A large category of those activities comprises roles for which we demand that a specific person or persons be accountable. A useful example is making decisions in courts of law, which we will require that human judges render for quite a long time to come. It’s an example in which the human vs. computer question is not hypothetical. Parole decisions are made by judges in some countries, such as Israel, where researchers investigated how those decisions are influenced by the critical human issue of lunch. Over the course of a day, the judges approve about 35% of prisoners’ applications for parole. But the approval rate declines steadily in the two hours before lunch, almost to zero just before the lunch break. Immediately after lunch, it spikes to 65% and then again declines steadily. If you’re a prisoner, the number of years you spend behind bars could be affected significantly by whether your parole application happens to be the last one on the judge’s stack before lunch or the first one after. Data-driven algorithms have proved superior to human judges and juries in predicting recidivism, and it’s virtually certain that computer analysis could judge parole applications more effectively, and certainly less capriciously, than human judges do. Yet how would you rate the chances of that job getting reassigned from judges to machines? The issue isn’t computer abilities; it’s the social necessity that individuals be accountable for important decisions. Similarly, it seems a safe bet that those in other accountability roles—CEOs, generals, government leaders at every level—will remain in those roles for the same reason.

Humans must work together to set collective goals

In addition, humans rather than computers will have to solve some problems for purely practical reasons. It isn’t because computers couldn’t eventually solve them. It’s because in real life, and especially in organizational life, we keep changing our conception of what the problem is and what our goals are. Those are issues that people must work out for themselves, and, critically, they must do it in groups. Partly that’s because organizations include many constituencies that must be represented in problem solving, and partly it’s because groups can solve problems far better than any individual can.

Only humans can satisfy deep interpersonal needs

A more important category of people-only work comprises the tasks that we must do with or for other humans, not machines, simply because our most essential human nature demands it, for reasons too deep even to be articulated. We are social beings, hardwired from our evolutionary past to equate personal relationships with survival. We want to work with other people in solving problems, tell them stories and hear stories from them, create new ideas with them, because if we didn’t do those things on the savanna 100,000 years ago, we died. The evidence is clear that the most effective groups are those whose members most strongly possess the most essential, deeply human abilities—empathy above all, social sensitivity, storytelling, collaborating, solving problems together, building relationships. We developed these abilities to interact with other people, not machines, not even emotion-sensing, emotion-expressing machines. We may enjoy the Pepper robot, but we didn’t evolve to interact with it.

We want to follow human leaders, even if a computer could say all the right words, which is not an implausible prospect. We want to hear our diagnosis from a doctor, even if a computer supplied it, because we want to talk to the doctor about it—perhaps just to talk and know we’re being heard by a human being. We want to negotiate important agreements with a person, hearing every quaver in his voice, noting when he crosses his arms, looking into his eyes.
To look into someone’s eyes—that turns out to be, metaphorically and quite often literally, the key to high-value work in the coming economy.



Adapted from Humans Are Underrated by Geoff Colvin, published on Aug. 4, 2015, by Portfolio, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2015 by Geoffrey Colvin. Also published in the U.K. by Nicholas Brealey Publishing.

Photo credit: Adam Levey for Fortune Magazine
