Empathy won't necessarily save your job from Artificial Intelligence
If you're counting on the necessity of the 'human touch' to save your profession, you may want to think again
"Many of us tend to romanticize what we do," says Professor Richard Susskind. "The idea that it could be reduced to something that a machine could process is contrary to what many of us have trained for and worked towards for many years."
The author of The Future of the Professions: How Technology Will Transform the Work of Human Experts believes that many professions over-estimate their resistance to automation — and that emerging generations of workers should choose their career paths with a little less cynicism.
"Don't go into a profession because you fancy undertaking the work your parents or your uncle or your professors did," Susskind suggests. "Don't become a lawyer because you like John Grisham or Suits or The Good Wife. That's twentieth-century law.
"Go into law because you're interested in providing better access to justice. Go into medicine not because you want to practice as a GP or a surgeon as you see today, but because you want to be part of a generation that will redefine the way health is improved."
It's not an abstract topic for Susskind; his own two millennial children face this challenge, he says, and they too will need to 'learn to learn'.
"Be prepared to adapt constantly to changing situations, because it's not going to be like the previous century, where you learned something during the first 25 years of your life and then did that until you retired. That doesn't work anymore. Every few years, you'll probably have to relearn something."
There are warning signs for anyone confident of their professional immunity to AI: if you find your work unvaried and predictable, it's likely fodder for the pattern recognition which lies at the heart of most automation.
And if you're relying on 'the human factor' to ring-fence your job, it's also worth considering the broader psychology of that argument.
"If you're in a profession where a lot of the work is non-routine, you think you're pretty safe," Susskind says. "But what one has to realise is that notions of trust and empathy are relevant to the current way of delivering professional services, which is one-to-one consultation.
"But if you can find different ways of producing and distributing knowledge and expertise in society, the features that were important in the advisory model, like empathy and trust, may be less important. What you're maybe looking for is a reliable outcome, rather than an empathetic service."
Susskind gives the example of the specialist medical practitioner: you find a lump, notice a new mole, or soldier on for a fortnight with a pain that just won't go away. You don't think it's cancer; you desperately don't want it to be cancer. You want to sit across from someone with decades of experience in oncology and the bedside manner to promise you that you do not have cancer. Or, should the worst come to the worst, someone to reassure you and talk through your options if you do. You want empathy.
"If it transpires that in the future AI systems will be more reliable than human doctors and specialists, then a fairly simple choice might need to be made: do we go for a far more reliable online diagnosis, or do we go to a demonstrably less reliable service which is more empathetic?"
Mandating AI in healthcare
Jürgen Schmidhuber, co-director of the Dalle Molle Institute for Artificial Intelligence Research and the man dubbed by the New York Times a potential father of AI, concurs with Susskind that medicine's 'personal touch' may ultimately yield to the hard science underneath.
"Maybe the most obvious example is medical diagnosis, where you would normally need a trained doctor to look at CT scans, tissue samples or microscopy images to find out if there is some pathology. Today there are already a lot of examples where machine medical diagnosis is superhuman.
"That is going to change everything in healthcare: at some point these artificial neural networks will be so good that the lawmakers will say that diagnosis has to be done by machine."
Medicine's status as an empirical science has drawn much AI and machine-learning research to the field; IBM's Watson Health framework is making industry-shaking strides there. However, attempts to market it as a useful resource for medical professionals rather than a potential usurper — though subject to occasional hiccups — don't entirely convince.
Professor Susskind's children may be way ahead of his concern for their future. A recent survey concluded that 48 per cent of millennials are interested in non-routine occupations. Although this figure may have more to do with idealism than pragmatism, it is notable that only 39 per cent of boomers express the same interest.
Simply burying one's head hopefully in the sand — special pleading for your job, career or field on the grounds of empathy or human interaction — may prove a poor strategy for the long term. Many, of course, will run out the clock on their way to retirement; but even within the coming decade, the question will increasingly become whether you side with the machines or against them.
IMAGES: Wikimedia Commons