AI for physicians: what should you be doing?

With AI developing at a breathtaking pace, Dr Matthew Fenech explores the rapidly changing landscape of AI in healthcare and offers practical advice on how to stay ahead of the curve.

The pace of change in thinking around AI in health has been breathtaking. In the last year alone, we’ve had a stream of policy reports, including Future Advocacy’s collaboration with the Wellcome Trust on ‘Ethical, Social, and Political Challenges of AI in Health’, a briefing note from the Nuffield Council on Bioethics, and an analysis of the opportunities and risks for the NHS by Reform. And it’s not just think tanks that are discussing AI in healthcare: the Royal College of Radiologists has published a position statement on artificial intelligence, while the RCP has published its own document on AI.

The findings of the Royal College of Surgeons’ Commission on the Future of Surgery are expected soon. NHS England has published an ‘initial code of conduct for data-driven health and care technology’, and the secretary of state, Matt Hancock, has delivered a keynote speech outlining his ambition to “make the UK the world leader [...] in HealthTech”.

AI in health is here – what should you be doing about it? When RCP President Professor Andrew Goddard asks you to embrace AI technology, what does this mean in practice? Here are a few suggestions.

Talk the talk

It’s important to familiarise yourself with AI and to begin to understand which of this vast suite of tools may be best applied to your work. Thankfully, there are a number of resources out there that should help ease you into this unfamiliar territory. Future Advocacy’s report has an introductory section with an accessible definition of AI and real-world examples of the five use cases for AI in health and medical research.

Even so, AI moves fast, so I suggest subscribing to a newsletter that offers regular updates, and exploring other resources such as podcasts (a good example is Exponential View). Also highly recommended is the Health and Social Care.AI initiative, whose monthly AI webinars welcome anyone interested in learning about and influencing AI in health and social care, regardless of background or prior experience.

Lastly, if you’re responsible for organising your local or regional training days, why not invite someone to talk to you about AI in your particular specialty?

Critically analyse your own work

Health and care work has traditionally been considered resistant to AI-enabled automation. Recent estimates of the automatability of different sectors suggest that 12% of jobs in health and social care could be displaced by automation over the next 20 years, but that this will be more than offset by the creation of new jobs in the sector as a result of economic growth.

It’s important to realise, however, that jobs are concatenations of different tasks, each requiring different skills and varying in their automatability. It won’t be whole jobs that will be automated, but some of their constituent tasks. In 2017, a report described automatable tasks as “specific actions in familiar settings where changes are relatively easy to anticipate”, and suggested that about 60% of jobs have at least 30% of tasks that can technically be automated with current technology.

When you think about your day-to-day work, does that description of certain tasks sound familiar to you? You’re not alone. NHS nurses reported spending an estimated 2.5 million hours a week on clerical tasks, equating to more than an hour per day for every nurse. Healthcare professionals (HCPs) need to start thinking about how best to safely and ethically automate the routine aspects of their work, in conjunction with AI experts.

More importantly, HCPs need to consider how best to use the time that this technology frees up.

The utopian vision is one where HCPs spend more time listening to patients and their relatives, and better explaining their diagnosis, prognosis, and treatment options. But a more empathetic cohort of physicians will not be the inevitable consequence of the increased use of AI. Training programmes will need to be adapted to account for these changes. It’s worth starting to think now about how to improve at those aspects of your job which will remain human for longer.

Be sceptical …

It won’t have escaped your attention that there’s a lot of hype about AI in healthcare. Barely a day goes by without a headline screaming that AI now “beats” doctors at some diagnosis or other. Some of the early promises made about the revolutionary potential of AI in health are coming up short, as IBM are discovering with their much-vaunted Watson for Health programme.

Thankfully, there are some clinicians and other experts who keep a close eye on the claims made by AI manufacturers, and it’s worth reading what they have to say. Blogs by Dr Margaret McCartney, Dr Luke Oakden-Rayner, and Professor Enrico Coiera stand out for their accessible and fair critiques of the latest announcements in the AI in healthcare space, while Cathy O’Neil deals with wider issues around the fair application of algorithms to sensitive sectors.

Ultimately, we need to remember that AI tools are just that: tools. We need to evaluate them using the same rigorous standards we apply to any new medical innovation.

You wouldn’t dream of prescribing a new drug without first having read the literature and being convinced that the benefits outweigh the risks for the patient in question. Algorithms should be no different.

… but not cynical

There’s some really encouraging work out there. The collaboration between Moorfields and DeepMind Health, recently published in Nature Medicine, is a great example of an AI tool being developed to address a specific clinical need that was identified by a frontline clinician, with input from patients and other stakeholders.

Future Advocacy’s work on the impact of AI in low- and middle-income countries identified many exciting innovations that aim to use AI to deliver services that are severely underdeveloped in certain areas, such as in the diagnosis of malaria or the prediction of outbreaks of dengue fever.

Moreover, if we want all medical practice to be evidence-based, then we need to better understand the insights that healthcare data can provide. It’s undeniable that we’re not using this potential goldmine to maximum effect. AI can help by providing data analysis tools of much greater scale and speed than we’ve seen before.

Ultimately, polarised views help no-one. The truth about AI in healthcare is much more nuanced, and it’s up to physicians to educate themselves sufficiently to be able to understand the opportunities, tackle the risks, and communicate both to their patients.

Dr Matthew Fenech is a consultant in artificial intelligence policy with 10 years’ experience as an NHS research doctor. Follow him on Twitter at @MattFenech83.

This blog was originally published on the Our Future Health website.