Healthcare systems across the OECD, whether publicly or privately funded, face a range of challenges that threaten their financial and operational sustainability. The demographic crunch forms the backdrop. Life expectancy has risen inexorably in recent decades, but quality of life has often failed to keep pace: more people are living longer, but frequently with complex, chronic conditions that require ongoing treatment. Health systems must meet this rising demand with constrained resources, especially since the global financial crisis. Publicly funded systems in particular have received fewer resources than required as governments seek to economise, leading in some cases to a rationing of care.
Workforce shortages also threaten the sustainability of healthcare delivery. Although the challenge is more acute in low- and middle-income countries, an inability to recruit and retain sufficient clinical staff to meet rising demand could further harm patient outcomes in developed systems too. In the UK, for example, the health and social care sector has become increasingly reliant on foreign labour, while employers often pay premium rates for temporary staff to fill gaps.
In its current configuration, the health system of a typical developed state risks becoming unsustainable without a substantial increase in resources, a clear lowering of expectations about what can be delivered for patients, or a revolution in productivity. Although no silver bullet, AI could be part of the solution to these challenges.
The fundamental gains are in efficiency and cost-effectiveness. AI tools that can analyse medical data, such as microscopic sections from biopsies, promise to revolutionise the speed and accuracy of diagnosis. Advances in genomics and image analysis will also drive greater understanding of how complex diseases develop, allowing for more proactive and personalised treatment. Earlier diagnosis promises to reduce the cost of care further down the line. AI solutions can make reporting more efficient, freeing up time for NHS staff to spend on direct patient care. Yet an underdeveloped policy framework and a risk-averse culture among clinicians have slowed the deployment of these technologies, often leaving public healthcare providers less willing adopters of technology than they would like to be.
What are the primary obstacles to the deployment of AI in public healthcare? What simplifications of the regulatory and procurement framework would have the greatest effect? How are legacy technologies and commercial arrangements holding back change and how can this be addressed?
Many of these innovations require access to huge pools of patient data. Ensuring that AI researchers and commercial organisations can access such data generally requires breaking down well-established institutional boundaries. Both steps can encounter barriers: patient concerns over the sharing of personal data, as exemplified by the Royal Free/DeepMind controversy, and the narrower commercial and institutional interests of healthcare providers, pharmaceutical manufacturers and tech companies. The greater use of AI-driven diagnostic technologies also raises questions of trust and patient confidence. Core questions include whether those who build algorithms should share the negligence liability shouldered by clinicians, and whether regulators should assess an algorithm's robustness in the same way that new drugs are evaluated.
What are the primary practical obstacles to data pooling and analysis inside public healthcare systems? How can they be resolved? Are there aspects of data protection and patient trust unique to healthcare that need bespoke solutions?
This article was written for the Politics of AI conference convened by Global Counsel in 2019 and forms a part of a wider AI briefing pack: https://www.global-counsel.co.uk/analysis/insight/politics-ai
The views expressed in this note can be attributed to the named author(s) only.