AI, Workforce Shortages and the NHS: What Happens After We Close the Gap?

Much of the current debate about artificial intelligence in healthcare starts in the same place: workforce shortages, backlogs, rising demand and a system struggling to keep pace. Against that backdrop, AI is increasingly framed as a necessary response to structural pressure rather than a futuristic aspiration.


Recent reflections from McKinsey on the emergence and potential of AI in healthcare reinforce this point. Healthcare is described as highly labour-intensive, with persistently weak productivity growth over recent decades. In that context, the earliest value of AI is not radical automation, but the ability to relieve administrative burden, streamline workflows and allow scarce clinical staff to spend more time on patient care. McKinsey notes that the majority of healthcare organisations are already pursuing generative AI use cases, largely focused on operational efficiency and workforce augmentation.

For systems such as the NHS, this framing matters. AI is positioned first as a way of addressing capacity gaps rather than eliminating jobs. In a service facing longstanding vacancies and growing demand, that initial application is both politically and operationally attractive.


But it is not the end of the story.


Once AI has helped to close the most acute workforce gaps, a more difficult question begins to surface. What happens next? What happens to roles that become genuinely redundant as AI capability matures? This is not an abstract concern. Outside healthcare, conversational AI is already replacing large parts of contact centre activity, triage functions and routine transactional work.


The UK government is now openly acknowledging that some jobs will disappear as AI adoption accelerates. In a recent People Management article, investment minister Jason Stockwood suggested that universal basic income could be one way of supporting workers whose roles are displaced by AI. He described the need for “concessionary arrangements” to soften the transition for industries that “go away”, alongside lifelong learning to allow people to retrain. While universal basic income is not government policy, the fact that it is being discussed reflects the scale of anticipated disruption.

The same article, however, also contains a significant caution. Multiple experts argue that income protection should not become the centrepiece of the response to AI-driven change. Professor Sebastian Reiche warns that if income support substitutes for active labour market policy, the UK risks repeating past mistakes by managing decline rather than shaping transition. Jacqueline Ruding similarly notes that universal basic income on its own is unlikely to be sufficient to address the societal and economic impacts of AI.


This distinction is critical. Treating displacement primarily as a welfare issue turns AI into a problem to be mitigated rather than a capability to be harnessed. It focuses attention on compensating for lost work, rather than asking how human effort might be redeployed to create new forms of value alongside emerging technologies.


There is a credible alternative. Research on AI and productivity consistently shows that the largest economic gains arise when AI is used to augment human roles, not simply replace them. Job redesign, skills investment and organisational change are what unlock value. Where those conditions are absent, productivity gains are limited and inequality risks grow.


In a healthcare context, that alternative path is especially compelling. The NHS is not short of work. It is short of capacity in the right places. If AI reduces demand for certain administrative or transactional roles, the question should not automatically be how those individuals are compensated for exit. It should be how they are supported to move into areas of unmet need that AI cannot easily replace.


That could mean community-based caring roles aligned with the shift from hospital to neighbourhood care (social prescribing, care navigation, care mentors). It could mean diagnostic and prevention support that strengthens early intervention. It could mean digitally enabled coordination, analytics and AI supervision roles that help the system safely scale new technologies. It could also mean building internal capability to design, train and govern increasingly agentic AI tools, rather than outsourcing that expertise entirely.


None of this is guaranteed. Retraining at scale is hard. Not all skills are transferable. There will inevitably be friction and failure. But the policy choice still matters.

The People Management article highlights another risk. A fully universal income support model would be extremely costly and could divert funding away from education, skills and industrial policy. McKinsey similarly notes that the next phase of AI-driven productivity will require sustained capital investment. A strategy that absorbs fiscal headroom through long-term welfare commitments risks constraining the very investment needed to realise AI’s benefits.


For the NHS, this brings the argument back to first principles. The long-term plan is built around major shifts towards prevention, community care and digital transformation. AI has the potential to accelerate all three, but only if the workforce is treated as part of the solution, not a downstream casualty of efficiency. The real question, then, is not whether AI will change work in healthcare. It already is. The question is whether we choose to use the capacity it creates merely to keep the lights on, or whether we reinvest that capacity in building a more preventative, more personalised and more resilient health system.


That is not a welfare debate. It is a workforce strategy choice.
