As artificial intelligence becomes increasingly embedded in NHS systems, the UK is developing new rules to ensure patient safety whilst maintaining innovation in healthcare technology.
The question of who is responsible when artificial intelligence gets healthcare decisions wrong is becoming increasingly urgent as the NHS rolls out AI-powered diagnostic and clinical tools across the country. The UK's medicines regulator is now laying the groundwork for a comprehensive new framework to govern medical AI, seeking to strike a careful balance between protecting patient safety and enabling innovation in a sector where Britain has a genuine competitive advantage.
In September 2025, the Medicines and Healthcare products Regulatory Agency (MHRA) launched a National Commission into the Regulation of AI in Healthcare, bringing together experts from technology, clinical practice, patient advocacy, and regulation. The initiative reflects growing recognition that existing rules, designed for traditional medical devices, may not adequately capture the unique challenges posed by AI systems that learn, adapt, and make autonomous decisions in clinical settings. The Commission’s call for evidence, which closed on 2 February 2026, drew responses from patients, healthcare professionals, AI developers, industry bodies, and regulatory organisations across the UK and internationally. These submissions will now inform a new regulatory framework scheduled for publication in 2026.
Currently, the UK treats AI tools used for medical purposes as medical devices, requiring them to meet strict safety and performance standards under the UK Medical Devices Regulations. However, this framework was developed before modern AI systems became commonplace in healthcare. The existing approach does not fully address the complexities of how AI systems operate, learn from data, or potentially fail in ways that traditional devices do not.
“The boundaries of regulation matter significantly,” explains the MHRA in its consultation documents. “We need to consider not only AI systems that qualify as medical devices but also healthcare AI tools that operate outside this classification yet still influence patient outcomes.” This distinction is critical, as some AI applications—such as administrative tools or general decision-support systems—may not technically meet the definition of a medical device but could still have clinical consequences.
The question of liability and accountability is particularly complex. When a conventional medical device fails, the responsibility chain is relatively clear: the manufacturer is liable for defects, regulators enforce standards, and clinicians use the device according to its approved purpose. But AI systems operate differently. An AI diagnostic tool might make an error because of patterns in its training data, patient demographics not represented during development, or genuine limitations in the underlying science. Should responsibility lie with the developer, the healthcare organisation deploying the system, the clinician interpreting its output, or the regulator that approved it? The answer will likely involve shared responsibility, but the framework for this remains undefined.
To address these challenges, the MHRA has already launched practical initiatives. Its "AI Airlock" regulatory sandbox brings together MHRA experts, Approved Bodies, NHS representatives, and other regulators to test AI medical devices in real-world settings; the sandbox is now in its second phase, with findings expected to shape future guidance. Additionally, since January 2026 the MHRA has waived regulatory fees for micro and small UK firms developing medical AI under a pilot scheme, recognising that regulatory costs can burden innovative companies whilst larger manufacturers absorb them more easily.
The drive for bespoke medical AI regulation reflects a broader shift in UK health policy. Post-Brexit, the UK has the opportunity to develop regulatory approaches tailored to domestic innovation whilst maintaining international standards. This is particularly significant for AI, where regulatory arbitrage—companies choosing to develop where rules are lighter—could either attract cutting-edge research or, conversely, undermine safety if standards become too permissive.
The European Union has taken a different approach, establishing a tiered risk-based AI Act with high-risk systems facing strict requirements. The UK is not bound by this framework but is monitoring developments closely, particularly around clinical decision support systems and diagnostic tools, which would likely be classified as high-risk in Europe. The timing is significant: EU rules for high-risk AI systems are scheduled to take effect from 2 August 2026, creating potential market pressures for UK regulation to align or differentiate strategically.
Patient safety advocates have emphasised that speed of approval should not come at the expense of rigorous testing. The National Commission's working groups include patient representatives, recognising that those who bear the consequences of medical AI errors have a legitimate stake in how these systems are governed. Key questions under consultation include what clinical evidence AI systems should be required to demonstrate, what post-market surveillance should look like, and how to manage transparency, ensuring that clinicians and patients understand when and why AI is involved in healthcare decisions.
The stakes are high. A miscalibrated AI system deployed across NHS trusts could potentially affect thousands of patients before errors emerge. Conversely, regulation that is too restrictive could push innovative UK companies offshore and slow the adoption of tools that could genuinely improve diagnostic accuracy, reduce clinician workload, and free NHS resources for direct patient care.
Source: @bmj_latest
Key Takeaways
- The MHRA is developing a new regulatory framework for medical AI scheduled for 2026, following a consultation period that closed on 2 February 2026
- Current regulations treat AI tools as medical devices, but this framework may not adequately address how AI systems learn, adapt, and fail in ways traditional devices do not
- Accountability for AI errors remains undefined—responsibility may be shared between developers, healthcare organisations, clinicians, and regulators
- The UK is positioning itself as a potential world leader in medical AI regulation, balancing patient safety with innovation in a high-value technology sector
- Fee waivers for small UK firms developing medical AI aim to support innovation whilst maintaining safety standards
What This Means for Kent Residents
For patients in Kent and Medway, the development of this regulatory framework affects how quickly new diagnostic and treatment tools reach NHS services and how those tools are evaluated for safety. NHS organisations across Kent and Medway, including local GP practices and hospital services, will eventually adopt regulated AI systems under whatever framework emerges. Appropriate regulation can help ensure these tools genuinely improve care, whether through faster cancer diagnosis, better personalised treatment planning, or more efficient use of NHS resources, without being deployed prematurely or with inadequate safeguards. If you have concerns about how AI is being used in your care, your NHS clinicians and local hospital trust should be able to explain which tools are in use and how they inform clinical decisions. For more information about NHS services in Kent, contact your local GP practice or visit NHS England's website for details about digital innovation in your region.