
European Health Leaders Warn AI Must Be Built With Doctors, Not To Replace Them

Healthcare professionals across Europe are calling for artificial intelligence tools to be co-designed with clinicians rather than imposed from above, raising important questions for the NHS and patient safety.

Healthcare leaders meeting in the European Parliament have issued a stark reminder that artificial intelligence should enhance medical practice, not undermine it. At a panel discussion on 25 February focused on the future health workforce in the age of AI, European doctors emphasised that clinicians and other healthcare professionals must have a central role in designing and implementing AI tools from the outset.

The message is timely. As AI systems become increasingly embedded in healthcare settings across Europe and the United Kingdom, there is growing concern that technology companies and policymakers could deploy these tools without adequately consulting the medical professionals who must use them daily in patient care.

Ricardo Baptista Leite, a doctor and chief executive of Health AI, a non-profit organisation championing responsible artificial intelligence in health, highlighted a significant problem: much of what is being sold as cutting-edge health technology does not deliver on manufacturers’ promises. “Health technology assessments provide a unique opportunity to separate the snake oil from what truly works,” he said. “There’s a lot of stuff being sold that doesn’t work or doesn’t fulfil the promises that manufacturers are making. We need serious processes.”

This concern reflects a broader challenge facing healthcare systems. Across Europe, clinicians and patients are already using AI tools—whether formally approved or not—yet regulatory frameworks have struggled to keep pace. In the European Union, the AI Act, which entered into force in August 2024, classifies medical AI as “high-risk” and imposes stringent requirements including transparency, data governance, and human oversight. However, implementation across member states remains uneven, and many healthcare organisations are uncertain how to navigate the new regulations responsibly.

The emphasis on professional involvement in AI development represents a departure from the "top-down" approach sometimes favoured by technology firms and health bureaucracies. Rather than viewing doctors as obstacles to innovation, European health leaders are arguing that clinicians must be partners in shaping how AI is integrated into clinical workflows. This is not merely a matter of professional courtesy. It reflects a practical reality: doctors and nurses understand the complexities of patient care, the risks inherent in clinical decision-making, and the potential pitfalls of algorithmic error in ways that pure technologists may not.

The workforce implications are also significant. Rather than asking whether AI will replace doctors, health leaders are posing a more nuanced question: how will AI be used for task-shifting, allowing healthcare professionals to focus on aspects of care that require human judgment, compassion and complex decision-making? This distinction matters enormously. An AI system that helps radiologists process imaging data more efficiently could free them to spend more time on diagnostic complexity and patient communication. Conversely, an AI system imposed without clinical input might create new bottlenecks or introduce errors that endanger patients.

Regulation is catching up to reality. The EU AI Act and complementary frameworks like the Medical Devices Regulation now require that high-risk AI systems used for medical purposes undergo rigorous testing, maintain clear audit trails, and provide mechanisms for human oversight. Developers must demonstrate that their systems perform reliably across diverse patient populations, address data bias, and remain transparent about their limitations. These safeguards are essential, yet they also raise questions about implementation burden, particularly for smaller healthcare providers and startups developing innovative solutions.

For the United Kingdom, which operates independently of EU regulations, the principles endorsed by European health leaders remain highly relevant. The NHS has been cautiously exploring AI applications—from diagnostic support systems to administrative efficiency tools—but has emphasised the importance of clinical validation and professional engagement. NICE, the National Institute for Health and Care Excellence, has published guidance on evaluating digital health technologies, stressing the need for robust evidence and consideration of how tools interact with clinical practice.

The call for AI to be designed alongside healthcare professionals also reflects a deeper principle: public trust in healthcare technology depends on safety, transparency and professional accountability. Patients are more likely to accept AI-assisted care if they know that their doctors have endorsed and shaped the technology, rather than feeling that algorithms have been imposed without clinical scrutiny.

Healthcare leaders across Europe are signalling that the next phase of AI adoption must be collaborative. Technology companies, regulators, and healthcare professionals need to work together from inception through deployment. This requires genuine dialogue, not tokenistic consultation. It requires time, resources and institutional structures that prioritise patient safety and professional expertise alongside innovation.

Source: @bmj_latest

Key Takeaways

  • European health leaders have called for AI in healthcare to be designed with doctors, not imposed on them or used to replace clinicians
  • The EU AI Act classifies medical AI as high-risk and requires strict oversight, transparency and human involvement in clinical decision-making
  • Healthcare professionals argue that rigorous health technology assessments are essential to distinguish genuinely useful AI tools from those that fail to deliver on their promises
  • Workforce planning must evolve to consider AI as a tool for task-shifting that enhances clinical practice rather than a replacement technology

What This Means for Kent Residents

For patients across Kent and Medway, this message from European health leaders has direct implications. Your GP, consultant, nurse or other healthcare professional should have input into any AI tools that support your care. If you use services provided by NHS trusts in Kent and Medway or other local providers, the principle of clinical co-design means that doctors and staff will have had a say in how these technologies are implemented. If you have concerns about how AI is being used in your healthcare, speaking with your GP or local NHS service is the appropriate first step. The NHS remains committed to ensuring that technology enhances, rather than replaces, the human relationships that are central to good medical care.

Transparency Notice: This article was produced with AI assistance and reviewed by our editorial team before publication. Kent Local News uses artificial intelligence tools to help deliver fast, accurate local news. For more information, see our Editorial Policy.
KLN Staff Reporter
https://kentlocalnews.co.uk
The KLN Staff Reporter desk covers breaking news, crime alerts, traffic updates, and council news across Kent. Our reporting team works around the clock to bring you the latest developments from communities across the county.