Artificial intelligence (AI)-driven medical software is increasingly being adopted in healthcare across Europe. Often referred to as AI-based software as a medical device (AI-SaMD), these systems can improve diagnostic accuracy, streamline clinical workflows and support personalised care.
However, AI-SaMD also raises new challenges. Safety, explainability, data governance and lifecycle management now play a central role in regulatory assessment. As a result, manufacturers in the EU must address not only medical device regulation but also emerging horizontal AI-specific rules.
This article outlines the key benefits and limitations of AI in medical devices and explains the regulatory challenges for AI medical software in the EU, with a focus on the near future.
Advantages of AI in medical devices
AI-enabled medical software can enhance healthcare delivery in several connected ways.
- Improved diagnostic performance: Machine-learning models trained on high-quality clinical data can detect subtle patterns in images and physiological signals. Therefore, they may support earlier or more accurate diagnosis in areas such as radiology, pathology and cardiology (1).
- Workflow and capacity gains: AI can automate time-consuming tasks such as image segmentation, case prioritisation or preliminary triage. This reduces turnaround times and frees clinicians to focus on complex decisions. The advantage is particularly relevant in stroke care, where time to treatment is critical (1).
- Personalisation and prediction: Predictive models can stratify patients by risk and support individualised treatment decisions. In addition, AI can enable adaptive monitoring strategies that respond to a patient’s changing clinical status (2). These capabilities support both acute care and rehabilitation pathways.
Overall, AI has the potential to improve patient outcomes, use clinical resources more efficiently and support care in settings with limited specialist capacity (1).
Still, these benefits depend on robust clinical validation, effective risk management and regulatory compliance. They should not be assumed without appropriate evidence.
Principal risks and limitations
Despite its promise, AI-SaMD presents several important risks.
- Data bias and generalisability: If training datasets lack diversity in geography, demographics or clinical practice, model performance may degrade in real-world use (1). Overfitting and the underrepresentation of rare clinical presentations remain common issues.
- Explainability and trust: Many high-performing AI models operate as “black boxes”. This makes it harder for clinicians to judge when to accept or override algorithmic recommendations. Limited interpretability can also complicate incident analysis when outcomes are poor (1, 2).
- Lifecycle behaviour and safety management: Over time, some AI systems will be updated or retrained under change control. Maintaining performance after the device is placed on the market requires predefined change-control plans and continuous monitoring, which traditional device rules do not fully address (2, 3).
These risks also raise questions about how responsibilities are shared between manufacturers, healthcare institutions and individual users, especially for monitoring, reporting and clinical decision-making.
The European regulatory context
Europe regulates AI medical software through both sector-specific and horizontal legislation.
Software with a medical purpose falls under the Medical Device Regulation (EU) 2017/745 (MDR) (4). Manufacturers must demonstrate safety, clinical performance and post-market surveillance.
At the same time, the EU Artificial Intelligence Act (EU) 2024/1689 (AI Act) introduces additional obligations for high-risk AI systems, including most AI-enabled medical devices (5). These requirements cover data governance, risk management, human oversight, transparency and lifecycle controls. As a result, manufacturers must align compliance with both frameworks in a coordinated way.
Many medical devices that use AI could fall into a higher MDR risk class. As a result, Notified Body assessment and stronger pre-market evidence are required (1).
The EU AI Act and related standardisation activities (including the European Commission's proposals and evolving guidance) further clarify expectations for safe and trustworthy AI-SaMD in clinical use. They explicitly address AI-specific topics such as training data quality, transparency, human oversight, algorithmic robustness and post-market surveillance (1, 2). In addition, clinical engineers and medical physicists play a key role: they verify performance claims, conduct acceptance testing and maintain quality assurance once AI tools are deployed in hospitals (3). Guidance from European and international bodies increasingly shapes how developers validate and maintain AI systems in clinical practice.
Key regulatory challenges in Europe
- Aligning AI requirements with device law: AI governance introduces horizontal obligations such as transparency and data traceability. However, manufacturers must still meet device-specific clinical evidence requirements under the MDR (1).
- Managing adaptive algorithms: Regulators must develop proportionate approaches for self-adapting systems. Predefined change-control plans and post-market performance monitoring are essential (3).
- Explainability and clinical integration: There is no single accepted metric for explainability. Therefore, regulators must balance clinical needs with technical feasibility. Standardised documentation, such as fact sheets or model cards, can support decision-making and incident investigation (1, 2).
- Data governance and cross-border validation: AI development depends on large, diverse datasets. Yet, Europe’s GDPR and national privacy laws complicate cross-institutional data sharing. Practical governance frameworks and standard data-use agreements are essential (1, 3). This challenge is further shaped by national health-data laws in Germany, Austria and Switzerland as well as initiatives such as the European Health Data Space.
- Capacity constraints for conformity assessment: Notified Bodies and competent authorities need additional expertise. Otherwise, market access will slow or become inconsistent across member states (1).
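The standardised documentation mentioned above, such as fact sheets or model cards, can be kept as machine-readable metadata so it stays auditable alongside the model itself. The sketch below shows one possible shape for such a record; every field name and value is hypothetical and illustrative, not drawn from any official template or standard.

```python
import json

# Hypothetical model "fact sheet" covering the items discussed above:
# data sources, intended use and known limitations. All field names and
# values are illustrative, not taken from any official documentation scheme.
model_card = {
    "model_name": "example-stroke-triage-model",
    "intended_use": "Prioritisation of suspected stroke cases for review",
    "training_data": {
        "sources": ["hospital_a", "hospital_b"],
        "collection_period": "2019-2023",
        "known_gaps": ["limited paediatric cases"],
    },
    "performance": {"auroc": 0.91, "external_validation": True},
    "limitations": ["not validated for posterior-circulation stroke"],
    "human_oversight": "Outputs reviewed by a clinician before action",
}

# Serialise for inclusion in technical documentation or an audit trail.
card_json = json.dumps(model_card, indent=2)
print(card_json)
```

Keeping the record as structured data (rather than free text) makes it straightforward to validate required fields automatically and to attach a versioned copy to each model release.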
Strategic directions to balance innovation and safety
Several practical strategies can support compliant AI development:
- Predefined change management: Include retraining and recalibration plans in regulatory submissions (3).
- Transparent documentation: Use standardised fact sheets that describe data sources, intended use and limitations (2).
- Robust validation: Perform external, multi-centre validation across devices and regions (1).
- Operational quality assurance: Implement KPIs to detect performance drift in clinical use (3).
- Regulatory capacity building: Strengthen AI expertise within regulatory bodies and Notified Bodies (1).
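To make the KPI idea concrete: one simple operational pattern is a control-chart-style rule that flags drift when a recent performance metric falls well below its validated baseline. The sketch below is a minimal illustration with invented numbers; real thresholds and metrics would be defined and justified in the manufacturer's quality management system.

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float], k: float = 3.0) -> bool:
    """Flag performance drift when the recent KPI mean falls more than
    k standard deviations below the baseline mean. A deliberately simple
    control-chart-style rule; real systems would justify the metric,
    window and threshold in their quality management documentation."""
    threshold = mean(baseline) - k * stdev(baseline)
    return mean(recent) < threshold

# Illustrative weekly KPI values, e.g. agreement with clinician ground truth.
baseline_weeks = [0.94, 0.95, 0.93, 0.96, 0.94, 0.95]
stable_weeks = [0.94, 0.93, 0.95]
drifting_weeks = [0.85, 0.84, 0.86]

print(drift_alert(baseline_weeks, stable_weeks))    # no alert: within baseline range
print(drift_alert(baseline_weeks, drifting_weeks))  # alert: clear drop below threshold
```

In practice such a check would run automatically on each monitoring window, and an alert would trigger the incident and change-control processes described above rather than an immediate model update.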
Conclusion
AI-based medical software offers significant clinical potential. At the same time, it introduces new technical and regulatory risks. In Europe, the MDR provides the device-level foundation, while AI-specific governance, most notably the EU Artificial Intelligence Act, and operational guidance continue to evolve.
Ultimately, safe and effective use of AI in medical devices requires coordinated action. Standardised reporting, robust external validation, lifecycle governance for adaptive models and sufficient regulatory capacity all play a critical role. Regulation must remain rigorous, yet flexible enough to support innovation that delivers real clinical benefit.
Want to dive deeper into the topic?
Have a look at the references for some fascinating insights.
References
- Fraser, A. G., Biasin, E., Bijnens, B., Bruining, N., Caiani, E. G., Cobbaert, K., … Rademakers, F. E. (2023). Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives. Expert Review of Medical Devices, 20(6), 467–491. https://doi.org/10.1080/17434440.2023.2184685
- Ebad, S. A., Alhashmi, A., Amara, M., Miled, A. B., & Saqib, M. (2025). Artificial Intelligence-Based Software as a Medical Device (AI-SaMD): A Systematic Review. Healthcare, 13(7), 817. https://doi.org/10.3390/healthcare13070817
- Zanca, F., Brusasco, C., Pesapane, F., Kwade, Z., Beckers, R., & Avanzo, M. (2022). Regulatory Aspects of the Use of Artificial Intelligence Medical Software. Seminars in radiation oncology, 32(4), 432–441. https://doi.org/10.1016/j.semradonc.2022.06.012
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (Medical Device Regulation). EUR-Lex.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). EUR-Lex.
Need assistance with planning and executing your clinical study or investigation?
Our team guides you through all steps of your clinical study—from selecting the optimal study design to compliant implementation. This way, you can save time and costs without compromising on quality. Use clinical studies as a strategic tool to advance your product development effectively and establish a strong foundation for a successful market launch.
We are here to help and support you in developing an optimal clinical strategy for safe, effective, and market-ready products. Our expert team ensures smooth planning and execution of clinical studies that meet both regulatory requirements and your project goals.
Explore our website to learn more about how we can support you as a medical device manufacturer. Feel free to contact us at studien@vascage.at.