The EMA and FDA’s joint publication of 10 guiding principles for the use of AI across the medicines lifecycle marks an important moment for global drug development. These principles offer a clear step forward in guiding how sponsors design trials, manage data, and interact with regulators in both jurisdictions. AI is now becoming part of core regulatory thinking, and sponsors will increasingly be expected to treat AI‑enabled methods like any other GxP‑relevant system across the full medicines lifecycle.
The principles emphasize human-centric design, risk-based approaches, multidisciplinary expertise, and rigorous data governance. For clinical development teams, this translates into several operational realities:
AI must be validated like any other clinical tool, with a clear context of use, transparent model design, and robust documentation. That said, a ‘risk-based’ approach means that low-risk supportive tools (for example, document search) will not be treated the same as AI affecting clinical decision-making, such as dose selection, eligibility, or safety signal detection.
Sponsors will need to clearly articulate what the AI does, where in the trial process it is used (e.g., site feasibility, patient selection, endpoint derivation, monitoring), and how its output influences decisions.
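One way to make such an articulation concrete is to keep a structured context-of-use declaration alongside each AI tool. The sketch below is purely illustrative; the field names and the example tool are hypothetical, not from any regulatory template:

```python
# A minimal, hypothetical "context of use" declaration a sponsor might
# maintain for each AI tool in a trial (all field names are illustrative).
context_of_use = {
    "tool": "site-feasibility-ranker",
    "lifecycle_stage": "site feasibility",   # where in the trial it runs
    "inputs": ["historical enrollment rates", "site activation times"],
    "output": "ranked list of candidate sites",
    "decision_influence": "advisory",        # advisory vs. determinative
    "human_oversight": "study team reviews ranking before site selection",
    "risk_tier": "low",                      # per a risk-based framework
}

print(context_of_use["decision_influence"])
```

Declaring the decision influence and risk tier explicitly makes it easier to show why a supportive tool warrants lighter validation than one that drives clinical decisions.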
Performance must be continuously assessed through lifecycle monitoring. This might include detecting model drift over time, managing model updates, and confirming that ongoing performance expectations are met.
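As a sketch of what drift monitoring can look like in practice, one common heuristic is the population stability index (PSI), which compares the distribution of a model input or score at validation time against a recent window. The thresholds and data below are illustrative assumptions, not regulatory requirements:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline distribution against a recent one.
    A PSI above ~0.2 is a common (heuristic) trigger to investigate drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    # Clip so out-of-range recent values fall into the outer bins.
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    # Convert to proportions, with a small floor to avoid division by zero.
    e_pct = np.maximum(e_counts / e_counts.sum(), 1e-6)
    a_pct = np.maximum(a_counts / a_counts.sum(), 1e-6)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # model scores at validation time
shifted = rng.normal(0.5, 1.0, 5000)   # scores observed later in the trial

print(population_stability_index(baseline, baseline[:2500]))  # near zero
print(population_stability_index(baseline, shifted))          # elevated
```

In a lifecycle-monitoring plan, a check like this would run on a schedule, with pre-specified thresholds and a documented escalation path when drift is detected.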
Multidisciplinary teams become non-negotiable: clinical expertise must be augmented with data science, statistics, and software engineering to ensure models are reliable and ethically sound.
These principles reinforce the growing expectation that AI-enabled tools, such as patient selection algorithms, digital biomarkers, or trial simulation engines, must be “audit ready” from day one. Sponsors should also account for the traceability and explainability needed to support regulatory scrutiny.
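One minimal pattern for traceability is to record, for every AI-influenced decision, the model version, a fingerprint of the exact inputs, the model’s output, and the human reviewer’s action. The sketch below is a simplified illustration under assumed requirements; all names and fields are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    """One immutable, traceable record per AI-influenced decision."""
    model_id: str    # which model/version produced the output
    input_hash: str  # fingerprint of the exact inputs used
    output: str      # what the model recommended
    reviewer: str    # who reviewed it (human oversight)
    action: str      # accepted / overridden / escalated
    timestamp: str   # when the decision was recorded (UTC)

def record_decision(model_id, inputs, output, reviewer, action):
    # Hash a canonical serialization of the inputs so the record can be
    # checked against source data without storing patient-level detail.
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return AIDecisionRecord(
        model_id=model_id,
        input_hash=digest,
        output=output,
        reviewer=reviewer,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = record_decision(
    model_id="eligibility-screen-v2.1",
    inputs={"age": 54, "egfr": 61, "prior_lines": 2},
    output="eligible",
    reviewer="j.smith",
    action="accepted",
)
print(asdict(rec))
```

A record like this gives an inspector both halves of the story the principles ask for: what the algorithm produced, and how a named human acted on it.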
For regulatory teams, the EMA-FDA alignment is significant for understanding how AI will shape regulatory submissions and inspection readiness:
It offers a harmonized foundation for future AI guidance across both regions and signals intent for deeper EU-US collaboration. This should enable the design of global programs that use AI consistently across regions. Sponsors should nonetheless engage early with regulators to clarify any jurisdiction-specific constraints.
Teams need to be audit-ready with respect to AI technology. For example, if patient selection, endpoint adjudication, or signal detection relies on AI, inspectors will want to see both the algorithm's performance evidence and how sites and sponsor staff interacted with its outputs.
These principles build on the EMA’s 2024 AI Reflection Paper, which emphasized lifecycle governance and the need to ensure “human oversight” across AI-mediated decision-making.
They also align with the FDA’s ongoing work on Good Machine Learning Practice and digital health AI, which similarly stresses transparency, reliability, and iterative model evaluation.
The 10 principles are not binding requirements yet, but they form a common reference point that future EMA and FDA guidance will build upon. Recent efforts from both regulatory bodies signal a shift toward principles-based AI regulation that is flexible enough to support innovation but grounded in expectations for safety, ethics, and scientific quality.
As clearer guidance and more routine use of AI crystallize, we should expect AI strategies to be scrutinized with the same seriousness as quality-by-design or statistical analysis plans. The 10 principles therefore represent not just guidance for today, but a preview of future filing expectations.