Precision for Medicine

What the EMA–FDA AI Principles Really Mean for Clinical Development & Regulatory Affairs


Why the EMA–FDA AI Principles Signal a New Era in Drug Development

The EMA and FDA’s joint publication of 10 guiding principles for the use of AI across the medicines lifecycle marks an important moment for global drug development. The principles give industry clearer direction on how sponsors design trials, manage data, and interact with regulators in both jurisdictions. AI is now becoming part of core regulatory thinking, and sponsors will increasingly be expected to treat AI‑enabled methods like any other GxP‑relevant system across the full medicines lifecycle.

Implications for Clinical Development

The principles emphasize human-centric design, risk-based approaches, multidisciplinary expertise, and rigorous data governance. For clinical development teams, this translates into several operational realities:

  • AI must be validated like any other clinical tool, with a clear context of use, transparent model design, and robust documentation. That said, the “risk-based” approach means that low-risk supportive tools (for example, document search) will not be treated the same as AI that affects clinical decision-making, such as dose selection, eligibility, or safety signal detection.

  • Sponsors will need to clearly articulate what the AI does, where in the trial process it is used (e.g., site feasibility, patient selection, endpoint derivation, monitoring etc.), and how its output influences decisions.

  • Performance must be continuously assessed through lifecycle monitoring. This may include detecting model drift over time, managing model updates, and confirming that ongoing performance meets expectations.

  • Multidisciplinary teams become non-negotiable, combining clinical expertise, data science, statistics, and software engineering to ensure models are reliable and ethically sound.
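To make the lifecycle-monitoring point concrete, one widely used drift metric is the population stability index (PSI), which compares a model's score distribution at validation time against what is observed in production. The sketch below is illustrative only — the thresholds quoted are industry rules of thumb, not anything prescribed by the EMA–FDA principles:

```python
from collections import Counter
import math

def population_stability_index(expected, actual, bins=10):
    """Compare the score distribution seen at validation ("expected")
    with the distribution observed in production ("actual").
    Rule of thumb: PSI < 0.1 is stable; PSI > 0.25 warrants review."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def bucket_fractions(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        n = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A scheduled job that computes PSI on each new batch of model inputs or outputs, and escalates when it crosses an agreed threshold, is one simple way to evidence the “ongoing performance” expectation in documentation.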

These principles reinforce the growing expectation that AI-enabled tools, such as patient selection algorithms, digital biomarkers, or trial simulation engines, must be “audit-ready” from day one. Sponsors should also plan for the traceability and explainability needed to support regulatory scrutiny.

 

Implications for Regulatory Affairs

For regulatory teams, the EMA-FDA alignment is significant for understanding how AI will shape regulatory submissions and inspection readiness:

  • It offers a harmonized foundation for future AI guidance across both regions and signals intent for deeper EU-US collaboration. This should enable the design of global programs that use AI consistently across regions. Sponsors should nonetheless engage early with regulators to clarify any jurisdiction-specific constraints.

  • Teams need to be audit-ready with respect to AI technology. For example, if patient selection, endpoint adjudication, or signal detection relies on AI, inspectors will want to see both the algorithm's performance evidence and how sites and sponsor staff interacted with its outputs.

  • AI will become more central to many aspects of regulatory affairs, including dossier development. As more detailed guidance emerges, regulators are likely to expect a structured, cross-referenced description of AI use covering context of use, validation strategy, governance, and change management, much as they do today for CMC control strategies or statistical analysis plans in clinical development.

  • A practical next step for regulatory teams is to map where AI is already embedded in their development and pharmacovigilance systems and to assess which of these uses might become “regulator visible” in upcoming procedures.
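One lightweight way to start that mapping exercise is a structured AI-use inventory. The sketch below is purely illustrative — the record fields and the triage rule are hypothetical, not drawn from any EMA or FDA template:

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    """One entry in a hypothetical sponsor-side AI inventory.
    Field names are illustrative, not a regulatory standard."""
    system_name: str
    lifecycle_stage: str        # e.g. "site feasibility", "signal detection"
    context_of_use: str         # what the output is actually used for
    influences_decisions: bool  # does the output affect clinical/regulatory decisions?
    validation_reference: str   # pointer to validation evidence
    regulator_visible: bool = False

inventory = [
    AIUseRecord(
        system_name="duplicate-case detector",
        lifecycle_stage="pharmacovigilance",
        context_of_use="flags candidate duplicate ICSRs for human review",
        influences_decisions=False,
        validation_reference="VAL-2024-017",
    ),
    AIUseRecord(
        system_name="eligibility pre-screener",
        lifecycle_stage="patient selection",
        context_of_use="ranks candidates against inclusion criteria",
        influences_decisions=True,
        validation_reference="VAL-2025-003",
    ),
]

# Simple triage: decision-influencing uses are the likeliest to
# become "regulator visible" in future procedures.
for record in inventory:
    record.regulator_visible = record.influences_decisions
```

Even a spreadsheet with these columns serves the same purpose; the point is that each AI use carries a stated context of use, a pointer to validation evidence, and an explicit flag for whether it touches decision-making.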

How the EMA–FDA AI Principles Align With the Global AI Regulatory Landscape

These principles do not exist in isolation. They build upon:

  • The EMA’s 2024 AI Reflection Paper, which emphasized lifecycle governance and the need to ensure “human oversight” across AI-mediated decision-making.

  • The FDA’s ongoing work on Good Machine Learning Practice and digital health AI, which similarly stresses transparency, reliability, and iterative model evaluation.

  • Broader EU initiatives, including the Biotech Act proposal and provisions in new pharmaceutical legislation that explicitly accommodate innovative AI-driven methods in controlled regulatory environments.


Impact of the EMA–FDA AI Principles on Future Filing Requirements

The 10 principles are not binding requirements yet, but they form a common reference point that future EMA and FDA guidance will build upon. Recent efforts from both regulatory bodies signal a shift toward principles-based AI regulation that is flexible enough to support innovation but grounded in expectations for safety, ethics, and scientific quality.

As clearer guidance emerges and AI use becomes routine, we should expect AI strategies to be scrutinized with the same seriousness as quality-by-design or statistical analysis plans. The 10 principles therefore represent not just guidance for today, but a preview of future filing expectations.

For questions about preparing your program for a smooth regulatory journey, contact us.

 

Frequently Asked Questions

Are the EMA–FDA AI Principles legally binding?

No. The 10 principles are not yet binding requirements. They currently serve as a shared reference point that future EMA and FDA guidance is expected to build upon.

Why did the EMA and FDA publish these AI principles jointly?

The joint publication signals a coordinated effort to harmonize expectations for AI across global drug development, reducing divergence and helping sponsors design programs that work across both regions.

How will these principles impact clinical development teams?

Clinical teams will need to ensure AI tools are validated, transparent, risk‑appropriate, and continuously monitored. Sponsors must clearly define context of use and be “audit‑ready” from the start.

Do these principles apply to AI used in ongoing trials?

Yes. While not binding, the principles reflect the evolving expectations regulators will have during inspections and future submissions, including for ongoing and long‑running studies.

How do these principles relate to broader global AI regulations?

They complement ongoing EMA, FDA, EU, and international efforts to define safe, transparent, and lifecycle‑managed AI, positioning the principles within a growing, harmonized regulatory ecosystem.

What should sponsors do now to prepare for future regulator expectations?

Sponsors should map where AI is already used, implement governance and documentation frameworks, assess model risk, and ensure systems are traceable and explainable ahead of future regulatory expectations.