
EMA and FDA join forces: Aligned principles on AI use in drug development
The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly compiled ten principles for good artificial intelligence (AI) practice in drug development.
The principles apply across the drug development lifecycle, from early research and clinical trials through manufacturing and safety monitoring. They are therefore relevant for drug manufacturers as well as marketing authorisation applicants and holders. The goal is to encourage innovation in drug development while ensuring patient safety.
“Alignment with FDA expectations on AI regulation is strategically important for European biotech, as many companies operate internationally,” said Afrodita Bijelic, PhD, a regulatory affairs consultant, in an email to European Biotechnology Magazine. “[The harmonised standards] boost investor confidence by signalling that AI-driven programs meet robust, internationally recognised expectations.”
The need for regulation
As AI becomes increasingly integral to medicine development, clear guidelines and standards are needed to ensure its ethical, effective, and trustworthy use. Establishing shared guidelines across the Atlantic promotes regulatory alignment, supports international collaboration, and offers companies a clearer framework for the responsible use of AI in drug development.
In the EMA announcement on January 14, the European Commissioner for Health and Animal Welfare, Olivér Várhelyi, said that “the guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation in the field of novel medical technologies.” Meanwhile, the FDA announcement states that both its Center for Drug Evaluation and Research (CDER) and Center for Biologics Evaluation and Research (CBER) were involved in developing the principles in collaboration with the EMA.
“Consistent requirements for transparency, validation, and risk assessment allow companies to scale AI innovations more efficiently, avoiding delays from divergent regulations,” said Bijelic. “Overall, alignment helps Europe remain competitive and attractive for AI-driven drug development rather than falling behind other jurisdictions.”
Building on earlier guidance from EMA and FDA
Both regulatory agencies had previously shared recommendations on the use of AI in drug development. The EMA released an AI reflection paper in September 2024: a scientific guideline with a detailed step-by-step section structured along the lifecycle of medicinal products, from drug discovery and development to post-authorisation settings such as pharmacovigilance and effectiveness studies.
On the other side of the Atlantic, the FDA shared a list of recommendations about a year ago, addressed “to sponsors and other interested parties on the use of artificial intelligence to produce information or data intended to support regulatory decision making regarding safety, effectiveness, or quality for drugs.”
These earlier documents laid the groundwork for a more coordinated, transatlantic approach to regulating AI in drug development.
10 joint AI principles, summarised
The ten joint guiding principles for good AI practice in drug development emphasise human-centric, ethical, and risk-based approaches. They call for AI technologies to be designed with clear ethical values, a well-defined context of use, and adherence to legal, regulatory, and technical standards. Companies are encouraged to integrate multidisciplinary expertise, maintain robust data governance and documentation, and follow best practices in model design, development, and performance assessment to ensure transparency, reliability, and patient safety.
The principles also highlight the importance of lifecycle management, including ongoing monitoring, periodic re-evaluation, and risk-based quality oversight to address issues such as data drift. Finally, AI outputs should be accompanied by clear, accessible information for users and patients, explaining the technology’s purpose, performance, limitations, and interpretability. Together, these principles provide a framework for responsible, trustworthy, and effective AI use across all stages of medicine development.
While some details remain vague, the principles establish an early reference point for companies aiming to align with regulatory requirements. “I expect these principles to evolve gradually rather than being translated immediately into binding requirements,” said Bijelic.
Voluntary principles, strategic advantage
While the guiding principles are broad in scope, they are intentionally centred on core concepts rather than prescriptive instructions, allowing flexibility across different AI use cases and supporting innovation. When asked about implementation, Bijelic said: “Given that the principles are voluntary, adoption will likely vary across companies. Larger biotechs or startups heavily invested in AI are more likely to adopt them, as adherence demonstrates credibility with regulators, investors, and partners. Smaller companies or those with limited resources may find it more challenging to implement all ten principles comprehensively.”
As a regulatory expert, Bijelic believes that “while adoption is now voluntary, early alignment with the principles could become a strategic advantage, effectively creating a de facto standard even before any formal requirements are introduced.”
“In the longer term, as AI becomes more central to clinical decision-making and regulatory submissions, it is reasonable to expect that certain elements will evolve into more concrete, potentially binding requirements, particularly in areas such as model validation, risk assessment, and ongoing lifecycle monitoring,” said Bijelic.