
BioTalk

Powered by Bird & Bird


EMA’s finalised reflection paper on the use of AI

On 9 September 2024, the European Medicines Agency (“EMA”) issued its final reflection paper on the use of AI in the medicinal product lifecycle. As set out in our previous BioTalk article, the EMA’s draft reflection paper was published for public consultation in July 2023. Stakeholders were invited to submit their input until December 2023, and the consultation drew extensive feedback. The final reflection paper has now been published and shows that the EMA aims to ensure that the benefits of AI can be harnessed to advance pharmaceutical development.

 

Earlier position of the EMA

The position taken by the EMA on the use of AI remains unchanged. As a reminder, in the draft reflection paper the EMA highlighted the transformative potential of AI in enhancing the efficiency and effectiveness of medicinal product development and regulation. It stressed the importance of prioritising (i) a risk-based and (ii) a human-centred approach in all phases of AI development and deployment within the medicinal product lifecycle (including drug discovery, non-clinical development, clinical trials, precision medicine, product information, manufacturing, and post-authorisation activities). The EMA explained that existing regulatory principles, to a certain extent, apply directly to the use of AI in the life sciences sector, while also acknowledging novel AI-related risks that call for a risk-based approach. All of this remains the same in the final reflection paper, and the EMA continues to present its position on the basis of these principles.

Key amendments in EMA’s final reflection paper

The final version of the reflection paper on AI does not introduce significant shifts compared to the draft reflection paper, but it includes more precise definitions, an expanded glossary, and more detailed explanations of certain technical aspects. For example, the EMA now uses the definition of AI systems developed by the Organisation for Economic Co-operation and Development (OECD), which states that an AI system is “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”[1]

To address the risks posed by AI, the EMA takes a two-pronged approach focusing on patient safety as well as the impact on regulatory decisions. The term ‘high risk’ has been replaced with more specific categories: ‘high patient risk’ and ‘high regulatory impact’, likely to prevent confusion with the definition of high-risk AI under the EU AI Act. If an AI tool affects patient safety, it is classified as having a ‘high patient risk’. If it has a substantial impact on regulatory decision-making, it is considered to have a ‘high regulatory impact’.[2] This nuanced approach helps distinguish between the various types of risk associated with AI in the context of medicinal product regulation.

Another notable change is that the reflection paper no longer names only the marketing authorisation applicant and the marketing authorisation holder as parties responsible for the use of AI in the pharmaceutical lifecycle, but now also explicitly includes the clinical trial sponsor and the manufacturer. This underlines the different stages at which AI can be used during the development and marketing of a medicinal product, the various stakeholders involved, and their respective responsibilities.

The EMA explicitly states that the reflection paper must, of course, be interpreted in line with existing legal requirements and overarching EU principles and legislation on AI, data protection, and medicines regulation.[3] The adoption of the AI Act was taken into account during the finalisation of the reflection paper, ensuring that the EMA’s approach aligns with the broader regulatory framework governing AI across the European Union.

 

Outlook

Overall, the EMA’s finalised reflection paper provides a solid starting point for the body of guidance on the regulation of AI in the life sciences sector that will be developed in the coming years. The paper emphasises the need for a balanced, collaborative approach that prioritises patient safety and regulatory integrity while enabling the transformative potential of AI. It shows that the EMA is willing to embrace these new developments while seeking to accommodate them within the current legal framework. By clearly defining the roles and responsibilities of key stakeholders and establishing more precise risk categories, the EMA sets the stage for more detailed guidance on how AI can be integrated safely and effectively throughout every step of the lifecycle of medicinal products. Pro-innovation tendencies, such as endorsing the use of black-box models in certain instances (as set out in our previous BioTalk article), are still reflected in the final paper.

As set out in the multi-annual AI workplan of the HMA-EMA Big Data Steering Group, considerably more guidance is on its way, ensuring continued development in this regulatory area.[4]


 

[1] ‘Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle’, EMA (9 September 2024), EMA/CHMP/CVMP/83833/2023, pp. 3-4 and 13.

[2] ‘Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle’, EMA (9 September 2024), EMA/CHMP/CVMP/83833/2023, p. 4.

[3] ‘Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle’, EMA (9 September 2024), EMA/CHMP/CVMP/83833/2023, p. 3.

[4] ‘Artificial intelligence workplan to guide use of AI in medicines regulation’, European Medicines Agency (EMA).

Tags

regulatory law, pharmaceutical regulation, european medicines agency, ai in medicine, medicinal product lifecycle, patient safety, regulatory compliance, eu ai act, clinical trials, drug development, risk management, data protection, medicines regulation, healthcare, pharmaceuticals, regulatory, artificial intelligence, regulatory and administrative, life sciences, central and eastern europe, southeast europe and turkey, western europe, netherlands, amsterdam, biotalk, insights