
BioTalk

Powered by Bird & Bird


EMA’s envisaged risk-based and human-centric approach to regulate AI

Bird & Bird’s international life sciences and healthcare team is at the forefront of legal developments regarding AI in the life sciences sector. This article is part of our new series highlighting the different legal angles to consider when implementing AI in the life sciences and healthcare industry.

Artificial Intelligence (“AI”) has emerged as a transformative force in the healthcare industry, and its significance for the pharmaceutical sector is particularly profound. The future of AI in the pharmaceutical industry is highly promising, with the technology expected to be used ever more pervasively and to bring about significant advancements.

The European Medicines Agency (“EMA”) has in this context recently published a draft reflection paper describing the regulator’s present perspective on AI and Machine Learning (as the most prominent branch of AI today) in the medicinal product lifecycle. The EMA reiterates the importance of prioritizing a (i) risk-based and (ii) human-centric approach in all phases of AI development and deployment within the medicinal product lifecycle (including drug discovery, non-clinical development, clinical trials, precision medicine, product information, manufacturing, and post-authorisation activities).

1. EMA’s envisaged risk-based and human-centric approach to regulate AI

While the EMA emphasizes the importance of complying with existing legal requirements that can be applied directly to AI, it points out that the application of those requirements should be risk-based in the first place, and that marketing authorisation holders and applicants are expected to engage in continuous risk management. This means the risk mitigation measures required under the law should correspond not only to the type of AI technology deployed (in particular its level of autonomy: the more autonomously an AI system operates, the more safeguards it requires), but also to the specific use case in question. For example, the EMA categorizes AI-based precision medicine as a high-risk use case, whereas the use of AI models for data analysis in early stages of clinical development sits at the lower end of the risk scale. The risk level of a specific use case also depends on its regulatory impact, which in turn depends on both the stage within the medicinal product’s lifecycle and the strength of the evidence the utilized data sets will carry within the intended context.

According to the EMA, this risk-based approach is the bedrock on which pharmaceutical companies should decide how the human-centric approach is brought to life in a specific pharmaceutical context. In this respect, the EMA requires the responsible company to take active measures, especially during data collection and modelling, in order to qualify as sufficiently human-centric: the company must specifically consider the new risks posed by AI, namely those arising from training black-box algorithms/models on large amounts of data, which includes, inter alia, the avoidance of bias.

Finally, the draft reflection paper emphasizes as a ‘key principle’ that the responsibility lies with the marketing authorisation holder or applicant to ensure that the use of this technology is in line with all ethical, technical, scientific, and regulatory standards, and it warns that these standards may sometimes be stricter than what is considered ‘standard practice’ within the field of data science.

2. Quick overview of recommendations in EMA’s draft guidance

This risk-based approach, a common theme from an EU regulatory perspective (see also the AI Act proposal currently being negotiated at EU level), is consistently applied throughout EMA’s draft guidance, along with specific guidance on how to mitigate potential risks. The draft guidance starts with an overview of risks and mitigation strategies along the lifecycle of medicinal products, spanning from drug discovery and initial development to post-authorisation contexts such as pharmacovigilance and efficacy studies. It goes on to highlight certain technical aspects relating to how AI models and the underlying data sets should be handled, precisely to mitigate the novel AI-related risks the EMA identified. This includes, inter alia, guidance on the data sets used and their augmentation, the training, validation and testing of AI models, model development, performance assessment, explicability, and governance.

On the one hand, the EMA is keen to demonstrate the link to existing requirements and how they apply to AI (such as the applicable guidelines on statistical principles for clinical trials, the principles of ICH Q8, Q9 and Q10 for manufacturing, or the documentation of any data use in a fully traceable manner in line with GxP requirements).

On the other hand, the EMA seems to understand the importance of striking a balance between regulating actual risks and allowing innovation, so that certain requirements are not complied with merely as an end in themselves. For example, the EMA states that employing black-box models can be considered appropriate where developers demonstrate that transparent (i.e. interpretable) models exhibit inadequate performance or resilience. Although the principle of transparency is very important for AI in general (as pointed out, for example, by the High-Level Expert Group on AI established by the European Commission, which the EMA also references), according to the EMA it should not stand in the way of well-performing AI models, even if those models are not transparent.

3. Outlook

Overall, the EMA appears keen to demonstrate that the use of AI in the life sciences industry is already regulated, to a certain extent, by existing legal requirements. Still, in view of the rather conservative approach to AI taken by other regulators (e.g. in the world of data protection), it is refreshing to see that the EMA’s approach is at least showing some pro-innovation tendencies, e.g. by endorsing the use of black-box models in certain instances. Going forward, it would be desirable for these liberal tendencies to be reflected in the final reflection paper as well, and perhaps even to gain further acceptance, especially for low-risk applications.

The EMA invites all relevant stakeholders to provide feedback on the draft guidance until December 31, 2023.

This article is part of our AI & Life Sciences series; you can find our first and other articles here. In case of questions, do reach out to Nils Lölfing, Christian Lindenthal or Hester Borgers.

Tags

ai, artificial intelligence, ema, eu, healthcare, life sciences, medicine, regulatory, medtech