

Powered by Bird & Bird


AI in healthcare: the World Health Organisation’s regulatory considerations

In October 2023, the World Health Organisation (WHO) published its regulatory considerations on artificial intelligence (AI) for healthcare. These considerations aim to promote international collaboration on AI regulations and standards in healthcare, which is vital to ensure the safe and appropriate development and use of AI systems. While the WHO had already called for safe and ethical AI for health in May 2023, specifically concerning AI-based large language models (commonly labelled generative AI)[1], the present publication applies to the entire field of AI, generative[2] as well as predictive[3], and identifies six key areas for the regulation of AI in healthcare.

1. Documentation and transparency

The first key topic addressed by the WHO is the importance of thorough documentation and transparency in AI development.[4] This includes documenting the problem being addressed, specifying the operational context of the AI system, and detailing the selection and processing of datasets. Importantly, the WHO also highlights that AI systems should be handled under a risk-based approach, similar to the position taken by the EMA. Early engagement with regulators is also recommended.

2. Risk management and AI systems development lifecycle

Second, the WHO discusses the need for holistic risk management covering the entire healthcare lifecycle, from pre-market development to post-market deployment.[5] It highlights the importance of responsible development practices and quality management, and focusses on the monitoring of clinical endpoints such as trust, bias and robustness. A lifecycle approach to the development of AI systems is recommended, with the Total Product Lifecycle (TPLC) given as an example. The WHO stresses that cybersecurity threats and vulnerabilities should be addressed when implementing and applying this risk management strategy.

3. Intended use and analytical and clinical validation

In the third key topic, the WHO stresses the importance of determining the intended use of an AI system.[6] The intended use affects the system’s safety and performance and should therefore be used to assess the adequacy and sufficiency of the validation evidence. Moreover, the WHO states that the analytical and clinical validation of AI systems in healthcare should be assessed, and that such validation should address potential bias and discrimination. Transparency is also necessary in the documentation of datasets, particularly with regard to data characteristics and potential biases. The WHO discusses clinical validation methods based on risk levels, ranging from clinical trials to real-world implementation, and notes that updates require defined and transparent validation measures.

Reference is also made to the lack of dedicated regulatory bodies in low- and middle-income countries, indicating that the use of AI in healthcare there may require specific regulation, support from regulatory bodies, and adaptive studies from high-income countries.

4. Data quality

The WHO identifies high data quality in healthcare systems and access to diverse data as important for the use of AI systems.[7] It highlights the 10 Vs of data characteristics[8] and emphasises the need to address data challenges such as data consistency, usability and data tagging. These challenges must be addressed to ensure data quality, and developers should conduct pre-release testing to identify and mitigate data quality issues. Furthermore, the WHO encourages collaboration to create data ecosystems for sharing high-quality data sources.

5. Privacy and data protection

The WHO further addresses the challenges of protecting privacy in the context of the increasing demand for health-related data.[9] It emphasises the need for adequate security measures at all stages, from data collection to data sharing. Privacy laws and regulations vary from country to country, so it is important for developers to understand and comply with these different legal frameworks. Documentation and transparency are mentioned once more, as they play a role in building trust in privacy practices. The WHO proposes that privacy impact assessments be conducted to assess and mitigate privacy risks, and notes that both accessibility and transparency assist regulators and stakeholders. It also mentions AI regulatory sandboxes as flexible tools to foster innovation in controlled environments.

6. Engagement and collaboration

Lastly, the WHO marks engagement and collaboration between stakeholders as essential for the safety and quality of AI systems in healthcare.[10] For countries with limited experience of engagement and collaboration, the WHO recommends establishing flexible and modular regulatory models to address the uncertainties that come with innovation. It also highlights the importance of structured engagement, an accessible engagement mindset and co-regulation, in which developers are actively involved in the regulatory process.


The WHO’s considerations aim to inform the development of regulations and standards across the world for using AI safely and effectively in healthcare. The WHO emphasises the need for international cooperation and transparency in the dynamic, ever-evolving world of AI for three reasons: speeding up regulatory development, ensuring consistency across borders, and supporting countries with limited regulatory capacity. While the considerations are not intended as specific guidelines, policies, or regulatory frameworks, they serve as a starting point for discussions on regulatory aspects and as a tool for all stakeholders to deploy AI responsibly in healthcare. The WHO is dedicated to promoting innovation while upholding rigorous standards for quality and privacy protection. Relevant actors, including AI system developers, regulators, manufacturers of AI-embedded medical devices, health practitioners, and related professionals, can therefore benefit from implementing these considerations in their practices.


[2] E.g. drug discovery in the form of generating novel molecular structures for potential therapeutic applications.

[3] E.g. disease diagnosis in the form of analysing patient data, such as medical records and imaging data, to assist in accurate disease diagnosis.

[4] Regulatory considerations on artificial intelligence for health. Geneva: World Health Organisation; 2023, p. 9-10.

[5] Ibid., p. 12-14.

[6] Ibid., p. 20-25.

[7] Regulatory considerations on artificial intelligence for health. Geneva: World Health Organisation; 2023, p. 27.

[8] Volume, Veracity, Validity, Vocabulary, Velocity, Vagueness, Variability, Venue, Variety and Value.

[9] Ibid., p. 33-37.

[10] Ibid., p. 39-46.


ai, artificial intelligence, healthcare, medtech, technology, regulatory