PARIS - AI can help address some of health’s largest challenges, but it is crucial to make use of this powerful new tool while also addressing the risks.


The AI Age is here and here to stay


The OECD has been at the frontier in defining comprehensive policy principles for the trustworthy development and use of Artificial Intelligence (AI) with its 2019 AI Principles. These principles seek to mitigate some of AI’s most significant risks, including worker displacement, widening inequities, breaches of personal privacy and security, and irresponsible use of AI that is inappropriate for the context or may result in harm.

The last quarter of 2023 saw several important events and reports designed to drive the safe implementation of AI, including the White House Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence; the G7 Code of Conduct for AI; the AI Safety Summit at Bletchley Park; the provisional agreement on the European Union AI Act; and, in health, the WHO guidance for the regulation of artificial intelligence (US White House, 2023[1]; World Health Organization, 2023[2]; European Commission, 2023[3]; AI Safety Summit, 2023[4]; European Parliament, 2023[5]).


AI has significant potential to save lives, improve health professionals’ work, and make health systems more people-centred


AI can help address some of health’s largest challenges, including a depleted workforce, future threats to public health, ageing populations, and the increasing complexity of health needs due to multiple chronic conditions. It is crucial to make use of this powerful new tool while also mitigating its risks. Oversight and robust governance will be necessary to respond rapidly to emerging issues and opportunities.


Failure to turn principles into action poses significant risk


While there are significant risks from the use of AI in health, there are also significant risks from not taking action to operationalise agreed principles. These risks include exacerbating digital and health inequities, increasing privacy risk, slowing scientific advancement, and undermining public trust. At present, AI is being designed, developed, and implemented in health facilities around the world, leveraging local data sets for training and making the results available to local populations.

Bespoke AI applications without the ability or intention to scale (e.g., due to system incompatibility or lack of technical resources) risk a fragmented set of AI innovations that are built and maintained by wealthy health organisations and only available to wealthy segments of the public. Strong and co-ordinated policy, data, and technical foundations, both within and across borders, are necessary to unleash the broad and equitable human value that is possible from AI.

This brief outlines the key opportunities for AI to improve health outcomes, identifies critical risks to be addressed in its deployment in health, and proposes practical policy actions to operationalise responsible AI that respects human rights and improves health outcomes within and across borders. These actions will benefit from common principles and guardrails. Recent policy actions in the EU, US, and at a global level all confirm the relevance of the OECD AI Principles from 2019 (OECD, 2019[6]). Moving forward, the OECD AI Principles provide a framework for developing AI policies in health.


For the full paper, visit: https://www.oecd.org/health/AI-in-health-huge-potential-huge-risks.pdf
