PARIS - In just a few months, generative AI has moved from technical lab discussions to daily front-page news. It has the potential to revolutionise industries and society, and it is already used across a variety of sectors to create personalised content at scale, automate tasks, and improve productivity.

However, generative AI can also be misused to produce disinformation, deepfakes, and other manipulated content, with severe negative consequences. At scale, this can provoke serious social, political, and economic repercussions, such as distorting public discourse, creating and spreading conspiracy theories, influencing elections, distorting markets, and inciting violence. It is crucial to mitigate these risks and build resilience against generative AI's misuse. One of the most critical threats posed by generative AI is to the protection and promotion of information as a public good, as conceptualised in the Windhoek+30 Declaration, whose principles were adopted by UNESCO Member States in 2021.

Recognising the transformative and disruptive potential of generative AI, the G7 has encouraged international organisations, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence (GPAI), to promote international co-operation and explore relevant policy developments and practical projects, including on issues related to disinformation.

In this context, the OECD, IDB, GPAI, the IEEE Standards Association and UNESCO are joining forces with AI Commons and VDE to advance global collaboration, trust, and transparency with regard to generative AI. A problem of this magnitude requires collective action, and we are seeking partners to join us in forging a path for innovative solutions.


A global challenge to build trust


Generative AI’s potential impact and risks transcend national borders, demanding a global scope for new policy and technology solutions. The OECD and its initial design partners, AI Commons and VDE, are now working with GPAI, IDB, IEEE SA, and UNESCO to launch an open, competitive Global Challenge to Build Trust in the Age of Generative AI, conducted by a unique coalition of multilateral organisations, governments, companies, academic institutions, and civil society organisations.

This challenge will bring together technologists, policy makers, researchers, experts, and practitioners to propose and test innovative ideas that promote trust and counter the spread of disinformation exacerbated by generative AI. In pursuing these goals, the challenge will provide tangible evidence about what works and what does not, yielding proven approaches that can be adapted and scaled across the world.

The Global Challenge differs from similar efforts in its specific focus on disinformation and in its global reach, both in the applicability of the solutions it seeks and in who is encouraged to take part.
