EU Statement – UN ECOSOC: Special meeting on Artificial Intelligence
Mr. President, Excellencies, dear colleagues,
I have the honour to deliver this statement on behalf of the European Union and its Member States.
Let me first thank you, Mr. President, for convening us today to discuss artificial intelligence, and how it can help us achieve the SDGs.
The global community needs to address AI challenges and opportunities on different levels:
- gathering more scientific evidence;
- promoting more investment in talent and compute;
- fostering capacity building; and
- having the right regulatory and governance frameworks.
Over the past months, the EU and its Member States have been at the forefront of global efforts to make AI a global public good.
[Implementing the GDC - Global partnerships for AI]
Today, most developing countries face the urgent challenge of addressing "AI divides" while continuing efforts to close existing digital divides, a message underlined at the most recent Internet Governance Forum in Saudi Arabia. We must focus on inclusive digital connectivity, meaningful access and usage, while also tackling data access, AI education, local AI innovation ecosystems, data governance, and algorithmic literacy.
Strategies to mitigate AI risks and bridge the AI divides include:
- Establishing ethical guidelines and rights-respecting governance mechanisms for AI;
- Ensuring data privacy, algorithmic transparency, and accountability;
- Involving local communities, experts, and stakeholders, including those in vulnerable situations, in AI design and development to ensure that AI applications are tailored to specific needs and contexts, fostering ownership and trust;
- Investing in education and training to build AI literacy and technical skills, empowering local populations to actively participate in the AI economy and develop their own solutions; and
- Prioritizing linguistically and culturally diverse and representative datasets to reduce bias and ensure the relevance of AI models for diverse populations.
These efforts seek to ensure that AI development is grounded in the needs of local communities, preventing the exacerbation of existing inequalities and fostering an equitable digital future. They were at the heart of the AI Action Summit co-organized by France and India last February in Paris.
In order to address some of those challenges and leverage AI for sustainable development, in line with the Global Digital Compact, the EU has developed strong partnerships to promote a human-centric digital transformation grounded in fundamental values, whilst supporting multistakeholder engagement – namely with ITU, UNDP, OHCHR, ODET and UNESCO, as well as with civil society organisations.
To give you a few concrete examples: the EU has partnered with UNESCO to support the implementation of the UNESCO Recommendation on the Ethics of AI, notably by assisting UNESCO Member States in building strong national institutions for promoting AI ethics, and by developing a global capacity-building Toolkit for judicial operators on AI, with a focus on ethics, human rights and the rule of law.
With ITU, we support countries that want to broaden their strategies from a narrow focus on infrastructure to a holistic approach covering the enablers of meaningful connectivity: quality, affordability, skills and security.
With ITU and UNDP, we support capacity building for government officials and other stakeholders, to increase the number of policymakers who have the skills needed to develop and implement effective national digital transformation policies and programmes. This programme includes courses on emerging technologies, data governance and artificial intelligence, and is available to our partners around the world.
The EU also brings to these global efforts its own experience in developing AI regulation that fosters innovation.
[EU experience balancing regulation and innovation]
In the EU, with the AI Act, we chose a risk-based approach to ensure the protection of human rights and user safety, as well as trust in the development, deployment and uptake of AI systems and models. We recognize that providers and companies need trust to build their innovative products. This is why the AI Act introduces basic transparency obligations for all general-purpose AI models, and risk management obligations for powerful models. It also prohibits a small set of use cases considered threats to safety and human rights.
The implementation of the AI Act goes hand in hand with building the required capacity for developing AI, and boosting innovation, take-up and deployment by businesses and organisations, both small and large.
It is with this background and experience at home that we engage on the global stage, striving for international partnerships to support safe, secure and trustworthy development and deployment of AI systems worldwide, to help bridge the digital divides and achieve the SDGs.
[AI Panel and Dialogue]
Dear colleagues,
The ongoing process co-facilitated by Costa Rica and Spain to establish an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance provides a unique opportunity for the UN to step up its role in fostering AI partnerships.
Through this process, and in all future endeavours, the EU remains committed to ensuring that AI is used to the benefit of all, leaving no one behind.
Thank you.