Session
AI Governance We Want - Call to action: Liability, Interoperability, Sustainability & Labour
What should be the first priorities in mitigating the environmental impact of Gen-AI technologies? Do Gen-AI and traditional AI applications differ in their environmental impact? Should initiatives to regulate and govern AI globally work together to become more impactful? Who is responsible when an AI system causes damage - should liability be assigned to the developer, provider, user or apportioned across these players? Will AI disrupt or strengthen labour markets? Is using AI for tasks that were previously handled by humans beneficial for workers?
IGF’s Policy Network on Artificial Intelligence (PNAI) will organize a panel discussion featuring leading experts:
Jimena Viveros, Managing Director and CEO of IQuilibriumAI & Member of the United Nations Secretary General's High-Level Advisory Body on AI (Mexico)
Dr. Meena Lysko, Founder and Director, Move Beyond Consulting & Co-Director, Maritime EmpowerHer (South Africa)
Anita Gurumurthy, Executive Director, IT For Change (India)
Yves Iradukunda, Permanent Secretary, Ministry of ICT and Innovation of Rwanda (Rwanda)
Mutaz Ghuni, Assistant Deputy Minister for Digital Enablement, Ministry of Communications and Information Technology of Saudi Arabia (Saudi Arabia)
Brando Benifei, Member of the European Parliament, AI Act co-Rapporteur (Italy)
Moderator (in-person in Riyadh): Sorina Teleanu, Director of Knowledge, DiploFoundation (Switzerland)
Online Moderator: Mohd Asyraf Zulkifley, Associate Professor, Universiti Kebangsaan Malaysia (Malaysia)
PNAI Policy Brief 2024
The session will focus on themes addressed by PNAI in 2024:
- Interoperability in AI governance
- Environmental sustainability within the generative AI value chain
- Liability as a mechanism for supporting AI accountability
- Labour issues throughout AI’s lifecycle
The PNAI Policy Brief 2024 report outlines insights and policy recommendations on these topics. The session will wrap up PNAI’s 2024 activities by discussing key learnings and exploring pathways for putting its recommendations into action.
Flow of the session and expected outcomes
The session will start with a welcome, a short overview of PNAI’s work in 2024, and highlights of the PNAI Policy Brief 2024 report. Speakers will share their views on the report and its recommendations, building on their experience and expertise. Commenting on the four topics and themes, the speakers will offer suggestions for translating the recommendations into action and outline next steps.
The second part of the session is reserved for interaction between the audience and the panelists. The moderator will open the floor and take questions and comments from attendees in the Main Hall, while the online facilitator curates questions from the virtual chat for the panelists to address. The session will wrap up with a summary of key takeaways.
The expected outcomes of PNAI’s session are:
- Fostering an inclusive, multi-stakeholder dialogue on the themes of the PNAI report
- Generating lively interaction among both onsite and online participants
- Gathering views and opinions on the PNAI report recommendations, with a focus on actionable next steps
- Identifying ideas for PNAI’s potential future initiatives
Organiser and contacts
IGF Policy Network on AI
Maikki Sipinen, PNAI Consultant ([email protected])
Amrita Choudhury, PNAI Facilitator ([email protected])
Audace Niyonkuru, PNAI Facilitator ([email protected])
Background
The Policy Network on AI (PNAI) addresses policy matters related to AI and data governance. It is a global multistakeholder effort hosted by the United Nations Internet Governance Forum, providing a platform for experts and stakeholders to contribute their insights and recommendations on AI. For more information on PNAI, visit PNAI’s webpage.
Report
The Policy Network on Artificial Intelligence (PNAI) organized a panel discussion with leading experts to discuss the PNAI Policy Brief 2024 report, which delivered policy analysis and recommendations on four topics: interoperability in AI governance, environmental sustainability within the generative AI value chain, liability as a mechanism for supporting AI accountability, and labour issues throughout AI’s lifecycle.
PNAI’s report acknowledges that liability must be considered from the perspective of AI developers as well as end-users. Anita Gurumurthy (Executive Director, IT For Change, India) emphasized the need for comprehensive liability rules that ensure accountability at all levels of AI deployment. While trade secrets can hinder transparency in AI systems, carefully crafted regulations can potentially make some of this information public. Brando Benifei (Member of the European Parliament, AI Act co-Rapporteur, Italy) stressed the need for transparency and accountability in AI systems to avoid market imbalances and ensure fair competition. Jimena Viveros (Managing Director and CEO of IQuilibriumAI & Member of the United Nations Secretary General's High-Level Advisory Body on AI, Mexico) emphasized the importance of a global regime that ensures accountability throughout the AI lifecycle, encompassing creators, operators, and users, as well as the importance of criminal responsibility across the entire lifecycle of AI systems.
There is a need for a comprehensive framework that acknowledges the interconnectedness of labour, sustainability, interoperability, and liability. Yves Iradukunda (Permanent Secretary, Ministry of ICT and Innovation of Rwanda) highlighted the need for capacity building to ensure that all stakeholders understand their responsibilities. Meena Lysko (Founder and Director, Move Beyond Consulting & Co-Director, Maritime EmpowerHer, South Africa) emphasized the need for stricter regulations to mitigate environmental harm by devising comprehensive sustainability metrics for the impact of AI systems. Mutaz Ghuni (Assistant Deputy Minister for Digital Enablement, Ministry of Communications and Information Technology of Saudi Arabia) stressed the need for proactive governance and regulation to address the challenges posed by AI, emphasizing the importance of addressing liability throughout the AI value chain.
The PNAI Policy Brief 2024 focused on the gaps in current AI interoperability governance. The panel highlighted the importance of aligning multi-stakeholder processes with multilateral efforts to build coordination and collaboration. Gurumurthy distinguished between technical and legal interoperability, stressing that both are crucial for effective AI governance. Benifei underscored the importance of interoperable AI development and common standards to foster cooperation despite cultural differences, while highlighting the need for AI systems to respect different traditions and histories. Iradukunda, Lysko, and Ghuni concurred that global collaboration is integral to ensuring interoperable AI frameworks and advocated for a shared understanding that can be adapted to local contexts while working towards solutions at a supranational level. Viveros discussed the challenges of fragmented governance regimes and the need for a cohesive global governance framework.
The third topic of discussion focused on the environmental impact of AI, especially the expansion of generative AI platforms as studied in the PNAI Policy Brief 2024. Viveros and Lysko highlighted the environmental impact of AI, particularly in the Global South, where resource extraction and pollution are significant concerns. Ghuni discussed the role of resource use in evaluating the environmental impact of AI. Gurumurthy echoed the Policy Brief’s call for enforcing regulations, suggested incorporating broader environmental laws and responsibilities, especially for powerful countries, and emphasized the importance of considering the environmental impact of AI technologies. Benifei highlighted the need for budgetary adjustments and lifelong learning policies in sustainable AI development. Iradukunda stressed the need for AI to improve the lives of citizens and the importance of sustainability in AI governance to ensure technological advancements benefit society as a whole.
The fourth topic of discussion revolved around the impacts of AI on labour. Benifei compared the impact of AI on labour to historical technological advancements (such as the spread of electricity or the internet) and emphasized lifelong learning policies and budgetary adjustments to help workers adapt to AI. Benifei, Viveros, Lysko, Ghuni, and Iradukunda emphasized the importance of addressing labour rights and responsibilities to ensure fair and ethical AI deployment, especially regarding the development of AI in the Global South. Iradukunda also stressed the importance of building capacity and partnerships to ensure responsible AI use, equitable access to AI, and awareness of AI's impact, particularly in developing countries. Lysko advocated for addressing the ethical issues related to labour, such as child labour in mining, and for responsible sourcing and clean energy use.
In addition to these core areas, the PNAI Policy Brief 2024 touches upon broader questions of AI governance and its alignment with the Sustainable Development Goals (SDGs). The consensus is that AI should be developed for the benefit of humanity and that AI governance should facilitate the integration of different legislative approaches. The importance of binding treaties and of a proactive approach to transparency and governance was also stressed. The panel underscored the need for collaboration between tech companies, policymakers, and civil society to effectively address AI issues. The transboundary nature of AI necessitates international cooperation to address its complex challenges effectively, and a global framework to govern AI development and deployment is needed. The speakers emphasized the importance of aligning multi-stakeholder processes with multilateral efforts to build coordination and collaboration as a cornerstone for consensus on global AI governance. As highlighted in both the report and the meeting, fragmented or localized approaches are insufficient. The IGF, with its multistakeholder structures and mechanisms, should be fully utilized to support these discussions.
Overall, the PNAI Policy Brief 2024 presents a compelling case for a coordinated, global approach to AI governance that prioritizes ethical considerations, sustainability, and the well-being of all stakeholders. The report and PNAI’s work received positive feedback for their comprehensive analysis, focus on crucial but often neglected topics (such as the environmental impact of AI and labour issues in the AI context), inclusion of Global South perspectives, and actionable recommendations and steps toward better AI governance. Areas for potential future PNAI activities include developing detailed guidance on implementing the report’s recommendations, engaging with a broader range of stakeholders including marginalized communities, updating the policy brief regularly to reflect the latest developments, broadening the analysis of AI liability to include criminal responsibility, and enlarging the policy brief’s remit on liability rules to ensure operators of AI systems are also liable and bear associated costs.
Key takeaways:
- Transparency should be coupled with accountability in AI systems for both developers and end users to better enable effective AI governance.
- Developing consensus on AI governance frameworks is vital in addressing regulatory arbitrage. These frameworks should enhance interoperability through proactive initiatives designed to address the differential impact of AI.
- Designing metrics that address the labour and environmental impacts of AI is a key component of evaluating the sustainability of current and future uses of AI.
Calls to action:
- Increase collaboration in developing interoperable AI governance frameworks that address regulatory arbitrage by focusing on common standards and definitions, taking into account AI’s dual-use nature.
- AI systems should be developed in ways that align with societal values and contribute to social and economic development accessible to all segments of society.
- As the use of AI systems proliferates, efforts should be made to promote capacity building and upskilling that takes into account AI systems’ labour and environmental impacts.
Rapporteurs: Yik Chan Chin, Patrick Bell, Muriel Alapini, Umut Parajo