Session
Dynamic Coalition on Core Internet Values
Roundtable
Duration (minutes): 90
Format description: The onsite moderator will explain the discussion topic and introduce speakers and subject matter experts, who will then engage in a roundtable conversation. We are inviting a diverse group of speakers from different geographies and different stakeholder groups.
The session will be divided into two parts. After each part, the moderators (onsite and online) will open the discussion to the audience and facilitate the conversation. At the end, each speaker will be given an opportunity to summarise the major takeaways from the discussion.
Harnessing innovation and balancing risks in the digital space
This session will focus on what lessons can be learned from Core Internet Values, particularly Internet openness, to foster AI openness.
Panellists will reflect on the past decade of challenges to Core Internet Values, analysing their technical, governance, and regulatory dimensions, with a particular focus on Internet openness, to understand which elements can be applied to the emerging field of AI governance to promote open AI and open AI governance.
Stakeholders will analyse a variety of experiences, spanning both policy and implementation, to understand which successes and failures of Internet openness may inform AI openness.
Agenda
Introduction and Review of Core Internet Values (6 minutes)
- For newcomers who do not know what Core Internet Values are.
Presentation of Session Topics (6 minutes)
- Overview of the session’s focus on the parallels between Internet openness and AI openness.
Panellist Interventions (48 minutes)
- Panellists will reflect on the past decade of Internet openness challenges to analyse technical, governance, and regulatory dimensions.
- Discussion on how AI impacts core values such as best-effort, interoperability, openness, robustness, end-to-end, scalability, and permissionless innovation.
- Exploration of frameworks that can promote the empowerment of AI users, similar to net neutrality frameworks for Internet users.
Full Discussion with Participants (20 minutes)
- Stakeholders will analyse a variety of experiences, spanning both policy and implementation, to understand which successes and failures of Internet openness may inform AI openness.
- Discussion on the specific issues raised by generative AIs in terms of Internet openness and how these new services affect end-user access to content online.
Conclusions (10 minutes)
- Summarising key takeaways and actionable insights.
Topics and Questions for Discussion
1. Lessons from Internet Openness for AI Governance
- What frameworks, if any, can promote the empowerment of AI users?
- To what extent can Internet infrastructure be compared to AI infrastructure?
- Can open Internet principles be translated to foster open AI? If so, what principles should be promoted?
- What have been the limits of net neutrality and Internet openness enforcement over the past decade, and how might these limits help anticipate upcoming bottlenecks in the AI value chain?
2. Core Values and AI
- How does AI impinge on core values such as best-effort, interoperability, openness, robustness, end-to-end, scalability, and permissionless innovation?
- How much does the healthy development of AI and its applications depend on these values?
- Can AI follow an open-source model?
3. Regulating AI and Internet
- What are the parallels in “regulating AI” with “regulating the Internet”?
- Can lessons learned from the defence of the Internet Model, with multi-stakeholder governance of resources, be applied to AI?
4. Proposals for Governance
- Bring all stakeholders to the table, including the technical community.
- Focus on concrete problems with the right combination of stakeholders and their specific representatives.
- Design institutional frameworks with representation, participation, transparency, accountability, and opportunities for reversal of decisions and redress.
- Ensure the ability to evolve by design and consider human agency in shaping conduct and impact.
- Look for a federated approach instead of a single top-down rule setter.
Luca Belli, Center for Technology and Society at FGV, Brazil, GRULAC
Olivier Crépin-Leblond, DC Core Internet Values, UK, WEOG
Renata Mielli, Ministry of Science and Technology, Government, Brazil, GRULAC
Sandrine Elmi Hersi, Head of Open Internet unit at Arcep, France, WEOG
Sandra Mahannan, Data Scientist/Analyst, Uniccon Group of Companies, Business Community, AF
Vint Cerf, Internet Evangelist at Google, Business Community, WEOG
Yik Chan Chin, Associate Professor Beijing Normal University, Academic Community, APG (member of Policy Network on AI - PNAI)
Alejandro Pisanty, Autonomous University of Mexico, Academia, Mexico, GRULAC
Wanda Muñoz, Independent Consultant on Equality and Gender in AI, Member of the Feminist AI Research Network and UNESCO’s Women for Ethical AI Platform, Civil Society, GRULAC
Anita Gurumurthy, IT for Change, Civil Society, India, APG
Co-moderators Olivier Crépin-Leblond, ISOC UK England, Technical Community, WEOG and Luca Belli, Center for Technology and Society at FGV, Brazil, GRULAC
Alejandro Pisanty, UNAM, Academic Community, GRULAC.
Aneesa Nia Williams, Center for Technology and Society at FGV, Brazil, GRULAC
9. Industry, Innovation and Infrastructure
9.3
Targets: Internet and AI openness are crucial for achieving SDG 9.3’s goals of enhancing access to financial services and market integration for small enterprises in developing nations.
Discussion on how open Internet and AI can bridge the gap for small enterprises, empowering them to participate in the global economy and achieve sustainable development.
Report
Participants underscored that, although the Internet and Artificial Intelligence (AI) are distinct, several core principles from the Internet's governance and operational frameworks can be effectively transposed to the realm of AI.
Participants emphasised the imperative to embrace a multistakeholder approach, involving diverse voices from various sectors, to thoroughly study and establish frameworks for AI openness.
All stakeholders involved should collaborate on a joint study between the DC Core Internet Values (DC-CIV) and the DC Network Neutrality (DC-NN) to identify and articulate the elements that facilitate an open AI environment.
All stakeholders should initiate preparatory work, spearheaded by the session speakers, aimed at drafting a comprehensive preliminary report. This report will focus on identifying "desirable properties fostering an open AI ecosystem" and provide a foundation for future discussions and developments.
The first part of the session summarised what the Core Internet Values are and the lessons that can be learned from them, particularly Internet openness, to foster AI openness.
Luca Belli presented the relationship between Internet openness principles and AI governance, highlighting the interconnectedness of the two systems: AI relies on Internet infrastructure, and many Internet applications incorporate AI. Core principles like transparency and accountability are shared between both domains. In Internet governance, transparency ensures ISPs manage traffic without bias; in AI, transparency is crucial for understanding how AI systems make decisions, particularly in critical areas such as loan approvals and law enforcement. However, many AI systems, particularly large language models (LLMs), are complex and often obscure the decision-making process. Accountability, rooted in transparency, is vital for users, regulators, and society. Interoperability, a key principle of Internet openness, faces challenges in AI because development is concentrated among a few large corporations, undermining decentralization and non-discrimination. This concentration disproportionately affects the Global South, where users have limited access to AI systems yet contribute data to those corporations. Drawing from past Internet regulation debates, such as net neutrality, governance solutions for AI should focus on addressing the risks of power centralization, particularly for underserved regions, and ensure alignment with core Internet Values like transparency, fairness, and openness.
Following up, Olivier Crépin-Leblond gave a brief summary of the Core Internet Values and the technical principles behind them: the Internet is a global, open, and accessible resource, interoperable across all devices, decentralized except for the DNS, and user-centric, giving users the freedom to choose their applications. It remains robust and reliable despite growth and cyber threats. Safety has been added as a Core Value, ensuring cyber resilience through security measures. These Values nonetheless face challenges, such as network neutrality disputes. Despite this, the Internet's global and interoperable nature persists thanks to a balance among stakeholders. AI, seen as revolutionary, raises questions about regulation, and Crépin-Leblond suggested that lessons from the Internet's governance—promoting openness, stability, and innovation—could guide AI regulation, ensuring a balance between safety, stability, and innovation.
Renata Mielli provided a historical perspective, contextualizing Brazil’s viewpoint on the connection between Core Internet Values—such as openness, permissionless innovation, interoperability, and collaboration—and the development of AI systems. Drawing from Brazil’s Internet Decalogue, she suggested that principles like standardization, human rights, inclusion, and democratic governance could guide AI regulation. While principles-based governance offers useful guidelines, Mielli emphasized the need for contextual governance, with countries adapting regulations to local realities while maintaining global cooperation. She also called for South-South collaboration to strengthen research networks and open innovation, noting Brazil's leadership opportunities in the G20 and BRICS to promote global AI governance frameworks. Ultimately, AI development should follow established Internet governance practices, focusing on ethical standards, transparency, and inclusive collaboration for a fair technological future.
Anita Gurumurthy questioned the true meaning of openness, arguing that it does not always equate to inclusivity or accessibility in the current context. She criticized how the early vision of an open Internet has been undermined by centralized data control and the dominance of transnational corporations operating "walled gardens." In the context of AI, she highlighted that systems labeled as "open" often lack transparency, reusability, and fail to democratize access. She cited OpenAI's GPT-4 as an example of this, with minimal transparency about its architecture and data. Gurumurthy called for restoring transparency, access to training data, and reusability, advocating for a collective rights framework inspired by environmental law principles. She emphasized three key AI rights: substantive equality, the right to dignity and freedom from exploitation, and meaningful societal participation in AI development. Her conclusion focused on a societal rights approach to AI governance, prioritizing ethics, collective sovereignty, and inclusion to ensure AI benefits all nations equitably.
Sandrine Elmi Hersi discussed the potential and concerns of generative AI, highlighting its growing role across sectors and its legal, societal, and technical implications. While policymakers, especially in the EU, focus on security and data protection (e.g., the EU AI Act), the impact on Internet openness remains underexplored. Generative AI tools, like chatbots, are replacing traditional search engines, potentially reducing user control and amplifying issues such as bias and lack of transparency. Elmi Hersi expressed concern about generative AI's effect on content diversity, as it could harm traditional media and digital commons. She pointed to the EU's Digital Markets Act and RCEP's initiatives aimed at applying Internet openness principles to AI. Elmi Hersi called for collective responsibility to ensure AI governance prioritizes openness, innovation, sustainability, and safety, urging proactive steps to maintain open Internet Values while fostering ethical AI development.
Vint Cerf discussed the fundamental differences between AI and the Internet, emphasizing that AI lacks the standardization that contributed to the Internet's success. AI systems are proprietary, with their architecture and training data protected as intellectual property, meaning open access does not equate to transparency in their workings. He highlighted the potential for future standardization, particularly for semantic interaction between AI systems, but warned of the risks of ambiguity in AI communication. Cerf stressed that regulation should focus on AI applications and the safety risks they pose to users, with providers held accountable for any harms caused. He raised concerns about the loss of detail in large language models (LLMs), which compress information into statistical models, potentially compromising accuracy. Cerf emphasized the importance of tracking the provenance of training data to ensure the reliability and transparency of AI-generated content. In conclusion, he called for a focus on safety, accountability, and transparency in AI governance, cautioning against treating AI governance the same as Internet governance.
Yik Chan Chin discussed the differences between AI and the Internet, noting that whilst AI relies on Internet infrastructure, its components—algorithms, data, and computing power—are fundamentally distinct. AI does not require standardization like the Internet, but interoperability, defined as the ability for systems to communicate smoothly, remains crucial. She highlighted the need for a balance between permissive innovation and the precautionary principle, as AI's complexity and unpredictability necessitate careful regulation. Core Internet governance Values such as transparency, accountability, and safety are also applicable to AI. Chin addressed global challenges like risk categorization, liability, and training data standards, calling for compatibility mechanisms to reconcile regional regulatory differences. She concluded by stressing the importance of strengthening global institutions for AI governance, ensuring clarity and fairness in key areas like risk evaluation and accountability.
Alejandro Pisanty emphasized the importance of preserving Core Internet Values like openness and interoperability in AI governance, while recognizing the challenge of applying these principles to AI. He argued that AI should not be viewed solely through the lens of generative models like LLMs but should encompass a broader range of applications such as molecular modeling and weather forecasting. Pisanty stressed that regulatory efforts should focus on human intent and misuse of technology rather than overregulating AI systems, which could hinder innovation in critical sectors. He advocated for digital agency over digital sovereignty, encouraging collaboration and capacity-building over restrictive regulations. Drawing from Internet governance, he suggested a multi-stakeholder approach to AI governance, with sector-specific regulations to address risks. Pisanty concluded by calling for a balanced approach that ensures accountability, supports innovation, and avoids overregulation.
Sandra Mahannan provided insights from a business perspective on AI and robotics, highlighting key issues such as bias in AI. She explained that AI often reflects the biases of its creators, leading to problematic outputs, especially in sensitive contexts like religion. Mahannan emphasized that the quality of AI responses is heavily influenced by the data fed into the models, and poor-quality data can worsen biases and inaccuracies. She advocated for regulation focusing on the development process, including areas like data quality, privacy, security, sharing guidelines, and ensuring interoperability across AI systems. Her conclusion stressed the importance of openness and regulation to address these issues and ensure ethical, accurate AI outputs.
Wanda Muñoz emphasized the central role of human rights in AI governance, asserting that human rights are not abstract but require concrete actions, policies, and accountability mechanisms. She argued for reframing AI governance discussions, moving away from terms like "mitigating risks" to recognize that AI harms often lead to systematic human rights violations, disproportionately affecting marginalized groups. Muñoz called for active measures to combat discrimination, stressing that non-discrimination in AI requires more than inaction and must include efforts to redress systemic issues in data, policies, and organizations. She advocated for specific regulatory actions tailored to protect vulnerable groups and ensure equity, aligning with Anita Gurumurthy on adopting a societal rights approach. Muñoz disagreed with Alejandro Pisanty’s view on "not demonizing technology," asserting that highlighting AI's risks and harms is essential for accountability. In conclusion, she stressed the importance of regulation, human rights, and collective societal frameworks to ensure AI promotes justice and equality.
Follow-up Discussion
Desiree Miloshevic raised concerns about the impact of AI on Internet openness and the need for regulation at the AI layer.
Anita Gurumurthy and Vint Cerf responded, emphasizing the importance of transparency, accountability, and safety in AI governance. In the chat, Vint Cerf asked what a desirable AI ecosystem would look like.
Renata Mielli and Yik Chan Chin discussed the mutual impact of AI and the Internet, highlighting the need for cybersecurity and interoperability.
Sandra Mahannan reiterated the importance of regulating AI development to ensure quality and accountability.
Added post-session: Sandrine Elmi Hersi added that, from a regulatory standpoint, it is imperative to ensure that AI models deployed in telecommunications networks adhere to the EU Open Internet Regulation, particularly in regard to AI-based traffic management. This is essential to uphold net neutrality obligations. Additionally, the proliferation of generative AI as a gateway to online content poses a potential threat to the principle of Internet openness. Given the possibility of generative AI becoming a pivotal intermediary for accessing content, it is vital to evaluate its impact on user empowerment and Internet innovation. Effective governance and collaboration will be crucial to sustaining an Open Internet, with a focus on transparency, user choice, and the prevention of discrimination. Detailed information can be found in reports by Arcep and BEREC.