The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> AMRITA CHOUDHURY: Welcome to the main session Addressing Advanced Technologies, Including AI: Common Principles and Risk‑Based Approaches to AI Governance.
Adam and I are the main moderators; we're just supporting the moderation, and Thomas Schneider, online, is the actual moderator. Before we start, Adam will give you an overview of why we're discussing this.
>> ADAM PEAKE: Thank you, Amrita.
Adam Peake, a MAG member, this session on AI is a part of an overall theme in addressing advanced technologies and AI. The whole range of technologies would be too much, so we're focused on AI looking to outcomes we can contribute to other processes, particularly the Global Digital Compact, but really learning how can we as a multistakeholder community address this very important topic.
AI has the potential to reshape economies, societies and everything we're doing in our lives. Tremendous opportunities for productivity, economic growth, education, we hear of examples such as traffic applications and AI being used in medical services.
At the same time, we're also concerned about risks and that's just one of the topics we'll talk about. Unchecked AI could lead to biases, discrimination, infringements on security and we'll talk about how AI can be built with the best intentions ‑‑ even when we have systems built with the best intentions there is a potential of harm, we have to look at the opportunities and risks and look at the principles and so on we may be able to develop.
It is my pleasure to introduce this session and to coordinate in the room with Amrita.
I know many of you held sessions on AI this week. We hope that if you're in the room you will contribute, ask your questions and make comments towards this goal of producing something useful for the processes coming after us on this topic of AI.
With that I turn the Chair over to Thomas Schneider.
With that, over to you, please. I hope that the technology works and thank you very much, everybody, for joining.
>> THOMAS SCHNEIDER: Can you hear me?
>> ADAM PEAKE: Cannot hear you.
>> THOMAS SCHNEIDER: I can hear you ‑‑
>> AMRITA CHOUDHURY: The audio from the Zoom room, it is not working. We cannot hear the speaker.
>> THOMAS SCHNEIDER: People in the Zoom room seem to hear me.
>> AMRITA CHOUDHURY: Can you try now?
>> THOMAS SCHNEIDER: Yes. This is me. I hear an echo. Thank you, Adam, Amrita for the introduction.
I'll be very brief and basically start with giving the floor to the speakers. I think that the key point as Adam has said, we have a chance here, this year's IGF to feed multistakeholder voices into a UN process which is the work on the Global Digital Compact and I think we should try and work together all to produce input and produce messages that will then in the end hopefully be heard by the UN process.
With this, let me introduce the first speaker which is Karine Perset. She worked for quite some time now on AI at the OECD.
Karine Perset, please, the floor is yours.
>> AMRITA CHOUDHURY: I cannot see Karine Perset in the room. Perhaps you would like to go to the next speaker first.
>> THOMAS SCHNEIDER: Then we will move on and hope that she joins us soon. The next speaker is Professor Tshilidzi Marwala, the incoming Rector of the United Nations University, starting in March of next year.
>> TSHILIDZI MARWALA: Thank you very much for that introduction. Good morning from my side, it is still the morning, to all of you.
In the deployment of hybrid systems, which focus on the convergence between human beings and machines, we need to be broader in our understanding of the sources of human intelligence, if you will.
The pandemic has certainly confirmed that we are in the middle of a deep, profound technological shift, and it has highlighted the global digital divide that keeps us apart.
When we consider the African context, however, there is a distinct gap that needs to be addressed. It is imperative that we consider multiple perspectives to build more inclusive AI systems. Lagging development, infrastructure deficits and vast scientific and technological gaps on the continent are partly responsible for this lag, and for the deployment of systems in Africa that do not necessarily speak to our own context. This is exacerbated by the fact that in many places where these technologies are being developed, Africans are not even at the table. African countries with large poor populations face a frightening future of dependency, and we should take this seriously.
As it has been put, the countries not in good shape are those that have perhaps a large population but no AI, no technology, no Google, no Facebook, no Amazon. Thus, as we consider AI models, we must consider multiple sources of knowledge and ethics, systems that inform the human component. I would argue that it is imperative to determine how we identify these needs in a manner that speaks to our different contexts.
In Africa, and indeed in South Africa, there is an argument to be made for operating according to our values. Values based on the principle of Ubuntu, defined as a quality that includes the essential human virtues of compassion and humanity, present an alternative. This is not to say that this calls for a distinct system, but rather one that understands that although we are diverse, we are linked to each other. While I use Africa as an example here, this is a broader consideration and could also be viewed in terms of the Global North-South divide, for example.
We need to ensure that the data used to train AI systems is as inclusive as possible. We have seen examples where AI systems are not able to recognize African faces as well, and the reason is simply that there are far fewer databases on African populations than on the rest of the world.
We also need to develop AI that is able to handle things like missing data, which is quite common on the African continent; most of the models developed now assume that the data is almost complete. When you are working in Africa, whether with an AI system used in health or elsewhere, we need to understand that there is a gap when it comes to missing data.
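As a minimal illustration of the missing-data problem raised here (a sketch under stated assumptions, not any specific deployed system), mean imputation is the simplest strategy: fill each gap using the values that were actually observed in that column, so a model that assumes complete data can still be trained.

```python
def impute_means(rows):
    """Fill missing entries (None) in each column with that column's mean
    over the observed values, so downstream models that assume complete
    data can still be trained on partially observed records."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        # If a column is entirely missing, fall back to 0.0.
        means.append(sum(observed) / len(observed) if observed else 0.0)
    return [[r[j] if r[j] is not None else means[j] for j in range(n_cols)]
            for r in rows]
```

Real systems use richer approaches (multiple imputation, models that treat missingness itself as a signal), but the underlying point stands: models built on the assumption of near-complete data need an explicit strategy for the gaps common in African datasets.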
We also need to educate people; ethics must be embedded in AI education. I think that is very, very important. To be able to do this, our educational system must be multidisciplinary. It has to equip everyone, almost as a right, to understand these technological developments. Here in Johannesburg there is an outgoing Vice Chancellor who has introduced a compulsory AI literacy course so that all of our 50,000 students at least understand what this technology is all about.
We need to understand the impact of AI on many aspects of our lives, whether in the future of work or in how it handles human relations, and embed that into our legislative frameworks so that our people can be protected. This, of course, requires our lawmakers to become more technically literate, something this University is working on by training some lawmakers in Southern Africa.
Thank you very much.
>> THOMAS SCHNEIDER: Thank you very much, Tshilidzi Marwala, for this very relevant intervention: we should not forget those that are not yet at the forefront of all this technology, those that are not represented, or at least not sufficiently represented, in the data, which is one of the reasons for the bias that we may get.
Let me now see whether Karine Perset from OECD is ready?
>> KARINE PERSET: Thank you.
I was actually here, but my camera was disabled.
So I'm delighted to participate in this session today. I'll jump right in with an overview.
So back in May 2019, the OECD, with partner countries and the G20, committed to 10 principles, the OECD AI Principles, and since then we have worked to help countries operationalize them. The first five principles are values that AI systems should reflect and rights that should be protected: things like inclusiveness, human rights, democracy, fairness, transparency, explainability and, very importantly, the accountability of AI actors for these. These principles came early in terms of intergovernmental agreements, but they are very much aligned with the many agreements that followed, which are quite consistent overall, I would say.
The principles also lay out five recommendations to governments to build AI ecosystems that can benefit societies, with policies, for example, to build human capacity for AI, among others. Since early 2019 we have been trying to help operationalize these principles through a few initiatives, and I will mention some of them.
One is the OECD.AI Policy Observatory, an online platform with resources like a database of national AI policies that currently covers 69 countries, and we're looking to expand coverage to other interested countries.
AI policies, as Tshilidzi Marwala pointed out, will necessarily depend on each country's starting point. Policies for AI literacy and upskilling in Egypt, which I think Sally on the panel also knows very well, will differ from those in Finland, Kenya or Colombia, but valuable lessons can still often be learned from other countries' experience, and this goes to Professor Marwala's points, I think.
OECD.AI also has a section on worldwide AI trends, to inform policies with data on what is happening. What is happening in the AI ecosystem is very interesting, and it is not always what you would expect. For example, we often talk about the United States and China, but India has become a major AI player in terms of AI skills and software development, and actually on other characteristics as well.
We also conduct analytical work on many topics through a global multistakeholder network of experts and a Working Party on AI Governance. When I say global, I mean it is very inclusive: it includes, of course, OECD member countries, and it includes many other countries around the world. That network conducts many pieces of analysis. To give a few examples, we're focused right now on AI compute capacity and the compute divide. We often focus on data and on models, but the actual physical infrastructure, AI compute, which is specific to AI, is an important enabler as well. Another topic we are working on is how to develop AI language resources for smaller and minority languages. I would also like to highlight the active cooperation we have with eight partner IGOs, including the World Bank, UNESCO, the Inter-American Development Bank, and, represented here today, the European Commission and the Council of Europe. The resources we create together are at globalpolicy.ai.
That's the introduction.
Back to you.
>> THOMAS SCHNEIDER: Thank you very much.
The next speaker, who has been mentioned already, is Sally Radwan Golestan. She is coordinator of the African Union Working Group on AI, has participated in UNESCO's expert group working on the Recommendation on the Ethics of AI, and has been an advisor to the Egyptian government on its AI strategies.
Sally, over to you.
Thank you.
>> SALLY RADWAN GOLESTAN: Thank you. Good morning, good afternoon, everyone.
I will be concise and focus on one specific point which is the multilateral approaches to reaching a binding agreement on AI, a global binding agreement I should say.
Having been through a number of multilateral processes myself, I tend to be deeply skeptical of the possibility of reaching one binding global agreement, certainly not in one step. Not only do I think it is unrealistic, I don't think it is a particularly good idea, and I will tell you why.
First of all, multilateral negotiations will inevitably be very long and drawn out, to the point that they may become irrelevant or outdated before they even produce anything. The second reason is that they will always produce a watered-down version of basically what no one objects to, as opposed to what we want to achieve as a group.
Thirdly, they have always been steered and dominated by a small number of countries, no matter how hard we try to make them inclusive.
Unfortunately, these are usually, though not always, Western countries, which again creates the feeling in the Global South that we are being disadvantaged and left out of the conversation, and that our priorities and values are not being taken into account.
Since we're trying to have an outcome-driven session here, what I can see working is a semi-bottom-up approach. We now have several agreements around principles and values, such as UNESCO's, the OECD's and many others. In terms of binding agreements, I think they should first be formed at the regional, or even subregional, level. This would ensure that all parts of the world get a say in a binding agreement, and at the same time it is much more realistic to achieve, because it is easier to get geographically close and like-minded countries to agree on something.
There should be a forum, so to speak, and it can't be driven by one region or one organization; it needs to be driven by the UN, which is why this is an important and good place to discuss this.
Then we start building bridges between those regional agreements. We don't have to come up with one single binding agreement for everyone, but we can at least define the interfaces for cross-border cooperation, product and knowledge transfer, and so on. Just as we build interfaces between systems that use different data types and formats, we can build interfaces and bridge gaps between regions.
Part of these agreements should be the definition of standards of compliance, along the lines of the certifications for medical devices, which also include classification. Karine Perset mentioned the work that has been done: the OECD has started working on a framework for classifying AI systems, which I think can be a very good starting point for that.
This classification can then lead to different requirements for things like transparency and observability, and could provide an understanding of what each product is capable of, or what it has undergone to reach a particular level of certification in a particular region.
Then ideally, this will start to merge over time so that eventually we end up with one, two, maximum three global standards that are understandable to everyone but take the needs of all countries into account.
Thank you very much.
>> THOMAS SCHNEIDER: Thank you, Sally. Thank you very much for this very interesting proposal of different steps that could lead us to developing something in common, based on the work done by different institutions. I would like to follow up on that directly with some short information about the Council of Europe and my role as Chair of one of the key Committees currently working in this regard in one region.
For those not familiar with the Council of Europe, it is important to know that it is not the European Union. The Council of Europe is an intergovernmental organization with 46 Member States whose goal is to promote human rights, democracy and the rule of law, and it has established more than 200 conventions and hundreds of soft-law recommendations.
In the past few years, the Council of Europe has not only developed soft-law guidance on AI in different sectors, it has also spent two to three years looking into AI in general and thinking about what an appropriate legal framework for the development, design and use of AI might be.
We know, of course, that existing legal frameworks on human rights, democracy, the rule of law and so on already apply to AI, but there was a feeling that there were gaps in interpreting these frameworks and applying them to AI, and that an additional set of instruments may be needed.
The conclusion was that the Committee of Ministers decided there should be a binding transversal instrument establishing a set of fundamental principles for the development, design and use of AI, based on the Council of Europe's standards of human rights, democracy and the rule of law. This would be complemented by binding or non-binding instruments in particular sectors, for instance the judicial system, or elsewhere where it is considered necessary.
So the Committee that I'm chairing has been mandated to work on a binding Convention containing principles on AI, based on the Council of Europe's standards. At the same time, it should be conducive to innovation: a tool that protects people but also allows them and the industry to flourish and innovate.
It is important to know that this is not just an instrument for the European Member States of this institution; any country interested in upholding the same values can participate in its elaboration and can then sign and ratify it to become a party to the Convention. The second important point is that, given that all of the regulatory frameworks and principles being discussed are built on a risk-based approach, on the idea of classifying systems according to risks and impact, we are at the same time working on what we call the human rights, democracy and rule of law impact assessment. This will not be a specific instrument, but it will lay out the basic elements that such instruments could have, providing a shared umbrella while allowing each country to apply and develop its specific model in its own way.
This is the contribution of the Council of Europe, and we hope it will lead to something like what Sally proposed.
So with this, let me hand over to the next speaker in our panel, which is Jamila Venturini, Executive Director at Derechos Digitales.
>> JAMILA VENTURINI: Good morning. Thank you for the invitation.
In these initial remarks I would like to highlight three key elements to be taken into account as we discuss common principles for AI. These are inspired by our research on the use of such systems in the public sector in Latin America, which you can find at AI.DerechosDigitales.org.
The first is that the concept of AI may not be the best one for advancing such common principles, as we have different understandings and implementations in different regions, and immense disparity in how such technologies are developed and deployed. Several of the implementations of AI in the public sector in Latin America, for instance, are not restricted to fully automated decision-making systems or machine-learning and deep-learning systems, but they still imply several risks. If we adopt a strict understanding of AI, we may leave them uncovered and thus leave people in our countries unprotected.
The second point, related to the first, is that although standards and principles are multiplying at the global level, research has shown an underrepresentation of Latin American and African perspectives in the debates around AI and ethics, which are inspiring a large part of national policies on the matter, including in those countries. This means that policymaking may end up ignoring local realities and priorities. I believe the UNESCO Recommendation is the greatest exception to this, and it would be key for all stakeholders to get involved in the process of its operationalization and implementation, since it brings relevant instruments to contextualize regulation in different regions and countries.
The third point, again on national AI policies in the Latin American context, is that while most initiatives dialogue with such global standards and principles, they also respond to market pressures from a private sector that wishes to guarantee legal environments that foster its commercial interests. Let's recall that Latin America is a region that mostly consumes technology rather than producing it. Policymaking is not infrequently attached to the interests and influence of the global tech industry. The case of the Brazilian AI bill illustrates how a corporate-driven approach can contaminate policy discussions, as it tries to give blanket authorization for any form of AI, without positive obligations such as carrying out impact assessments, and without guarantees that such systems won't impact the exercise of fundamental rights.
Because of all that, to conclude: yes, we need global standards. We need them to be truly global, with meaningful participation of Global South countries in the definition of concepts and priorities, with proactive measures to seek Civil Society input at the national and global levels of policymaking and standard-setting, with a particular focus again on the Global South, and with accountability for how such participation is considered in these processes.
We also need concrete mechanisms to mitigate the local and global injustices and inequalities that result from the concentration of data assets, and therefore of knowledge production, in a few actors and countries, regardless of the places from which data is being extracted, what some have called data colonialism. Here I would refer you to the excellent policy brief developed by the Data Justice project, which offers some initial thoughts on how to regulate these processes.
I will stop here so as not to run over time. I'm glad to continue discussing this with you.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much, and thank you also for making the connection to data, because, of course, AI systems cannot do anything but use the data they're fed. So this is again a question of data governance, which has a big impact on what happens and on what the effects of AI are.
With this, let me move to the next speaker, Yordanka Ivanova from the European Commission.
>> YORDANKA IVANOVA: Thank you very much. Good morning, everyone. I am very happy to be with so many distinguished speakers and stakeholders.
Indeed, as the representative of the European Commission and the European Union, I would like to stress our great support for this multistakeholder event and for a discussion focusing on the need for a human-centric approach to AI regulation.
In Europe we have put a big focus on artificial intelligence technology. Already in 2019 we adopted key principles, and afterwards we moved forward: last year we proposed a package that aims to create an ecosystem of trust, with a proposal for a legislative framework for artificial intelligence. It aims to address the risks that certain uses of the technology could pose to fundamental rights, health and safety, while enabling beneficial uses and creating legal certainty and opportunities for uptake and innovation within the European single market.
We also follow the same risk-based approach, targeting our rules at applications that could pose high risks and proposing specific binding requirements that are easy to operationalize, with procedures for operators to ensure key principles also mentioned today, like data quality, documentation to ensure accountability, and accuracy of the system, so that we are sure the systems perform well for the European population and other people placed in Europe, and that they are safe to use. We will rely a lot on standards in the future, and we're keen to cooperate with international partners to build a common understanding of how to actually operationalize and implement the principles, taking into account the need for inclusiveness and different regional perspectives on those issues.
Besides this regulatory framework, we have proposed a Coordinated Plan on Artificial Intelligence, as we think it is not only about principles and rules: we also need very specific actions to support and create the ecosystem that enables the uptake of these technologies. Here we propose many measures on data, computing facilities and skills, all issues that are very important in general and that go well beyond the principles and regulatory rules.
Maybe I will stop here so as not to take more time, but I just want to conclude by saying that, as the European Union, we are very keen on cooperating at the international level with key partners. We do that bilaterally with many countries, and also in multilateral fora, including this one, and with other international organizations, like the Council of Europe, the OECD and the UN, that share the same approach of trustworthy, human-centric AI.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much, Yordanka.
I have been reminded by the interpreters that we should all try, and that definitely includes myself, to speak a little slower. This is the challenge: we have limited speaking time, so of course we all tend to speak faster to get more of our messages across.
Having said this, let me introduce the last panelist in this session, and then we'll try to open up the discussion to involve as many of you as we can. This is Sunil Abraham; he is Public Policy Director for Data Economy and Emerging Tech at Meta, India and Asia-Pacific. Thank you.
>> SUNIL ABRAHAM: Thank you so much, Thomas.
I would like to begin by adding a little more color to Karine's characterization of India as a significant player as far as AI is concerned. Part of that comes from the very top: our Honorable Prime Minister last year set a goal to train 1 million Indians in AI within a single year. There is also movement on regulation and policymaking. Our premier policymaking think tank has published two approach documents, the first in February 2021 and the second in August 2021. The first focused on principles for responsible AI, and the second focused on operationalizing those principles.
As Yordanka and you, Thomas, have mentioned, we have also embraced a risk-based approach for regulating harms that emerge from AI. What is important, at least for me, to note is that they have clearly said that where the risks of harm are low, policymakers should either opt for regulatory forbearance or give market players the opportunity to lead with self-regulation.
I will quickly cover some of the work happening at Meta to address some of these harms and to build on and operationalize the principles.
The first is an internal tool used to detect both model bias and training-set bias; the kind of bias in AI systems that Professor Marwala described could be preempted by using this tool. On the point that Yordanka made about building public digital goods, there are two open-source projects: one a free software library for training models with differential privacy, and Captum, a free software library for model interpretability.
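To make the differential-privacy idea concrete (a conceptual sketch only, with assumed parameter names; this is not Meta's code), such libraries typically implement the core DP-SGD step: clip each example's gradient so no single person's data can dominate, then add noise scaled to that clipping bound before averaging.

```python
import math
import random

def clip_gradient(grad, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm,
    bounding how much any one example can influence the update."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_average(per_example_grads, max_norm=1.0, noise_multiplier=1.1, rng=random):
    """Core DP-SGD step: clip each example's gradient, sum, add Gaussian
    noise scaled to max_norm, and average. The noise masks any single
    example's contribution to the final update."""
    dim = len(per_example_grads[0])
    clipped = [clip_gradient(g, max_norm) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * max_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    n = len(per_example_grads)
    return [v / n for v in noisy]
```

Production libraries add much more, such as privacy accounting across many training steps, but clipping plus calibrated noise is the mechanism that lets a model learn from data without memorizing individuals.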
As far as model explainability is concerned, we have been adopting an approach we call model cards. You can already see model cards on the news feeds across Facebook and Instagram, and when you're served an ad, you will see a little sign that says "Why am I seeing this?". That's the approach we're using for model explainability at Meta.
To close, I would like to give a case study of the increasing use of open source, open science and open data at Meta when it comes to AI development. The case study is an open-source chatbot: as you know, most bots have a tendency, or at least the possibility, of producing toxic and offensive speech. Part of the open-science paradigm is to provide public demos of the models so that they can be tested by the global community. Perhaps this is in line with our philosophy when it comes to building the Metaverse: at Meta, we don't believe anyone builds the Metaverse alone, we believe it is a collective global project, and similarly we don't think the challenges that AI poses will be solved in a solitary fashion, but rather through a collective approach. Sharing IP in this open model allows for global technology transfer, and this again resonates with our Honorable Prime Minister's vision of giving the greatest number of people skills and visibility into these core technologies.
Thank you again, Thomas.
>> THOMAS SCHNEIDER: Thank you very much, Sunil.
If I'm trying to sum up what we have heard so far, I think we have a large number of principles, of guidance out there, mainly soft law. The E.U. is working on a regulatory tool for market regulation, Council of Europe is working on a binding instrument laying out principles. The problem is on one hand ‑‑ the challenge, it is actually how to implement these principle, how to make sure that these principles are also interoperable across regions, across national borders and what we have heard from several stakeholder, from developing countries that most of these, this work, it is driven by let's say industrialized countries, stakeholders, be it industry, be it governments and the views and the needs of the situation and also the visions of let's say the Global South, the developing countries, they're not necessarily heard enough.
At least this is my personal sense, if I may sum up what we have heard so far.
We do have some time now, so I would like to open the floor to all of you, in particular those who have participated in the various sessions that dealt with AI and new technologies, to contribute with a view to what this IGF could formulate as messages to feed into the Global Digital Compact. We depart from where we stand now: a number of principles and guidance documents, with one global instrument from UNESCO, the first to set standards on ethics at a global level. This is an important step, but it may not be sufficient to arrive at a global framework that reflects the needs of all cultures while allowing a concrete way of dealing with AI for industry and regulatory authorities. And we have heard a proposal from Sally about how to move towards a shared way of dealing with AI, allowing AI to produce innovation while trying to make sure that AI is used for the benefit of all people and that harmful impact is minimized.
What are the things we need to do now? What are the messages that you want to give to the world at this moment in late 2022? What is the situation, and what do we need to do as next steps to turn these principles into something usable for everybody? There is also the important question of how we organize the classification of such volatile things as AI systems in a workable way, legally and technically, so that it can differentiate between applications that have no risk at all, very little risk, very high risk, or maybe even too high a risk. I think this is something we are all struggling with: how to develop such a methodology and such systems. Let me open the floor to all of you to contribute to this discussion, with the aim of hopefully being able to summarize, at the end, some key messages that we want to feed into the Global Digital Compact.
Now I need your help.
>> AMRITA CHOUDHURY: We have two questions on the floor. Would you like to take them?
>> THOMAS SCHNEIDER: Please. Please. Yes.
>> AMRITA CHOUDHURY: So we have three actually.
Please.
>> AUDIENCE: Thank you very much. I'm from Brazil. I would like to build on the Global South perspective.
Beginning with AI, I would just point to the fact that for developing countries, the term "bias" is in many cases too light. The challenge for us is actually to address structural inequalities, both within and among countries.
The need is for us to shift the focus of AI system development from a profit‑oriented business model to one that is deeply embedded in the Sustainable Development Goals and in the need to leave no one behind. In that respect, it is fundamental that we first work on enhanced international cooperation, both in terms of digital governance and data governance, with the ultimate goal of addressing and reversing digital divides.
In that respect, I would very much like to push back on some skepticism about the possibility of achieving multilateral, multistakeholder global agreements. The issue before us is existential for many people in the world: not only nations, but also individuals and segments of societies.
So we have no option other than to be very ambitious, not only in terms of addressing AI, but also in terms of digital and data governance that would include other disruptive technologies, such as quantum computing.
The message that we would very strongly like to convey here is for us to start thinking ambitiously. We have heard a number of times the analogy between the challenge before us at the IGF and the challenge of Climate Change. With Climate Change, it was very important for the international community to agree on a multilateral regime. We need that on digital and data governance as well.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much for this very interesting comparison to the discussion on Climate Change, which is also a global challenge with elements of global opportunity, and again for the link to data governance, to a globally shared vision on how to deal with data and how to use AI with a view to the SDGs, not only to make profit. Thank you very much; very interesting suggestions.
>> AMRITA CHOUDHURY: We have two more.
>> AUDIENCE: I'm from Tunisia.
I want to ask a question about how to deal with security issues if we consider the AI ecosystem as a route to the Metaverse.
>> AMRITA CHOUDHURY: Let's take the next few questions. I see hands frantically waving.
>> AUDIENCE: Thank you very much. I'm coming from India. I think that the fundamentals of AI are, within the Global South, internally very disparate; even digital literacy about AI is fragmented.
I think the principles have to be challenged in terms of the demographies where we are putting and using them, and they are sector specific. A larger guiding framework would be great, but then we must look at whether we are talking about Morocco, Senegal or Ethiopia. I think that fundamentally challenges how AI is looked at in terms of our principles and guidance.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
>> AUDIENCE: Thank you. I'm with the Africa alliance, thank you for the presentations.
Artificial intelligence has been with us for quite a while, and within this fourth industrial revolution it is taking on a new life, with so many connected things and with broadband catalyzing this.
Well, I just want to put on record that as we make progress, AI must be accountable to us. Under no circumstances should it be let loose; it must remain accountable to us.
Another important point is that cybersecurity, the security of these devices, is critical. So these are two related factors: the accountability of AI and the need for cybersecurity, the security of the devices.
Thank you.
>> AUDIENCE: Thank you very much.
I'm an associate professor in law and technology at Liverpool University. I have perhaps two key messages that I want to note.
The first one relates more fully to what the delegate from Brazil just said when it comes to the principles which we see in most regional or national policies regarding artificial intelligence when it comes to the question of bias.
I would want to emphasize that at some point that term is perhaps too weak, for one major reason: in most of the policies that you read, there is almost an assumption that we're dealing with AI as a neutral technology. It is not.
I think it is fundamental that when we're talking about the principle of bias we specifically mention the victims, say their names. There are specific individuals or groups that are more heavily impacted by these AI technologies. In some of our research we have suggested what we call the principle against discrimination and oppression, where we specifically say which groups and which individuals are most heavily impacted by this technology.
The second point, being the last, is on the need for international treaties, legal instruments regulating artificial intelligence technologies.
Of course, we have heard skepticism about the challenges that we may face when trying to have this kind of instrument. The reason such instruments are needed is that there exist legal gaps as far as the governance of artificial intelligence is concerned.
My last statement on this: there is always a benefit in thinking broadly, in trying to be that ambitious. If you look at the current attempts at the United Nations on autonomous weapons systems, even those attempts have failed for the past eight years to come up with a legally binding instrument. But at least there are principles on which they have been able to converge, for example the principle of meaningful human control over the use of force.
I think it will be a good attempt to continue trying to have international instruments on AI.
>> I have listened; there are many initiatives going on: UNESCO, the Council of Europe, the UN. I think we need to consider what the role of the IGF could be in that. The IGF is global and multistakeholder; it is the place where we can add meaningful activity to all of the other fora, which are mainly multilateral and governmentally driven.
Can we take as the message coming from this IGF that we try to bring into the Global Digital Compact a reflection transversal to these initiatives, so that they all go in the same direction and share the same meaning? Thank you.
>> ADAM PEAKE: Thank you.
Yeah, that's an interesting comment. As a MAG member, it is interesting to consider what ongoing role the IGF could have, perhaps as a place where principles and other aspects could be reviewed each year, but we'll leave that to the panelists to discuss. Perhaps there is a role beyond the Global Digital Compact where we could review and talk about principles and risks.
I took the mic to actually mention there is a question in the chat.
"We are discussing AI design while AI is already ruling many processes and decisions in both the private and public sector. It is not just a futuristic technology; it is already used by banks, employers, platforms, school admissions, et cetera. I think this should be part of the equation: if it is now about maximizing profits, what about algorithm design and data governance? And the question: is soft law the answer?" Thank you.
Thomas, I think we have had a good collection of questions ‑‑ I'm sorry, one more ‑‑
>> AMRITA CHOUDHURY: Tom, there is Anthony who wants to make a comment, after that, perhaps you can all take it forward.
>> ANTHONY WONG: I'm sorry, I couldn't turn on the video, the hosts wouldn't allow me.
I hosted the AI ethics and regulation panel on Wednesday with my copanelist from UNTAG and with Ed Santow, former Australian Human Rights Commissioner.
By background, I'm speaking as a lawyer, because I'm a qualified lawyer practising in IT, working on AI for many years and speaking in many places on AI. I'm also the President of the International Federation for Information Processing, IFIP, which was created under the auspices of UNESCO 60 years ago. We have members from all five continents, from the British Computer Society, Australia, South Africa and so forth.
I have followed some of the AI sessions over the last couple of days and I would just like to summarize my thoughts as an IT professional and a lawyer working in the space.
I have followed the UNESCO Declaration on Ethics for some time and actually participated in the process, and I agree with Sally from Egypt that a lot of work has gone into it. It is not a legally binding document, it is another framework, but it is the most comprehensive framework that I have seen and worked on.
One of its unique features is that the framework links to the regulatory aspect, because right now we don't have regulatory systems in many parts of the world. The UNESCO declaration is not a regulatory framework, but it allows you to link into regional, country or state specific laws that you may have.
We saw over the last couple of days people talk about the European Union; Yordanka, you spoke about that. That's probably the most advanced regulatory system we have right now in terms of proposals, and the discussion has gone on for some time. I think the GDPR, the E.U. legislation in this space, is the gold standard for the world, and it will ripple through all of the countries in the world.
I know that the GDPR is about data privacy and protection, but it has rippled into many countries, Brazil, Australia, where I'm based, and will continue to do so.
I know that AI is the most complex of all the technologies that we have seen so far because it has the ability to learn from data and to make decisions based on learned data and experiences. It will be more than a tool of the kind we have experienced in the past.
In terms of legislation, I think, as Sally also mentioned, one of the speakers talked about regions working together, like Africa, to come up with regulatory systems. I think that is a very good idea, because trying to have one single law at this stage is nearly impossible. As Sally said, it will take a long time, and by then AI will have moved on and we will be past that. Countries like my country, Australia, will probably want their own. But for other countries, in the African region, I think it is very sensible to develop a regional regulatory framework that would then feed into the UNESCO ethical declaration, because then you have an enabler, as a bloc, to negotiate with other trading partners like the E.U., which is probably the most advanced in terms of proposed regulation at this stage.
I suppose that's a good idea for South America and the Asian region as well. I don't think China will want to, but India might, from what I have heard. I think regionally, over time, that universal framework would then feed into the regulatory process.
Just one last comment: we talked about intellectual property, and as I stated in my session on Wednesday, WIPO, the UN agency for intellectual property, has been discussing over the last few years giving rights to AI entities for the creation of creative work, including artwork, paintings, novels, videos and games. They are also looking at patenting, whether AI can patent and whether rights can be given to AI. The discussion is evolving now. Now is the time to get involved in that conversation.
In our session, we also talked about being inclusive, allowing open technology diffusion so that the technology can be shared, used and developed in the local context, as we have heard, like in Africa. In terms of IP, this is the time to get into that discussion.
Thank you very much for listening to me.
>> THOMAS SCHNEIDER: Thank you very much, Anthony, for trying to summarize some of the sessions.
I think we have heard a number of good points. I would propose to take it from the question that was raised: given that we are here at the IGF, given that we have a chance to feed into the Global Digital Compact, and given that there is so much work going on mainly, but not only, in Western countries, what is the role of the IGF? To repeat the question: what can we do here in this forum to move one step ahead? To support those organizations, be they multilateral or multistakeholder, that work on AI, to contribute to a global framework that works for everybody? And, of course, with a particular focus on how we can be more inclusive with regard to the issues, needs and circumstances of developing countries: how can we bring those perspectives better into existing processes that are already working? How can we inspire or support processes in developing regions so that they also have a better voice? Maybe, I don't know, I can go back to the panelists.
Also, and then I'll go over to you: what would be the learnings from the climate processes since the early '90s? Given that it may be difficult to agree on a worldwide, globally shared legal framework, could we agree on something like a fact‑finding expert group? We will go over to the panelists for quick reactions to the very relevant comments that you have heard.
Thank you.
>> KARINE PERSET: I will start. There were a few focus areas; I will start on the first.
A key message to bring forward: interoperability between the burgeoning frameworks and standards is very desirable, and ideally ahead of their implementation in mandatory or voluntary AI risk assessment and management standards. You said that there has been a convergence over the past two years on risk‑based approaches to managing AI risks, and in many cases that is being operationalized through standards, which will be voluntary or mandatory. Here, many of the actors present today have a key role, but standards development organizations like ISO play key roles as well.
Why is interoperability desirable? It doesn't mean having the same rules, and I totally agree with what Sally, Yordanka Ivanona and Anthony said about the difficulty of having the exact same rules. It doesn't mean having the same rules; we will set the red lines at different levels depending on the country and culture. But we can try to align terminology where possible, and that will facilitate, or may even enable, that interoperability later on, at least talking about the same thing using the same words. Otherwise, alignment between regions, for example, will be very difficult.
Interoperability in terms of terminology and concepts is, I think, a very important message.
>> THOMAS SCHNEIDER: Thank you for that. Thank you, Karine.
Interoperability is key, and interoperability on several levels: the level of principles, the level of approaches to classifying AI systems, and so on.
Quick reactions from the panelists to what we have heard so far?
>> TSHILIDZI MARWALA: One thing, as we try to derive binding principles from the many guidelines that are available on AI: we should always be aware that AI often doesn't come as AI; sometimes it comes as something else, an autonomous drone, maybe a self‑driving car, and so forth. So as we talk about binding principles, we should be aware of that. How do you embed that regulation in autonomous vehicles and so forth?
I agree with the idea that bias is simply not a strong enough term.
The inequality in access to AI between the Global South and the Global North is something that has to be taken forward quite urgently.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
>> Can I go next?
>> THOMAS SCHNEIDER: Please, be as brief as you can while talking slowly enough for the interpreters. Thank you.
>> SUNIL ABRAHAM: Quickly, on intellectual property from the perspective of sustainability: Meta and other partners are part of the Low Carbon Patent Pledge, so anybody can use those patents to build low carbon technologies. There is also a set of open source AI projects directly relevant for sustainability: Open Catalyst, which is working on developing catalysts for carbon neutral fuels, Galactica, and others supporting open science and science infrastructure. And when it comes to de facto standards, which should also complement digital standards on the Metaverse, specifically on the question of security and safety, there are initiatives like the Metaverse Standards Forum.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much, Sunil.
Please put the information in the chat if you can so that people can follow‑up.
Let's try to focus again on the question: what can we do here? What is the message? We heard that structural inequality is probably a message we have to add, alongside interoperability at all levels of the framework.
A few hands up online.
>> JAMILA VENTURINI: First, on the IGF's role: I believe this is a key platform for us to coordinate and develop cooperation methods, as mentioned by the audience, and to concentrate discussions that are happening right now in very stagnated spaces, at least in the sense that we can give meaningful input to what's being discussed in those processes.
I would also like to offer a quick reaction to the points that were made. I guess it becomes clear that the risk‑based approach is insufficient for the types of problems that we have in the Global South. At the same time, we do have several Human Rights standards that are binding and that are not necessarily being considered, including some of the standards being discussed by several of the processes that were mentioned here.
So as a starting point, again, I would say that the UNESCO recommendation offers a great basis for building some common standards on what we should be looking at. The processes for the implementation of that recommendation, which are ongoing right now, should also be brought to the IGF and discussed openly in a truly multistakeholder format. We know there are power relations in place that prevent some participants from taking part actively and meaningfully in all of the distributed processes that we have ongoing right now.
Just a quick point to connect to something that the Brazilian representative mentioned: besides standards that prevent or remedy harms from the data and AI industries, we should also think about cooperation agreements for investing in education and research. This should be prioritized, including trying to find ways to promote more equity in science and technology spaces, so that diverse people are also able to develop these technologies, assess them and review their impacts in our countries.
I will leave it there for now.
To you, Thomas.
>> THOMAS SCHNEIDER: Sorry. I was muted.
Thank you very much, Jamila, again for this very interesting contribution.
There are a few hands up in the Zoom room.
>> AMRITA CHOUDHURY: There are hands up, I don't know if it is old or new.
>> JAMILA VENTURINI: It is probably ‑‑ who has not yet ‑‑ did you want to react to this, Yordanka Ivanona, quickly?
>> YORDANKA IVANONA: Thank you very much.
I would just like to support, from the European Commission's perspective, the view that we have to collaborate as much as possible with all international partners on these initiatives, to better coordinate and to develop the right governance. From our perspective, there is indeed an opportunity to develop some common principles and then clear commitments from Member States on how they can be operationalized.
At the same time, we should work at the more practical level, as was said, on really ensuring interoperability, sharing knowledge, and developing the actual tools and standards, so that we learn from each other and build this common understanding.
Then, maybe a final point on the role of the IGF: we support the input and inclusion it could give to all of the international discussions that are now ongoing, including in the UN, the Council of Europe and others, and we are also very open and ready to collaborate in this endeavor.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
Listening to you all about how to cooperate across regions and how to interact with those who develop these principles: why not think about using the IGF, a part of the IGF, or adding something to the IGF, as a platform that looks once a year at where we are with implementing the different AI principles and with developing the systems that allow us to assess risks and impacts and to classify systems? Could one of our messages be that the IGF is enriched by a yearly half day or day where we specifically look at how these principles develop, how they grow together, how they become interoperable, and the same for the risk assessment mechanisms and for the technical standards that we have mentioned a few times but that are perhaps not present enough in the public debate on AI? We focus a lot on regulatory regimes and risk assessment, but not necessarily on the technical side. Would that be something you would like to see in the messages? Let me continue with the people who have their hands up; I am trying to identify you.
>> AMRITA CHOUDHURY: We have Jorge with hands up.
>> THOMAS SCHNEIDER: Go ahead.
>> MOKABBERI: Can I ask the question?
>> JORGE CANCIO: I'm ready now I think.
I think my video is still not enabled.
Anyway, yeah, thank you very much. I'm from the Swiss government. Maybe you can also have a look at the inputs I made in the chat on our general positions on any regulatory efforts on AI at the international level. Here I would like to concentrate on two points.
First, the link to data governance and promoting what we call digital self‑determination, meaning that control of data should lie with citizens and societies. I think that's a very important connection we have to make in these discussions.
This brings me to my second point, which is that the IGF is perfectly placed to have this multidisciplinary discussion, not only about AI but also about its connection with data governance, because we assemble in this forum the knowledge, the expertise and the points of view of all the stakeholder groups on these issues. We have the inclusivity and openness of this forum, which is both multilateral, because we're based in the United Nations, and bottom‑up, multistakeholder.
I think the IGF is really a great place for discussing this. I would go beyond what Thomas was saying: not only an annual venue for reviewing how principles are being developed or implemented, but we could even establish a Policy Network on AI, which could look specifically into issues like interoperability, how to avoid discrimination against people from the Global South, and issues where the perspectives of all stakeholders could be considered.
I think next January the IGF will probably be looking into which Policy Networks to establish, and it is a question for the community to come forward with proposals in this direction.
I leave it at that. Thank you very much.
>> THOMAS SCHNEIDER: Thank you for this very concrete proposal for something the IGF could do, and for proposing how the IGF could work not just once a year but actually across the year. I guess the two are mutually inclusive: we could have a Policy Network on AI that works across the year, and then, let's say, more visibility at the IGF itself for reviewing the implementation of principles and so on.
Also, the reference to data governance is coming up from several sides. We have heard from many speakers that you cannot really look at the impact of AI without looking at data governance and talking about the quality of data, the representation of data, and so on.
Thank you very much.
I would like to give the floor to the other person, unfortunately, I don't see the name.
>> AMRITA CHOUDHURY: We have Mokabberi.
>> THOMAS SCHNEIDER: Please go ahead.
>> MOKABBERI: Hello. Can you hear me well?
>> THOMAS SCHNEIDER: Yes. We can hear you.
>> MOKABBERI: Thank you for giving me the floor. First of all, also thanks to the organizers for organizing this timely session.
I'm from Iran. I would like to ask a question and make a comment.
What would be the practical approach to AI regulation, and to artificial intelligence spaces like the Metaverse, to ensure public safety, public security and public health, and to ensure the accountability of global AI actors with regard to the ethical and legal frameworks of different societies?
As we all know, every society has its own ethical and legal frameworks regarding governance and AI regulation. What is the best approach given this?
Another issue I would like to refer to is AI‑related crime. AI‑related crimes are borderless and a new phenomenon, and we should also think about what to do about this kind of new crime.
My question in this regard: what would be the role of the United Nations and the United Nations convention on cybercrime, and what would be the contribution of the Global Digital Compact in this regard?
Thank you very much indeed for giving me this opportunity to ask the question.
>> THOMAS SCHNEIDER: Thank you for the questions.
On cybercrime and the role of the UN: you may know there is one existing binding instrument to fight cybercrime, which is also from the Council of Europe (Zoom interference) ‑‑ working with more than 100 countries to try to create interoperability between law enforcement agencies fighting cybercrime.
With regard to the role of the UN, this is currently discussed in several international fora, including for instance the ITU, which has reflections on what could be done.
With regard to a practical approach to regulating the Metaverse: who wants to try to give a short, quick answer, knowing that we're already struggling with AI, which is already here? Regulating, in a practical way, something that is not yet really fully here will probably cause even more of a struggle. If someone has a good idea or a short answer to that question, please step in now.
>> SUNIL ABRAHAM: I had already tried to answer that question.
I think that AI, as many panelists said, is a mature technology, so there is a clearer sense of what the regulatory agenda is; the Metaverse is an incoherent technology, and therefore I pointed to the Metaverse Standards Forum as an opportunity for the beginning of that.
>> THOMAS SCHNEIDER: Thank you very much, Sunil Abraham.
We have 10 minutes left if I understand our situation correctly. I don't see hands up.
>> AMRITA CHOUDHURY: There is a hand up in the room.
Would you be interested to take it or would you want the ‑‑
>> THOMAS SCHNEIDER: Let's take one more.
>> AMRITA CHOUDHURY: It would have to be just a minute.
>> Thank you very much for giving me this opportunity. I was born here in Ethiopia and I live in Australia and New Zealand.
I repeat the question I asked yesterday but I haven't got a convincing answer.
My question is this: what is the UN and IGF future plan of action to accommodate the following two different aspects of the outcome on addressing advanced technologies, including AI's contribution to global Big Data governance for Sustainable Development?
Currently, and this is one point that I make, there are a few technology innovations, such as blockchain, AI, NFC, NFTs and quantum computing, generating Big Data that is difficult to manage, with a lack of efficient, effective mechanisms or tools, and with increased unemployment.
For example, in Australia, where I live, almost all shopping centres and big businesses are using checkout machines supported by AI where employees used to work. Unemployment is increasing, and the government in Australia is not taking action on these businesses while they generate increased unemployment.
On the other side, technology creates IoT and ‑‑
>> AMRITA CHOUDHURY: I'm so sorry. We have ‑‑
>> AUDIENCE: I'll finish. I'll finish.
AI performs tasks like processing transactions faster than humans, so what is the plan? Thank you very much.
>> AMRITA CHOUDHURY: Over to you.
>> THOMAS SCHNEIDER: Maybe I can try, though you may not get a satisfactory answer today either. The question is too big for us all to respond to, I guess.
Just one attempt to clarify: the IGF is a dialogue platform. The IGF does not develop action plans or other types of outcomes. The IGF tries to convey messages to other bodies that take decisions. It is not a decision‑making forum; it is a forum for dialogue, for understanding, and for offering ideas to decision‑making bodies.
As for the UN, as I and others said at the beginning, if you talk about a plan of the UN, the Global Digital Compact is such a plan. We're in the middle of a process to help the UN Secretary‑General, to help the UN, develop such a plan, and we can send messages into that plan.
Let me try to use the 7 minutes that we have to give each of our panelists less than a minute for their quick key points, trying to arrive at a summary that you can then react to.
I think we have agreed that an enormous amount of principles and guidance on AI has been developed and is being developed, and that some institutions are looking into binding frameworks. We know this is difficult at the global level, but we note the European Union, which is developing market regulation, and the Council of Europe, which is developing a binding instrument on basic principles open to all countries in the world. We note also that we may not be able to wait until the whole world agrees on something, because industry continues to develop applications and users are using them; there is a sense of urgency that I have heard. I have heard many people refer to interoperability at the technical level and the legal level, but also at the level of risk assessment and impact assessment. And, very importantly, many referred to a structural inequality that goes beyond bias: the needs, circumstances and situation of people living in developing countries are less reflected in the deliberations on how to create a framework for AI that contributes to the Sustainable Development Goals. Concrete proposals could be that we develop a Policy Network on AI to discuss this in more detail at the IGF, and that we have a dedicated half day or so to review the implementation of principles and the functionality of risk‑based approaches. I think these are the key elements I have heard.
Please complement and help us to wrap up.
Let me go through all of you very shortly.
Karine Perset from OECD.
>> KARINE PERSET: Thank you, Thomas.
I think that sounds like a great idea to review implementation of different principles once a year, maybe even more often.
I would just point out that to review implementation, you will need tools and metrics to allow us to do that. We actually have practical tools and educational approaches, like tools to remove bias from AI systems or to test and secure systems; there are a number of open source tools, we have UNESCO's AI training program for judges, and we have documentation processes for data and models. A lot of tools exist; it is just that at present it is hard for individual actors to find the right tools for their needs. So I point out the need for tools and metrics, note that this is something we're working on, and that it will be useful for conducting such a review of implementation.
>> THOMAS SCHNEIDER: Thank you.
Trying to summarize what I forgot: the needs that were mentioned, the need for capacity building, bringing everybody to a level where they understand, with the tools that you have mentioned.
Professor Tshilidzi Marwala, some key takeaways from you? What is missing, what should be stressed more?
>> TSHILIDZI MARWALA: Thank you very much.
I think one thing that's quite important that we need to pay attention to, how do we capacitate people to be able to take AI and its implementation to a level where it protects and works for the benefit of people?
I will talk again about lawmakers.
How do we capacitate parliamentarians so that they're able to actually first and foremost understand this technology and secondly, to be able to craft a vision for their own nations going forward?
The second issue is how do we take these ideas to regional bodies, such as the Southern African Development Community in Africa, so that we can indigenize them regionally, given the shortage of economies of scale.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
Sally, briefly over to you.
>> SALLY RADWAN GOLESTAN: Two quick points. One is again to stress the importance of building regional clusters if we eventually want to achieve global consensus on something that is implementable, rather than talking about abstract principles and guidelines. Whether the IGF can facilitate that, or whether another UN body takes on that responsibility, it is important, I think.
The other thing, to follow on from what Karine said: I think it would be great if we had an IGF review, but it shouldn't be a review of implementation, because there are many fora discussing implementation; rather, a review of the tools and metrics would be an important piece of input. She is quite right that people often have difficulty finding those fora and knowing what's out there and what's been newly developed.
An annual or semiannual survey of the tools and metrics that have been newly developed to aid with the implementation of the different principles could be helpful, I think.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
Jamila Venturini.
>> JAMILA VENTURINI: I and several participants have addressed most of the main issues concerning the Global South.
I would like to suggest also adding a strong call from the IGF for the observance of the Human Rights principles, standards and criteria that are already in place. While we discuss the Metaverse, facial recognition and other surveillance systems based on AI are being implemented in countries and at borders, affecting particularly those who are historically discriminated against and marginalized. I recall the UN High Commissioner for Human Rights' call in 2021 to impose a moratorium on the use of remote biometric recognition technologies, such as facial recognition, in public spaces. That is something we should also promote as a measure as we discuss artificial intelligence and other emerging technologies.
Thank you.
>> THOMAS SCHNEIDER: Thank you very much.
Yordanka Ivanova.
>> YORDANKA IVANOVA: Thank you very much.
I fully agree. I just think that maybe it is good to build on the strengths of this forum. The first is its broad, inclusive multistakeholder approach. The forum could play a key role in providing input for the building and shaping of the frameworks that are emerging, in the Global Digital Compact, the Council of Europe and other work. I would strongly recommend that this broader stakeholder input somehow feed into those processes, including on core issues of Human Rights, because there is a lot of discussion now about exactly how this should be shaped.
The second point is indeed to think through the practical side: networks of experts, a much more practical focus on metrics, sharing knowledge, implementation and good practices. We also strongly support that, and we think it is needed, because it goes beyond governance and the international organization actors.
It is through good practices and experience that you can engage Civil Society and include others in a multistakeholder approach.
>> THOMAS SCHNEIDER: We have the last 30 seconds and we'll then wrap up.
>> SUNIL ABRAHAM: I just wanted to emphasize, in the world of self‑supervised AI, that the regulations that may be appropriate for large language models may not be appropriate for models for low‑resource languages, which are important for the Global South.
>> THOMAS SCHNEIDER: Thank you.
We have had a very rich discussion; a number of people were able to make their voices heard, even though that is more difficult in this format, and I am happy to have had this rich debate.
I think we do have some points and messages that we can put on paper, hand over and feed on to the Leadership Panel, the GDC and so on. Thank you all for taking part, all of you interested in discussing AI and the Policy Network, whatever we call it.
Thank you. It was a great session. See you soon again hopefully!