The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR SCHNEIDER: It's fine. Okay, yeah. Excellent. I've been James Bond in my previous life. Thank you.
Hi, take a seat.
There's no specific order.
Allison, I'm here, you go there, and we will have Allison here.
Are you okay with the sound?
There's no loudspeakers here, so they don't hear us.
Good morning, everyone. There are no loudspeakers here; the sound is meant for online only.
You have headsets. Even if you can hear me, it's like a silent piece; you will not have a microphone.
Please take your headsets. I think that's the way it's meant to be, for those who are not hearing.
>> Hello?
>> Maybe Allison will be first.
Can you hear me?
Do we exist?
Yes?
We are already a few seconds behind schedule.
No, I don't. We hear each other, I think.
It's the first day. Things take time. Let's give us a few seconds so that everybody who needs a headset can get one. Can you hear me on the --
Okay, let's start.
As you know, this early morning session is about "The 1st International Treaty on AI and Human Rights", democracy and the rule of law, which was concluded earlier this year. Of course the aim is to help us cope with the risks and opportunities that AI will bring. I will not go into detail about risks and opportunities, because we all more or less know what these are for the time being.
Let's start with Ambassador Dowling from Australia. Australia also participated in the negotiations of the treaty until this March.
How does Australia view the balance between innovation and safeguards, so that we have robust, resilient innovation thanks to AI, but also make sure that human rights and the rule of law are protected?
>> BRENDAN DOWLING: Thank you. That's the key question we are grappling with. We see the upsides of AI, the benefits for development and economic opportunity. But with every new phase of digital technology, we have seen human rights, the rights of women and girls, freedom of speech and democracy jeopardized. I'm really positive about the Council of Europe's effort to get ahead of the curve, to try to set guardrails, protections for rights and speech, and the preservation of democracy. I think we made the mistake in earlier phases of technology, when we saw software development in the 90s and social media in the early 2000s, of saying we trust the technology world, we trust commercial entities to prioritize safety and rights online. I think we learned the hard way that that isn't the case. We shouldn't expect commercial entities to be the guardians of rights and privacy. That's the role of governments and civil society to work on together.
AI is a fast-evolving area of technology. But I think what this treaty does is set some very clear principles and parameters that say: here are the expectations that governments have on the development of this technology; here is how we expect all platforms to preserve rights and privacy and to ensure their technology is not misused.
So, acting that early, in a way that is not overly onerous or prescriptive, I think the treaty strikes the right balance where it will not stifle innovation. We feel it's a very commendable effort to try to set those parameters for AI, so that we're not looking back in ten years saying we wish we had done more in the early phase to preserve rights and democracy.
>> MODERATOR SCHNEIDER: Thank you, Ambassador Dowling. Now we go to the west, to the United States. We have Allison Peters here, Deputy Assistant Secretary in the Bureau of Democracy, Human Rights, and Labor at the State Department.
Ms. Peters, the U.S. was very active in the negotiations on that treaty, as we have been able to read in some journals. How do you see the treaty promoting respect for human rights and democracy around the world, not just in Europe or the United States? And how does the convention, as it is in front of us now, reflect the United States' approach to rights-respecting AI?
>> ALLISON PETERS: First and foremost, thank you. The negotiations, (?)
I think second, we really need to acknowledge the context in which we continue to have debates, at the United Nations and in the broader multilateral system, about how international law and international human rights law apply to the risks related to artificial intelligence and emerging technologies. We know, first and foremost, that having a convention will help us set a shared baseline on rights-respecting use, development and design of artificial intelligence, and that shared baseline goes beyond just the United States and other countries in the Global North, right?
That shared baseline is really global in context and applicable to every single country here at the IGF and beyond.
But we know, given the debates that we are seeing on human rights and their applicability to emerging technologies, that having a shared baseline among democracies on rights-respecting design and use of AI is critical.
Certainly the benefits don't end just with implementation of the convention. As we have seen, for example, in the Council of Europe context with the Budapest Convention on Cybercrime, having an instrument through which our governments can focus on shared cooperation, having that mechanism, is useful and salient as we continue to debate issues around cybercrime in the UN system now.
The follow-up cooperation mechanisms, I think, will provide a really important opportunity to advance our shared efforts, including allowing us to share best practices across the board, across governments, across regions.
Third, I will say, you know, you asked how this helps advance U.S. priorities as they relate to artificial intelligence. As we look at the need to place safeguards on AI systems and the companies that develop them, it's also really critical that we preserve the space for innovation. We don't want to do anything, as it relates to regulation, that cracks down on innovation or doesn't allow that innovation to happen.
So certainly, with this convention, we see a convention that allows us to harmonize various different approaches to AI, and one that allows companies to innovate, to be creative, to create new AI systems that really help advance the opportunities of AI but also help address some of those risks we are talking about.
In our system, all of these priorities are bipartisan in nature. Even though our government is changing over quite soon, this is a priority that we see across the board, across political parties in our country, and we look forward to working with the many governments on this stage, but also the many governments here at the IGF, who will join us in this process. Huge thanks for having us today.
>> MODERATOR SCHNEIDER: Thank you. I think at one point it's important to note that it's not just the convention and its participants, it's the bigger cooperation setting around it, which is something we have been able to see with the Cybercrime Convention, the Budapest Convention, as an important factor as well.
Let us now move to another nice country: Canada.
Mr. Fairchild, you also participated very actively, worked on the treaty helping us to count the hours we have left to find compromise on everything.
So if you could share your insights into how international cooperation can strengthen the treaty and ensure it reflects the shared values of democratic nations?
>> DAVID FAIRCHILD: Thank you very much, Ambassador. Day Zero; it's always a bit of a cold start, so thank you. Thanks also to the Council of Europe; we have some of the staff from the actual process here in the room. It was a very long journey. To reflect on earlier comments, this is the first international treaty on an emerging technology. It's really important to reinforce that it's the Council of Europe that effectively brought forward this legally binding instrument, which creates obligations on signatories to uphold certain values: human rights, respect for democracy and the rule of law.
I think that's an important statement in and of itself, because it's different from a lot of the other work that's going on in multilateral fora, which speaks more to the normative process. I think that is why Canada, which is not a member of the Council of Europe, was an observer to the process. It was our opportunity to frame this at the international level. From our perspective it was crucial to be there. As an active participant on the delegation, we spent a lot of time trying to involve as many states as possible.
I think this is another important element to bring to the debate. It's not the UN, but at the same time we felt it was extremely important to get as many Member States, from as many regions, involved in the process. Over the nearly two years of negotiation, we saw a gradual increase not only in the number of participants but, hopefully at the end of the day, in signatories.
A cluster of observer groups in and of themselves -- could you also bring an earphone for me?
Okay. I think it's an important statement. We continue in that regard. I think, past the end of the negotiations, we are seeing more states actively coming forward, trying to become part of the process. Of course, post negotiations, you have to take the treaty as it is, but we are seeing increased interest. The Americas in particular showed a high level of interest, Asia as well; we had Japan, which was one of the observers, but more countries at this point are seemingly coming forward with an express interest. Why is this? Because at the national level, everybody is fascinated with the question of what to do about AI.
If you haven't regulated, legislated or created frameworks to manage AI, it's going to impact pretty much every sector of your government and every public policy area.
So, from the beginning, we were also part of the previous process that the Council of Europe ran for the two years prior to the launch of negotiations. This is all about framing. Member States have obligations. The aim of this framework instrument was to create a baseline. And this is an interesting second point: this is one of the few areas where we were negotiating an international treaty where there was frankly no floor.
Most Member States don't have AI legislation. It wasn't about leveling up at the international level; it was about creating the baseline level which Member States now have to meet.
It was a very unique opportunity for Canada, which is quite advanced in its own legislative and regulatory frameworks, to help, in fact, create that floor. So I think we were quite successful. Negotiations are always about concessions at the end of the day. Could we have had a stronger instrument? Yes. Could we have had a weaker instrument? Yes. I think what we successfully created are baseline obligations that Member States have to take away, and as they develop their own regulatory and national legislative frameworks they must meet these obligations. I think that was a critical reason for us to be part of this, thank you.
>> MODERATOR SCHNEIDER: Thank you very much, David. I'm now nicely cabled in here myself several times with several loops.
Yes, I think this is an important element that you named: we didn't start by comparing our national baselines, but basically had to set the floor, set the baseline, from scratch. That was a challenge, but it was also an opportunity to seize.
Now we go to a European country for once. We have Isil Denemec from Turkiye. You were also negotiating successfully. How is Turkiye planning to implement this convention? How are you going to organise yourselves so that you address the challenges posed by AI and help your country stay innovative at the same time?
>> ISIL SELEN DENEMEC: Thank you, Thomas.
AI brings with it a batch of legal and ethical questions. The rapid spread of AI has made it essential for everyone to take proactive steps to uphold AI principles at every stage of the AI life cycle, addressing the unique challenges that AI poses. We adopted our national AI strategy back in 2021; it was drafted together by the Digital Transformation Office and the Ministry of Industry and Technology. It's built upon six strategic priorities, which include regulating to accelerate socioeconomic adaptation. One of the ways Turkiye is planning to address this is through strengthening the legal framework. This is being done through amendment of existing laws or the adoption of new ones. We are working on frameworks on personal and non-personal data. We cannot separate the two, as data is an integral element of AI technologies.
As per the strategy, Turkiye's core objectives focus on establishing an agile and inclusive governance process, while enhancing data capacity to assess AI's socioeconomic impacts.
The strategy underscores the importance of fostering innovation, which was one of our concerns at the CAI as well, and ensuring ethical standards. I will list some measures to support these goals: aligning national AI regulations with international frameworks and commitments to maintain consistency with the global governance structure; creating an AI impact assessment framework, which we are currently working on; preparing guidelines for algorithmic accountability and explainability to ensure transparency in automated decision making; and encouraging capacity-building initiatives and public awareness of AI use.
These are some of the issues we are actually dealing with at the CAI. They require solutions that go beyond just legal considerations, because we need to encompass social, financial, political and other dimensions at the same time. Turkiye's geopolitical and geostrategic position influences its approach to global AI governance. That's why we have been actively monitoring and participating in international efforts, which is also one of the six strategic priorities. This commitment has been clear from the beginning of the process that led to the development of the first global legally binding AI treaty. We are aware that we are at the forefront of this technological change, and it's evident that AI holds immense potential to shape our societies for the better. But how can we do this? We need robust ethical and legal frameworks that guide the process. So the Framework Convention emerging from this process represents a pivotal step in making sure AI aligns with human rights, democratic values and shared global practices.
We must recognize that AI knows no borders and its impact is inherently global. A legally binding instrument is not just desirable but essential to create a unified approach.
We hope it will ensure accountability, transparency and fairness. We will continue actively contributing to and remaining engaged in the (?) processes and other technological advancements. Thank you.
>> MODERATOR SCHNEIDER: Thank you very much, Isil. I think it's also important to stress the importance of data and data governance. The convention deals mainly with AI, but you find hints of data across the convention. If you look particularly at the explanatory report, you find references to appropriate data quality and other issues related to data.
Now we have an online participant, I hope the connection works. There's a colleague, Tetsushi Hirano from Japan.
Japan has also been a very active participant in the negotiations on the treaty at the Council of Europe in Strasbourg. Japan has not only been very active at the Council of Europe, but has also led other initiatives like the G7 Hiroshima framework and other processes like the OECD. Tetsushi, I hope you are there. My question: how does Japan's approach to AI governance align with the principles set out in this treaty?
Let's see whether the connection works.
[ Participant cannot unmute ]
Any information from -- what's the time in Japan, now?
What is the time difference? It's already in the afternoon.
[ Participant cannot unmute ]
>> Do you hear me?
>> MODERATOR SCHNEIDER: If there are difficulties, we have one more physically present human being here who is supposed to talk. We can switch the order. No? He is coming.
>> TETSUSHI HIRANO: Do you hear me?
>> MODERATOR SCHNEIDER: We hear you.
>> TETSUSHI HIRANO: Hello, Thomas.
>> MODERATOR SCHNEIDER: Should I repeat the question?
>> TETSUSHI HIRANO: No, I listened.
>> MODERATOR SCHNEIDER: Excellent. Happy to listen to you now, Tetsushi.
>> TETSUSHI HIRANO: Thank you.
>> MODERATOR SCHNEIDER: Can you speak loud? Your voice is a little low.
>> TETSUSHI HIRANO: Do you hear me clear now?
>> MODERATOR SCHNEIDER: Okay, yeah.
>> TETSUSHI HIRANO: Okay. So I'm very happy to be able to participate in this session and to see familiar faces from the negotiations. I would like to thank Thomas and congratulate the states and the EU for signing the convention.
To answer Thomas, I think it is symbolic this treaty was adopted in an institution founded on the bitter lessons of the second world war.
It stands for common values that Japan shares. This is one of the reasons why, after the fall of the Berlin Wall, Japan, together with Canada, the United States and Mexico, became an observer in the Council of Europe. These values are crystallized in different ways in international law and among the 57 states that have agreed on the Framework Convention on Artificial Intelligence. The aim of the convention is to ensure that activities within the life cycle of artificial intelligence systems comply with existing international legal obligations.
This means Japan, along with 56 other countries, has agreed on a framework to carry these shared values into a future where AI systems will be used in every corner of society.
Some of the principles of the framework are transparency, equality, privacy and reliability, which were developed taking into account the socio-technical characteristics of AI technology.
These principles will be operationalized through a risk and impact management framework. In my view, one of the challenges for the implementation of this treaty is the operationalization of these principles in accordance with existing legal obligations.
It seems the key lies in building capacity on the ethical, legal and social implications of the technological characteristics of AI, such as explainability or accuracy.
This summer, Japan launched a new discussion on the future of the domestic framework for AI regulation under the AI Strategy Council, taking into account existing laws, guidelines and recent achievements in international fora, including the Hiroshima AI process.
At the same time, we are accelerating the internal process of signing the convention. I'm convinced that in this process we can strike the right balance between innovation and regulation and create a productive synergy.
We can expect it to have a productive impact on discussions at the national level as well as in international fora, including the Council of Europe, on risk management of advanced AI systems.
On the other hand, one of the important impacts the AI convention will have on Japanese --
>> MODERATOR SCHNEIDER: The connection has turned really bad.
You can tell us, yeah.
>> TETSUSHI HIRANO: Yes.
>> MODERATOR SCHNEIDER: If you could repeat the last few sentences you said, because we have missed it, unfortunately.
>> TETSUSHI HIRANO: Okay. We can expect Hiroshima --
>> MODERATOR SCHNEIDER: We don't hear anything.
It's just very faint. Our technical team is already working on it.
>> TETSUSHI HIRANO: Okay, probably we can come back later. Do you hear me now?
Do you hear me?
Hello?
>> MODERATOR SCHNEIDER: I think there seems to be a problem that here in the room, we can't hear you anymore.
We will be happy to take you back into the discussion.
>> TETSUSHI HIRANO: Sure.
>> MODERATOR SCHNEIDER: Let's move on to Mr. Gibson. You may not have anything to do with the famous guitar, but you work for the U.K. government.
Now that you have signed the convention, how do you use the convention to move things ahead at the national level, and how do you see international cooperation, thanks to the convention but also in other frameworks? What are the plans for the coming months?
Thank you.
>> MAURICIO GIBSON: Thank you very much, sir.
Yes, not quite linked to the Gibson guitar, and I also get asked whether Mel Gibson is my uncle.
>> MODERATOR SCHNEIDER: You look like him.
>> MAURICIO GIBSON: Yes, a good way of framing this is to start off by considering the U.K.'s approach to AI governance. Obviously we have had a new government recently, and we're thinking about the opportunities of AI. We really want to capitalize on those in order to turbocharge economic growth. But without talking about risks, we won't be in a position to fully capture the full potential of the technology either.
That's the kind of key framing that's important when we talk about the Council of Europe AI treaty as well as other international governance frameworks.
Working together in cooperation and setting global baselines, given the cross-border nature of the technology, is fundamental, as is really harnessing the potential of every sort of input from different countries, civil society and the private sector as well.
In order to capture the benefits, you need those risks to be understood and to have safe, secure, trustworthy AI.
That's important for us. On the international level, we have not been able to highlight it to such an extent, but in the U.K. we hosted the first AI Safety Summit. That actually paved the way for a greater discussion of the risks and the need to mitigate them, and in doing so reinforced, through the declaration, the importance of safe, secure, trustworthy AI.
Bringing AI labs and companies onboard in that has been really fundamental.
[ mic difficulty ]
We really reflected this in that environment, to reinforce the importance of the multistakeholder approach in the UN resolutions which were agreed in 2024. That's reflective of the U.K. approach.
Coming to the Council of Europe specifically though, I think this has been an opportunity for us to reinforce that in the conversation.
We have taken a sort of very proportionate approach, reflecting a proportional amount of (?) in our regulatory approach at the moment, allowing different regulators to engage with the technology in a light-touch way to make sure they can work together and harness the potential that way. Thinking about (?) and supporting the technology. This is the sort of approach we have reflected in the Council of Europe AI treaty negotiations as well. The U.K. came to reinforce the merits of balanced language, ensuring there's not too much prescription, because being too over-prescriptive in the details might make it a bit too challenging for a lot of different countries to get on board.
We also reinforced the importance of bringing in other countries who are a bit nascent in their regulatory approach to AI. However, I think the important thing is balancing that with clarity, and we were keen to push the point about clarity in the text. These are legal obligations involving human rights; we want to be clear that people understand the technology, and that comes back to my point about trust. If we can't trust the technology, we can't capture its benefits either.
I think this is reflected in a couple of really interesting provisions in the convention as well: the provision on safe innovation, and another provision which ensured that research and development could be safeguarded, and making sure that is crystal clear. We also want to make really clear that the U.K. can be a bridge; talking about that proportionate approach, these regulatory approaches will be needed. Toward the end, when it might have looked challenging to get agreement, we were really keen to be pragmatic, providing a sort of basis and bringing a group of people together at the 11th hour of the negotiations to find those areas of overlap on the more challenging areas. I think that is reflective of our international AI governance approach.
>> MODERATOR SCHNEIDER: Thank you very much, Mauricio. Indeed, it was not easy to find something that holds across time but is also sufficiently clear, without being over-prescriptive in detail. Hopefully we will get a few minutes for discussion at the end.
We had another active participant, Argentina, but apparently the connection somehow didn't make it across the ocean. Argentina would have been an example of one of the Latin American countries that participated; there were about five or six Latin American countries that were already active in the negotiation and work on this treaty. We will see whether we can get to her later.
With this, we have gone through the panelists, unless I missed somebody, which doesn't seem to be the case.
We can use a few minutes to also allow the people in the room and also online to make comments and questions. We have a mic here.
Yeah? Stand up and make yourself heard, if you want. And we will try to have some interaction with the audience here.
Please introduce yourself.
>> RAFAEL DELIZ-AGUIRRE: Thank you very much. My name is Dr. Rafael Deliz-Aguirre, I'm at the Max Delbruck Center in Germany. As a member of the scientific community and a developer, what impact do you foresee such a treaty will have on our daily coding lives?
>> MODERATOR SCHNEIDER: Thank you. Anyone wants to reply to this? Allison?
>> ALLISON PETERS: Thank you, first and foremost, for the question. I can only speak for my government, but I know I speak for many governments here: throughout the negotiation process there was a big recognition of the fact that this is a legally binding convention. Making sure the technical community could actually participate, and that we had technical experts on our delegation helping lead the negotiations, was mission critical.
I think for us, and my colleague from the U.K. spoke about this as well, making sure that we have a convention that is technically rigorous, practical and clear in terms of how it could be used, not just by governments but also by the technical community, was really a big priority. I won't talk about every single provision of the convention. But I think for you and colleagues in this space, one of the key things we are working on now is actually building out a risk assessment framework.
Again, every one of the panelists talked about the fact that this convention would be a shared baseline for countries that may not have an AI risk assessment framework. In the United States we have our AI Risk Management Framework, or RMF as it's known by the technical community, that helps us do assessments of AI systems, manage their risks, and put in place safeguards to help manage those risks. But many don't have risk management frameworks that are harmonized across systems. We are working to negotiate a risk assessment framework that could be used across the board, whether by governments or individuals in the technical community. If you're not already tracking that process, we want to make sure you are able to engage and follow the negotiations.
I think for folks like you that will be a really important thing we build out, known as the HUDERIA. For us, making sure that tool is rigorous, technical and interoperable across systems is really a top priority at the current moment.
>> MAURICIO GIBSON: I would also reinforce what my colleague from the U.S. said. Another thing to touch upon in the convention, and I think I touched upon it in my statement as well, is the exemption for research and development. That doesn't mean it excludes every form of research and development from the convention, but it creates a sort of limitation: those undertaking certain R&D which might impact human rights can do so with the necessary safeguards in place. That was a really hotly focused-on area where we tried to invest a lot of time, so that people in your community have a space to continue doing things where there might be implications. That's a particular area I think is important to highlight. The same goes for the safe innovation provision: creating a space for sandboxing and encouraging different regulators to engage in sandboxing so they can do things safely in that space, creating that environment as well.
>> MODERATOR SCHNEIDER: Just to add one comment to this: indeed, one thing is to have some kind of harmonization, but a convention that is supposed to hold for a few years, 10, hopefully 20, needs to stay at a very general level. That's why these impact and risk assessment frameworks are so important. The second deliverable that the Committee on AI is producing is the HUDERIA, the Human Rights, Democracy and Rule of Law Risk and Impact Assessment methodology. The key requirements, on a very general level, are part of the convention. We just adopted a level-2 document, a 20-page non-binding guidance document on how to do risk and impact assessment, and next year we will be working on a much more detailed document with questionnaires and so on. The important thing is that this is done in cooperation with the IEEE and with ISO, so as to also build a bridge between the technical standards institutions, which programmers are probably more familiar with, and the legal standardization body. So that's a very good question. Thank you very much.
>> Thank you.
>> MODERATOR SCHNEIDER: Next?
>> Thank you. Good evening, I'm Nigel from the Caribbean Telecommunications Union. Are there any arrangements by which additional countries might want to accede to this? And secondly, the countries that got this started would be kind of like-minded. So I'm wondering, apart from the processes for additional countries to accede, as additional countries come in, are there allowances to make adjustments to the treaty? I guess future processes in general.
>> MODERATOR SCHNEIDER: Thank you, David. Very good to see you. One word: the Council of Europe has got nothing to do with the European Union. The Council of Europe, as was said, is like the UN of Europe; it was created after the Second World War to try to bring peace in a sustainable way. It has no economic component. It has 46 Member States now. The Council of Europe has this unique opportunity not just to develop soft law and hard law standards for its members, but it can also include other countries. We already had 11 in the negotiations, and any country in the world that lives up to certain standards on human rights, democracy and the rule of law can become part of the process.
We have contacts with a number of countries in Africa, Asia and Latin America that are in contact with the Council of Europe to become part of the process, also to keep working on the HUDERIA, but also as potential future parties to the convention. If you turn around, the tall gentleman behind you works for the Council of Europe. He wants to get into contact with interested countries, or stakeholders from other countries, to actually broaden the basis. The example of the Cybercrime Treaty actually shows this: I don't know the exact number, but the number of signatories and parties is around 70. But on cooperation, one thing is to sign and ratify the convention, the other is to cooperate around the process. There are way more than 100 countries in contact with the Council of Europe to help with the interoperability of legal systems and then the implementation mechanisms across the world.
You are very warmly invited to spread the message that this is a treaty that is open: countries can join to become part of the treaty, but also be part of the wider process of cooperation and exchange of best practice. Yes, David?
>> Nigel?
>> Yes.
>> DAVID FAIRCHILD: Great to hear from you, Nigel. Everything you said, plus one. I think an important element, at least from Canada's perspective: we were not a member of the Council of Europe, so our participation was crucial, simply because the Canadian legislative basis is different from Europe's, different from the E.U.'s.
We had equities we wanted to protect, and whether it's CARICOM or others wanting to protect theirs in the process and negotiations, absolutely. You can't change that, but what we did make sure of in the process of the negotiation was to create a Conference of the Parties. Once the treaty comes into force, which requires a very low number of member state signatures and ratifications, there is an annual process for the members of the convention to come together and consider the treaty itself. It's a multistakeholder forum, which means participation from the technical community, civil society and private sector entities, to continue to review it, because we understood it's very hard to create international legislation that is future-proof. In a sense, whether you were part of the negotiations or not, down the road a member of CARICOM, or CARICOM itself, being a part and signing up allows you to be part of the ongoing dialogue.
I think the important element here is that the treaty itself is very slim; it's not very long or detailed.
As the expression goes, it doesn't matter what color the cat is, as long as it catches mice. With this treaty, we were all trying to agree on what the cat should look like, not its color. The color is the national effort. Canada has regulatory regimes; other countries on this floor don't even have legislation or frameworks. Whether you are from Asia or Africa or North America, as long as you adhere to the basic principles your cat can look very different, but it's still a cat.
>> MODERATOR SCHNEIDER: Thank you very much for the very good picture. Thank you.
Next, please?
>> Hello, my name is Leena, I'm the Lithuanian Ambassador for Digital Economy.
I would like to thank the Council of Europe, the chair and the panel very much for having this discussion on "The 1st International Treaty on AI and Human Rights, Democracy and the Rule of Law". I intentionally said the long name of the treaty, because the treaty was opened for signature in Vilnius. It was signed during the Presidency of Lithuania of the Council of Europe, and it coincides very much with Lithuania's priorities during that presidency, including strengthening democracies and protecting human rights.
But also including fighting disinformation. I wanted to emphasize this important element, in the context of the indeed alarming rise of disinformation, which really undermines human rights and particularly affects those groups of society that are most vulnerable, like children, seniors and people with disabilities. I wanted to raise a question and see how the convention could contribute to fighting disinformation, in particular AI-driven disinformation and harmful content online.
>> MODERATOR SCHNEIDER: Thank you, Lithuania, for your support, and also for the Council of Europe presidency during that period and for organising EuroDIG; the convention will be known as the Vilnius Convention. There's one general reference, which is fundamental but remains on a very general level, about states' obligations to secure the functioning of democratic processes. It doesn't go into much detail, because it's very difficult: how democratic processes are organised differs from country to country. So, any short reaction on how, in your countries, you deal with this? Maybe here?
>> Thank you. I just wanted to say I think you brought up a very important topic.
Disinformation can definitely impact democratic processes, and this is one of the things we are trying to protect with this convention. As our chair of the CAI mentioned, this is guiding the signatories to adopt measures or laws to prevent issues like this from happening. As you correctly mentioned, the rise of disinformation can actually manipulate public opinion and in turn affect democratic processes.
This can be something that could be developed by signatories in a more detailed manner in their own legal regimes, I believe. Thank you so much.
>> I think this is one of the problems your government, our government, we are all facing: the challenge of how AI changes the information environment. I will first say foreign interference and disinformation are not new challenges; they are long-standing issues. But AI does supercharge the ability of these actors to use such tools. We have seen foreign states already using AI tools to conduct information operations in many countries. At the same time, we can overstate the risk of disinformation. We sometimes see countries use the idea that disinformation is prevalent to censor political speech. There's a careful balance to be struck. Yes, there's a threat to democratic participation; however, there's also the risk this narrative is used to censor free speech.
I think a really core obligation on us all is to build trust, an ability for our societies to critique the information environment, to engage with information so we aren't just believing stories that are out there. This isn't an area we should dive into too deeply; it's an area that can be over-corrected. We can stifle the information environment in the desire to address disinformation.
>> MODERATOR SCHNEIDER: Thank you. 30 seconds for Allison, because time is up; we need to free the room, I guess, for the next session.
>> ALLISON PETERS: Just to come back to the point that Thomas started with: while this is not a dedicated convention on AI exacerbating disinformation, or on broader information integrity around the globe, it does deal with the use of AI systems by public sector actors, meaning governments, right? To the point that Ambassador Dowling made about addressing the risks of artificial intelligence while safeguarding fundamental freedoms, I think we strike the right balance.
In multilateral systems where governments are perhaps not striking the right balance in debates around disinformation and broader information integrity, having that will be mission critical as we take forward those negotiations. While the convention is not exclusively dealing with issues around information manipulation, I think it's certainly quite critical.
>> MODERATOR SCHNEIDER: We have to stop, but let me reference an online question about the relationship, or how the Council of Europe treaty can be useful to the bigger global UN environment and the provisions of the GDC. We will have more sessions on the GDC. This is a baseline; this convention will need to be, and is already, complemented by other more specific soft and hard law that follows the logic of the convention, for instance the Council of Europe's work on generative AI and freedom of expression or democratic deliberation in a country; this is just one example. Thank you for your attention. We look forward to engaging again with countries that are not yet part of the process; you can come to me or anyone you have seen here. Thank you very much, and enjoy the rest of the day, online or offline.
Can you take a picture of us?
James Bond is there for the time being. He will rise again sooner or later.
Yeah, just pull.