The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ARDA GERKENS: Good afternoon, everybody, here at the open forum, where we will speak about AI regulation and get insights from parliaments. Come closer. Of course, you will have to put your headsets on; that is what the workshop requires. I will give you some seconds to do so.
There is no translation, so it is all English, Channel 1. Thank you so much. I'm happy to say that we have a beautiful panel today: Mr. Axel Voss, Member of the European Parliament; Ms. Amira Saber, Member of the House of Representatives of Egypt; and Mr. Rodrigo Goñi from Uruguay.
Before we start, I would like to point out some of the Inter-Parliamentary Union's work on AI, now and in the future. In October 2024 the IPU adopted a resolution on the impact of AI on democracy, human rights and the rule of law. For any parliamentarians here, or anyone working for a parliament, it is interesting to look at that resolution.
Since February 2025, the IPU has published a monthly tracker that monitors which parliaments are taking action on AI policy: legislation, committee inquiries and so on. All that tracking is done by the IPU, and it currently covers 37 parliaments.
So we want to know if any parliaments here are missing from that list. If your parliament has activities on AI and you want to be on the tracker, reach out to Mr. Andy Richardson, sitting here at the corner in front of me. He will add you to the list.
From 28 to 30 November this year, the Parliament of Malaysia and others will organise an event on shaping the future of AI; details will soon be published, so you can see more about this activity. Of course, we hope many parliamentarians attend. Also here today from UNDP is Ms. Sarah Lister, co-director of governance, peacebuilding and rule of law. She will make the closing remarks, so she won't speak until the end. Not because I don't want her to speak; it is just that she is going to listen and conclude at the end. Okay. Let's move on to the opening remarks.
I would like to ask Mr. Axel Voss, a German lawyer and a politician from the Christian Democratic Union of Germany, a Member of the European Parliament since 2009 and a member of the Committee on Legal Affairs since 2017. You have been focusing your work on digital and legal topics, one of them being the (?) on AI. What is the European Parliament's stance on AI?
>> AXEL VOSS: Thank you for having me. The situation is as follows: we finished the AI Act at the end of 2024, and additional parts enter into force at the beginning of August. There is a discussion going on that these should be postponed; there is a request also from the American side. So that is one thing.
The other thing is that our companies probably need more time to adopt these rules. So there is a discussion going on, focused just on the high-risk AI systems, that this might be postponed. It is not decided yet, but the discussion is there. In the AI Act we are of course concentrating especially on the high-risk systems. These are not forbidden; they are allowed, but we are asking for more requirements for the deployer and the provider of the system.
It is interesting to note what counts as a high-risk AI system. We have different categories. First, biometric identification is considered a kind of high-risk AI system. Then critical infrastructure; education and vocational training; questions of employment, workers' management and self-employment; access to and enjoyment of essential private services and benefits; law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes. All of these are considered high-risk AI areas.
The whole AI Act also delivers some general provisions on AI, but it focuses, more or less, on high-risk systems. We are also trying to simplify life for our businesses by installing so-called regulatory sandboxes.
If I may add what we shouldn't do: it is not yet 100% to my satisfaction, but we should have only one interpretation of the provisions of the AI Act. We didn't manage this with the data protection regulation, but here in the AI Act it is extremely important that across the European single market there is only one interpretation of everything. Otherwise we will be confused, and companies, big and small, will be confused too. This is quite important.
>> ARDA GERKENS: I couldn't agree more. We have seen with the GDPR how much interpretation matters, so it is important that we have clear definitions. I look forward to that.
Ms. Amira Saber, you are a member of the Foreign Relations Committee, a national winner of the 2025 U.K. Alumni Awards in the Action category, and an alumna of the University of Sussex, so quite impressive. You are a policy leader and an advocate for climate action, a very important topic, as well as for AI governance and youth empowerment, while leading on foreign relations and social development.
We have just heard from Mr. Axel Voss what is happening in the European Union, but maybe you can elaborate a little more on what is happening in your country on AI and regulation.
>> AMIRA SABER: Thank you so much. It is a pleasure to be talking on this panel among esteemed colleagues. Actually, in Egypt we already have a strategy for AI, which we relaunched this year, so that sets the boundaries of the national strategy. And I introduced to the Parliament the first draft bill on AI governance, which was very luckily endorsed by other Egyptian MPs.
The question of regulation is a very debatable one across the world. Coming from my background in social democracy, I wanted to ask how the data is classified, and accordingly how to hold the provider, the entity, the government, everyone, accountable according to the sensitivity of the data. Because AI basically functions on data and data providers, the national data becomes a huge asset by itself.
So, for example, if the data of a hospital in a remote part of Egypt is leaked, who should be held accountable for that? That is what I tried to address based on the (?), because I introduced this bill a year and a quarter ago, in March 2024. The E.U. Act is a main bill that we can frame around and discuss.
There was a huge debate between the two big schools, the U.S. and the E.U., on regulation. Because when you regulate, there is somehow a constraint on technological advancement, on investing like crazy in innovative ideas, because regulation might hold you accountable and accordingly you may pay fines. This could be a financial burden. So how do you balance innovation and incentivise the private sector to invest? Because we need AI investments in health care in my country.
It is a big deal; we need it in education and agriculture too. Since many in our audience today are parliamentarians, I am not just talking about the legislative part; I'm talking about the (?) part as well. Here comes the importance of capacitating parliaments to use their tools to ask the different ministers how they use AI technologies in different sectors to advance their work and benefit, as much as we can, the people in the country. This matters a lot.
So the basic thing I'm concerned with is raising the awareness of parliaments in my country and across the region. I am also part of the African Parliamentary Network on Internet Governance, an interesting network which tries all the time to raise knowledge and exchange experiences when it comes to data governance.
Again, this is crucial. We discuss how it works, because what works for Egypt definitely might not work for other countries and vice versa, but there are experiences we can always learn from and develop. Right now there is also another bill in progress. It could be governmental, because in Egypt bills are introduced either by parliamentarians or by the government. To my knowledge there is very big, very fruitful coordination between the (?) and the ministries concerned with justice on how they can release and introduce a draft bill on AI governance. So the question of regulation for me, and my priorities within it, is how we can make it ethical and focus on the sectors that matter most, and how we can incentivise the private sector. My bill has certain incentives to get the private sector interested in investing in certain sectors, on top of which are the ones I have prioritised during my talk now.
So it is a continuous learning process. But, very honestly, after what we have seen in the Middle East and the political consequences of the usage of AI when it is weaponized, because it has been weaponized recently in the different wars in the region, this brings much more attention to how important it is that regulators, policymakers, every decision-maker be very much aware of and capacitated on how this touches the lives of thousands.
There have been lots of reports circulating that the use of AI in the war has raised the number of civilian casualties. The question is political, social, and touches every aspect; AI governance today is one of the things that most affects every aspect of our lives, no matter where we are.
>> ARDA GERKENS: That is a very vivid way of conveying the impact that AI has on our lives, especially in your region. It is very good to hear that you are also giving advice to other countries in your region and beyond. If any parliamentarian would like to speak to Amira and get advice, please make use of her presence.
In the meanwhile, I hope you can hear me well, because my, okay, then it is my headphones not working well. Mr. Rodrigo is a politician from Uruguay who belongs to the Partido Nacional and represents the Department of Montevideo. You have been engaged with AI and democracy and have highlighted the importance of getting parliaments into the debate on AI, which is very important. Can you tell me what is happening on AI regulation in your country?
>> Yes. Uruguay is a very small country in South America, between the big countries Argentina and Brazil. We are open to investment, and focused on promoting investment in AI.
So in Uruguay, in fact in no country in Latin America, has an AI law like the European Act been approved, although in many countries there are hundreds of drafts. Not in Uruguay. In Uruguay we preferred to pass a legal framework, approved by all parties, which self-mandates us to develop regulation with the participation of all sectors, by advice and consensus. I try to avoid (?) for investment, so we just approved a very, very general legal framework, and then we started bringing in all stakeholders, academia, companies and (?) to develop the process very slowly. I prefer to go slow. I prefer to see what happens in Europe, in the U.S., and maybe in a bigger country like Brazil, for example.
>> ARDA GERKENS: Thank you. It is important to highlight the value of investment in AI and the danger of infringing on that investment when you regulate. Maybe, Axel, you can tell us a little about what progress has been achieved, and what risks do you see with AI regulation?
>> AXEL VOSS: We have to be sure the digital world is transparent. This brings us all as legislators under extreme pressure, because the challenge now is to adapt the offline world to the online world, or somehow vice versa. AI is the foundation of everything that is coming next, so it is important to have a kind of frame: how far to go, what might not be part of AI systems, what is high risk, what is low risk and so on. There is also a kind of fear of shifting power from humans to machines, and we need to face this. There is a thin line between only good purposes and bad purposes. On the one side, AI widens our human abilities and our organisational and societal possibilities: the meaning of AI for health, climate change, energy, traffic, administration, security, education, future-oriented research. This is all what we have in mind and go for, and why I would say there are a lot of advantages in using AI. On the other hand, we are facing a lot of risks because of this thin line. There is a risk for democracies: once again, this fear of loss of control. Then we face these arguments about surveillance machines and conspiracy theories. There might no longer be an exchange of views and arguments, which is anti-democratic.
The manipulation of public opinion is also something we face: fake news, disinformation for destabilizing countries. The hurdles to attacking free democracies and freedom are lower. Especially for young people, for the youth generation, I would say it is hard to differentiate what is real and what is not, so they can't trust what they are reading, seeing and hearing. This is a risky situation, so you need to focus on the advantages, what you can gain, while trying to reduce the risks. This is what we are trying to do.
>> ARDA GERKENS: Thank you, Axel. You mentioned that for youth it is hard to know what is real. Maybe I'm not youth anymore, but sometimes it is hard for me too to know what is real and what is not. Do you also recognise that? What risks and challenges do you see?
>> AMIRA SABER: Absolutely. One of the biggest challenges is deep fakes. Not just how they affect the political sphere when it comes to elections, campaigns and political systems, but everyone's lives, especially in closed communities or communities which still have their own strict cultural frameworks. Imagine a girl living in a village with certain cultural norms, and leaked pornographic photographs of her appear. This might threaten her life. There are actually incidents in many countries of girls and women who have suffered or risked their lives because of deep fakes. So this is affecting everyone's life. You can ruin someone's life and career, his or hers, based on deep fake images and deep fake videos. And during the war with Iran, which was just a few days ago, everyone was flooded with a massive amount of media, videos, photos, news. There was a very fine line in verifying whether it was true or not.
So in today's world, what we thought would empower us towards knowledge now raises the question of how much of the feeds we get is true or not. This, again, brings the big, important question of how we verify. It applies to education: how does a professor verify whether certain research is AI-generated or the personal work of the student? What is happening now is that I also see developments on many levels when it comes to model verification and content verification: is it AI-generated or not? This is why I say all the time: educate, educate, capacitate, capacitate, for educators, politicians, everyone, because it is a multistakeholder thing, with all the parties involved in the development of this process. Everyone's life is touched and altered. You can't just keep away from it.
However much you try to stay out of it, you can't. It is embedded in your life. The ethical questions are raised at every international table. We usually hear about different strategies that are now trying to regulate and to have a broad framing, which, consequently, carries legal responsibilities.
If you are dealing with classified data, or data that is high risk, there should immediately be a kind of legal liability attached to it. It is already a case touching everyone's life, which we really need to regulate. In my country and my region, the beautiful thing is that there are thousands, I can say millions, of young people who are very enthusiastic to learn all about AI.
I myself, before developing the draft bill, went through different courses, crash courses and interviews, because you can't do something you don't know.
>> ARDA GERKENS: Very good. Take an example from what she did, very good.
>> AMIRA SABER: You should take a deep dive. I can't pass a day without using at least three AI apps in my daily work. It helps a lot. This is the beautiful face of it, but every coin has another side. That other side is what we as regulators and decision-makers should look at thoroughly: we should maximize the benefits of this technology and look at the risks. In countries like mine and others in stressful economic situations, the question of AI is a question of electricity, a question of infrastructure, a question of capacity-building. So the divide is there. I don't usually like to speak this language, but there is a divide we all should cross, because, if anyone remembers, the SDGs were about leaving no one behind. As humans, we just didn't remember well the lesson of Covid, when we said that no one is safe until everyone is safe. We just forgot about that. This applies to everything: no one is safe until everyone is safe.
We have a responsibility to make it as safe a space as possible, because it is definitely going to be manipulated for different purposes and different reasons, political and otherwise. But the real responsibility here is how to make it as safe as possible. The good news is that countries in my region, like Saudi Arabia, the UAE and Qatar, have huge potential, and there are young people in my country who are eager to learn, eager to get ahead of the curve in a very competitive set of environments, which is really appreciated. I always say that the private sector and the UN agencies have a huge duty to build capacity and bring investors in.
>> ARDA GERKENS: Excellent, I see a question. Hold on. I would first like to pose a question to Mr. Rodrigo; afterwards I will give the floor to you, so please go ahead then. I was asking about progress, challenges and risks.
>> Yes, we have focused on developing a national AI strategy to try to involve all of society in the risks and the challenges, with a strong focus on capacity. As President of the Committee of the Future of the Parliament, I try to raise people's awareness of the risks of the future. So we put the focus on preparation and capacity. I have to recognize that, in many respects, artificial intelligence poses many risks to jobs, many changes. So we have to prepare. There is no magic way to face this risk, just capacity, capacity.
But many people don't know about the risks. So I think it is our duty, as parliaments, to raise awareness, to prepare and to facilitate programmes. Not just for children, not just for schools, but also for workers. So we are focused on this.
>> ARDA GERKENS: Very good. Thank you. The floor is yours. Please state your name.
>> Yes, Josamin Gama (?) from Africa, a member for 40 years. AI has been coming swiftly. In fact, it has greatly increased the digital gap. We are facing a power gap: a gap in competition and in the ability to buy the processing needed to do AI. We are facing a data-access gap. Finally, we are facing a gap in the scientific capability within AI that would then build capacity for others as well. Many countries are working on building capacity, which is good, and putting a strategy in place is good. But let me just ask a few questions for you to think about and answer. One thing is: what is the regulatory body that will implement? In all countries, what we have now are telecommunications regulators, which are no longer capable of handling the regulation of a digital society. Going to AI and putting regulation in place for AI, who is going to implement that regulation? So, alongside building AI regulation, we need to start thinking about the regulator and how we are going to do it, so that people can work in AI, but at the same time limits are put on the misuse of AI.
The second thing is that AI is global, not local, the same as cybersecurity. We are facing a huge challenge in having an international law on cybersecurity, and the same goes for AI. Each country will start having its own regulation, but how are we going to implement it globally? Generally, judging whether the data used is fake or right will be a hard task.
So again, how is international regulation going to handle this? And finally, a lot of countries in the South, in addition to lacking the power, have not yet implemented the needed policy changes. So we first need to cross this point in order to be able to go to the next one. Thank you very much.
>> ARDA GERKENS: Thank you very much. I would like to take one more question, please, yes.
>> Thank you so much. My name is Ali from the Bahrain (?) Council. I believe Bahrain drafted and got approved the first law for regulating the use of AI. We managed to build a framework that strikes a balance, and that was the challenge: between attracting investors and pushing innovation, and regulating the bad side of the use of AI.
My question: I see that neighbours like Dubai and Saudi Arabia have a Minister of AI, and are putting AI into the work of parliament, but I don't see them regulating AI. Are we taking a step forward, getting ahead? Or do we have to slow down in regulating this? That is my question, thank you.
>> ARDA GERKENS: The last question, then we will answer them all together, because they are alike.
>> I will keep it short. My name is (?), co-founder of the Responsible Technology Hub, a youth nonprofit that focuses on bringing youth voices into responsible technology, in the area of policy as well.
My question is for Mr. Axel Voss. Your constituency is my hometown, so I'm really happy to see you. I also had to think a bit when you said young people do not really understand, or cannot distinguish, news that might be disinformation or misinformation, or can't distinguish deep fakes. I would definitely disagree, specifically based on my work with young people. I would even say that young people are way better at actually seeing what is a deep fake and what is not. This shows a little bit the issue we generally face as a young generation: we are always deemed not to know specific things when we are actually good at those things. So my question, specifically for youth: how can we bring the reality of youth to parliamentarians and make sure this is future-proof for the generations that are coming?
>> ARDA GERKENS: Good question. I would like to give Amira the first one, on the regulatory body capable of doing this, also because you are from the region where good points were made. And it is a global problem, right? How do we make sure that legislation in one country has the same effect in another, when maybe we don't all want the same effect? How do you go about this as a parliamentarian?
>> AMIRA SABER: First, on the question from Mr. (?): that was a challenge, but Egypt has a Supreme Council on Artificial Intelligence. It didn't have the authority to actually, may I say, hold the governmental entities accountable. This is what I tried to do in my draft bill: to give it the authority to hold every ministry accountable.
In Egypt it is very intermingled. Because now, as I said, the Minister of Justice and the Minister of (?) are collaborating on another draft bill. And when it comes to every other ministry, whether education, health, whatever, they have a mandate to advance their services with AI technologies. So who could be the body? It should be a body which owns the framework and the strategy and regulates among all the other players; in my case, I see the Supreme Council of Artificial Intelligence in Egypt as a good entity to do that. So it depends a lot on the context, on the institutions, and on the stage of development of those institutions in each and every country.
But in that case, that is how I see it in Egypt. As for the comment and the question of the honourable member: congratulations, it is good news to my ears to hear that another Arab country has walked miles along that way (?).
It is a matter of civility, a matter touching everyone's life, so any parliamentarian on that track is someone I very much appreciate. So allow me to offer a recommendation, which I made yesterday at one of the panels: we need, in this space, a kind of policy tracker for AI. It would be extremely valuable for anyone working at the level of policymaking to know what is happening in every region, every country, what they have in place and in progress. Just as there are climate policy trackers, I wish there were an AI policy tracker; it would be of great benefit for parliamentarians and decision-makers, so that I, and everyone, would know what is happening in other countries.
Should we slow down? This is the question: to regulate or not. But let's at least classify the data. This is what I am most concerned with, because what is not classified cannot easily be governed. We have broader things to think about when it comes to classification and to attaching legal liability based on it. And the other thing is not to slow down at all when it comes to incentivising the ministries, asking them, and doing this job of bringing every ministry up to using AI for the good of the people within its mandate. So it depends on the context, but, again, data classification is at least one crucial thing.
>> ARDA GERKENS: You make a good point on data classification, and it also answers a question that is indeed worrying a lot of legislators: legislation is still needed on data exchange, so that is something that needs to be worked on simultaneously. I would like to put the excellent question to Mr. Axel Voss: how do we bring the reality of youth into legislation? How do we connect them? Maybe you can take another question as well, because some remote questions have been asked online: can AI help in mitigating the social impact of child abuse and gender-related inequalities? A heavy question, but as you are very much into the AI Act, I thought maybe you could say something about it.
>> AXEL VOSS: Thank you. If I can start with my friend from the Rhineland. It is not an insult, but I feel sorrow that you grew up in a world where you can't rely on what you are seeing, hearing or reading. I grew up in a different century; that is why I would say it was easier for me to find trust in some of these elements.
But the question, of course, is that the regulators who are active now probably do not have the knowledge to really understand what is going on. Sometimes I have the feeling that the politicians might be a bigger problem than the technology, because we sometimes hinder some elements. Especially if you are dealing with digital laws, all of a sudden you have a totally different context compared with offline laws, so that is why it is difficult. I hope every parliament has someone dealing with these issues who understands a bit more than the average politician. If they just say, oh, we need to develop something, then probably nobody knows what to do. That is why we need to come forward with this, and bring it also to the others.
The problem is, and I speak for the European Union, that the democratic legislature is too slow for the technological developments. We are always behind. We could face this problem a bit better if we shortened the normal procedure of a democratic legislator, meaning there should be a kind of solution in place after three months. Otherwise we lose track of the problem, and once we are ready with a law, a different problem may already have occurred. So we need to be faster, and we probably should not go into detail all the time. It is more a kind of framework we should have, with ethical aspects, a kind of value-oriented frame. Then everyone knows: if I'm inside the frame, everything is okay; if I'm outside, I will face trouble. This might be a better way forward.
Sometimes I ask myself whether, instead of trying to avoid all these risks, we should ask ourselves what we are expecting from AI. It is probably a more positive approach to say what we are expecting and what we do not want to see. That might help in a better way. So, on data exchange policy, what was the question that was mentioned?
>> ARDA GERKENS: Can it help in mitigating the social impact of child abuse and gender-related inequalities?
>> AXEL VOSS: I would say it can, but you need to have a kind of plan in mind for how to do this. It does not come about by default. You have to concentrate on it, and then it can help everyone in mitigating these problems, I would say. But, of course, you need to be very careful about the conditions and how you formulate them.
>> ARDA GERKENS: So basically what I hear you say is that, as a parliament, you should avoid being too detailed, but make sure you have a framework. A philosopher also comes to mind, Wittgenstein, who said that if you don't know what you are talking about, you shouldn't speak about it. I hear you both say: as a politician, a parliamentarian, put extra effort into understanding AI, because it is so important.
(Applause)
>> ARDA GERKENS: Do you think parliaments, including those following remotely, are shaping their own AI policies, or just looking at what somebody else is proposing? You already said you took the E.U. AI Act as an example.
>> AMIRA SABER: I depended on many sources, because there was little in the legislative space, but my bill definitely takes a deep dive into the E.U. Act.
>> ARDA GERKENS: How about you, Axel? Did the European Union start fresh, or did it follow examples? There were not many examples.
>> AXEL VOSS: We were the first --
(Speaking off microphone)
>> AXEL VOSS: I hope not too detailed, because we are value-orientated, but in general --
(Speaking off microphone)
>> ARDA GERKENS: For people who could not hear: the speaker behind us stated that they have benchmarked their law against one of the European Commission's. They made it much simpler, I think. That is what they were trying --
>> -- as a framework.
>> ARDA GERKENS: As a framework, yes.
>> AXEL VOSS: It is a kind of rule-of-law system in the end, and this was mentioned in the first question: how to implement and how to enforce. And yes, the capacity may not be there. What we are trying now is to build up the so-called AI Office, which also gives guidelines on what the provisions should mean and how we might move forward. But in the end you also need someone controlling and enforcing all of this, and here we come to a problem: you need something that will also hold up in front of a court. If that is not possible, it might be very tricky.
>> ARDA GERKENS: You wanted to add something?
>> AMIRA SABER: One of the main tools of a parliament is assessment of legislative impact. What is still missing, to a great extent, is an assessment of the legislative impact of the AI bills which have already been released. So we are also challenged by the impact of these legislations. For example, with the E.U. Act, can I say with full confidence that it didn't affect the amount of private-sector investment in the fields of high-tech and AI development, and that these companies didn't go to the U.S. instead? That is not assessable yet. These kinds of questions take time to assess. That is why the question of regulation is very challenging when it comes to AI: you need to put regulation in place now, you need to assess its impact, and you need, over time, to amend and edit, amend and edit, according to the advance of the technologies in place. That is why I prefer legal frameworks and regulations that regulate the basic things: data classification, rights, ethics, the broader ones. Because when you dive deeply into details, you get completely doomed, and that is not what the regulator should be doing.
>> ARDA GERKENS: I see a lot of people nodding about amending the legislation all the time. There is a question from the floor. If there is another question, please go to the microphone. First, the lady.
>> Hi, thank you. My name is Meredith, a Tech and Human Rights Researcher with the Business and Human Rights Resource Centre. It is great that we are having this panel about AI regulation, its importance, and better understanding its impacts. Because there are governments, including in this room, who are touting some very problematic narratives about the dangers of clipping the wings of AI's potential and everything that AI can do for the benefit of society, while ignoring the mountains of evidence that we have about the actual harms taking place now that need to be mitigated and dealt with now.
And justice that needs to happen now.
So my question, from your seats and your positionality, is: what is needed, and what hope do you have, in terms of pushing back against this wave and these very problematic narratives, as I mentioned, in order to hold the line and keep pushing forward on all the momentum that has been building about AI regulation for quite some time? Even keeping the E.U. AI Act strong, its AI Office strong, and its enforcement strong, and having a really well‑tuned regulatory approach that can help continue to set more standards moving forward. Thank you.
>> ARDA GERKENS: Thank you, another question from behind, Your Honourable ‑‑
>> Actually, not a question but a comment on amending the law. I believe the most we can do is to simplify the law and leave the details to another body that is more flexible. Because changing anything in the law is going to take a long time, and that cannot cope with the fast development of AI, so this is the only ‑‑
>> ARDA GERKENS: Very good advice: keep the framework in the law and make sure you have lower‑level regulations so you are able to be flexible. I'm going to take the last question. Question, please.
>> (?) I'm very happy to catch you, especially on this topic, the exploitation of children. It is slow to implement, and on artificial intelligence I have just published a new article ‑‑ for political and strategic studies ‑‑ about this issue: how young people are recruited into extremism, radicalisation and terrorism through artificial intelligence. It was surprising, because when I started to study and research this topic, I found that they are recruiting children through games. They recruit children through electronic games on the Internet. It is available. So I concluded this study with some recommendations. One of the recommendations: look after your children. Don't leave them alone with their screens. Don't do it. They isolate them and promote their own ideas and approaches, and then we find some young people doing horrible things. So one of the recommendations is that awareness is very important. And legislation, of course. Thank you, bye‑bye.
>> ARDA GERKENS: Thank you. This is really the last question I'm going to take and ask you very brief answers because Sarah will then wrap up.
>> Yes, I'm Anuora(?) from Bahrain. I would like to thank you and ‑‑ all of you. Making regulation is very easy, but at the same time difficult, especially in AI, because in AI we are working in an open space. Every time we do something, of course, we are moving and expanding. And then, as was said, in real time we are asking ourselves: can we say yes to something as a main framework? We are looking for flexibility. We are not looking for rigid regulation, especially with AI. (?) A main framework to minimise the risks around AI ethics, thank you.
>> ARDA GERKENS: Okay, very short answer from Amira. You are good? Axel, maybe you want to comment.
>> AXEL VOSS: Yes, so thanks for the recommendations. It is good to hear. To Meredith: yes, keeping the AI Act strong, this is what we are trying to do, but we are in a kind of trap. We see that AI and AI systems and generative AI are creating a lot of wealth, and we are lagging behind everywhere except two regions. That is why we try to support this, but keep strong limits and frameworks.
>> ARDA GERKENS: Thank you very much. Thank you all. I would love to give the floor to Ms. Sarah Lister, who will give us her closing remarks.
>> SARAH LISTER: Thank you very much. As we conclude this open forum on AI regulation, I would like to start by thanking, first of all, all the panelists, participants and moderators, in person and online, for all your insights, your questions and your commitment to inclusive, rights‑based governance of artificial intelligence. As a personal reflection, I'm delighted there is such a well‑attended session focusing on the role of Parliaments. In my experience of digitalisation discussions, national governance authorities and processes have been forgotten, or have been brought too late into the process.
So today's discussion has highlighted the practical role of Parliaments in ensuring that AI systems are aligned with human rights and normative principles, and in ensuring no one is left behind, as the colleague from Egypt has said.
We have heard that Parliaments are on the front lines of some of the most pressing policy issues of our time: how to protect citizens and all people while enabling innovation. That has been a core theme running through our discussion this afternoon. It was raised by the colleagues from Egypt and Uruguay and from the floor.
Then: how do we ensure oversight in a rapidly evolving technological landscape? What type of regulatory entities do we need? How do we create governance frameworks that are grounded in human rights?
We have heard that to ensure the benefits of AI reach everyone, we must invest in the development, governance and support of responsible and ethical AI, as well as in countries' capacities to build safe, secure and trustworthy AI systems.
Parliaments are key in governing the use of global technologies and ensuring that these truly serve the public interest and support the achievement of the SDGs. We have heard that no single actor can shape the governance of AI alone. Effective regulation demands a multistakeholder approach, bringing together Parliaments, executive branches, Civil Society, the private sector and technical communities. Uruguay talked about the consensus‑based approach that takes place there. We heard from the floor, and then in response, about the importance of bringing youth voices into the discussions, and we have asked the question: how can we effectively ensure a multistakeholder approach to this issue, one that includes young people?
The high level ‑‑ the report of the Secretary‑General's High‑Level Advisory Body on AI, and the Global Digital Compact signed on the margins last year, note the need to address existing gaps in representation, coordination and implementation around AI. Parliaments are key actors in ensuring that.
At UNDP we see AI not only as a technological issue but as a governance one. We support countries, with partners, in navigating the governance of digitalisation and digitalisation for governance. And we support countries in developing their potential to transform their public services, build a more open and inclusive public sphere and enhance democratic public institutions. We co‑host an expert group on Parliaments in digital policy, to try to bring together the elements people have asked for in terms of sharing experiences. And other international organisations have joined that expert group to help ensure we pull together.
I see my time has passed, so I will say, once again, thank you very much for being part of this timely and important conversation. A special thanks to the Inter‑Parliamentary Union for their partnership in making this event possible, and to the IGF for hosting us. Thank you very much.
(Applause)