IGF 2025 WS #418 Trustworthy AI in the public sector

    Organizer 1: Government, Western European and Others Group (WEOG)
    Organizer 2: Government, Western European and Others Group (WEOG)
    Organizer 3: Government, Western European and Others Group (WEOG)
    Speaker 1: Heather Broomfield, Government, Western European and Others Group (WEOG)
    Speaker 2: Naomi Lintvedt, Government, Western European and Others Group (WEOG)
    Speaker 3: Anders Løland, Government, Western European and Others Group (WEOG)
    Format
    Classroom
    Duration (minutes): 60
    Format description: We expect this to be an exciting but challenging topic for many participants, and we value interactivity. We would therefore prefer a more intimate setting such as a classroom rather than a larger theatre. However, we believe that discussing this set of themes in small groups alone could be too demanding; a good connection with the professionals is needed. We expect to spend half of the time on opening speeches and the remaining half on more interactive, case-based problem solving.
    Policy Question(s)
    A. Under what conditions can the public sector in each country take a leading role in defining trustworthy AI, and where must it simply adhere to choices made by the leading AI powerhouses? B. What instruments or mechanisms do we need to ensure trustworthy communication in a digital landscape where AI is fully embedded, and how can we ensure interoperability across countries and jurisdictions?
    What will participants gain from attending this session? Participants will see how fundamental AI will be in shaping the future dialogue on the internet. They will gain an understanding of the distinctive requirements that apply to the public sector when it employs AI in its task execution, decision-making and communication, ideally through a comparative lens in which public sector AI use in several countries with different cultural backgrounds is compared and contrasted. Participants may take away ideas on how we might mitigate trust risks when AI is used in communication everywhere, all the time. The intention is to bridge regulatory viewpoints with more technical viewpoints, in order to address a broad spectrum of ideas.
    Description:

    In recent years the public sector, like society at large, has adopted artificial intelligence to improve and streamline its task execution. However, the requirements on the public sector are particularly stringent when it comes to accountability and transparency about how its societal mission is carried out and how various measures are employed. In a not-too-distant future, the boundary between content generated by humans and content generated by artificial intelligence will be completely blurred. Is that a problem? We can choose to look only at the result: when it is at least as good, or better, we can claim that this is perfectly fine. We do not always need to know exactly how something came to be. But in some situations we really do need to understand this, and then the blurring can become a serious problem. This is especially true for a public sector that must be accountable to its citizens for what it does. Ultimately, the questions become: What is real and what is not? How can we trust anything at all? What mechanisms will be required to maintain trust? The Norwegian Tax Administration is exploring public sector use of artificial intelligence, and as a result we also see the challenges this technology brings. We therefore believe this is a very important topic to put on the IGF agenda, for example through a dedicated workshop. It would be interesting to bring in actors from various parts of the public sector in different countries, with different cultures and, not least, different experiences so far with the use of artificial intelligence.
    Expected Outcomes
    New perspectives and advice should be summarised and published in a brief report. In the context of the IGF, this might be shared with the Policy Network on Artificial Intelligence (PNAI). Individual states or regional bodies might review the advice and integrate it into their own efforts. For instance, in Norway, the Norwegian Digitalisation Agency might consider the advice when reviewing and revising its Guide on Responsible Development and Use of Artificial Intelligence in the Public Sector.
    Hybrid Format: We begin with opening speeches. First, we will prepare some brief questions for participants to respond to individually. We will use a digital tool to collect, aggregate and present the answers in real time, probably in the form of a small poll or survey; an interactive tool such as mentimeter.com, or built-in functionality in tools like Teams or Zoom, might be used. This ensures that all participants, whether onsite or online, can provide feedback to the session. Second, we will invite participants to a 10-minute discussion on a question related to the topic: online participants will join a virtual location, while onsite participants form small groups. The final 10 minutes will be used to gather feedback from these discussions and summarise the session.