
IGF 2020 Pre-Event #30 From Principles to Practice: Artificial Intelligence and the Role of the Private Sector

    Time
    Tuesday, 3rd November, 2020 (16:50 UTC) - Tuesday, 3rd November, 2020 (18:50 UTC)
    Room
    Room 2
    About this Session
    Bringing together international and regional organizations that have developed or are currently in the process of developing AI guidelines, this session will consider how these policy recommendations can be translated into practice, specifically by private sector counterparts.

    The session will also highlight best practices from private sector actors that have developed ethical guidelines for their companies in line with existing frameworks and standards.

    Supporting organizations

    • International Chamber of Commerce (ICC)
    • Council of Europe
    • European Commission
    • European Union Agency for Fundamental Rights (FRA)
    • IEEE
    • International Organization for Standardization (ISO)
    • Organisation for Economic Co-operation and Development (OECD)
    • United Nations Educational, Scientific and Cultural Organization (UNESCO)
    Description

    AI is a general-purpose technology that holds the potential to increase productivity and build cost-effective, impactful solutions across numerous sectors. It is perceived as a great transformer for both developed and developing economies, promising to enhance any decision-making process through the knowledge gained from applying analytics and machine learning to the available data. At the same time, the design, development and deployment of algorithmic systems and tools also poses challenges, often surrounding the role of human agency, transparency, and inclusivity.

    To harness these benefits and mitigate risks, governments, businesses, intergovernmental and multi-stakeholder organizations, and the technical community have developed or are actively considering guidelines, principles and standards along which AI can or should be developed and deployed. An increasing number of policy initiatives spearheaded by (coalitions of) governments or by international and regional organizations could have a direct impact on the developers, deployers and users of these technologies.

    As these initiatives are moving from principles to practice, it is of utmost importance to cooperate with those affected by their implementation and involved in their development.

    When it comes to the application of these guidelines, the private sector is on the front lines, be it in the design, distribution or utilization of AI. Another highly impacted stakeholder group is that of cities and municipalities on their path towards digitalization, especially in crisis situations such as the one we are currently experiencing. Data protection supervisory authorities can also play an important role in ensuring full consistency in the design of the underlying data processing activities, the effective implementation of the guidelines, and the enforcement of the rights they refer to.

    The IGF is the prime venue to gather input from all these stakeholders: those already active in this space, but especially those who have not yet been heard, in particular participants from the Global South and other marginalized groups.

    This session aims to bring together the various international and regional organizations, as well as the technical community, that have recently developed or are currently in the process of developing AI guidelines, to consider how these policy recommendations can be translated into practice, specifically by private sector counterparts. The discussion will bring together representatives of the African Union, the Council of Europe, the European Union, IEEE, ISO, OECD and UNESCO, and business representatives from ICC’s global network. The session will focus on the commonalities and differences of these guidelines and principles and discuss their impact primarily on the private sector, as developers, distributors, and users of these technologies.

    The session will focus specifically on the role of the private sector in the development of these frameworks and guidelines. It will look to highlight best practices of private sector counterparts that have developed ethical guidelines for their companies in line with existing frameworks and standards. It will also look to identify barriers to the adoption of some of these principles and guidelines, as well as potential incentives or motivators that would encourage more widespread adoption and/or help unify some of these recommendations globally.

    The first part of the session will feature presentations of the above-mentioned initiatives by international and regional organizations and technical bodies, and their implications for different stakeholder groups, notably the private sector. The moderator will draw out commonalities and differences as well as lessons learned for future implementation. The second part of the session will focus on highlighting best practices and use cases of private sector counterparts that have 1) been involved in the development of these guidelines and frameworks and 2) developed and implemented company standards in the field of AI based on them. The third part of the session will be an open roundtable discussion on lessons learned with all participants, based on guiding questions and questions from the audience, with a focus on how to reinforce platforms for cooperation between the private sector and standard-setting bodies and processes.

    The session also aims to convene all IGF participants active in the AI space, providing an opportunity for networking and knowledge sharing at the very beginning of the IGF week and a common base to be further discussed in various sessions throughout the event.

    Panelists:

    • Emmanuel Bloch, Thales
    • Joanna Goodey, FRA
    • Jan Kleijssen, Council of Europe
    • Clara Neppel, IEEE
    • Carolyn Nguyen, Microsoft
    • Sophie Peresson, ICC
    • Audrey Plonk, OECD
    • Golestan (Sally) Radwan, Government of Egypt
    • Sasha Rubel, UNESCO
    • Christoph Steck, Telefonica

    Background material

    Please find below an overview of participating organisations' work on issues related to Artificial Intelligence:

    OECD

    In May 2019 the OECD adopted its Principles on Artificial Intelligence, the first international standards agreed by governments for the responsible stewardship of trustworthy AI. The OECD Principles on AI include concrete recommendations for public policy and strategy. The general scope of the Principles ensures they can be applied to AI developments around the world.

    In the OECD report Artificial Intelligence and Society, a chapter on public policy considerations reviews salient policy issues that accompany the diffusion of AI. It supports the value-based OECD Principles on AI and outlines national policies to promote trustworthy AI systems. The OECD.AI Policy Observatory, launched in February 2020, aims to help policymakers implement the AI Principles.

    UNESCO

    UNESCO’s work in the field of artificial intelligence aligns with the Organization’s core functions in the fields of education, sciences, culture, and communication and information, notably 1) laboratory of ideas, 2) clearing house for knowledge, 3) standard setter, 4) catalyst for international cooperation, and 5) capacity development. These activities include:

    • Ensuring global dialogue on AI: Several global and regional conferences have been organized by UNESCO;
    • Undertaking research on emerging trends in AI in the fields of UNESCO’s mandate: Several flagship publications have been launched, including on gender equality and AI in education;
    • Developing and assessing the future of AI application in UNESCO’s fields of competence: Notably, the use of AI in disaster risk reduction, the use of AI in the UNESCO geopark network, AI use to mitigate climate change, AI to promote the preservation of minority and indigenous languages, AI and open access to scientific information, and AI Commons;
    • Setting international norms and standards: UNESCO published the Beijing Consensus on Artificial Intelligence (AI) and Education, offering guidance and recommendations on how best to harness AI technologies for achieving the Education 2030 Agenda. UNESCO is currently elaborating an instrument in the form of a recommendation on the ethics of Artificial Intelligence;
    • Capacity development: UNESCO is currently developing an online platform for AI decision makers. The AI Decision Maker’s Essential is a policy advice toolkit for AI governance to support decision makers with practical advice on translating AI-related principles into practice across UNESCO’s fields of competence. The platform will contain implementation guides, model use cases, capacity building tools, and foresight elements to support AI decision makers across Member States. Building upon UNESCO’s Judges Initiative, UNESCO is also developing a MOOC on AI and the Rule of Law for judicial operators in partnership with IEEE and Cetic.br.

    Please see UNESCO's dedicated website on AI for further information.

    Council of Europe

    The Council of Europe Ad hoc Committee on Artificial Intelligence (CAHAI) was formed to examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.

    The Council of Europe high-level conference “Governing the Game Changer – Impacts of artificial intelligence development on human rights, democracy and the rule of law” was held in February 2019 in Helsinki. Videos from all panels, interviews with high-level officials, the Conference report and the Conference conclusions paper are available here.

    The Council of Europe report on “Responsibility and AI”, produced by the CAHAI, provides a deeper understanding of the impacts of AI development on the exercise of human rights and fundamental freedoms.

    Please see the Council of Europe's dedicated website on AI for further information. 

    EU FRA

    FRA’s project on Artificial Intelligence, Big Data and Fundamental Rights aims to assess the positive and negative fundamental rights implications of new technologies, including AI and big data. In addition to presenting the results of the fieldwork conducted as part of the project, it builds on the findings of a number of papers published during the earlier phases of this project.

    As a separate part of the project, FRA is exploring the feasibility of studying concrete examples of fundamental rights challenges when using algorithms for decision-making through either online experiments or simulation studies.

    Several other FRA publications also address relevant issues.