GFMD Policy & Advocacy Center

Artificial Intelligence: Articles


Why algorithms can be racist and sexist (VOX/Recode)

That’s not to say there aren’t technical efforts to “de-bias” flawed artificial intelligence, but it’s important to keep in mind that the technology won’t be a solution to fundamental challenges of fairness and discrimination. And, as the examples we’ve gone through indicate, there’s no guarantee companies building or using this tech will make sure it’s not discriminatory, especially without a legal mandate to do so. It would seem it’s up to us, collectively, to push the government to rein in the tech and to make sure it helps us more than it might already be harming us.

Regulating social media content: Why AI alone cannot solve the problem (ARTICLE 19)

Over-broad restrictions on freedom of expression arising from the regulation of speech online have to be challenged. And the use of technological tools to deal with complex problems like fake news, hate speech and misinformation falls far short of the standards required to protect freedom of expression.

This is in part because of the way the problems are being defined, and the approach being taken to address vague concepts such as fake news, hate speech and misinformation. They are too broad and susceptible to arbitrary interpretation, and become particularly dangerous when State actors assume responsibility for the way these terms are interpreted. For example, Malaysia’s government introduced a “fake news” bill in March 2018 that sought to criminalise speech that criticises government conduct or expresses political opposition.

A more significant challenge is posed by the way attention has been focussed on technological tools like bots and algorithms to filter content. While useful for rudimentary sentiment analysis and pattern recognition, they alone cannot parse the social intricacies and subjective nature of speech, which are in themselves difficult even for humans to grasp. The nature of the development and deployment of AI tools makes the risk to freedom of expression even greater: the presence of human bias in the design of these systems means we are far away from datasets that reflect the complexity of tone, context, and sentiment of the diverse cultures and subcultures in which they function.
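To make that limitation concrete, here is a purely illustrative sketch of the kind of rudimentary keyword matching that such filtering tools reduce to at their simplest; the word list and sentences are invented for the example and do not describe any real moderation system.

```python
# Illustrative only: naive keyword matching cannot distinguish abusive speech
# from journalism or counter-speech that quotes the same words.
FLAGGED_TERMS = {"hoax", "traitor"}  # hypothetical terms a crude filter might target

def naive_flag(text: str) -> bool:
    words = {w.strip('.,!?"\'').lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

# Both sentences trip the filter, although only the first asserts the claim;
# the second is reporting that quotes and contextualizes it.
print(naive_flag("The election was a hoax and the judges are traitors."))          # True
print(naive_flag('Officials firmly rejected claims that the vote was a "hoax".'))  # True
```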

Privacy and Freedom of Expression in the Age of Artificial Intelligence (ARTICLE 19 & Privacy International)

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;

  2. Examine key ways in which AI impacts the rights to freedom of expression and privacy and outline key challenges;

  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (Berkman Klein Center for Internet and Society)

The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.

OECD Principles on AI

The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence. The OECD AI Principles are the first such principles signed up to by governments. Beyond OECD members, other countries including Argentina, Brazil, Costa Rica, Malta, Peru, Romania and Ukraine have already adhered to the AI Principles, with further adherents welcomed.

The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct.

In June 2019, the G20 adopted human-centred AI Principles that draw from the OECD AI Principles. A June 2021 report, State of implementation of the OECD AI Principles: Insights from national AI policies, presents a conceptual framework, provides findings, identifies good practices, and examines emerging trends in AI policy, particularly on how countries are implementing the five recommendations to policy makers contained in the OECD AI Principles.

How Innovative Newsrooms Are Using Artificial Intelligence (Open Society Foundations / GIJN)

Many large newsrooms and news agencies have, for some time, relegated sports, weather, stock exchange movements and corporate performance stories to computers. Surprisingly, machines can be more rigorous and comprehensive than some reporters. Unlike many journalists who often single-source stories, software can import data from various sources, recognize trends and patterns and, using Natural Language Processing, put those trends into context, constructing sophisticated sentences with adjectives, metaphors and similes. Robots can now convincingly report on crowd emotions in a tight soccer match.

These developments are why many in the journalistic profession fear Artificial Intelligence will leave them without a job. But if, instead of fearing it, journalists embrace AI, it could become the savior of the trade, making it possible for them to better cover the increasingly complex, globalized and information-rich world we live in.

Intelligent machines can turbo-power journalists’ reporting, creativity and ability to engage audiences. Following predictable data patterns and programmed to “learn” variations in these patterns over time, an algorithm can help reporters arrange, sort and produce content at a speed never thought possible. It can systematize data to find a missing link in an investigative story. It can identify trends and spot the outlier among millions of data points that could be the beginnings of a great scoop. For example, a media outlet can nowadays continuously feed public procurement data into an algorithm that cross-references it against companies sharing the same address; perfecting this system could give reporters many clues as to where corruption may be happening in a given country.
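The procurement example above amounts to grouping supplier records by a normalized address and flagging addresses shared by more than one company. Below is a rough, hypothetical sketch of that idea in Python; the procurement.csv file and its supplier and supplier_address columns are invented for illustration and are not taken from any specific newsroom system.

```python
# Hypothetical sketch: flag procurement suppliers that share a registered address.
# Assumes a CSV with at least the columns: supplier, supplier_address.
import csv
from collections import defaultdict

def load_records(path):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def normalize(address):
    # Crude normalization; real pipelines need proper address parsing.
    return " ".join(address.lower().split())

def suppliers_sharing_addresses(records):
    by_address = defaultdict(set)
    for row in records:
        by_address[normalize(row["supplier_address"])].add(row["supplier"])
    # Keep only addresses used by more than one distinct supplier.
    return {addr: names for addr, names in by_address.items() if len(names) > 1}

if __name__ == "__main__":
    for addr, names in suppliers_sharing_addresses(load_records("procurement.csv")).items():
        print(addr, "->", sorted(names))
```

In practice, address matching is far messier than this crude string normalization suggests, and investigative teams typically rely on dedicated entity-resolution tooling.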

Five reasons why now is the time to be thinking about artificial intelligence in your newsroom (Fathm)

  • Reason #1: Artificial intelligence (AI) is actually about data

  • Reason #2: AI can support human-centred thinking

  • Reason #3: AI can make you a better journalist

  • Reason #4: AI informs your overall tech strategy

  • Reason #5: You’re late… but not too late

A Global Tipping Point for Reining In Tech Has Arrived (New York Times)

Never before have so many countries, including China, moved with such vigor at the same time to limit the power of a single industry.

The Data Explosion: Media, Big Data, and the Internet of Things (2016) – Carlos Affonso Souza

Every day we generate more data: our schedules, itineraries, preferences, activities and even our relationships are increasingly quantified. What then is the impact of this explosion of data – potentially available for collection and analysis – on the development of new media and on freedom of expression and the press?

To discuss this challenging issue, CIMA (Center for International Media Assistance) organized a panel at the Global Media Forum, an event sponsored by Deutsche Welle in Bonn from 13 to 15 June 2016. The participants included Sumandro Chattapadhyay from the Centre for Internet and Society (India), Lorena Jaume-Palasi from the European Dialogue on Internet Governance, and Carlos Affonso Souza from the Institute for Technology and Society of Rio de Janeiro. The debate was moderated by Mark Nelson, the senior director of CIMA.
