Academic Studies

This resource section has leant heavily on the "Platform companies and news media" section of the reading list curated by the Reuters Institute for the Study of Journalism at the University of Oxford.

The Market of Disinformation (Oxford Internet Institute)

The 2016 US elections continue to cast a long shadow over democratic processes around the world. More than 40 countries are pondering legislative responses (Bradshaw, Neudert, & Howard, 2018). Meanwhile, the tech platforms have made more than 125 announcements describing how, through self-regulation, they will solve the manipulation of their platforms by bad actors (Taylor, Walsh, & Bradshaw, 2018).

Among the more frequently referenced self-regulatory measures are changes to algorithms and the use of artificial intelligence (AI) to demote disinformation and junk news. We ask whether these changes took place and, if so, whether they have had the intended impact of reducing the spread of disinformation on social media platforms. To date, much of the policy debate has focused on paid-for advertising on the platforms, but what about the viral spread of unpaid, organic content? The ‘black box’ nature of today’s most widely used platforms makes it difficult for researchers and journalists to understand how algorithmic changes might be affecting both legitimate political campaigning and disinformation. It is essential that any reform of electoral regulation or oversight in the UK is informed by an understanding of the techniques used in both the paid and the unpaid markets of disinformation.

The digital marketing industry can offer insights, albeit incomplete and heuristic in nature, into the impact of algorithmic changes. Social media marketing and search engine optimization (SEO) – that is, the practice of guessing, testing, and experimenting with algorithms so that content matching particular search terms appears higher in results – are part of a multi-billion-dollar industry built upon understanding how these obscure technical systems rank, order, sort, and prioritize information. By interviewing professionals and reviewing reports from the digital marketing industry, we can gain insight into the impact that algorithmic changes might have had on the distribution of content online. The findings provide an additional evidence base that can inform the Oxford Technology and Elections Commission’s project to identify potential regulatory reform of elections.

The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation (University of Oxford)

Over the past three years, we have monitored the global organization of social media manipulation by governments and political parties. Our 2019 report analyses the trends of computational propaganda and the evolving tools, capacities, strategies, and resources.

The Biology of Disinformation: Memes, Media Viruses, and Cultural Inoculation (Institute for the Future)

In this research effort, the authors focus instead on the broader shift to a memetic landscape—no matter the origins and sponsorship of particular memes—and the impact of memetic activity on the media, social, and political environment. What does the migration from broadcast propaganda to social and memetic propaganda do to the social organism and its resistance to manipulation? Can technological fixes and government regulations adequately address the problem of propaganda in a computational environment, or must we look at ways to promote a more resilient social fabric? In short, which is the most effective approach to restoring the integrity of public discourse in an age of weaponized memetics: better technological protections, or a more resistant social psyche?

State-sponsored Trolling: How Governments are Deploying Disinformation as Part of Broader Digital Harassment Campaigns (Institute for the Future)

In this paper, we examine the emergence of a new phenomenon: state-sponsored trolling. We define this phenomenon as states’ use of targeted online hate and harassment campaigns to intimidate and silence individuals critical of the state. There is evidence that governments around the world, leveraging the surveillance and hacking possibilities afforded by a new era of pervasive technology, are using new digital tactics to persecute perceived opponents at scale. These campaigns can take on the scale and speed of the modern internet, with pinpoint personalization drawn from troves of personal data supplied by cheap surveillance technologies and data brokers.

Regulating disinformation with artificial intelligence (European Parliamentary Research Service)

This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.

How Internet Platforms Are Combating Disinformation and Misinformation in the Age of COVID-19 (Open Technology Institute/New America)

The unprecedented spread of COVID-19 across the globe has sparked a significant new wave of misinformation and disinformation online. In a time when the public must be armed with the most accurate information to combat this pandemic, many internet platforms have developed policies and practices to combat misleading and inaccurate information related to the virus. This report provides an overview of how various internet platforms are individually addressing the rapid spread of COVID-19-related misinformation and disinformation on their services. While this report aims to be comprehensive, it is important to note that platforms’ response efforts to the virus are rapidly changing and expanding, and as a result, this report may not encompass all efforts instituted by these companies. This report also offers recommendations on how these platforms can improve the efficacy of their efforts and also provide greater transparency to their users and the public. Further, the report includes recommendations on how U.S. policymakers can encourage further accountability and support efforts to combat the spread of misinformation and disinformation during this time.

Getting it Right: Strategies for truth-telling in a time of misinformation and polarization (American Press Institute)

The report highlights:

  • Journalists need a new set of skills and strategies to operate in an information ecosystem infused with misinformation, to fend off attacks on their work as biased or “fake” and to reach polarized audiences.

  • There are a number of strategies for reporting on falsehoods without amplifying them. One is the “truth sandwich,” which involves stating a true fact, then the falsehood, then the true fact again. Journalists also must know their “tipping point” — the point at which a story about false information becomes too big to ignore.

  • Journalists can respond to attacks on their credibility by being transparent in their reporting, quickly acknowledging and correcting mistakes, and avoiding a “war footing” with antagonistic public figures.

  • Reaching polarized audiences calls for better listening and creating more opportunities for journalists to get out in the field. It also calls for more complex, nuanced stories that avoid moral outrage phrases or keywords that contribute to groupishness.

“Fake News Influences Real News” – Study finds fact-checkers have little influence on online news media (Boston University)

A recent study by two BU professors sought to measure the extent to which fake news dictates part of the agenda for online media. More research is needed, they say, but their study showed that, taken together, these stories tied up a remarkable amount of news media resources that could have been devoted to other important issues.

Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content (Open Technology Institute/New America)

Internet platforms are increasingly adopting artificial intelligence and machine-learning tools in order to shape the content we see and engage with online. The use of algorithmic decision-making is becoming particularly prevalent in online content moderation, as companies attempt to comply with speech-related legal frameworks while also trying to promote safety, positive user experiences, and free expression on their platforms. This report is the first in a series of four reports that will explore different issues regarding how automated tools are being used by internet platforms to shape the content we see and influence how this content is delivered to us. The reports will focus on content moderation based on a platform’s content policies, the ranking of content on newsfeeds and in search results, the optimization and targeting of advertisement delivery, and content recommendations to users based on their prior content consumption. These reports will also explore how internet platforms, policymakers, and researchers can better promote fairness, accountability, and transparency around these automated tools and decision-making practices.

Digital Planet 2017: How Competitiveness and Trust in Digital Economies Vary Across the World (The Fletcher School at Tufts University)

The DEI 2017 is a data-driven holistic evaluation of the progress of the digital economy across 60 countries, combining more than 100 different indicators across four key drivers: Supply Conditions, Demand Conditions, Institutional Environment, and Innovation and Change. The resulting framework captures both the state and rate of digital evolution and identifies implications for investment, innovation, and policy priorities. DEI 2017 also highlights the evolving nature of the risks being created by our continuing reliance on digital technology. Towards this end, the study covers a key question of “digital trust.” The DEI 2017 incorporates a newly devised analysis of digital trust that takes into account the trustworthiness of the digital environment for each country; the quality of users’ experience; attitudes towards key institutions and organizations; and users’ behavior when they interact with the digital world. This subject is of great interest to all participants in the digital economy, given the concerns about security of essential information, cyber-attacks, and consumers’ apprehensions—about the digital systems and their reliability, the digital companies and their growing dominance, and about the leaders of digital companies.

The DEI framework segments the 60 countries into Stand Outs, Stall Outs, Break Outs, and Watch Outs. Three countries are notable as standouts even within the Stand Out segment: Singapore, New Zealand, and the UAE. Each has a unique policy-led digital strategy and a narrative that may be considered by other nations as worthy of emulation or adoption. The Nordic countries and Switzerland are at the top of the DEI 2017 rankings. China, once again, tops the list of countries in terms of the pace of change in its digital evolution, or momentum.

Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation (Oxford University)

The manipulation of public opinion over social media platforms has emerged as a critical threat to public life. Around the world, a range of government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in the media, public institutions, and science. At a time when news consumption is increasingly digital, artificial intelligence, big data analytics, and “black-box” algorithms are being leveraged to challenge truth and trust: the cornerstones of our democratic society. In 2017, the first Global Cyber Troops inventory shed light on the global organization of social media manipulation by government and political party actors. This 2018 report analyses the new trends of organized media manipulation, and the growing capacities, strategies, and resources that support this phenomenon.
