AI and Journalism
This brief highlights the significant role of journalism in the AI ecosystem, emphasizing the need for news publishers to assert their value and rights in the face of tech companies that leverage journalistic content for AI training and applications without adequate compensation. It outlines a three-stage model of value creation in AI—model inputs and development, training and improving models, and outputs and applications—where journalism provides crucial, high-quality data. The brief argues for strategic rate-setting, dynamic licensing frameworks, and collective bargaining to ensure fair compensation for publishers.
The article discusses the work of the Center for Journalism and Liberty at the Open Markets Institute in addressing the intersection of artificial intelligence (AI) and monopoly power in the digital age. It highlights the center's efforts to report on how major corporations are positioning themselves to dominate the AI tech stack, potentially controlling technological advancements and undermining high-quality journalism. The center's first report, released in November 2023, provides policy recommendations to counter these monopoly threats and ensure AI development serves the public interest. The recommendations focus on using existing antitrust and competition laws to protect individual liberty, information integrity, and democratic institutions, while also proposing regulatory priorities and potential new legislation.
The National Audit Office (NAO) noted that the Autumn Statement of 2023 highlighted the potential of AI to deliver productivity benefits worth billions in the public sector. Subsequently, in the Spring Budget of 2024, the government announced funding for several AI initiatives as part of its Public Sector Productivity Programme.
The study "Bias Against Women and Girls in Large Language Models" delves into the issue of stereotyping within Large Language Models (LLMs), which are foundational to widely used generative AI platforms such as GPT-3.5 and GPT-2 by OpenAI, as well as Llama 2 by Meta. The research presents clear and compelling evidence of bias against women in the content generated by each of these LLMs.
The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories pledge to work collaboratively on tools to detect and address the online distribution of such AI-generated content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem.
The article examines the challenges of auditing deployed artificial intelligence (AI) systems. Despite their critical role, AI audits remain difficult to execute effectively. Drawing on interviews with 35 AI audit practitioners and an analysis of 390 tools, the article maps the current landscape of AI audit tooling.
The UN General Assembly adopted a landmark resolution on the promotion of “safe, secure and trustworthy” artificial intelligence (AI) systems that will also benefit sustainable development for all.
The project aims to raise awareness among workers and trade unions about the increasing use of AI systems in workplaces. While AI has gained public attention through events such as the launch of ChatGPT and the Hollywood writers' strike, its integration into various industries raises concerns about privacy.
The Data Protection and Digital Information Bill, currently at Committee stage in the House of Lords, is set to undermine vital rights that protect vulnerable consumers and help workers understand how they are monitored by companies and public bodies. The Bill is a threat to fair markets and open public services.
A typology of artificial intelligence data work (March 2024)
This article introduces a new framework for understanding the human labor involved in producing artificial intelligence (AI) systems, focusing in particular on data preparation and model evaluation. Termed 'AI data work,' these forms of labor are shown to be crucial components of the AI production process.
The NTIA's Artificial Intelligence Accountability Policy Report delves into crucial strategies and frameworks aimed at fostering accountability in the development and deployment of AI technologies. This comprehensive report addresses the growing concerns surrounding the ethical implications, risks, and potential harms associated with AI systems.
The Poynter article discusses the ongoing debate among publishers worldwide regarding compensation for licensing their news content to artificial intelligence systems like OpenAI, with particular concern about fair valuation and avoiding past issues with social media platforms. While large companies like OpenAI require quality content, negotiations primarily focus on deals with major outlets in the US and Europe, leaving smaller outlets and those in low-income countries at a disadvantage. Concerns arise over potential oligopolies, lack of transparency, and fears of dependency on AI technology.
This article by Courtney Radsch discusses the European Union's recent passage of the Artificial Intelligence Act (AI Act) and its implications for the ethical, safety, and rights-based standards surrounding AI adoption. While the Act represents progress in regulating AI, it falls short in addressing existing harms caused by AI technologies, such as IP theft and algorithmic decision-making. The article highlights concerns regarding the Act's extended timeline for implementation, particularly in the context of elections and disinformation. However, it acknowledges the Act's potential to positively impact journalism and democracy by mandating transparency and copyright compliance in AI usage.
The GIJN article delves into the various ways journalism outlets worldwide measure the impact of their reporting, highlighting diverse approaches and metrics used. It emphasizes the importance of consistent reporting over time to generate public outcry or regulatory action. Different organizations employ different criteria to gauge impact, with examples including categories such as real-life change, amplification by other outlets, audience engagement, and influence on public debate. Through case studies like Agência Mural de Jornalismo das Periferias in Brazil and The Marshall Project in the US, it explores how these outlets assess their impact on policymakers, advocates, experts, and other media.
Accelerating Progress Toward Trustworthy AI (February 2024)
The report describes Mozilla's ongoing efforts to promote trustworthy artificial intelligence (AI). It provides an update on the progress made since the publication of their 2020 white paper on trustworthy AI, outlining Mozilla's work in four strategic areas and mapping key initiatives both within Mozilla and across the broader AI ecosystem.
Tracing the Roots of China’s AI Regulations (February 2024)
In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.
Involving the public in AI policymaking (February 2024)
"Connected by Data" is a campaign dedicated to empowering communities to actively participate in the governance of data and artificial intelligence (AI). Their mission is to ensure that communities are central to discussions, practices, and policies surrounding data usage.
The Government of the United Kingdom of Great Britain and Northern Ireland, represented by the Department for Science, Innovation and Technology, and the Government of Australia, represented by the Department of Infrastructure, Transport, Regional Development, Communications and the Arts (together, "the participants"), have jointly agreed to establish a wide-ranging, forward-looking memorandum of understanding (MoU) on online safety and security.
In Transparency We Trust? (February 2024)
The article discusses the shortcomings of current human-facing disclosure methods in addressing the challenges presented by AI-generated content. It highlights concerns regarding the reliance on visible labels and audible warnings, which may be easily bypassed by bad actors and fail to prevent or adequately address harm.
This white paper examines the impact of AI-generated disinformation on British elections, society, and national security, offering a detailed framework for UK policymakers that focuses on regulatory measures, technological solutions, and joint governance approaches.