A new website promotes tech industry accountability and public understanding
By Andy Lee Roth and Kate Horgan
Algorithms are the building blocks for a host of artificial intelligence (AI) systems that are revolutionizing how news is produced, distributed, and consumed.
Earlier this month, for example, Patrick Soon-Shiong, the billionaire owner of the Los Angeles Times, launched Insights, an AI-powered tool that will label articles published by the newspaper as representing a Left, Center Left, Center, Center Right, or Right viewpoint. The tool, which the Los Angeles Times boasted “operates independently” from the paper’s human journalists and is not subject to their review, will be applied to any of the paper’s articles that “offer a point of view on an issue.” Soon-Shiong touted the new AI-driven feature as an “evolution” of the newspaper’s efforts to “engage with our audience” and to help readers “distinguish opinion-driven content from our news reporting.”
Soon-Shiong’s claims about his paper’s new feature are typical of the hype that accompanies the launch of many AI systems. Developers and other interested parties promote their systems as “groundbreaking” or “revolutionary,” frequently making false or unjustified claims about progress while ignoring or dismissing potential limitations and risks. Too often, news reports uncritically convey these claims to the public—as Arvind Narayanan and Sayash Kapoor, the authors of AI Snake Oil, have documented—despite the range of societal threats that AI systems pose, including the devaluing of human labor, degradation of the environment, and undermining of public trust in media.
Consequently, there’s a real need to apply insights from critical media literacy to journalism focused on AI systems and the algorithms that drive them.
Responding to that need, Project Censored launched Algorithmic Literacy for Journalists (ALFJ) earlier this month. Developed with support from a 2024–2025 Reynolds Journalism Institute fellowship and input from journalists, computer scientists, and media literacy educators, ALFJ promotes tech industry accountability and public understanding by providing journalists and newsrooms with essential tools for reporting on the promise, limitations, and risks of algorithmic technology in our everyday lives. The tools and resources featured on the ALFJ website are available to journalists, newsrooms, and the general public at no charge.
Just as journalists and other watchdogs strive to hold government and corporate officials responsible when their decisions affect the public, algorithms—and their human developers—should be held accountable for how they govern the distribution of and access to information. Journalists and internet users alike deserve transparency and accountability when it comes to how AI systems filter—and sometimes blockade—access to the information and perspectives we need to be well-informed and actively engaged. ALFJ provides an open and easy-to-use platform for opposing inappropriate algorithmic gatekeeping and promoting algorithmic accountability.
ALFJ is composed of two core modules. The first, on algorithmic reporting, includes guidance on relevant questions to ask, the importance of featuring sources from outside the industry, and how journalists frame stories about AI tech. These three sections address fundamental concerns that arise again and again in reporting on AI.
A second module, on algorithmic gatekeeping, explains how shadow bans, advertising “blocklists,” and other forms of online content reduction can prevent journalists and newsrooms from reaching wider audiences—especially if they cover topics, such as police violence or LGBTQ+ issues, that social media platforms often flag for violations of “community standards.” ALFJ offers specific, practical recommendations for recognizing these forms of digital gatekeeping, along with guidance on what journalists and newsrooms should do if they suspect their online content has been subject to them.
The ALFJ project lives on a website designed to present these resources, and some of the substantial research behind them, in a clean, colorful, and easily navigated format.
One primary design challenge was to develop a framework that would make complex subject matter clear and easy to use. We used color coding to distinguish the content of the two core modules from each other, and eye-catching design elements such as “bento box grids” to organize and highlight content within each module. Bento boxes, which organize information in grids of eye-pleasing rounded boxes, are now familiar design elements on numerous popular interfaces, including many Apple products and social platforms such as Pinterest.
The ALFJ website incorporates this design trend for two reasons: first, bento grids present content cleanly and consistently; and second, we liked the idea of using a design element originally popularized by tech companies to present information and tools that promote critical thinking and transparent reporting about those companies’ often cryptic use of algorithms.
Like bento box grids, the use of algorithms to evaluate political bias in news stories, as Patrick Soon-Shiong is touting at the Los Angeles Times, appears clean and perhaps even appealing. News slant is one of many problems that technology boosters have proposed to solve with algorithm-powered tools, with promoters claiming that tools like Soon-Shiong’s “bias meter” could make judgments of news bias more objective and fair.
Algorithmic literacy alerts us to flawed assumptions embedded in projects like Soon-Shiong’s, which seek to replace human judgment, flawed as it sometimes can be, with algorithmic determinations.
As advocates of critical media literacy regularly emphasize, news is never a simple reflection of a fixed reality, and interpretation, often glossed as “news judgment,” is intrinsic to the practice of journalism.
When asked, reporters and editors can explain how they developed a story, including, for instance, choices they made about the sources they quoted or the angles they emphasized. By contrast, AI tools cannot account for the processes that lead to their outputs. Furthermore, unlike human reporters and editors, whose profession is founded on ethical principles, AI systems are not designed to seek truth or to ensure that their representations of the world minimize harm.
We developed Algorithmic Literacy for Journalists to encourage more journalists and newsrooms to cover current and emerging AI systems from the standpoint of journalism’s cardinal commitments to transparency and accountability. Doing so will benefit both the public and the profession.