It’s April 25, the final day of the Cambridge Disinformation Summit, and in the Debating Chamber of the world’s oldest known debating society, The Cambridge Union, hands are continuously thrust into the air as dozens of participants, eager to say their piece, hope to catch the eye of an overwhelmed moderator.
The subject of discussion is ‘Disarming disinformation in the media: What works, what doesn’t and why?’, a session organized by Thomson Foundation, who have invited a select group to try to provide answers. There are about 50 people from across media, academia, technology, philanthropy and the entertainment industry. Someone describes themselves as “until recently an employee of the State Department.” And there’s me.
Often, something is said that causes me to scribble furiously into my notebook (the event was held under the Chatham House rule, meaning I can share pithy quotes but cannot attribute them). I am learning about the “tech-lash”, a phrase apparently coined in 2013 (where have I been all this time?!), and about how the AI overviews that now appear as the top result in most search engines discourage people from clicking through to the underlying sources. As a result, one speaker fears, there will soon be no distinguishing real news from “synthetic news”.
I hear robust debate about how the standard media practice for addressing mis- and disinformation is ineffective because it attempts to debunk inaccuracies only after they’ve been shared. “You can’t unring a bell,” one speaker says, before calling for a more proactive approach: “warn people, expose the playbook,” they implore. Others begin to weigh in on the feasibility – and the ethics – of this “prebunking.”
A knowing smirk spreads across my lips when one person who works in a safety role at a large tech company admits that so much of the work of online moderation is “protecting women from the creepy things men say to them.”
As much as I was intellectually stimulated by the conversation and impressed by the calibre of the speakers, I felt my stomach tighten as the session wore on. “There is still a white, Western, at times imperial and certainly patriarchal worldview at the heart of this discussion,” I wrote in my notebook.
It’s not that the speakers were mostly white and from the U.S. and Europe – which they were. There certainly seemed to be a decent gender mix, as far as these things are discernible through casual observation. My discomfort lay elsewhere.
Many of the suggestions for what to do about mis- and disinformation were what I’d call tech- or market-solutionist. In other words, they relied on a seemingly unflinching belief that we just needed to find the right technological solution or the right business model to solve what, to me, were fundamentally human problems. Yes, AI and other technologies have changed what can be falsified and the speed with which fake information, images and even voices can be circulated. But it is people – human beings – who are still at the heart of mis- and disinformation, either generating falsehoods or sharing them. And yes, there are monetary incentives that explain the proliferation of disinformation, but there are age-old reasons too: misogyny, racism, homophobia… People regularly and knowingly share information that confirms their biases, whether or not they believe it to be true.
And those people often hold positions of power. What, then, works to disarm disinformation spread by trusted religious, political or community leaders, especially when their first targets are minoritized groups such as women, LGBTQ+ communities and Majority World populations? Are we satisfied to live in a world where the solutions don’t work for most of us?
After the event, over sandwiches cut into triangles, I shared this gnawing sense that some of the discussants seemed oblivious to, or uninterested in, the ways in which these harms have been perpetrated against groups outside the mainstream. “Russia is messing with Western democracies, sure. But solving for that doesn’t solve everything!” I lamented to Mutale Nkonde, founder of AI for the People.
I assure you, this is not facile “whataboutism”. Rather, it’s about acknowledging that, in the West, racism and sexism are often the gateway drugs to other types of extreme content. That attacks on women and gender minorities are attacks on democracy itself.
Thankfully, Nkonde, who is happy for me to name her here, didn’t think me a heretic, but agreed. If we want to understand what’s going on, we’d do well to look at what’s happening in the margins of our societies, she added. In the recommended reads at the bottom of this newsletter, I have included two articles she shared to make her point, along with other useful resources that explore these ideas.
My experience in Cambridge was yet another reminder of why newsrooms like The Fuller Project, which may be considered niche, are critically important to healthy democracies: if you judge the effectiveness of policy only by its impact on those with the most privilege, you make bad policy decisions. Whether debunking or prebunking, journalism cannot speak truth to power about harms it doesn’t acknowledge and communities it does not see.
Note: An earlier version of this article incorrectly spelled Thomson Foundation.