Earlier this week, Mark Zuckerberg took a step towards ending corporate content moderation on Facebook and Instagram. He announced the end of the third-party fact-checking programme and a move towards a Community Notes model. This means an end to fact-checking partners and the global third-party consortium that flagged fake videos. It means a rollback of the bots and algorithms that ban content without a human ever looking at it. There’s also a dial-back of filters on issues like gender and immigration, though Meta will continue to automatically block illegal and high-severity violations such as terrorism, fraud and drug-related content.
The model Meta’s platforms are moving to is similar to the one Elon Musk brought in when he bought Twitter/X in 2022. Musk gave amnesty to banned accounts on X and reduced its content moderation oversight, leading to more extremism on the platform, including a proliferation of fake news, misinformation and hate posts. In spite of this, the platform emerged as the most important digital space for right-wing thought and one of the most influential platforms in last year’s US election.
Many see Zuckerberg’s move as a ploy to get into the good books of president-elect Donald Trump, who has long accused social media of banning right-wing content in the name of moderation. It doesn’t help that the move came a day after Meta reshuffled its board, adding more white male billionaires aligned with the Republican party. Meta has also replaced its policy chief with a prominent Republican.
Ending corporate content oversight now might be a strategic political move, but it’s something Zuckerberg has always believed in. “After Trump got elected in 2016, legacy media wrote nonstop about how misinformation was a threat to democracy,” he said in a video posted this week on Meta’s blog. Zuckerberg acknowledged that the complex moderation systems Meta built caused “too many mistakes and too much censorship”, angering users across its platforms. He placed the blame squarely on governments and legacy media: though Meta tried in good faith “to address those concerns without becoming the arbiters of truth”, he said, the effort only led to more and more online censorship, and its human fact-checkers (at least in the US) were biased.
In the months following the 2016 US election, it emerged that Russia had run a powerful social media disinformation campaign on platforms like Facebook to swing the election in Trump’s favour. Coming on top of the Cambridge Analytica controversy, this triggered a backlash, with both the public and lawmakers insisting the platform crack down on the barrage of misinformation and toxic posts.
This led most social media companies, including Facebook, Instagram, YouTube and TikTok, to come up with stringent corporate content moderation policies to appease governments globally and, more importantly, advertisers who didn’t want their ads to land next to a hate post. As a result, it became the norm for digital platforms to build content policies and hire human fact-checkers and moderators in low-cost labour markets like India and Vietnam. Moderation was an expensive endeavour, but it was the only way these platforms could keep operating across countries and keep attracting advertisers.
It’s back to the wild, wild web
Dressed in an oversized black t-shirt and a gold medallion in the video, Zuckerberg looked more like a hip-hopper off the street than one of the richest people on the planet. Here he is, a well-intentioned business guy, announcing that he has no business censoring free speech online. All he wants is to give us a platform that we can use however we want: post whatever content we like and freely decide which posts are potentially misleading or need more context. It is wrong, the message goes, to expect him to play moderator over our interactions.
His message almost makes you forget that the social media you post on is not a neutral platform where all are equal. It’s a business model designed to keep you coming back, keep you posting and keep you consuming content while it colonises and sells your data. Even the term ‘users’, which social media companies apply to the customers on their platforms, is one the industry shares with the drug trade. Multiple studies in the last few years have documented social media addiction and how it damages teenagers’ academic performance, social behaviour and interpersonal relationships.
As I said, the platforms are not neutral. And we cannot expect the companies running them to moderate content on our behalf. Moderation by for-profit companies is, and always was, a lobbying tool to appease governments across the world rather than an effective way to protect the vulnerable. As the Covid-19 pandemic showed us, even with active corporate moderation the platforms were flooded with medical misinformation. The systems simply didn’t work. So perhaps it’s just as well that this era is ending.
My question is: what next? What will replace it? At a time when our digital social spaces are inundated with fake videos, deepfakes and politically divisive content, and it’s getting harder to figure out what’s real, who will moderate these platforms? It’s a mess, frankly, and Musk and Zuckerberg can see it. It’s expensive and troublesome to keep weeding out deepfakes and misinformation. They’ll tell you that local governments should take care of it. As should the users themselves. If the community finds a post offensive, it can flag it; if not, let it proliferate.
In other words, it’s going to be a chaotic ride on your favourite digital platform. Rather than rely on censorship or bans, those of us who use these platforms will have to become our own moderators. As the pandemic showed us, once a virus is set loose, you can’t eliminate it; you have to inoculate against it. Fake, malicious content is the digital virus of social media. We will need to take responsibility for the content we post, the content we see, get influenced by and act on. And if we can’t, maybe we need to stay away.
The Australian government has taken proactive steps to protect its vulnerable from the web. Last November, Australia banned children under 16 from social media, the first country to do so. Will we be able to follow suit?
Shweta Taneja is an author and journalist based in the Bay Area. Her fortnightly column will reflect on how emerging tech and science are reshaping society in Silicon Valley and beyond. Find her online with @shwetawrites. The views expressed are personal.