
How to build trust with effective content moderation

Joao Anastacio
VP, Tech Client Services

On Saturday night, Netflix recommended a new release called ‘Accused’. And so, after the usual toddler bedtime routine, I settled down to watch it.

It’s an almost unbearably tense film in which a young man, wrongly accused of a terrorist attack, is subjected to a social media witch-hunt. It brings to life the dangers of misinformation. And it shows us, with frightening realism, how the flames of misinformation become a wildfire when fanned by social media.

The online world can be a force for good: It keeps us connected and enables us to communicate and collaborate with people we might otherwise never have met. But where there is light, there is shadow. Online platforms are prone to misuse, and all too often become a vehicle for misinformation. The facts the internet so readily places at our fingertips sometimes turn out to be fiction. The content we’re exposed to can be inaccurate or misleading and, at worst, divisive, offensive, and dangerous.

We’re living through an explosion of content, and the more of it there is, the higher the chance that we’ll be exposed to misinformation. A 2021 survey by the Oliver Wyman Forum found that 83% of respondents saw misinformation as a problem, 63% were worried about falling for it, and 32% said they had already been victims of fake news. The problem has only grown since then. Inaccurate and misleading content is eroding our trust in individuals and institutions. Divisive, offensive, and dangerous content is tearing away at the moral fabric of society.

With so much at stake, it’s no wonder that content moderation has become such a hot topic.

Content moderation is essential, but it’s also complex and challenging in both theory and practice. In setting policies around what’s acceptable and what’s not, and then implementing and policing those policies, we need to strike the right balance between censorship and freedom of speech. Too little moderation and people are put at risk. Too much, and we veer into authoritarianism and threaten individual rights.

Where content is clearly illegal, moderation decisions are easier to make objectively. Few would, I hope, disagree with laws that make child sexual abuse imagery illegal. Measures must be in place to prevent it from being uploaded, and anything that slips through must be removed at lightning speed. But in many instances, content moderation is less black and white; it’s often nuanced and subjective. For example, it’s not illegal to express a strong political point of view, however misguided others might believe that view to be. It is, however, illegal to incite violence. At what point does a strong view become incitement? And who decides?

Governments are grappling with these grey areas. The Digital Services Act (DSA), which came into effect across the European Union this summer, requires platforms to detect and remove illegal content and to report on how they are reducing misinformation while protecting freedom of speech. Meanwhile, in the US, the debate around content moderation legislation and First Amendment free-speech rights continues to blaze in the run-up to the elections. On 24 September, the US Supreme Court announced that it would rule on laws passed in Florida and Texas in 2021. These state laws allow users to sue social media platforms for political censorship and restrict platforms from removing content even when it contravenes their terms of business. If the state laws are upheld, the decision will have a huge impact on content moderation across the US.

Within this context, online platforms and digital businesses need to sharpen their content moderation initiatives to ensure that they are compliant today, and able to flex and adapt policies and procedures in line with evolving legislation. And, most importantly, they need to keep their users, communities, and customers safe.

In a world rife with misinformation, people expect the platforms and businesses they use to take responsibility by moderating content. Recent research published in PNAS, the Proceedings of the National Academy of Sciences, revealed that, despite significant differences along political lines, an overwhelming 80% of US citizens thought that taking action against misinformation by removing posts and suspending accounts was better than doing nothing.

Putting users first, and making their safety your priority, earns their trust. And trust is possibly the most valuable currency an organization can have. It encourages people to use you more, buy more from you, and recommend your products and services to others. It fuels growth.

Organizations need clear and consistent rules to govern the safeguarding of their online spaces. And they need to implement those rules efficiently, effectively, and as objectively as possible. Experience shows that the best way to do this is by blending machine capabilities with human talent.

Automated tools, powered by artificial intelligence (AI) and machine learning (ML), enable content moderation at scale and speed. But an automated solution is only as good as the data and approach used to train the machines, so you need to partner with providers who have the appropriate skills and experience. They must be able to develop algorithms and analytics that set clear parameters for automated decisions and minimize the risk of false positives (where acceptable content is unnecessarily taken down) and false negatives (where machines fail to detect unacceptable content). Where content moderation decisions fall into the grey areas, cases must be escalated to a human.
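To make that division of labour concrete, here is a minimal sketch of how threshold-based routing might work. The classifier score, thresholds, and names below are illustrative assumptions rather than a description of any particular production system: clear-cut cases are handled automatically at scale, while grey-area cases are queued for a human moderator.

```python
# A minimal sketch of threshold-based content routing.
# The thresholds and names are illustrative assumptions,
# not a description of any specific moderation platform.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    action: str   # "remove", "approve", or "escalate"
    score: float  # model's confidence that the content violates policy


def route_content(violation_score: float,
                  remove_threshold: float = 0.95,
                  approve_threshold: float = 0.05) -> ModerationResult:
    """Route content based on a model's violation score.

    Very high scores are removed automatically, very low scores are
    approved automatically, and everything in between (the grey area)
    is escalated to a human moderator.
    """
    if violation_score >= remove_threshold:
        return ModerationResult("remove", violation_score)
    if violation_score <= approve_threshold:
        return ModerationResult("approve", violation_score)
    return ModerationResult("escalate", violation_score)


# Example: a borderline post is sent to the human review queue.
print(route_content(0.62))  # ModerationResult(action='escalate', score=0.62)
```

Tightening the thresholds trades false negatives for false positives (and vice versa), which is why the grey area in the middle is reviewed by people rather than decided by the machine.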

While the best AI tools now understand human language, they aren’t able to reason like humans and they can’t properly assess uncertainty. AI is susceptible to bias and can make inappropriate recommendations with confidence. Human judgement remains essential. That’s why, at Firstsource, we always have expert humans in the loop, checking automated recommendations and making those final critical, nuanced, and moral decisions in the way that only humans can.

Our approach is delivering excellent results for clients. By analyzing more than a million pieces of content, in multiple languages, we recently enabled a leading political commission to raise content quality scores to 98%. We believe that progress isn’t just about stopping bad things from happening, it’s about creating the ideal conditions for new, good things to happen too: The commission is using insights generated by our social media behavioral analysis, as well as the 25% we saved them in operational costs, to strengthen their services.

We’re all shareholders in the online world of words and images, and Firstsource is committed to making it a better, safer space.

If you’d like to find out more about our rigorous content moderation services and end-to-end Trust and Safety solutions, please click here or get in touch with me at Joao.Anastacio@firstsource.com.

