
Generative AI is changing the face of global elections. How can Trust & Safety teams adjust to the new reality?

Joao Anastacio
Practice Head, Trust & Safety
Arup Angle
Advisory Board Member

“We now live in a world where the capacity to generate misinformation and disinformation absolutely swamps the capacity of fact checkers.” – Danielle Allen, Professor of Public Policy, HKS; James Bryant Conant University Professor, FAS.

In February, New Hampshire voters received robocalls featuring what sounded like President Joe Biden encouraging them not to vote in the primary election and instead “save” their vote for the November election. But the voice was an AI-generated deepfake.

Leveraging misinformation as a campaign tactic is not new, but AI makes it much easier and faster to create and spread. You might have heard about the recent fake audio clips of British Labour Party leader Keir Starmer, which spread on social media, or how Florida Governor Ron DeSantis used an AI-created deepfake of Donald Trump’s voice in a television ad. In a shockingly memorable incident in 2022, Northern Irish politician Cara Hunter was targeted with an explicit deepfake video that left a “tarnished perception” of her across the country. Similar AI-generated campaigns have surfaced recently in Moldova, Poland, Slovakia, and Bangladesh. Though the New Hampshire robocalls were exposed quickly, fact-checkers say AI-generated audio is especially hard to weed out. When such content is created at scale, the problem will only worsen.

Worldwide Elections: A Major Concern

These incidents are worrying experts in an important election year. Time Magazine is calling 2024 the election year, as more people than ever in history will get the chance to vote. Nearly 100 countries, including 8 of the world’s 10 most populous, are holding elections. About one-quarter of the world’s population, or around 2 billion people, is eligible to vote in one of these elections this year.

Communications policy expert Cayce Myers, from Virginia Tech, notes: “Regulating disinformation in political campaigns presents a multitude of practical and legal issues. Despite these challenges, there is a global recognition that something needs to be done.”

In addition to deepfakes and misinformation, AI can influence elections in other ways. Panic about AI misinformation is eroding trust in the media, electoral processes, and government entities. There’s concern that users are now questioning all content they consume, including authentic content. A 2022 survey revealed that half of American adults trusted information from social media and the news less than the year before.

Embracing the good with the bad

Much of this negativity and fear is to be expected. AI is new technology and there’s a lot of uncertainty around how it will be used. Still, there’s a great deal of promise and a huge opportunity to embrace generative AI to make a positive impact on this election year and beyond. Fact-checkers are already leveraging generative AI to detect false content in real-time and more easily validate images and videos. AI chatbots have the potential to provide voter education resources, like polling locations and procedures, to make voting more accessible. AI-powered tools may also improve data management and reduce human error in election administration. Political parties in India this year leveraged machine learning to target political ads based on voters’ personal data, and political campaigns can now create almost instant responses to new developments.

But what can we do?

Between disinformation, the erosion of trust, and AI-powered campaign strategies, it’s clear that AI is influencing the many elections in 2024. In fact, 79% of American adults think steps should be taken to restrict made-up news, meaning that AI is on the agenda no matter what. Candidates need to have a take on tech and AI to appeal to voter concerns.

Some countries and leaders are already taking action to prepare for the impacts of AI: election officials across the US are working on AI scenario training to mitigate risk. The FCC is considering requiring disclosures for political ads containing AI-generated content, while companies like Google and OpenAI have announced that their AI platforms will simply not answer questions about elections. Other companies, including Microsoft and Perplexity, are prioritizing reputable sources and updating their services to combat false election information. But there’s more we can do.

At Firstsource, we’ve put together 7 ways that Trust & Safety teams, research groups, and other public professionals can fight against AI-powered misinformation.

  1. Authenticate and disclose sources: One key first step for Trust & Safety teams is to create a system for authenticating content before it goes live on a platform, as well as disclosing the sources for any content. For instance, most social media platforms now have AI content tags that help improve transparency for users. Transparency and external communication about authentication policies are important to help users make informed decisions about the information they’re consuming and understand where that information comes from.
  2. Audit robustly: The EU’s Digital Services Act now mandates that large online platforms undergo third-party audits of their algorithms and content moderation practices. Trust & Safety teams in all kinds of organizations could proactively look at deploying third-party audits since they can reveal biases, assess the effectiveness of algorithms and content moderation policies, and help identify historical trends in the types of content already existing on a platform.
  3. Monitor and moderate content proactively: Even with authentication processes in place, it’s important to monitor content, especially content that is spreading quickly or gaining disproportionate attention. As we noted earlier, it’s nearly impossible for fact-checkers today to keep up with the sheer volume of content shared across platforms every second. A number of AI tools can help automatically detect AI-generated content, and AI-supported algorithms can flag content more quickly. Still, AI-generated content can slip through the cracks, and AI tools will sometimes falsely label human-generated content as AI-created. The best content moderation leverages AI and automation to support human fact-checkers and moderators, who make the final decisions; a minimal sketch of this human-in-the-loop routing appears after this list.
  4. Reduce the spread and virality of potentially false content: A challenge that integrity teams sometimes face is whether to remove content that may be false but isn’t yet confirmed. A binary keep-or-remove decision can get bogged down in the decision-making chain, but that’s not the only option. Platforms can apply reach limits that reduce the spread and visibility of posts flagged (whether by human moderators or AI tools) as potential misinformation while a team verifies the content. For instance, Facebook has a program that reduces content distribution when a fact-checker flags a post as false, and YouTube reduces recommendations for “borderline content.” Reach limits can be an effective middle ground in certain scenarios; a second sketch after this list shows how simple such rank-scaling can be.
  5. Address user reports: Trust & Safety teams can increase public trust by prioritizing user reports. Moderation teams should promptly provide feedback to users who report content, whether the report concerns misinformation or other concerning content, demonstrating that reports are taken seriously and encouraging users to flag concerns in the future. Bring transparency to users with clear policies about how user reports are investigated, and increase external communication about the user report process. For instance, some companies share annual briefs describing the enforcement actions they’ve taken in the past year.
  6. Leverage outside resources: When building and refining authentication policies and content moderation systems, platforms should seek feedback from diverse external perspectives. Input from academics, NGOs, and civic organizations can help inform content policies, bring in diverse views, increase public trust in a platform, and reduce blind spots in policy creation. It’s also important to provide greater access to platform data for researchers by growing partnerships with fact-checkers and independent organizations to protect electoral integrity.
  7. Educate users: Platforms must invest in user education and awareness initiatives. Ultimately, there is no way to prevent every piece of misinformation from getting out, so the end user should know how to identify and resist misinformation. Platforms can create resource libraries or on-platform tips that help users understand the risks associated with online interactions and increase digital literacy. Encouraging users to think critically about what they consume can have a broad impact.
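
To make the human-in-the-loop routing from step 3 concrete, here is a minimal Python sketch. The detection model, its score field, and both thresholds are illustrative assumptions, not any specific platform’s or vendor’s system.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    text: str
    ai_score: float  # assumed output of an upstream AI-content detector (0.0 to 1.0)

AUTO_LABEL_THRESHOLD = 0.95    # near-certain detections get an automatic AI-content tag
HUMAN_REVIEW_THRESHOLD = 0.60  # the ambiguous middle band goes to human moderators

def route(item: ContentItem) -> str:
    """Route a scored item: automation handles the clear cases,
    humans make the final call on everything ambiguous."""
    if item.ai_score >= AUTO_LABEL_THRESHOLD:
        return "auto_label"    # tag as likely AI-generated, log for audit
    if item.ai_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queue for a moderator's final decision
    return "no_action"         # low score: leave the content alone

print(route(ContentItem("p1", "example post", 0.72)))  # -> human_review
```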
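
And for the reach limits in step 4, the core mechanic can be as simple as scaling a post’s ranking score while verification is pending. The status values and multipliers below are assumptions for illustration, not Facebook’s or YouTube’s actual systems.

```python
PENDING_REVIEW_FACTOR = 0.2   # assumed: show to roughly 20% of the normal audience
CONFIRMED_FALSE_FACTOR = 0.0  # stop recommending content confirmed as false

def distribution_weight(base_rank: float, flag_status: str) -> float:
    """Scale a post's ranking score instead of making an immediate
    binary keep-or-remove decision."""
    if flag_status == "pending_review":
        return base_rank * PENDING_REVIEW_FACTOR
    if flag_status == "confirmed_false":
        return base_rank * CONFIRMED_FALSE_FACTOR
    return base_rank  # unflagged or cleared content ranks normally

print(distribution_weight(1.0, "pending_review"))  # -> 0.2
```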

Getting Started

Your team doesn’t have to take on all of these action steps alone. Content moderation support is where a company like Firstsource comes in: we bring a combination of human and AI expertise to quickly and accurately support content moderation at scale. In a recent partnership with a political commission, Firstsource was tasked with suppressing harmful content while protecting free speech. With a multilingual team, Firstsource analyzed content in 12 languages, trained a machine learning model on potentially malicious keywords, and built a content moderation process across 5 social media platforms.

The result? Over 1 million pieces of content were effectively analyzed to remove harmful language and imagery, and content validation quality scores of 98% were maintained, increasing customer trust in platform content. We also reduced operating costs by 25% for our client.
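
For readers curious what a keyword-driven flagging model can look like in practice, here is a generic scikit-learn sketch. The training examples, labels, and test post are placeholders, and this is an illustration of the general technique rather than the production system described above.

```python
# Generic illustration: a TF-IDF + logistic regression classifier that
# scores posts for human review. Data and labels below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "polling stations are closed, stay home",    # hypothetical harmful claim
    "here is where to find your polling place",  # hypothetical benign post
]
labels = [1, 0]  # 1 = potentially malicious, 0 = benign

# TF-IDF turns posts into weighted keyword features; the classifier then
# scores new posts, and high scores are routed to human moderators.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

score = model.predict_proba(["polling stations closed until further notice"])[0][1]
print(f"malicious-content score: {score:.2f}")  # route high scores to review
```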

The 2024 Imperative

We know that generative AI is going to have a massive impact on an important election year, in countries across the world, offering challenges and opportunities alike. We also know that the impact of AI won’t stop at election content. Even as AI helps us detect false content more easily, generative AI is poised to change the spread and consumption of misinformation and disinformation forever. Trust & Safety teams must take action now and learn how to use AI to their advantage: sign up for an AI readiness diagnostic from Firstsource today. Our team is at the forefront of innovative solutions for content monitoring, and we’re ready to partner with you so your team can adapt to the new frontier. Get in touch to find out your next steps.

Are you ready to fortify your trust and safety strategies in the age of AI? Connect with us today to learn how we can help you navigate the complexities of AI-driven global elections. Reach out to our team at Joao.Anastacio@firstsource.com
