Can A.I. be an effective tool against disinformation and hate speech? A geopolitical view

Neil Raden
Feb 27, 2022

SUMMARY: A.I. tools to help combat disinformation and hate speech are already in use — but just how effective are they? Is better disinformation tech on the horizon? Will it be strong enough to fight back against state-sponsored disinformation?

Social media sites use technical means to discern hate speech and disinformation, but progress on sifting out hate speech is still lacking. In a recent article, "A.I. still sucks at moderating hate speech," Karen Hao writes:

For all of the recent advances in language AI technology, it still struggles with one of the most basic applications. In a new study, scientists tested four of the best AI systems for detecting hate speech and found that all of them struggled in different ways to distinguish toxic and innocuous sentences.

They developed 29 different tests targeting different aspects of hate speech to more precisely pinpoint exactly where each system fails. This makes it easier to understand how to overcome a system’s weaknesses and is already helping one commercial service improve its AI.

Two commercial services are popular: Two Hat's Sift Ninja and Google Jigsaw's Perspective API. Major news outlets like the New York Times and the Wall Street Journal employ both. Neither is perfect at catching hate speech, so both still require human review.
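
To make the plumbing concrete, here is a minimal sketch of querying Perspective's TOXICITY attribute. The endpoint and response shape follow Jigsaw's published API, but the key is a placeholder and error handling is pared down to the essentials.

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score, from 0.0 (benign) to 1.0 (toxic)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# score = toxicity_score("You are a wonderful person", api_key="YOUR_API_KEY")
```

A site then compares the returned score against a threshold of its own choosing, which is exactly where the tuning problem comes in.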

Perspective tends to be too critical, while Sift Ninja is too forgiving. Human speech is ambiguous, nuanced, and tonal, which makes AI-based hate-speech detection a genuinely hard problem. If an algorithm is too lenient, it lets damaging hate speech through. If it is too strict, it tends to censor the language of marginalized groups who use social media to defend themselves:

"All of a sudden, you would be penalizing those very communities that are most often targeted by hate in the first place," says Paul Röttger, a Ph.D. candidate at the Oxford Internet Institute.

Lucy Vasserman, Jigsaw's lead software engineer, is quoted in the same article:

Perspective overcomes these limitations by relying on human moderators to make the final decision. But this process isn't scalable for larger platforms. Jigsaw is now working on developing a feature that would reprioritize posts and comments based on Perspective's uncertainty: automatically removing content it's sure is hateful and flagging marginal content for human review.
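
As a rough illustration of the triage Vasserman describes, consider the following sketch. The thresholds are hypothetical placeholders, not Jigsaw's actual values.

```python
def triage(score: float,
           remove_above: float = 0.95,
           review_above: float = 0.60) -> str:
    """Route a comment using a model's toxicity score.

    The thresholds are illustrative, not Jigsaw's actual values:
    confident predictions are handled automatically, and the
    uncertain middle band is deferred to human moderators.
    """
    if score >= remove_above:
        return "remove"        # model is confident the content is hateful
    if score >= review_above:
        return "human_review"  # marginal case: flag for a moderator
    return "publish"           # model is confident the content is innocuous
```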

What’s exciting about the new work, she says, is that it provides a fine-grained way to evaluate the state of the art. Jigsaw is now using HateCheck to better understand the differences between its models and where they need to improve.
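
HateCheck's released test suite is, in essence, a table of labeled test cases grouped by functionality, so evaluating a classifier against it amounts to computing per-functionality accuracy. A minimal sketch, assuming the published column names (functionality, test_case, label_gold) and a classify function supplied by the reader:

```python
from collections import defaultdict

def hatecheck_report(rows, classify):
    """Compute accuracy per HateCheck functionality.

    rows: dicts with 'functionality', 'test_case', and 'label_gold'
          fields, as in the released HateCheck test-suite CSV.
    classify: a function mapping text to 'hateful' or 'non-hateful'.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for row in rows:
        func = row["functionality"]
        total[func] += 1
        if classify(row["test_case"]) == row["label_gold"]:
            correct[func] += 1
    # Low accuracy on a functionality pinpoints where a model fails,
    # e.g. reclaimed slurs or negated hate ("I would never hurt...").
    return {func: correct[func] / total[func] for func in sorted(total)}
```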

A.I. might be modestly useful for identifying hate speech, but what about coordinated, state-sponsored disinformation? From a Council on Foreign Relations report, "How China Ramped Up Disinformation Efforts During the Pandemic":

Beijing has increased its manipulation of information and its disinformation efforts around COVID-19 to damage democracies and boost itself, but its strategies have had mixed results. Its primary focus at the moment is highlighting and misrepresenting democracies' failures.

There is a steady flow of disinformation from Iran, Russia, and China: they propagandize that no democratic country succeeded in its fight against COVID-19, while broadcasting the successes of autocratic governments. These claims are false: Iran's COVID-19 experience, for example, has been a disaster, and it is impossible to discern what China's results actually are.

Russia and China have run different disinformation programs. Before COVID-19 hit, China avoided the Russian flavor of conspiratorial disinformation. Possibly to deflect accusations of having introduced the pandemic, China has since adopted more of the Russian style: aggressively blaming others (principally the U.S. military) for the origin of the virus and, more recently, vigorously denouncing the world's democracies, especially the U.S., U.K., and France, as failures while positioning China as a global leader.

Disinformation at home, and targeting Taiwan, is a long-standing tactic, but in recent years Beijing has been reaching out internationally and mimicking the Russian tactics of stirring up division and chaos in other countries, creating conspiracies, and fomenting public doubt about the capability of a country's leadership.

The U.S. State Department's Global Engagement Center monitors disinformation and propaganda and has seen a flood of it from China, Iran, and Russia, focusing on disinformation narratives about the United States and the pandemic. Since the pandemic emerged, Beijing has ramped up its disinformation activities internationally. From the report:

Beijing uses large numbers of fake social media accounts to push its messages. It has increasingly relied on the types of trolls and bots Russia has utilized. Chinese diplomats amplify spin and outright false messages, and prominent Chinese state media outlets push the government’s stories.

U.S. intelligence sources reportedly have found that Chinese intelligence agents, or people linked to them, appear to use text messaging and messaging apps to sow panic in the United States about COVID-19. U.S. officials had not previously noticed Chinese intelligence agents trying to spread disinformation by texting citizens’ mobile phones, a strategy that requires significant knowledge of U.S. infrastructure.

Meanwhile, Google has revealed it caught Chinese hackers trying to gain access to email accounts of the presidential campaign of former U.S. Vice President Joe Biden, possibly to influence the 2020 election.

A recurring theme from China is that the United States has had a disastrous response to the pandemic (initially true). Other stories claimed that healthcare workers in Europe left sick people to die and that then-President Donald J. Trump planned to lock down the entire country, among other falsehoods. And while COVID-19 was a frequent topic, other narratives played up U.S. protests over racial injustice, human rights abuses such as mass incarceration, and the polarization of American society.

The report highlights two further tactics. Disputing the virus's origins: Beijing has disseminated the story that the virus originated outside China, suggesting, without evidence, that it could have come from the U.S. military. Attacking specific countries and leaders: Chinese information and disinformation campaigns have targeted many countries, including Australia, France, and the United Kingdom, with the Chinese Communist Party attempting to tailor its messages to each while still promoting Beijing as a global leader.

Despite its volume and breadth, Chinese disinformation is still more simplistic than Russia's. Its bots are, at this point, less sophisticated and easier to spot than the Russian ones.

However, Beijing's disinformation punches are landing. And as China and Russia increase their cooperation on information and disinformation tools, sharing knowledge through exchanges and in other ways, more dangerous messaging almost surely will follow.

How can this disinformation be detected and confronted? The hope is that further development of disinformation- and hate-sensing content detection is well within the current capabilities of A.I., especially natural language processing and deep learning approaches such as GANs (generative adversarial networks). The problem is that Russia and China have very extensive capabilities in dispensing disinformation, so we are in a Spy vs. Spy situation, with no international protocols to moderate it.
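
For what that looks like in practice, the workhorse today is a fine-tuned transformer classifier rather than a GAN per se. Here is a minimal sketch using the Hugging Face transformers pipeline, where the model name is a hypothetical stand-in for any classifier fine-tuned on a labeled disinformation corpus:

```python
from transformers import pipeline

# Hypothetical model name: substitute any classifier fine-tuned on a
# labeled disinformation or fake-news corpus.
detector = pipeline("text-classification", model="your-org/disinfo-detector")

claims = [
    "Healthcare workers in Europe left sick people to die.",
    "The WHO declared COVID-19 a pandemic in March 2020.",
]
for claim in claims:
    # Each prediction is a dict like {'label': ..., 'score': ...}.
    print(claim, "->", detector(claim)[0])
```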

My take — digital detection of hate speech and disinformation isn’t there yet

Digital detection of hate speech and disinformation may be on the horizon, but it raises First Amendment issues, which China sees as our Achilles' heel. The social media platforms will eventually find a way to attenuate, but not defeat altogether, hate speech, fake news, and disinformation. Part of the problem is that the definition of disinformation is subjective, especially across languages, which can lead to ridiculous outcomes.

Expectations for the U.S. federal government are low. For starters, it is already fractured, and some actors are dependent on the support of China and Russia. In addition, catching bad actors can skirt the line of free speech.

To explore the topic of compromised regimes, "How Does Kompromat Affect Politics? A Model of Transparency Regimes" is worth a read if you don't mind a little game theory; it is worthwhile and readable, with the game theory confined to an appendix.


Neil Raden

Consultant, mathematician, author; focused on analytics, applied AI ethics, and data architecture