How to manage misinformation in online communities

Managing online communities can be a daunting task, particularly in the age of misinformation where falsehoods spread as quickly as fact. Venessa Paech explains how digital marketers and community managers can mitigate risk and proactively cultivate healthy online communities.

Misinformation is hard to escape online. The World Health Organisation (WHO) has dubbed the spread of misinformation about COVID-19 an ‘infodemic’ – and that barely scratches the surface of conspiracy theories or organised disinformation campaigns.

The Digital News Report 2020 revealed that 64 percent of Australians are worried about knowing what’s real or fake online. In line with these trends, the Australians who manage online communities are reporting a striking increase in the misinformation and disinformation seeping into their digital spaces.

Misinformation can lead to conflict and negative experiences, which in turn can have reputational or legal consequences for your organisation. There’s also a moral and social cost. When you invite people to gather online, you have a duty of care over that experience. Research shows that if we don’t step in, the problem only gets worse.

Here are some steps to help you meet the challenge of misinformation and conspiracy theories:

Include it in the rules

All online communities need clear guidance around which behaviours are encouraged or unacceptable. In addition to managing legal risks, these guidelines help shape the culture within the group and establish positive social norms. Warnings against bullying, harassment or hate speech are common, but most online communities don’t yet proactively caution against spreading misinformation. Make it clear that misinformation and disinformation aren’t permitted and will be removed.

Don’t forget to factor in the specific regulatory requirements of your industry. If you’re in health or medicine, for example, there are often tougher consequences for allowing misleading or false information to remain online.

Consistent moderation

Guidelines are one thing – enforcing them consistently is another. Moderate your community with a consistent cadence to show your guidelines aren’t mere window dressing.

Use a combination of automated tools and human oversight to detect and remove content that misinforms, disinforms and may result in harm. For example, you might want to prevent links from being shared in the community, or require that they’re approved first.
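The link-approval approach described above can be sketched as a simple pre-moderation filter. This is an illustrative example only – the domain blocklist and function names are hypothetical, and in practice you’d pair any automation like this with human review.

```python
import re

# Hypothetical blocklist for illustration; in practice, source this from
# a maintained fact-checking or threat-intelligence feed.
FLAGGED_DOMAINS = {"example-misinfo.test", "fake-news.test"}

# Captures the domain portion of an http(s) link.
LINK_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def moderate_post(text):
    """Return 'remove', 'hold', or 'approve' for a community post.

    Posts linking to a known flagged domain are removed outright;
    any other post containing a link is held for human approval.
    """
    for match in LINK_PATTERN.finditer(text):
        domain = match.group(1).lower()
        if domain in FLAGGED_DOMAINS:
            return "remove"
    if LINK_PATTERN.search(text):
        return "hold"  # a human moderator approves it before it appears
    return "approve"
```

Holding all link posts for approval is the more conservative design: a blocklist alone will always lag behind new misinformation sources, so human oversight catches what automation misses.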

Create strategic barriers

If you’re trying to build a community amidst noisy social media platforms, it’s hard to create a safe space. Public pages or groups are targeted by drive-by interactions from users who aren’t members of the community or who may be out to spread harm. Most algorithms propagate content that sensationalises, and this frequently includes posts that feature misinformation or disinformation.

Hosting a private community with strategic barriers to entry (such as an application process) offers you more control over governance and culture. You’ll need to guard against the tendency toward a filter bubble, but you’re in a stronger position to defend against misinformation and other issues.

Talk about it

Many people share misinformation accidentally – they may themselves be victims of a coordinated disinformation campaign – so punitive measures aren’t always appropriate. Sweeping things under the rug rarely works, and nurturing change takes time and conversation.

If you spot misinformation being shared, consider talking about it within the community. Host a conversation about why it poses a risk to everyone and why it’s being removed. You could share links to official resources to help people build their awareness, or host an online event with a relevant expert on how to spot misinformation. Create content and conversation to help participants understand what it is, why it matters and the role we all play in combating it.

Examine the source

If you can, figure out whether you’re dealing with ordinary users spreading misinformation or a coordinated disinformation campaign. Are the people posting new to your community? Does their account seem fake? Are they posting similar things elsewhere? They may be an account created for disinformation, targeting your users or organisation. If so, record the activity in case authorities want to follow up, then delete and remove them permanently from the community.
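The red flags listed above can be combined into a rough triage score. This is an illustrative heuristic, not a proven detection method – the signals, thresholds, and record fields are all assumptions chosen for the sketch.

```python
from datetime import datetime, timezone

def suspicion_score(account, now=None):
    """Score an account record on simple disinformation red flags.

    `account` is a hypothetical dict with keys: created_at (aware
    datetime), has_profile_photo (bool), and duplicate_posts_elsewhere
    (count of near-identical posts seen in other communities).
    Higher scores mean more reason for a human moderator to look closer.
    """
    now = now or datetime.now(timezone.utc)
    score = 0
    if (now - account["created_at"]).days < 7:      # brand-new account
        score += 2
    if not account["has_profile_photo"]:            # looks like a throwaway
        score += 1
    if account["duplicate_posts_elsewhere"] >= 3:   # same content posted widely
        score += 3
    return score
```

A score like this should only ever queue an account for human review (say, at 4 or above), never trigger an automatic ban – legitimate newcomers will trip some of these flags too.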

Invest in community management

Professional online community management is a fast-growing global occupation. Tap the skills of these digital practitioners to strike the balance between focused engagement and a risky mess. They can’t solve the misinformation crisis on their own, but they’re trained to help defuse the problem and get ahead of it by taking the steps outlined here.

Venessa Paech is the senior community consultant at Quiip.
Image from United Nations COVID-19 Response on Unsplash.