Facebook’s moderation mess – is marketing complicit?

Following CEO Mark Zuckerberg’s testimony in front of US Congress, Venessa Paech explains the ethical conundrums associated with Facebook’s moderation procedures. Are marketers complicit?

Zuckerberg survived the first half of his Congressional grilling yesterday, with government representatives pulling no punches in questioning the chief executive on a wide range of the social media giant’s recent public missteps.

The hearing was supposed to be about Facebook’s latest foray into taking over global finance with a new kind of cryptocurrency, but Zuckerberg’s inquisitors took the meeting as an opportunity to bring up other issues too – Facebook’s recent indication that politicians won’t be penalised for lying in their advertising, the legitimacy of Facebook’s claim to value user data privacy and, most timely, the infamous conditions experienced by Facebook’s subcontracted content moderators.

The latter is a topic covered extensively in mainstream media of late. We’ve seen revelations of Facebook paying its content moderators as little as US$30,000 a year to watch the most obscene and disturbing content submitted to the platform. Moderators are also allocated only nine minutes of supervised ‘wellness time’ a day to compartmentalise. As Representative Katie Porter put it when questioning Zuckerberg: “That means nine minutes to cry in a stairwell while somebody watches.”

In fact, this topic was tackled only last week in a piece published by The Conversation, co-authored by Venessa Paech, online communities and AI researcher at the University of Sydney and founder of Australian Community Managers (ACM). The article discusses the gravity of mistreating an estimated 15,000 content moderators, Facebook’s apparent disregard for their wellbeing – Zuckerberg calls some reports “a little overdramatic” – and the inherent internet hierarchy that moderation practices highlight.

Marketing caught up with Paech to dive deeper into the Facebook moderation scandal, the ethical implications it raises for marketers, whether AI will ever totally replace the human moderation function and more.

Is being a marketer also being complicit in an unwinnable moderation race that sacrifices individuals’ mental health for prolonged complacency? 

Venessa Paech: The ethical reflex in marketing and advertising is a welcome turn. If questions aren’t being asked about the real cost and impacts of our work, we’re effectively complicit. Most social media marketing has become trapped in a race to the bottom. Our work began with a refreshing focus on the human, but those efforts have largely vanished, leaving creatives to languish in a routine of passively gaming opaque systems. I’m betting marketers would love a chance to be more strategic and less preoccupied with algorithm chasing. To paraphrase a favourite ’80s movie, WarGames: “the only way to win is not to play”.

For years the smart money has cautioned against renting eyeballs. Any marketing team reliant on Facebook needs to be developing alternatives and exit strategies. Interrogate your dependencies and invest in backups. Facebook isn’t designed for community building and its value as an audience amplifier is under scrutiny. The deletion campaigns aren’t going away, and neither is the political and regulatory heat. Branded communities are poised for a renaissance and niche social platforms are hungry to make an impact. Facebook is a noisy, volatile, unpredictable space, making this an ideal time for innovators to explore more nuanced digital terrain.

How can the social media marketing model change – in its relationship with content – to reduce pressure on moderation and moderators? Is this possible at all? 

Investing in brand moderation is important – to model healthier approaches and shield your business from a legal and reputational perspective. Brands can also creatively address these problems in their content and campaigns, rather than lazily baiting for clicks. However, as we’ve seen, even content designed to inspire noble or constructive conversation can be easily derailed by toxic interactions.

This is the bigger picture – trolling-inspired behaviour is now culturally sanctioned, in part due to the moral deficiencies of platform giants. Every business using Facebook for marketing – especially those profiting from its ecosystem – is contributing to this culture. Are the returns really worth it?

Do you believe Zuckerberg when he says he understands the mental strain experienced by content moderators? Do you think Facebook will actually do anything to improve the way content moderation is performed in light of bad press? 

Trust is earned and Zuckerberg is overdrawn. In the tech sector it’s rare for a founding CEO to still be at the helm 15 years later. While his involvement is probably a significant factor in Facebook’s commercial success, I’d argue it’s also a reason for its monoculture and indifference. It’s worth asking what might happen if another CEO took the helm. 

There’s a lot of debate about what AI will and won’t be capable of in the future. The crystal-ball analysts would have us believe that we’ll all be floating around in WALL-E-esque chairs while the robots take care of the rest; others argue that the human brain is an irreplicable computing machine that AI will never quite match. What are the real limits of AI as we know the problem today? Will AI ever be able to moderate content to the degree that we trust humans to do so now?

Whether it’s labelling data sets for machines to recognise and classify, transcribing and interpreting recordings so our voice assistants can better mimic and intuit voices, or Facebook’s human moderators adjusting and optimising automated moderation systems, humans are everywhere in our AI supply chains.

In moderation, AI is mainly used for pattern recognition. It finds identical pieces of content, or accounts that meet specific criteria, then flags or removes them – copyrighted material, for example. This is ‘Narrow AI’ – doing one thing supremely well. We’re beginning to use computer vision and natural language processing (NLP) to colour between these black-and-white lines (such as ‘do someone’s facial expressions or vocal tonality in a video suggest fear?’).
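
To make the ‘Narrow AI’ point concrete, here is a minimal sketch of exact-match flagging, the simplest form of the pattern recognition Paech describes. The hash list and return labels are assumptions for illustration only; production systems use perceptual hashing (Microsoft’s PhotoDNA, for instance) so that near-duplicates also match.

```python
import hashlib

# Hypothetical fingerprint set for known banned material; a real system
# would use perceptual hashes so near-duplicates also match.
BANNED_HASHES = {
    hashlib.sha256(b"known banned content").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Reduce a piece of content to a fixed-length fingerprint."""
    return hashlib.sha256(content).hexdigest()

def moderate(content: bytes) -> str:
    """Narrow-AI moderation: match against known patterns, nothing more."""
    if fingerprint(content) in BANNED_HASHES:
        return "removed"        # identical to known banned material
    return "human_review"       # anything needing context falls to people

print(moderate(b"known banned content"))   # -> removed
print(moderate(b"something ambiguous"))    # -> human_review
```

The point of the sketch is the fall-through: exact matching scales cheaply, but anything requiring context lands back with a human.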

But current systems are still sloppy at nuance and context. Algorithmic bias is well documented. And there’s a world of murky ethics when we base judgements on outward appearance (AKA digital phrenology). Just ask people of colour. For the foreseeable future, machine moderation needs humans in the loop – programming its frameworks, correcting its misfires and, hopefully, guarding against harms and learning from history.
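
One way to picture what ‘humans in the loop’ can mean operationally is sketched below: a classifier’s confidence score decides which calls are automated, and human rulings on the grey zone are captured as labels for retraining. The thresholds, names and scoring model are hypothetical, for illustration; this is not Facebook’s actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Route items by model confidence; humans keep the uncertain middle.

    The thresholds below are illustrative assumptions, not platform values.
    """
    remove_above: float = 0.98   # auto-remove only near-certain violations
    allow_below: float = 0.05    # auto-allow only clearly benign content
    corrections: list = field(default_factory=list)

    def route(self, item_id: str, violation_score: float) -> str:
        if violation_score >= self.remove_above:
            return "auto_remove"
        if violation_score <= self.allow_below:
            return "auto_allow"
        return "human_review"    # nuance and context stay with people

    def record_human_decision(self, item_id: str, label: str) -> None:
        # Human corrections become labelled data for the next retrain,
        # closing the loop Paech describes.
        self.corrections.append((item_id, label))

queue = ReviewQueue()
print(queue.route("post-123", 0.62))   # -> human_review
queue.record_human_decision("post-123", "allow")
```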

What are we to make of the Facebook moderation mess? We can gawk at it when we see horror stories in the press, but beyond that, what is Facebook’s incentive to clean up its moderation processes? 

Other than bad PR, which scale and influence insulate them against, Facebook has little incentive to meaningfully address challenges around moderation. One option would be to offer improved tools to the professionals managing communities and groups on their platform, and engage these social experts in developing more useful tools for building healthy social culture. It would help mitigate harms (and is a smarter approach to scaling, in my view), but it doesn’t cascade to those most vulnerable – the subcontracted moderation workforce Facebook keeps at a convenient arm’s length.

Ultimately, it boils down to Facebook’s business model – a dangerous mix of data mining and behavioural manipulation for profit. Until social licence carries more currency than cash, there’s no motivation for Facebook, YouTube or other platforms to retool algorithms that optimise for sensationalising, radicalising content that keeps users captive to mine and monetise. Their gestures of reparation are apology theatre until something structural shifts.

Facebook’s moderation mess is a mess we all own. We’ve appointed these platforms as the social and informational infrastructure of our era and they embody the wealth and power inequities we’re grappling with globally. For all their ‘newness’, these are old, old problems. 

The upshot is that the internet is rewritable. There were online communities and social commerce before Facebook, and they’ll endure after. Small social is the emergent disruptor to these creaking giants, and marketers can hone expertise in building and leveraging smaller, contextual networks around creative ideas. Those who crack clever, authentic ways to centre the human will get the most out of AI and the versions of social media we’ve yet to see.

In the meantime, money talks. If your Facebook budget is significant, try letting your account manager know that your business is watching and says ‘do better’. We’ve stopped kidding ourselves that technology is neutral, and the tools we choose tell the world something about us.

Image credit: Yang Jing

Josh Loh

Josh Loh is assistant editor at MarketingMag.com.au
