By Sean Goforth

The task sounds straightforward: apply a set of decency rules to social media videos or images. That’s what many content moderators do every day.

Yet, that glosses over the grim reality of how disturbing content moderation can be. “It’s digital…but it’s real life,” said one former moderator during a recent episode of 60 Minutes Australia.

The segment featured interviews with former content moderators who recounted the emotional distress of reviewing flagged videos that opened benignly, only to give way to atrocious acts of violence.

This is only the latest in what’s become a steady stream of negative reportage on reactive content moderation, a growing portion of which is handled by third-party outsourcers.

To cite one of many examples, two years ago the Washington Post chronicled the post-traumatic stress experienced by a content moderator in Manila who supported a number of social networks. “A year after quitting his job,” WaPo noted, “[he] prays every week that the images he saw can be erased from his mind.”

These challenges come as demand for content moderation as a service has surged. Front-line social media customer support ranks in the top quarter of 34 outsourcer attributes most favored by enterprise decision-makers in Australia, France, Italy, Germany and Spain, according to the 2021 Ryan Strategic Advisory Front Office BPO Omnibus Survey.

Yet, outsourcers that deploy AI and other leading technologies show how the burdens of content moderation can be eased. “Because of the development of AI, you can automate out a lot of that content that you don’t want people to see—or have to see,” says Amanda Sternquist, Global Head of the Digital Engagement Practice at HGS.

AI applications can be designed with image detection technology: if an image is judged to be disturbing, or if the content is questionable on other grounds, the system can filter it out before a person ever has to see it, alleviating the constant need for human moderation.
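How such filtering narrows the human queue can be sketched in a few lines of Python. Everything in this example is an assumption for illustration only: score_image stands in for whatever image-detection model or vendor API a provider actually runs, and the confidence thresholds are arbitrary rather than recommended values.

```python
# A minimal sketch of confidence-threshold triage. `score_image` is a stand-in for
# whatever image-detection model or vendor API is actually used; the thresholds are
# illustrative assumptions, not recommendations.

from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_REMOVE = "auto_remove"    # model is confident the image violates policy
    AUTO_APPROVE = "auto_approve"  # model is confident the image is benign
    HUMAN_REVIEW = "human_review"  # uncertain cases still go to a moderator


@dataclass
class TriageResult:
    item_id: str
    violation_score: float  # 0.0 = clearly benign, 1.0 = clearly violating
    route: Route


def score_image(image_bytes: bytes) -> float:
    """Placeholder: a real system would call a trained classifier or vendor API."""
    return 0.5  # neutral score, so this stub always defers to human review


def triage(item_id: str, image_bytes: bytes,
           remove_threshold: float = 0.95,
           approve_threshold: float = 0.05) -> TriageResult:
    """Route an image so moderators only see the cases the model cannot decide."""
    score = score_image(image_bytes)
    if score >= remove_threshold:
        route = Route.AUTO_REMOVE
    elif score <= approve_threshold:
        route = Route.AUTO_APPROVE
    else:
        route = Route.HUMAN_REVIEW
    return TriageResult(item_id, score, route)
```

The design point is simply that only the uncertain middle band of content ever reaches a human queue; everything the model is confident about is handled automatically.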

In this sense, the industry may be suffering a reputational hit for problems that are already being addressed. Many first-hand accounts of the horrors of content moderation emerged in the wake of grisly events recorded on cell phones. Several of these instances occurred more than five years ago, when AI was still an aspirational technology. Now, though, AI applications are being deployed on a wide scale.

“If your goal is customer care, then you particularly staff your team and enable your technology to funnel through the customer care-related content,” said Sternquist, “and use AI to weed out the elements that are not going to be impactful for anyone to see or document.”

Still, not all content that is flagged for moderation can be resolved by AI applications; a degree of human review remains necessary.

To address this, some firms have opted to break up a content moderator’s day so that an eight-hour shift is not filled with reviews of shocking or depressing images. For instance, two-hour rotations on flagged content can be counterbalanced by hours spent on more uplifting workflows; content moderators can spend large portions of their day working on brand management through social media channels.
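Purely as an illustration, the alternating pattern might look like the following sketch; the block lengths and workflow labels are assumptions, not a description of any particular provider’s rota.

```python
# Illustrative shift plan: two-hour blocks of flagged-content review interleaved
# with blocks of benign work. Block lengths and labels are hypothetical.

FLAGGED_REVIEW = "flagged-content review"
UPLIFTING_WORK = "brand management / community engagement"


def build_shift_plan(shift_hours: int = 8, block_hours: int = 2) -> list[str]:
    """Alternate flagged-review blocks with uplifting work across one shift."""
    plan = []
    for block_index in range(shift_hours // block_hours):
        work = FLAGGED_REVIEW if block_index % 2 == 0 else UPLIFTING_WORK
        plan.append(f"{block_hours}h {work}")
    return plan


if __name__ == "__main__":
    # An eight-hour shift becomes: review, uplifting, review, uplifting.
    print("\n".join(build_shift_plan()))
```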

Technology can also be used to sift agents’ workflows, ensuring that exposure to potentially disturbing content is minimized throughout the day.
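One way such sifting could work, again purely as a sketch, is a per-agent exposure cap in the routing layer; the cap value and queue names below are hypothetical.

```python
# Sketch of exposure-capped routing: once an agent reaches a daily cap of flagged
# items, further sensitive work is diverted to another reviewer pool and the agent
# stays on benign queues. The cap and queue names are assumptions for illustration.

from collections import defaultdict


class ExposureCappedRouter:
    def __init__(self, daily_flagged_cap: int = 60):
        self.daily_flagged_cap = daily_flagged_cap
        self.flagged_counts = defaultdict(int)  # agent_id -> flagged items seen today

    def assign(self, agent_id: str, item_is_flagged: bool) -> str:
        """Return the queue an item should land in for the given agent."""
        if not item_is_flagged:
            return "brand_management_queue"
        if self.flagged_counts[agent_id] >= self.daily_flagged_cap:
            # Agent has hit today's cap; send the item to a different reviewer pool.
            return "overflow_review_queue"
        self.flagged_counts[agent_id] += 1
        return "flagged_review_queue"

    def reset_daily_counts(self) -> None:
        """Run at the start of each shift or day."""
        self.flagged_counts.clear()
```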

Also, some outsourcers now offer on-site counseling services to all agents, including content moderators. These counseling services appear likely to become even more widespread as more and more BPOs unveil on-site healthcare facilities, a trend that has become common since the onset of COVID-19.

These practices cannot fully remedy the problems posed by disturbing content posted on social media. But the use of AI, better workflow management, and access to mental health services promise to ease the burden on content moderators, offering them a more balanced and healthier workload. Outsourcers providing content moderation services need to ensure they are equipped with these tools.