Facebook Is Expanding Its Content Moderation Team in an Effort to Eliminate Criminal Content
Facebook’s mission to connect the world is a commendable one, one which should, theoretically, provide us with more perspective, more understanding of ourselves and other cultures, more opportunity to connect with like-minded people.
But that goal has side effects. In providing people with the means for universal connection, and the ability to broadcast themselves and their thoughts at any given moment, the platform also opens a door. For most users it's a valuable tool, but for some it's an avenue to spread hate, to share images of violence and crime, and to exploit others in various ways.
The arrival of Facebook Live has only exacerbated this. In recent weeks, we've seen reports of people live-streaming their own suicides, and even broadcasting murders, including the killing of family members, in real time for all to see.
Such content has always existed in the dark recesses of the internet, but Facebook's reach, and the capacity of Facebook Live in particular, has brought it to the fore. Facebook CEO Mark Zuckerberg has expressed his own horror at such events, as noted in a recent interview with BuzzFeed:
"Working on this issue seems personal for the Facebook CEO, who became upset discussing a recent incident on the platform. 'A few weeks ago, a girl livestreamed killing herself,' he said. 'It's hard to be running this company and feel like, okay, well, we didn't do anything because no one reported it to us.'"
But Facebook can't police everything, and the nature of real-time broadcasting makes such content impossible to stop wholesale. So what can Facebook do?
This week, Zuckerberg announced that Facebook will take measures to address such activity, expanding its content moderation team from 4,500 to 7,500 people over the next year.
This is in addition to Facebook's previously announced artificial intelligence measures, which are being trained to detect concerning patterns and behaviors in order to enable earlier, preventative intervention.
It's an important initiative for Facebook. As the platform continues to expand, so too do the opportunities for people to misuse it in these ways.
But there's also a concern about how such efforts affect those tasked with stamping out this activity.
Back in 2013, a former Facebook moderation team leader shared some of her experiences working with the people who sift through the worst content uploaded to Facebook, including child pornography, domestic violence and other unspeakable material that no one should ever have to see. But these people do have to see it: for eight hours a day, they're tasked with sorting through all of it and removing it from the site.
As noted by Joy Lynskey:
“It’s fair to say that some of the people who work around me do not fare so well. Often they end up suffering from the endless barrage of horror they witness.”
With Facebook looking to expand this team, there have to be serious concerns about the well-being of these individuals. That's not to say there are necessarily other options on the table, or a solution that would avoid such impacts, but it's sobering to consider the content these 7,500 people will witness. In this instance, the development of advanced machine learning can't come fast enough.
That said, it's crucial that Facebook does all it can to eliminate such content and make the platform a safe place.
Hopefully, the introduction of more moderators will help curb misuse of Facebook Live in particular as usage of the option increases.
This post was originally published on Social Media Today