Facebook moderator sues over 'beheading stress'
A woman who monitored content on Facebook is suing the company, alleging it does not properly protect those who face mental trauma as a result of viewing distressing images.
Moderators are "bombarded" with thousands of images depicting child sexual abuse, torture, bestiality and beheadings, the legal action alleges.
It said the social network was "failing to provide a safe workplace".
Facebook said its moderators had full access to mental health resources.
The legal action has been taken in California by former contract employee Selena Scola.
She worked at Facebook's offices in Menlo Park and Mountain View for nine months from June 2017, under a contract through Pro Unlimited, a Florida-based staffing company, which is also named in the legal action.
According to her lawyers, she developed post-traumatic stress disorder as a result of constant exposure to "highly toxic and extremely disturbing images" at her workplace.
The legal action says there is potential for a class action from "thousands" of current and former moderators in California.
Facebook said in a statement: "We recognise that this work can often be difficult. That is why we take the support of our content moderators incredibly seriously, starting with their training, the benefits they receive, and ensuring that every person reviewing Facebook content is offered psychological support and wellness resources.
"Facebook employees receive these in house and we also require companies that we partner with for content review to provide resources and psychological support, including onsite counselling - available at the location where the plaintiff worked - and other wellness resources like relaxation areas at many of our larger facilities."
The social network has come under fire in recent months over how it handles fake news and hate speech on its platform, and has committed to employing more content moderators.
It currently has 7,500 reviewers, who include both full-time employees and contractors.
It also uses artificial intelligence, and has stated that one of its main priorities is to improve the technology so that the unpleasant job of monitoring disturbing images and videos can be done wholly by machines.