Facebook Will Soon Start Identifying Terrorists - Zuckerberg
The founder and CEO of Facebook, Mark Zuckerberg, has outlined a plan to use artificial intelligence (AI) software to review content posted on Facebook.
In a letter describing the plan, he said algorithms would eventually be able to spot terrorism, violence and bullying, and also help prevent suicide. But he said it would take years for the necessary algorithms to be developed.
The announcement has been welcomed by an internet safety charity, which had previously been critical of the way the social network had handled posts depicting extreme violence.
In his 5,500-word letter discussing the future of Facebook, Mark Zuckerberg said it was impossible to review the billions of posts and messages that appeared on the platform every day.
"The complexity of the issues we've seen has outstripped our existing processes for governing the community," he said.
He highlighted the removal of videos related to the Black Lives Matter movement and the historical napalm girl photograph from Vietnam as "errors" in the existing process.
Recall that Facebook was criticized in 2014, following reports that one of the killers of Fusilier Lee Rigby had spoken online about murdering a soldier, months before the attack.
"We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening.
"This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content."
"Right now, we're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda."
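Facebook has not published how such a system works, but the general technique behind "telling the difference" between two kinds of text is supervised text classification. The sketch below is a deliberately tiny, hypothetical illustration of that idea (a bag-of-words Naive Bayes classifier with invented training snippets), not Facebook's actual method:

```python
import math
from collections import Counter

# Toy bag-of-words Naive Bayes classifier. This is NOT Facebook's system;
# it only illustrates the general technique (supervised text classification)
# that such moderation tools build on. All training snippets are made up.

def train(examples):
    """examples: list of (text, label) pairs. Returns (word_counts, doc_totals, vocab)."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training documents
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += 1
        vocab.update(words)
    return counts, totals, vocab

def classify(model, text):
    counts, totals, vocab = model
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, wc in counts.items():
        # log prior + Laplace-smoothed log likelihood of each word
        score = math.log(totals[label] / n_docs)
        denom = sum(wc.values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((wc[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

examples = [
    ("officials reported the attack and police responded", "news"),
    ("the minister condemned the bombing in a statement", "news"),
    ("join us brothers take up arms and strike now", "propaganda"),
    ("glory awaits those who strike the unbelievers", "propaganda"),
]
model = train(examples)
print(classify(model, "police reported the bombing"))  # prints "news"
```

Real production systems use far larger training sets and richer models, but the core loop is the same: learn word statistics per category, then score new text against each category.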
Zuckerberg said his ultimate aim was to allow people to post largely whatever they liked, within the law, with algorithms detecting what had been uploaded.
"Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings," he explained.
"For those who don't make a decision, the default will be whatever the majority of people in your region selected, like a referendum.
"It's worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more.
"At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years."
The plan was welcomed by the Family Online Safety Institute (FOSI), a member of Facebook's own safety advisory board. The charity had previously criticized the social network for allowing beheading videos to be seen without any warning on its site.
"This letter further demonstrates that Facebook has been responsive to concerns and is working hard to prevent and respond to abuse and inappropriate material on the platform," said Jennifer Hanley, FOSI's vice president of legal and policy.