Facebook ‘fails to delete shocking racist posts and hate speech’

The social media giant admits making mistakes when posts are flagged for review


FACEBOOK has admitted making wrong decisions when dealing with hate speech and racism on its site.

The social media platform has been accused of failing to enforce its hate-speech policies, with almost half the cases flagged for attention mistakenly ignored by moderators.

Facebook’s contracted content reviewers, or moderators, are expected to check around 8,000 posts each day.

They receive very basic training and are paid around £17.70 an hour.

While the company has admitted posts calling for the deaths of Muslims and ‘Jewish ritual murder’ should have been removed, it defended decisions to allow posts that racially stereotype black people.

The revelations come after ProPublica, a non-profit news organisation, confronted Facebook for answers.

The group demanded explanations relating to decisions on 49 posts flagged by users.

The posts were either mistakenly allowed to remain on the site despite containing hate speech, or were legitimate expression that had been labelled as hate and deleted.

Of the 49 cases, Facebook agreed 22 decisions by its reviewers had been incorrect.

In a further 19 cases it defended the rulings, and in six more it said the content did violate its policies but had either been flagged incorrectly or already been deleted by the posts’ authors.

Facebook declined to comment on the final two cases in question, citing a lack of adequate information.

The social media platform has regularly been accused of inconsistency in moderation, especially on religious hate posts.

For example, the tech giant’s moderators deemed a post saying “the only good Muslim is a f****** dead one” as acceptable, but deleted another saying “death to the Muslims”.

After being contacted by ProPublica, Facebook was forced to admit that both posts were equally offensive and needed to come down.

ProPublica’s report into the inconsistencies also analysed moderating decisions made on 900 different posts.

The report found that Facebook’s 7,500 moderators often made very different calls on very similar content, with many decisions not abiding by the company’s complex guidelines.

The report also found that even when the company’s rules were followed, racist and sexist language often slipped through because it was not deemed sufficiently derogatory or violent to warrant removal.

Examples flagged included Facebook’s refusal to remove an image of a black man with a missing tooth, wearing a Kentucky Fried Chicken bucket on his head.

The image is captioned: “Yeah, we needs to be spending dat money on food stamps wheres we can gets mo water melen an fried chicken.”

According to Facebook, this was permitted by the site because it did not include a specific attack on a protected group.

Facebook has also been accused of ignoring repeated requests from users to delete content flagged as ‘hateful’.

A prime example of this is a complaint made by the Anti-Defamation League and 12 other users in 2012 about a page called ‘Jewish Ritual Murder’.

thesun.co.uk
