#Facebook fires human editors, now we all get #FakeNews

In firing human editors, Facebook has lost the fight against fake news

It took only two days for an algorithm to highlight a fake story about Fox News anchor Megyn Kelly. Facebook's influence on news dissemination makes such mistakes arguably irresponsible.
By Olivia Solon, 29 August 2016

https://www.theguardian.com/technology/2016/aug/29/facebook-trending-news-editors-fake-news-stories

Two days after Facebook announced it was replacing the humans that write the Trending Topics descriptions with robots, a fake article about Fox News anchor Megyn Kelly appeared in its list of trending stories.

On Friday, Facebook announced that, in a bid to reduce bias, it would make the Trending feature more automated, and it laid off up to 26 contractors hired to write and edit the short descriptions that accompanied each trend. On Sunday, a story headlined “Breaking: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary” found its way into the list of trending stories – despite the fact that it’s not true.

Facebook hasn’t completely replaced humans with robots. There are still people involved in the process to “confirm that a topic is tied to a current news event in the real world”, says the social network. As the Megyn Kelly episode shows, there are clearly flaws in that process.

The case illustrates how Facebook has lost its battle with fake news.

In January 2015, the social network updated the news feed to “reduce the distribution of posts that people have reported as hoaxes”. The problem is that people are easily fooled by fake news too, and a plethora of tricky-to-distinguish fake news sites have emerged. Facebook’s hoax detection system relies on user-submitted notifications that a link is fishy; if users don’t spot a story is a dud, neither does Facebook.
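To make that structural weakness concrete, here is a minimal toy sketch of a report-driven demotion rule. It is purely illustrative: the class, threshold, and scoring formula are invented for this example and do not describe Facebook's actual implementation. The point it shows is the one above: a demotion that only fires once enough readers flag a link does nothing when readers themselves are fooled.

```python
# Illustrative sketch only: a toy report-driven downranking rule, NOT Facebook's
# actual system. The names, threshold, and scoring formula are assumptions made
# up for this example.

from dataclasses import dataclass


@dataclass
class Post:
    url: str
    base_score: float   # ranking score before any hoax signal
    impressions: int    # how many users have seen the post
    hoax_reports: int   # how many users flagged it as a hoax


def adjusted_score(post: Post, report_rate_threshold: float = 0.01) -> float:
    """Demote a post only once enough viewers have reported it as a hoax.

    The weakness described in the article falls out directly: if readers do
    not recognise the story as fake, hoax_reports stays low, the threshold is
    never crossed, and the post keeps its full ranking score.
    """
    if post.impressions == 0:
        return post.base_score
    report_rate = post.hoax_reports / post.impressions
    if report_rate < report_rate_threshold:
        return post.base_score      # too few reports -> no demotion
    return post.base_score * 0.2    # crude demotion once flagged enough


# Example: a convincing fake story that almost nobody reports is never demoted.
fake_story = Post(url="http://example.com/fake-kelly-story",
                  base_score=0.9, impressions=100_000, hoax_reports=40)
print(adjusted_score(fake_story))   # prints 0.9 - the hoax sails through
```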

This problem becomes more pernicious as it leaks out into the real world. In the past month, there have been two cases of mass panic at airports – at JFK on 14 August and at LAX on 28 August – where false reports of gunmen were whipped up by social media in the absence of official information or instructions.

Compounding the issue is the news that Facebook will soon allow users to trigger the Safety Check feature during emergencies. The feature was launched in October 2014 to allow users to flag to their loved ones that they were safe during major natural disasters. It has since expanded to cover terrorist attacks as well.

“The next thing we need to do is make it so that communities can trigger it themselves when there is some disaster,” said Facebook CEO Mark Zuckerberg, speaking at a town hall meeting in Rome on Monday.

Moving from a top-down disaster alert model to a bottom-up one should, in theory, help Facebook counter some of the criticism it received for being biased towards western nations.

When the company activated the Safety Check tool after the terror attacks in Paris in November, critics argued that it should have activated the tool in places like Lebanon, where terrorists killed twice as many people on the same day.

While it makes sense to try to bring more balance to the Safety Check system, allowing anyone to trigger it themselves could add legitimacy to the kind of chaotic herd behaviour seen at JFK and LAX.

[snip]

About Educational CyberPlayGround, Inc.®

Educational CyberPlayGround, Inc. strives to help teachers, parents, and policy makers learn about music, teaching, the internet, technology, literacy, arts, and linguistics in the K-12 classroom.