Facebook steps up fight against fake news by tightening advertising rules

It hopes to push fake and misleading content off the social media platform.

Facebook announced the changes in the wake of criticism over its handling of fake news (AP Photo/Paul Sakuma, File)

Facebook will tighten its rules regarding who can profit from advertising on its network in a move designed to combat fake news and click-bait.

The company says the implementation of its new “community standards” means publishers and content creators will have to ensure that their posts are authentic and not offensive. Content flagged as fake or misleading could be ruled ineligible to earn money on Facebook.

“Our goal is to support creators and publishers who are enriching our community,” Nick Grudin, vice president of media partnerships, said in a blog post. “Those creators and publishers who are violating our policies regarding intellectual property, authenticity, and user safety, or are engaging in fraudulent business practices, may be ineligible to monetize using our features.”

Under the new rules, posters who repeatedly violate guidelines by sharing click-bait and fake news will lose their ability to monetize. The company will also add 3,000 content reviewers to monitor hate speech.

Facebook and Google alone account for roughly two-fifths of internet advertising, which is forecast to overtake television this year as the biggest market for companies to sell their products.


That rapid growth has prompted serious questions, however — particularly in the wake of the 2016 U.S. presidential election — about how Facebook’s technical layout rewards fake news and click-bait. The more bombastic and shareable a story seems, the more it spreads, the more valuable it is to advertisers, and the more financially rewarding it is to the original posters. This is true regardless of whether or not the story itself is legitimate.

“Facebook’s architecture is optimized for stories that are likely to produce clicks and shares,” a study from the University of Arizona’s Law School reported. “Fake news is likely to cause users to distribute its content, often by confirming biases, which in turn makes it proliferate through Facebook’s news ecosystem.”

In a widely shared example last year, Buzzfeed News published an investigation that found teenagers in a small Macedonian town were spreading fake news in the U.S. in order to earn money through advertising.

“The info in the blogs is bad, false and misleading but the rationale is that ‘if it gets the people to click on it and engage, then use it,’” one university student said at the time. “I start the site for a easy way to make money,” another teenager said.

Two other factors make Facebook an ideal platform for fake news. First, there is the way that stories are interwoven with posts from friends and family in Facebook news feeds, which can lull users into assuming the information is personalized and therefore more reliable. “In the never-ending stream of comfortable, unchallenging, personalized info-tainment there’s little incentive to break off, to triangulate and fact check,” professors Evan Selinger and Brett Frischmann wrote in the Guardian.


And second, the sheer mass of data available through a user’s Facebook “likes” enables firms to micro-target users who are especially susceptible to fake news. While this type of targeting might not be available to Macedonian teenagers, it allows political backers to home in on individuals who are especially vulnerable based on their Facebook preferences, and target them with stories that at best might be bombastic and at worst outright fake.

One example of this sort of micro-targeting is Cambridge Analytica, a technology firm backed by reclusive billionaire Robert Mercer, whose targeting work was seen as key to both the Brexit vote and Donald Trump’s election.

Finally, while this newest campaign might help clamp down on fake news explicitly driven by a desire for Facebook advertising revenue, there is still the wider question of what it will do to combat users who promote false information for fun, or those engaged in a concerted propaganda effort.

The social media giant had previously announced that it had partnered with third-party fact-checkers in an effort to combat fake news, but a study by Yale University found that the fact-checkers’ work had remarkably little impact.