What are Google and Facebook doing about Fake News?

Fake news

June 27, 2017

Particularly since the end of last year, internet users have been confronted with an increasingly serious issue – fake news: entirely made-up news reports and messages which are presented or passed off as genuine news. Since an ever-growing number of users rely on search engines and social media for their news, internet giants Google and Facebook are coming under pressure to act. So what is the fundamental issue behind “fake news”, and what are Google and Facebook doing about it?

The problem of “fake news”

Why is fake news a problem? News reports play a fundamental role in the formation of political and social opinions in society. People who rely on incorrect information run the danger of drawing the wrong conclusions or making questionable decisions. Of course, responsibility for critically engaging with the news lies ultimately with consumers themselves: it’s up to readers to draw on a range of different sources in order to form a balanced view.
However, modern phenomena such as the “echo chamber”, combined with rapidly decreasing attention spans among readers, mean that this happens less and less often. It’s therefore also the responsibility of distribution platforms like Facebook and Google to implement measures to ensure that they don’t spread half-truths or even outright lies. Both companies are already taking steps in this direction. Let’s start with Facebook.

“Disputed”

It took a while for Facebook to recognize the threat posed by fake news. Fundamentally, founder Mark Zuckerberg and his team were unwilling to enforce any form of censorship in any direction – after all, they merely provide the platform, and it’s up to users to decide how to use it. But as the amount of unreliable and sometimes insulting or defamatory content has increased, there has been a change of heart.

Facebook now works with independent fact checkers in several countries who can apply a “disputed” label to posts containing information which cannot be totally verified. It’s still possible to comment on and share the information but the “disputed” label remains prominent.

Furthermore, Facebook is always looking to improve its newsfeed algorithm in order to give less prominence to questionable content. Here, feedback from the community is particularly important, and Facebook actively encourages user engagement.

But the question remains as to what direct effect such measures actually have on the user. Somebody who stumbles across a random post from an unfamiliar source is likely to think twice when they see the “disputed” label. But what about users who already trust that source? Or what if the “disputed” content corresponds perfectly to a user’s personal opinion? Such users are unlikely to be put off by a label (indeed it may even annoy them and have the opposite effect) or by a less prominent position in their newsfeed.

Despite the challenges which still exist, Facebook is taking steps to combat the issue of fake news without using censorship. But what about Google?

Fact-checks in SERPs

A key location for the distribution of fake news on Google is of course Google News. Last October, the search engine introduced fact-check labels, which highlight articles that fact-check claims on certain subjects. Google is also working with independent, voluntary organizations which check articles for reliability. News providers who publish fact-check articles can have them highlighted using the ClaimReview markup.
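To give a rough idea of how this works in practice, a fact-check article’s ClaimReview markup might look something like the following JSON-LD snippet, typically embedded in a script tag of type "application/ld+json" on the article page. The property names follow the schema.org ClaimReview vocabulary, but the URLs, organizations and the claim itself are invented placeholders for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.com/fact-check/moon-cheese",
  "datePublished": "2017-06-27",
  "claimReviewed": "The moon is made of cheese.",
  "author": {
    "@type": "Organization",
    "name": "Example Fact Checkers"
  },
  "itemReviewed": {
    "@type": "Claim",
    "author": {
      "@type": "Organization",
      "name": "Example News Site"
    }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 1,
    "bestRating": 5,
    "worstRating": 1,
    "alternateName": "False"
  }
}
```

Search engines can read this structured data and display the verdict (here, “False”) alongside the fact-check article in their results.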

In this way, Google attempts a more positive form of differentiation – rather than negatively censoring suspected fake news, they try to positively highlight genuine, trustworthy news. A nice feature, but one that still doesn’t get right to the heart of the issue of fake news. One of the biggest problems remains the presence of articles which present completely false information.

Quality Raters and “Project Owl”

Google is also targeting its organic search results, tweaking its algorithm to ensure that suspected fake news and content-poor sites are automatically pushed down the SERP pecking order. If a page contains harmful content, quality raters can label it “insulting.” This applies to all websites which propagate hate, violence or discrimination on ethnic, religious or sexual grounds. Websites which promote illegal acts should also be flagged.

But this label alone can’t ensure that such pages are forced down the SERPs – it’s then up to Google and its algorithms to take responsibility and learn to recognize such pages. Human quality raters are only helpers.

With “Project Owl,” Google is aiming to improve its feedback mechanisms. Soon, it will be possible to give feedback on Google’s autocomplete function and report unsuitable suggestions, which will then be removed.

Similar to Facebook’s “disputed” label, this measure is particularly helpful when users come across disputed and potentially manipulated information unintentionally or innocently. On the other hand, users who intentionally search for such information will still find what they are looking for – often because such users have already searched for the topic before or have even saved or bookmarked certain sites which they like.

Direct answers and featured snippets

An additional theatre in the war against fake news is Google’s “direct answers.” Here, Google automatically draws content from a website it has deemed trustworthy and displays the information as an answer to a user’s question directly in the SERPs. This has led to some predictably amusing errors, such as religious websites claiming that dinosaurs existed within the last 10,000 years or Barack Obama being labelled the King of the United States.

But Google is working on its algorithms here too in order to improve its automated selection process. Users can also give feedback when a snippet is considered unsuitable or even dangerous. The aim of the game remains to provide a correct answer to every question – and not to give fake news a chance. Generally speaking, Google has had significant success here. The fact that the exceptions mentioned above are so notable is because they are precisely that – amusing, extreme exceptions.

Fight financial incentives

While Google and Facebook employ different approaches to combatting fake news, there is one area in which they are united. Both recognize that fake news websites are driven by financial motives, so both have decided to ban such websites from their advertising networks. Sooner or later, this lack of advertising revenue should see many fake news portals shut down.

Not that this will have any effect on the very biggest portals such as “Breitbart” in the United States. These websites already have millions of users who trust them more than the traditional media. They are no longer reliant on advertising and are financed through donations or paid content – much like the traditional media they aim to supplant. Others have never been interested in monetization, but are run for purely ideological reasons.

Conclusion

The examples discussed in this article show that big online players such as Google and Facebook have recognized the problem of fake news and are taking steps to combat it. But they also demonstrate that there is still work to be done if fake news is to be defeated once and for all.
The most important aspect of this battle is also the most difficult. In the fight against fake news, it’s essential to strike a balance between responsible news distribution and censorship. Go too far and we run the danger of opening a Pandora’s box of problems, whereby all sorts of content is censored simply because a few particularly active, and unaccountable, online protagonists disagree with it. Other avenues must also be explored, the most vital of which is feedback and input from users themselves. Companies like Google and Facebook are dependent on engagement, so get involved and help out.

Author: Michael Schoettler
