Facebook, the world’s largest social network, has long shown concern about the spread of fake news, especially during election periods. Now, according to the latest reports, the company is rating users’ trustworthiness when they report fake news.
Facebook Is Rating Users’ Trustworthiness When They Report Fake News
Facebook has changed some of its policies in order to prevent false news from inciting hatred on the social network. The new rules from founder Mark Zuckerberg will take into account not only text posts but also posts that rely solely on photos.
The change has no direct link to any political or presidential election, but it comes at a rather troubled time. As The New York Times reported, the new rules respond to what happened in Sri Lanka, where posts on Facebook incited violence against Muslims.
In March of this year, the country even blocked access to social networks to try to reverse the situation, which had already resulted in two deaths and left at least eight people injured.
As a further way to curb the spread of false news, Facebook will score users based on how often the material they report as false is actually confirmed to be false.
Users will be evaluated on a scale of 0 to 1, depending on the reliability of the content they flag as fake news. Instead of displaying each person’s score publicly, the goal is to make each reporter’s reliability clear to the fact-checking agencies Facebook works with, and to internally identify people who flag content as fake news when it is not actually false.
Facebook is taking this step mainly to lighten the workload of those agencies: they will know in advance when a report comes from a person who flags everything contrary to his or her own views, even content that is consistently confirmed as true. Based on these reputation scores, agencies can give more or less attention to items on the list of complaints.
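The prioritization described above could be sketched roughly as follows. This is purely illustrative: Facebook has not disclosed its actual scoring algorithm, and every name, structure, and number below is an assumption, not the company’s real system.

```python
# Hypothetical sketch: ordering fake-news reports by the reporter's
# trust score so fact-checkers see the most credible reports first.
# All field names and scores here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Report:
    post_id: str
    reporter_trust: float  # 0.0 (unreliable) to 1.0 (reliable)

def prioritize(reports):
    """Return reports sorted so the most trusted reporters come first."""
    return sorted(reports, key=lambda r: r.reporter_trust, reverse=True)

queue = prioritize([
    Report("post-a", 0.2),  # reporter often flags true stories as false
    Report("post-b", 0.9),  # reporter's flags are usually confirmed
    Report("post-c", 0.5),
])
print([r.post_id for r in queue])  # most credible report first
```

In such a scheme, a reporter whose flags are repeatedly confirmed by fact-checkers would drift toward 1, while someone who flags everything they disagree with would drift toward 0 and sink to the bottom of the review queue.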
“It’s not uncommon for people to tell us that something is false simply because they do not agree with the premise of a story, or because they intentionally try to target a specific post,” says Tessa Lyons, a Facebook product manager.
In addition to the score, a number of aspects of the person’s interactions will also be taken into account in the automatic evaluation, although Lyons has not stated which signals factor into each user’s internal score.
So, what do you think about this? Share your views and thoughts in the comment section below.