Many countries, especially India, have been facing a major problem with fake news circulating on WhatsApp, the Facebook-owned messaging application. A study focused on India suggests that if WhatsApp added a button for users to express doubt about claims made in a post, or gave users the ability to flag posts as unreliable, baseless, or otherwise problematic, the platform could restrict the circulation of misinformation.
The study, conducted by researchers at the University of Pennsylvania, IE University, and Leiden University, said: “Similar to the ‘like’ functions that exist on other platforms, it would be technically very easy for WhatsApp to add ‘red flag’ or ‘?’ emoji buttons that users can easily click on next to contentious posts”.
The study added: “Such a strategy would be entirely compatible with the encrypted nature of the platform, as ‘red flags’ need not be reported or investigated by the platform, but merely used to communicate to other users that a variety of opinions exist among participants to the thread”.
WhatsApp itself cannot do much about the misinformation circulating on its platform owing to its end-to-end encryption, which means that only the sender and the receiver can see the content of a message.
However, WhatsApp has not been sitting idle on this issue: the platform has restricted the number of times a message can be forwarded to five chats at once, in addition to running campaigns highlighting the risks of fake news. Many of these measures came after reports of deaths in India linked to rumours spread over WhatsApp.
In a series of tweets, Sumitra Badrinathan of the University of Pennsylvania said that a signal of doubt can be very helpful in reducing fake news: “Our findings suggest that though user-driven corrections work, merely signalling a doubt about a claim (regardless of how detailed this signal is) may go a long way in reducing misinformation. This has implications for both users and platforms!”
To conduct the study, the researchers experimentally assessed the effects of various corrective messages on the persistence of rumours among more than 5,000 social media users in India.
The study said: “Our main analyses above overall suggest that exposure to a fact-checking message posted by an unidentified thread participant is enough to significantly reduce rates of belief in a false claim”, adding: “If anything clearly emerges from our results, it is the fact that any expression of incredulity about a false claim posted on a thread leads to a reduction in self-reported belief”. The study was funded by the social media giant Facebook.