If the social media chatter ahead of the US presidential election was anything to go by, it was clear which way the wind was blowing. Donald Trump's win, therefore, came as a shock, calling into question once again the credibility of social media content.
At the crux of this volley of unverified content on social media is an algorithm that delivers 'relevant' content to us based on our preferences. Continuously reading content that reinforces one's point of view, with the other side hidden, creates a bubble of half-truths in which we all happily live until it bursts, as it did with America's election result, explains Siddharth Deshmukh, Associate Dean, Area Leader - Digital Platform & Strategies, MICA.
This problem has been exacerbated as Facebook and Twitter have become the go-to platforms for news, while users remain oblivious that the content they see is determined by what they 'like' to read, not by what the ground reality may be. Facts take a backseat as popular opinion goes 'viral', as seen even during the demonetisation drive in India.
But can we blame these platforms for such propaganda? Deshmukh of MICA says, "These platforms are businesses, and are doing whatever they can to turn profitable. These intermediaries have to be as open as possible because their goal is to replicate real-life conversations." Vivek Bhargava, CEO, DAN Performance Group, agrees. He believes it is the source that is liable, not the platform. "If a person prints out 10,000 copies of an article and circulates it, can the printing company be held liable?" he asks.
Facebook's founder, Mark Zuckerberg, in a blog post, talked about work-in-progress projects to better detect misinformation, and "better technical systems to detect what people will flag as false before they do it themselves". Twitter, too, has launched a feature to mute notifications based on keywords, and a more user-friendly harassment reporting system.
Parminder Singh, former MD of Twitter India and SE Asia, says, "Weeding out fake content is much more complicated, partially because of limitations of technology." The offensiveness of a post can be determined by its content, but judging its authenticity, he says, is not easy. "Even if a comprehensive repository of facts exists, new ones keep getting added, making fact-checking an incomplete exercise."
In the case of brands and businesses that use social media for marketing, and hence are a big source of revenue for these platforms, distinguishing fake and malicious information from genuine feedback is a challenge. Even if a malicious post is identified, action may not be swift. According to a Supreme Court order, a takedown request from a non-government entity or an individual must be accompanied by a court order. This means Facebook can act only on official court orders or when the content violates its community guidelines.
To reduce the incidence of fake content, Neil Shah, Research Director, Counterpoint Research, suggests that platforms let users rate the content based on the authenticity of the source. For content related to brands on social media, he proposes automated alerts that can be sent directly to companies or the subject of the content, if an article gets viewed or shared above a certain threshold - a feature these platforms can monetise.
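Shah's proposed alert mechanism could work along these lines. The sketch below is purely illustrative: the threshold value, the brand registry and the matching logic are all assumptions, not a description of any platform's actual system.

```python
# Illustrative sketch of a threshold-based alert, as proposed by Neil Shah:
# when a post mentioning a registered brand crosses a reach threshold,
# flag it so the brand (or the subject of the content) can be notified.
# The threshold and data shapes here are assumed for the example.

ALERT_THRESHOLD = 10_000  # combined views + shares (assumed value)

def check_for_alerts(posts, registered_brands):
    """Return (brand, post_id) pairs whose reach crosses the threshold."""
    alerts = []
    for post in posts:
        reach = post["views"] + post["shares"]
        if reach < ALERT_THRESHOLD:
            continue  # not widely circulated enough to alert anyone
        for brand in registered_brands:
            if brand.lower() in post["text"].lower():
                alerts.append((brand, post["id"]))
    return alerts

posts = [
    {"id": 1, "text": "Acme Cola recall rumour", "views": 9000, "shares": 4000},
    {"id": 2, "text": "Weekend recipes", "views": 50000, "shares": 200},
]
print(check_for_alerts(posts, ["Acme Cola"]))  # [('Acme Cola', 1)]
```

A real deployment would replace the substring match with entity recognition and push the alert to the company, which, as Shah notes, is a feature the platforms could monetise.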
In May this year, the European Commission signed a code of conduct with Facebook, Twitter, Microsoft and Google to remove 'hate speech' from these sites. Some criticised the move, saying it undermines citizens' fundamental rights. Instead of stricter laws, Prasanth Sugathan, Counsel at Software Freedom Law Centre, suggests self-regulation: "The government, Internet users and policy makers must come together to define transparency and freedom of speech in the Indian context, and practise self-regulation."
Calling the Bluff
Facebook recently asked its users in a survey to identify misleading posts in what seems to be an effort to curb fake news. In the survey, Facebook asked users to rate news posts on the basis of their headlines, and say whether they were using misleading language. Users could choose from five options - not at all, slightly, somewhat, very much and completely. The question appeared in a bar below the posts. Facebook has been under fire recently for the way in which it handled fake and misleading posts during the US presidential elections. In a Facebook post, Mark Zuckerberg recently said that 99 per cent of the posts on the social networking site are authentic, and only a small number comprises fake news and hoaxes.
LinkedIn has introduced a feature to make messaging on its app easier. With this new feature, users can strike up conversations with people they are not familiar with, as LinkedIn will highlight what the two have in common. "We know that reaching out to reconnect, ask for advice or network for potential job opportunities can be intimidating, so we've added personalised conversation starters on LinkedIn messaging to give members authentic ways to break the ice," the professional networking site said in a blog post. Now, when a user writes a message to a stranger on the LinkedIn app, he can tap the light bulb icon above the keypad and explore suggestions for starting the conversation.