One day in 2013, my mother called me in a panic. At first, I thought something terrible had happened at my house. I held my breath, wondering, “Was it a fire? A robbery? Or what?”
“The red heifer has been born in Israel, which, your sister said, is a sign of the end of time,” she explained. Relieved, I said, “Thank God, there is no fire and no robber in my house.”
What my mother was referring to was the “Red Heifer Prophecy,” which had been described in various books and articles and which my sister had broadcast through BlackBerry Messenger. The story had gone viral among the members of my sister’s church community.
Don’t get me wrong. I am not in a position to judge anybody’s beliefs in this case, nor am I trying to prove them wrong. I am simply amazed by how sensational news, no matter how unverified, can so easily manipulate people’s emotions, as it did with my mother and sister.
In the current context, what we call fake news, hoaxes, and propaganda has raised serious concerns. Several countries, such as the United States, Germany, and Sweden, to mention just a few, have been affected. My country, Indonesia, is no exception, and indeed a serious case. Among the major examples are rumors about the arrival of millions of Chinese workers coming to take over the Indonesian job market.
The spread of false information is considered such a serious security threat that many governments are countering it with measures such as censorship. Unfortunately, these actions do not really protect us. China is a case in point: fake news is just as prevalent there, even though the country ranks among the least free in the world for internet freedom. Despite extensive efforts to maintain public order through internet censorship, The Wall Street Journal reported that WeChat, one of the most popular messaging apps in China, had by late November 2016 disabled more than 1.2 million links related to alleged “rumors” and removed 200,000 articles containing “false information.”
We would all agree that controlling the spread of deceitful information in the digital age is difficult. While the phenomenon is not new, it is now so prevalent that states can no longer overcome it alone. Hence, the central questions to examine are, arguably: what drives individuals to share false information, and what is an effective approach to solving the problem?
Many academics and policymakers have offered theories to explain why people engage in this practice. Here, it is worth examining the problem through the lenses of market and incentive theory to explore how we might design solutions.
The history of information production in our society is an intriguing place to start. After inventing the printing press, Gutenberg printed Bibles to sell for profit. The community of scientists then used Gutenberg’s device to circulate their discoveries and ideas widely; what motivated them was, of course, the satisfaction that comes from self-actualization. On the other side, consumers had their own reasons for consuming this information: the buyers of Gutenberg’s Bibles probably wanted to fulfill their spiritual needs, while those who accessed the knowledge disseminated by the scientists may have genuinely wanted to know about the latest discoveries affecting their lives. Thus, the market for information functions because its players have incentives to participate in transactions.
Similarly, Facebook and other social media platforms work just like Gutenberg’s printing machine. The difference now is that the scope of the market has broadened and become far more complex. In the past, only a few people had the privilege of producing and obtaining information. Since the invention of the internet, however, the cost of producing information has fallen dramatically, and the barriers to disseminating it have been minimized. As a result, the power once held by a handful of producers has shifted to tremendous numbers of people: the democratization of information. User-generated content is a perfect illustration of this disruption. Social media enables everybody to create their own news, and anyone with internet access can produce any information they want, regardless of its authenticity.
Each individual has particular motives for participating as an active producer and consumer of user-generated information. The most common is economic: some users spread information about certain products in order to sell them and make money. Another is intangible. Social recognition, such as the feel-good factor of being an “active online citizen” (especially when one’s posting attracts a lot of likes and comments), is another form of incentive. Both incentives are significant enough that people keep sharing information on social media to prop up their online social status.
Unfortunately, as the volume of information grows and the speed of its circulation reaches extreme levels, the chances of information being contaminated by “waste” become much higher. People are now less mindful of the quality of the information they share and consume.
Furthermore, people may not be aware of the consequences of sharing bad information such as fake news, because they see nothing beyond the likes and comments they earn from posting it, a kind of cognitive bias we commonly share. That loophole can then be exploited by parties with vested interests: politicians who want to undermine ruling governments by waging information warfare; advertisers who want to boost sales by posting clickbait material online; or people who simply want to become celebrities by igniting mass reactions to sensitive topics.
These behaviors are ultimately difficult to control, even through censorship. So what might be a better approach? Since the information market runs on individual motives, one plausible solution is to design incentive systems and institutions that drive the desired behaviors.
Starting on the consumer side, a form of social incentive can be offered to engage people as mindful online citizens. The #turnbackhoax movement in Indonesia is one example. When the hashtag went viral, people were driven to join the movement against misinformation; they felt the need and urgency to take the “right actions” alongside millions of users with the same aspirations. The rise of fact-checking sites such as PolitiFact, FactCheck.org, and Open Secrets is another interesting example of how people, driven by particular motives, seek to provide scalable solutions to the problem.
Nevertheless, those instruments may not be sustainable in the long term. Building strong institutions to educate the public is therefore indispensable. While it may sound cliché, digital and media literacy is essential to empower citizens with the ability to distinguish real news from fake and to assess it critically. These institutions should involve various stakeholders: civil society, online communities, and formal education organizations. Finland is a model country in this respect. Its national media education policy aims to equip society to deal with the variety of information in the mass media, including how to respond to ethical issues emerging in the media environment. Moreover, the Finnish government does not implement the policy alone: parents, families, NGOs, and cultural and arts institutions are also involved in the media education system.
In managing the fractious dynamics of misinformation in the digital age, regulations such as censorship do not appear to be the best solution. While there may be no perfect answer to this problem, establishing incentive schemes and strong institutions to create a safe information environment would be a more potent policy going forward than merely filtering vast amounts of information.
Monica is currently an analyst at Sale Stock, a growing tech company in Indonesia. She studied Accounting at Universitas Gadjah Mada and took an active role in student journalism during her undergraduate years. She has always been motivated to learn about the roles of knowledge and information in business and society. Her areas of interest include tech culture, digital policy, and the digital economy. She can be reached at firstname.lastname@example.org. She is also on Facebook (https://www.facebook.com/laymonicard) and LinkedIn (https://id.linkedin.com/in/laymonica).
Disclaimer: All opinions expressed in this article are the author’s own and do not necessarily reflect the views of the ASEAN Economic Forum.