Social Media Meets Post-truth Society


Chapter 1: Introduction

When the internet first emerged, it was acclaimed as a transformative technology poised to revolutionize the world. This optimistic perspective was accompanied by significant concerns, such as cyber harassment, online fraud, social isolation, and the spread of misinformation, all exacerbated by the anonymity afforded by the internet. Over the subsequent decades, the internet and its associated ecosystem not only surpassed public expectations in enhancing public services, personal entertainment, and political activities, but also unveiled a myriad of unforeseen challenges.

The internet’s profound impact on society has been both revolutionary and multifaceted. Serving as a medium for democratizing information, it has fundamentally reshaped our modes of communication, learning, and global interaction. However, this same medium has also amplified challenges such as misinformation, profoundly impacting public discourse and trust.

In its early days, the internet’s potential was often eclipsed by these emerging challenges. A lack of regulatory oversight, coupled with online anonymity, created a breeding ground for various forms of online misconduct, including the spread of false information. The internet’s architecture, designed to promote open and rapid communication, inadvertently became a conduit for the swift dissemination of misinformation.

As the internet evolved, its role in shaping public opinion and political discourse grew increasingly significant. The rapid distribution capabilities of online platforms allowed both accurate and misleading information to achieve global reach with unprecedented speed and efficiency. This phenomenon was particularly pronounced in the realms of politics and public policy, where discerning truth from falsehood became an arduous task for many.

A striking example of the internet’s role in propagating misinformation is observable in the aftermath of the September 11 attacks in 2001. Following this calamity, the internet burgeoned as a fertile ground for conspiracy theories and misinformation. In the nascent forums and chat rooms of the early internet, speculative narratives and theories such as LIHOP (Let It Happen On Purpose) and MIHOP (Make It Happen On Purpose), which suggested various degrees of government involvement or orchestration, flourished despite the absence of credible evidence.

Reflecting on the public discourse surrounding 9/11, we recognize its significance in the context of this paper. The incident took place in 2001, a time when the internet had developed sufficiently to offer widespread coverage, at least in developed countries like the U.S., making it a viable platform for such dialogues. By 2001, half of U.S. households possessed at least one cellphone, and the internet penetration rate had reached 60 percent1. Alongside this hardware support, several online services familiar to us today, such as Wikipedia (launched on January 15, 2001), and Google, which recorded a surge in searches, had already established their presence. Under these circumstances, the aftermath of 9/11 saw the emergence of one of the earliest forms of online conspiracy theorizing, with a deluge of blogs, discussions, and videos inundating the internet2. Its influence persisted through the Iraq War and continues to be revisited and reproduced to this day3. As the first large-scale manifestation of information disorder predominantly present on the internet — a platform still relatively new to the public at that time — it garnered considerable academic interest, spurring research into the characteristics and dynamics of internet-based misinformation.

Fast-forward to the present day, and the technological foundations that facilitated the spread of misinformation following the 9/11 attacks have evolved to levels previously unimaginable. The number of global internet users has surged to 5.3 billion, with approximately 4.95 billion of them accessing various types of social media4. Between 2001 and 2023, the internet landscape underwent a dramatic transformation. In 2001, the internet was in its nascent stages, playing a significant, yet not completely pervasive, role in daily life. Internet penetration in developed countries like the U.S. was substantial but had not yet achieved ubiquity. Emerging key online platforms were setting the stage, yet the digital ecosystem was considerably less intricate than what we see today.

By 2023, the internet had become the dominant medium in human history. Its development, characterized by significant hardware advancements, software innovation, and increased user accessibility, has been nothing short of exponential. Internet penetration is now nearly universal in many parts of the world, and the extent of online engagement has grown profoundly. The advent of smartphones, widespread broadband access, and the proliferation of social media platforms have radically transformed the dissemination and consumption of information. This transformation has created an environment where misinformation can proliferate more rapidly and extensively than ever before, effortlessly reaching a global audience5.

The scale and velocity of information spread in modern times stand in stark contrast to the situation in 2001. While the early internet facilitated the dissemination of conspiracy theories post-9/11, the internet of today possesses the capacity to amplify and sustain misinformation campaigns on an unprecedented scale, significantly impacting global politics, public health, and societal dynamics6. The evolution of digital algorithms, the creation of echo chambers on social media, and the phenomenon of ‘viral’ content have positioned the internet as the foremost channel for misinformation, surpassing traditional media in both reach and impact7.

From a broader perspective, scholars have renewed their interest in the global nature of current online information disorder, prominently exemplified by the COVID-19 pandemic8. Historically, limited internet access meant misinformation was generally confined to localized spaces, specific platforms, countries, or languages. However, during the pandemic, misinformation about the mechanisms and effectiveness of vaccines rapidly transcended regional, platform, and linguistic boundaries9. The tight interconnectivity between various platforms facilitated the easy transmission of this misinformation, making it challenging for any single platform to effectively manage the situation. Moreover, the reach of conspiracy theories and misinformation extended far beyond their origins; for instance, theories originating in the U.S. spread to Europe and Canada, impacting local pandemic management efforts10 and often crossing language and cultural barriers to regions with markedly different conditions from the origin country.

Another salient aspect of the current information disorder phenomenon is its profound implications for politics and public policies. With online media’s extensive penetration into populations worldwide, including rural areas of developing countries11, it is now reasonable to assert that almost everyone in modern society is connected online, accessing similar social media platforms. Consequently, in most modern democracies over the past two decades, it has become imperative for politicians to establish and enhance their presence and interaction with potential voters on social media. A prime example is former President Donald Trump, whose frequent Twitter interactions significantly influenced other politicians’ use of the platform12. As political campaigns increasingly rely on social media, the internet has become a breeding ground for misinformation and political agendas. Unlike traditional media, the internet allows individuals to wield significant influence over large populations without the intermediation of traditional agents like publishers or TV hosts. This independence fosters in political actors a strong motivation to disseminate propaganda and favorable narratives online. Moreover, the incredibly low barrier to entry on these platforms has led to a proliferation of misinformation at a scale unattainable by traditional media.

In this paper, my primary focus is on the contemporary manifestation of information disorder, particularly as it occurs on online platforms, with a special emphasis on social media. Information disorder encompasses various forms, each necessitating distinct theoretical approaches. My analysis is centered specifically on the realm of critical communication, rather than on political campaigns and propaganda. This distinction is crucial because addressing the latter involves a complex examination of the objectivity inherent in political narratives.

In the context of critical communication, however, there is a baseline assumption regarding the objectivity of the information being disseminated. This assumption allows for a more straightforward analysis of how misinformation impacts critical communication. The critical communication domain offers a unique lens through which we can examine the dynamics of information disorder, shedding light on how misinformation affects essential communication channels, public discourse, and decision-making processes. By narrowing the scope to this area, the paper aims to provide a focused, in-depth analysis of how information disorder within critical communication shapes public understanding and response to vital issues.

Before proceeding further, it is important to clarify the scope of the term ‘information disorder’ as utilized in this study. In the majority of scholarly works, ‘information disorder’ encompasses three distinct types of content:

  1. Disinformation: Intentionally false information created to harm an individual, social group, organization, or country. It is defined by its deliberate fabrication and malicious intent.
  2. Misinformation: False information that lacks an intention to cause harm, often resulting from misunderstandings or misinterpretations.
  3. Malinformation: Fact-based information used maliciously to inflict harm on an individual, organization, or country.

However, for the purposes of this paper, particularly from the perspective of policymakers and platform managers, these distinctions are less pertinent. The intention behind the spread of information disorder can be challenging to ascertain, making differentiation among these categories less relevant to our analysis. Therefore, the term ‘misinformation’ will be used broadly in this study to encompass all three types — disinformation, misinformation, and malinformation. This approach allows us to focus on the impact and management of false or harmful information in general, rather than becoming entangled in the often ambiguous and subjective task of discerning the creators’ intentions13.
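
Since this collapsed definition is used throughout the rest of the thesis, a small illustrative sketch may be useful. The Python fragment below is a toy model only: the field names and the binary treatment of falsity and intent are my simplifying assumptions, not part of Wardle and Derakhshan’s framework. It contrasts the strict three-way taxonomy with the single operational label adopted here.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InformationItem:
        content: str
        is_false: bool      # is the claim factually false?
        intends_harm: bool  # does the creator intend harm? (rarely observable)

    def classify_strict(item: InformationItem) -> str:
        # The three-way taxonomy of Wardle and Derakhshan (2017).
        if item.is_false and item.intends_harm:
            return "disinformation"
        if item.is_false:
            return "misinformation"
        if item.intends_harm:
            return "malinformation"
        return "ordinary information"

    def classify_operational(item: InformationItem) -> str:
        # The collapsed category used in this paper: intent is treated as
        # unknowable, so all three types fall under one label.
        if item.is_false or item.intends_harm:
            return "misinformation"
        return "ordinary information"

    # A hypothetical item whose creator's intent we cannot actually verify:
    example = InformationItem("hypothetical false vaccine claim",
                              is_false=True, intends_harm=False)
    print(classify_strict(example))       # misinformation
    print(classify_operational(example))  # misinformation

The operational classifier still reacts to falsity and to harm, but it never has to decide between the three categories, which is exactly the distinction this paper sets aside.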

This thesis is divided into four chapters, structured as follows:

  1. Chapter 1 (Introduction): This chapter introduces the topic, outlining the significance of the study and defining key terms like ‘misinformation’ and ‘information disorder’. It previews the structure of the thesis and briefly mentions the research methodology.
  2. Chapter 2 (Theoretical Analysis): This chapter delves into the history of information disorder, examining its academic interpretations and evolution. The focus is on unraveling the sociological roots of misinformation, analyzing how societal changes, technological advancements, and communication shifts have influenced the landscape of information disorder. Various sociological theories are explored to provide a comprehensive framework for understanding the driving forces behind misinformation in today’s digital world.
  3. Chapter 3 (Platform Design, Policy Influences and Solution-Oriented Case Studies): This chapter examines specific aspects that significantly impact the subject matter of information disorder. It is divided into three sections:
    • The first section assesses policies contributing to information disorder, critiquing the effectiveness of current moderation strategies on major platforms, particularly during public crises.
    • The second section analyzes the design decisions and marketing strategies in networks like blogs and social media, highlighting how these technical factors contribute to the spread of misinformation.
    • The third section reviews solutions proposed by academia and public institutions, analyzing their advantages based on the theoretical foundation built in previous chapters. The focus is on designs and strategies enhancing critical communication. Case studies are used to illustrate the practical application and effectiveness of these solutions.
  4. Chapter 4 (Conclusion and Future Directions): The final chapter synthesizes the theoretical background and findings from the analysis, reiterating the real-life significance of the research question. It provides critical comments on existing solutions, assessing which aspects of the problem have been addressed and which remain unaddressed. The chapter concludes with observations on the reasons behind these gaps and comments on future research directions, highlighting areas that require further investigation.

The primary methodology employed in this thesis is a comprehensive literature review. This approach involves critically analyzing existing research, policy papers, case studies, and theoretical works, allowing us to construct a thorough understanding of the nuances of information disorder and its management. This method ensures that our conclusions are well-grounded in a robust body of academic knowledge and practical insights.

Chapter 2: Theoretical Analysis

In this chapter, my aim is to delve into a comprehensive analysis of misinformation in the context of critical communication, encompassing all types of information disorder as defined in Chapter 1. The exploration begins by tracing the historical trajectory of misinformation, examining how its nature has morphed and expanded in scope and impact over time. This historical perspective will not only shed light on the changing nature of misinformation but also provide a backdrop against which its current manifestation can be better understood.

Central to this analysis is an examination of misinformation’s roots in sociological literature. By scrutinizing key sociological theories and studies, we can uncover the underlying causes and mechanisms of misinformation. This exploration includes how societal changes, technological advancements, and shifts in communication patterns have influenced the proliferation and nature of misinformation.

Furthermore, the chapter will investigate how the causes and effects of misinformation have evolved alongside broader societal transformations. This includes the shift from traditional media to digital platforms, particularly social media, and how this transition has altered the landscape of information dissemination and consumption.

By integrating historical context with sociological theory, this chapter aims to provide a nuanced understanding of misinformation’s evolution. This approach will help in identifying the contributing factors to the current state of information disorder and also set the stage for discussing potential solutions and interventions in subsequent chapters.

The root problem of my discussion, namely misleading, false, or low-value information being spread in society, whether unintentionally or purposefully, is not new14; it has been an integral part of human politics and social interaction throughout history. However, it is important to note that despite the longevity of this phenomenon, its formal study as a theorized subject within academia is relatively recent. The conceptualization of terms like “information disorder” and their academic exploration have only gained traction in more contemporary times15. According to the simplified timeline of modern information disorder events made by Posetti and Matthews16, early examples, such as the ‘Great Moon Hoax’17 of 1835, where The New York Sun published articles about non-existent life on the moon, mark the beginning of what we now recognize as a pattern of deliberate misinformation. As we move forward in time, the timeline highlights significant events where information disorder played a crucial role, from the Boer War propaganda to the infamous ‘German corpse factory’ story of World War I, and the extensive use of propaganda during World War II and the Cold War.

These historical instances reveal a pattern where misinformation primarily served governmental or political objectives, utilizing the limited means of communication available in each era. The predominant use of propaganda, whether in newspapers, radio broadcasts, or later through television, was indicative of the tools and strategies employed to shape public opinion and political discourse. This reliance on state-driven or politically motivated misinformation underscores the primary challenges of earlier eras, where the spread of information was largely controlled by a few powerful entities.

It is in this historical context that academia began to conceptualize and study information disorder. Initially, the focus was on understanding propaganda and its impact on public opinion and wartime morale18. However, as communication technologies evolved, particularly with the advent of the Internet and social media, the academic focus shifted. The study of information disorder expanded to include a broader range of phenomena, encompassing not just state-driven propaganda but also the decentralized and often user-generated spread of misinformation in the digital age.

This shift in academic focus reflects the changing landscape of information dissemination. The traditional gatekeepers of information – governments and major news outlets – are now joined, and often overshadowed, by digital platforms where anyone can create and spread content. This democratization of information creation and dissemination has led to new challenges in identifying, understanding, and combating misinformation.

While our exploration of history provides a clear understanding of the evolution of information dissemination methods, our next step is to delve into the underlying social conditions driving these changes. Theorists have played a crucial role in unraveling these complex social dynamics. Misinformation is not a one-sided phenomenon; it involves both initiators who create and disseminate information and receivers who consume and often propagate it. To comprehensively understand the sociological dimensions of information disorder, it is essential to examine both ends of this spectrum. Thus, my analysis will be divided accordingly, focusing first on the initiators – understanding their motivations, strategies, and the societal factors that facilitate their ability to spread misinformation. Following this, the perspective will shift to the receivers – exploring how societal contexts, psychological factors, and communication mediums influence their reception and reaction to misinformation. This dual-faceted approach will provide a more holistic understanding of the mechanisms and impact of misinformation in society.

A key theoretical framework central to understanding the social conditions that foster misinformation is the concept of the ‘post-truth’ society. This term, though originally emerging from political commentators and bloggers outside academia, has gained substantial traction within scholarly circles. According to Oxford Dictionaries19, the Serbian-American playwright Steve Tesich may have been the first to use the term ‘post-truth,’ in a 1992 essay in ‘The Nation.’ It was later popularized in academic discourse, notably by media studies scholar John Hartley in his book ‘The Politics of Pictures’20. The term encapsulates a growing public perception of encountering contradictory information and ‘alternative facts,’ leading to a widespread distrust in the information people consume. This resonance with contemporary experiences of media consumption and public discourse has propelled the rapid adoption of the term.

While the post-truth theory is commonly applied in political science to analyze cases of information disorder with strong political underpinnings, its relevance extends to the domain of my research focus: critical communication. The post-truth framework offers valuable insights into understanding the dynamics of information disorder in critical communication. Although critical communication allows for a clearer assessment of the objectivity of specific information, political motivations and influences still permeate the process of information transmission. By applying the post-truth lens, we can dissect how these political undercurrents affect the credibility and reception of critical communications, even in instances where objective facts are more discernible.

What exactly encompasses the concept of a ‘post-truth society’? The Oxford English Dictionary defines it as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’21. In his book ‘Post-truth,’ Lee McIntyre22 expands on this definition, providing examples from recent history that vividly illustrate the concept, notably the US presidential election of 2016 and Brexit, events that coincided with Oxford Dictionaries naming ‘post-truth’ the word of the year.

In a post-truth society, the issue is not that objective truth ceases to be produced or recognized, but rather that it struggles to exert influence over public discourse and political developments. This shift denotes a landscape where the emotional resonance and personal beliefs of information can overshadow factual accuracy, leading to a scenario where misinformation can thrive and propagate more easily. It reflects a societal change where the line between fact and fiction becomes increasingly blurred and where the appeal to emotions or personal beliefs can more effectively sway public opinion than objective facts.

Beyond this general definition, in which objective truth slides away from the center of public discourse, a post-truth society has several other important characteristics. The first, and often the most visible, is political division and, at times, outright polarization23.

In the current era, often referred to as post-truth, we have witnessed a concurrent rise in political division and polarization. This phenomenon, evident across the globe, is marked by a growing ideological divide, manifested through extreme partisan viewpoints and an evident shrinking of the political middle ground24. In a post-truth society, where emotional resonance and personal beliefs often overshadow objective facts, these political divisions are not only reinforced but also exacerbated.

This dynamic follows directly from the definition noted above: in a post-truth society, public opinion is shaped more significantly by appeals to emotion and personal belief than by objective facts25. McIntyre’s central examples, the 2016 U.S. Presidential election and Brexit26, illustrate how objective truths struggle to exert influence over public discourse and political developments in such a society, further deepening the divide.

The dynamics of a post-truth society contribute to a feedback loop where political polarization fosters environments conducive to misinformation, which then feeds back into and deepens these political divisions. Social media platforms, particularly through their algorithm-driven echo chambers, significantly contribute to this feedback loop, reinforcing polarized views27.
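
The feedback loop described above can be made concrete with a deliberately crude simulation. The sketch below rests on strong assumptions of my own (one-dimensional opinions, a feed that scores items by similarity to the reader plus a bonus for extremity, and simple averaging updates); it is not drawn from any real platform, but it illustrates how an engagement-style ranking can push opinions away from the center over repeated rounds.

    import random
    import statistics

    random.seed(42)

    NUM_AGENTS, NUM_ITEMS, ROUNDS, FEED_SIZE = 100, 200, 30, 5

    # Opinions and content positions live on a one-dimensional axis [-1, 1].
    agents = [random.uniform(-0.5, 0.5) for _ in range(NUM_AGENTS)]
    items = [random.uniform(-1.0, 1.0) for _ in range(NUM_ITEMS)]

    def feed_for(opinion):
        # Engagement proxy: closeness to the reader's current view
        # (echo chamber) plus a bonus for extreme content (virality).
        def score(item):
            return -abs(item - opinion) + 0.5 * abs(item)
        return sorted(items, key=score, reverse=True)[:FEED_SIZE]

    def polarization():
        # Crude polarization measure: mean distance of opinions from center.
        return round(statistics.mean(abs(a) for a in agents), 3)

    print("initial polarization:", polarization())
    for _ in range(ROUNDS):
        for i, opinion in enumerate(agents):
            consumed = feed_for(opinion)
            target = sum(consumed) / len(consumed)
            agents[i] = 0.9 * opinion + 0.1 * target  # drift toward the feed
    print("polarization after", ROUNDS, "rounds:", polarization())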

In transitioning from the broader societal implications of a post-truth environment, it becomes imperative to examine its impact on critical communication. Historically, critical communication relied predominantly on major media outlets, which largely lacked user-generated content. With the advent of social media, however, the dynamics have shifted significantly, empowering individual users in the dissemination of information28.

The intertwining of critical communication with political division is particularly evident in issues like climate change and public health crises. These topics, inherently connected with political and economic interests, necessitate widespread collaboration and understanding among the general populace. Yet, the politicization of these issues complicates the transmission of accurate and unbiased information.

Furthermore, the rapid spread of misinformation on social media exacerbates public confusion, especially when scientific uncertainty is exploited for political or sensational purposes. The blend of scientific complexity and politically charged narratives creates fertile ground for misinformation, as individuals and groups with vested interests may skew or misrepresent facts29.

Consider the COVID-19 pandemic as a prime example. This global crisis necessitated a society-wide response, often involving significant sacrifices, thus intertwining political interests with various approaches to managing the pandemic. On one hand, there were individuals for whom relaxing restrictions and opening up the economy posed a direct threat to their health and safety. On the other hand, others faced critical threats to their livelihoods due to lockdown measures and economic downturns.

The complexity of the pandemic, both in terms of its causes and the effectiveness of different mitigation strategies, further complicated the situation. Scientific institutions, typically regarded as bastions of objective information, found themselves unable to provide absolute directives at several junctures. This was partly due to the evolving nature of scientific understanding regarding the virus. Consequently, occasionally conflicting instructions from these institutions provided fertile ground for politically charged groups to promulgate their own interpretations of the situation.

These dynamics exacerbated the post-truth condition. Political factions seized upon the uncertainty and complexity of scientific information to bolster their agendas, leading to a polarized public discourse. This polarization was not just a disagreement over policy choices; it reflected a deeper division in how different segments of the population perceived the fundamental realities of the pandemic. Such a scenario vividly illustrates how critical communication in a post-truth era becomes mired in political division and polarization, complicating unified responses to societal challenges.

Another significant characteristic that has emerged is the general public’s diminishing trust in traditional institutions and authoritative sources of knowledge. This shift has been amplified by the advent and proliferation of social media platforms, fundamentally altering how information is consumed and trusted. The decline in public confidence is not just limited to political entities but extends to media outlets, scientific institutions, and other long-standing pillars of fact-based information30.

Social media, especially platforms like Facebook, have played a pivotal role in this disruption. Their algorithms, which prioritize user engagement, have inadvertently fostered echo chambers and political polarization. These echo chambers perpetuate misinformation and undermine the credibility of established sources of information31. This phenomenon is exacerbated by the psychological incentives embedded in social media, which encourage the spread of sensational and emotionally charged content over nuanced, fact-based discourse32.
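
This engagement-first objective can be isolated in a few lines of illustrative code. Real feed rankers are far more complex and unpublished; the weights and post statistics below are invented for illustration, and the structural point is simply that an objective built purely from engagement signals contains no term for accuracy.

    # Invented engagement weights; real platform values are proprietary.
    WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 4.0}

    posts = [
        {"id": "measured-factcheck", "likes": 120, "comments": 30,
         "shares": 15, "accurate": True},
        {"id": "outrage-rumor", "likes": 480, "comments": 520,
         "shares": 310, "accurate": False},
    ]

    def engagement_score(post):
        # Note what is absent: nothing in this objective rewards accuracy.
        return sum(weight * post[signal] for signal, weight in WEIGHTS.items())

    for post in sorted(posts, key=engagement_score, reverse=True):
        print(post["id"], engagement_score(post), "accurate:", post["accurate"])
    # The emotionally charged rumor outranks the fact-check despite being false.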

The impact of post-truth politics is a global phenomenon, transcending geographical and cultural boundaries. It encompasses a wide range of deceptive practices, from the production of outright falsehoods to more subtle forms of misleading information. These practices are often amplified by state actors and automated software, making the post-truth landscape even more complex and difficult to navigate33.

Significant global events have also played a role in shaping the post-truth condition. Incidents like the 9/11 attacks, the 2008 financial crisis, the Iraq War, and the rise of Islamic fundamentalism have collectively challenged the credibility of Western media and political institutions. These events have led to a widespread distrust in the globalized economy and its political narratives, further deepening the crisis of trust and authority in the mainstream Western ideological project34.

Technological advancements, particularly in the digital realm, have transformed the nature and reach of deceptive communication. The current post-truth era is marked by an unprecedented speed and intensity in the circulation of information, both true and false. This relentless flow of content has saturated the public’s attention, impairing the ability to critically engage with and discern the truth35.

Understanding the distinction between misinformation and disinformation is crucial in this context. Misinformation refers to the inadvertent spreading of false information, while disinformation involves the deliberate creation and dissemination of falsehoods. This differentiation is key to comprehending the various tactics employed in post-truth communication strategies36.

Theoretical engagement with post-truth from a critical communication perspective, as articulated by scholars such as Harsin, presents a multifaceted examination of the issue. This includes exploring the epistemic roots of post-truth conditions and the fiduciary issues arising from the erosion of public trust in traditional sources of authority37.

As this chapter concludes, we are left with a general understanding of the multifaceted nature of misinformation and its pervasive influence across time and technology. Historically, misinformation predominantly served political or governmental objectives, leveraging the limited communication tools of each era. Propaganda, whether disseminated through newspapers, radio, or television, was a principal tool in shaping public opinion and political discourse, often controlled by a few powerful entities.

However, with the advent of the internet and social media, the landscape of information dissemination has undergone a dramatic transformation. The democratization of content creation and distribution has introduced new complexities to the phenomenon of misinformation. No longer confined to the realms of state-driven propaganda, misinformation now flourishes in a decentralized digital environment where anyone can be a publisher. This shift has not only expanded the scope of misinformation but has also posed unique challenges in identifying, understanding, and combating it.

The post-truth era, characterized by a decline in public trust in traditional institutions and authoritative knowledge, further complicates the landscape. The interplay of political rhetoric, technological advancement, and global events has reshaped public discourse, necessitating a critical reevaluation of communication strategies. In this context, understanding the distinction between misinformation and disinformation becomes crucial, as each plays a distinct role in the post-truth communication ecosystem.

By integrating historical context with sociological theory, we gain a deeper appreciation of the evolution of misinformation. This exploration reveals how societal changes, technological advancements, and shifts in communication patterns have collectively influenced the proliferation and nature of misinformation. As we transition to the next chapter, which will examine specific policy and technical factors contributing to information disorder, we carry forward a comprehensive framework that elucidates the underlying mechanisms and driving forces behind misinformation in today’s digitally interconnected world.

Chapter 3: Platform Design, Policy Influences and Solution-Oriented Case Studies

In this pivotal chapter, we delve into the intricate web of factors contributing to the proliferation of misinformation, a phenomenon that has dramatically reshaped public discourse in our digital era. Our exploration spans from the structural designs of online platforms to the nuanced policy decisions that have inadvertently fostered environments ripe for the spread of misinformation. This chapter aims to dissect these complex dynamics, offering a multifaceted perspective on how technological architectures, economic incentives, and legislative frameworks intertwine to fuel the current information disorder.

We commence our journey by retracing significant historical policy changes, starting with the 1987 elimination of the FCC Fairness Doctrine38, a decision that set in motion a series of shifts in the media landscape. This is followed by an analysis of the implications of Section 230 of the 1996 Communications Decency Act39, a pivotal piece of legislation that redefined online platforms’ responsibilities towards user-generated content. We then consider the effects of the 1996 Telecommunications Act, which catalyzed the consolidation of media entities, thereby influencing the diversity and nature of media content40.

Building on this historical groundwork, we shift our focus to the present, examining how modern digital platforms, particularly those driven by ad-based revenue models, incentivize and amplify misinformation. This section scrutinizes the ‘rage clicking’ phenomenon and its role in escalating political polarization and misinformation. Additionally, we explore the underlying logic of algorithms in prominent social media platforms like Facebook and Twitter (now X), unraveling how their design choices contribute to the information disorder challenge.

This comprehensive analysis sets the stage for a critical review of academic and theoretical works that address the multifaceted problem of misinformation. We will explore various proposed strategies and solutions, evaluating their potential effectiveness and applicability in the real world.

Finally, we culminate our exploration with a series of case studies. These studies will provide practical insights into the efforts undertaken by digital platforms and policymakers to combat the spread of misinformation, offering a critical assessment of their successes and limitations.

Through this chapter, we endeavor to not only understand the roots and mechanisms behind the spread of misinformation but also to evaluate the efficacy of existing and proposed solutions in mitigating its impact on society.

Before delving into the intricate dynamics of misinformation in the digital era, it is essential to understand the foundational policies that have shaped the current media landscape. This exploration is particularly pertinent in the context of the United States, the host nation for many of the world’s major social media platforms and influential media agencies. The policies enacted within this jurisdiction have had a profound and far-reaching impact on global information dissemination practices. As we examine these pivotal legislative decisions, we gain crucial insights into how the media environment, both traditional and digital, has evolved to its present state, setting the stage for the challenges and complexities we encounter today in the realm of misinformation.

The 1987 Elimination of the FCC Fairness Doctrine:

The repeal of the FCC’s Fairness Doctrine significantly shifted the broadcast media landscape. Established to ensure broadcasters presented contrasting viewpoints on controversial issues, its removal in 1987 led to the proliferation of ideologically skewed content. This change facilitated environments where partisan echo chambers thrived, contributing to societal polarization and misinformation. The doctrine’s elimination marked a departure from norms of objectivity and balance, rooted in responses to early 20th-century propaganda and the rise of corporate public relations41.

Section 230 of the Communications Decency Act of 1996:

Section 230, a crucial part of the Communications Decency Act, transformed the digital information sphere by exempting online platforms from liability for user-generated content. This legal protection enabled platforms like Facebook and Twitter to host diverse user content without legal repercussions for misinformation, fostering an environment where fake news could proliferate unchecked42.

The 1996 Telecommunications Act:

The 1996 Telecommunications Act’s allowance for greater media consolidation led to a concentration of media ownership and a reduction in content diversity. Large corporations acquired multiple media outlets, resulting in homogenized content tailored to specific audience demographics and ideological leanings. This heightened commercialization incentivized sensationalism and partisan bias, exacerbating political divides and impacting the quality of democratic discourse43.

As we move from examining the pivotal policy changes that have shaped the media landscape, we now turn our attention to the economic incentives and technical factors that further contribute to the proliferation of misinformation. These elements, deeply embedded within the fabric of the digital advertising ecosystem and social media algorithms, play a critical role in the current state of information disorder.

The dynamics of misinformation are intricately linked to the economic models driving online content creation and distribution. The advent of programmatic advertising, prioritizing user targeting over content curation, has fundamentally altered the media industry. This shift has led to the rise of sensationalist and divisive content, designed to engage users through immediate interactions. These advertising models, focusing on short-term user engagement, often compromise factual accuracy for the sake of profitability44.

Content discovery widgets and clickbait strategies on websites further incentivize publishers to produce polarizing content. This approach, aimed at attracting a broader audience, directly contributes to the spread of misinformation by prioritizing sensationalism over accuracy45.
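
To see this incentive concretely, consider the back-of-envelope sketch below. Every figure in it (the CPM, the click-through rates, the pages per visit, the ad load) is a hypothetical assumption chosen only to expose the structure of the calculation, not a measured industry value.

    CPM = 2.00  # assumed ad revenue per 1,000 impressions, in dollars

    def monthly_ad_revenue(widget_impressions, click_through_rate,
                           pages_per_visit, ads_per_page):
        # Visits arrive via a content-discovery widget; each page shows ads.
        visits = widget_impressions * click_through_rate
        ad_impressions = visits * pages_per_visit * ads_per_page
        return ad_impressions / 1000 * CPM

    sober = monthly_ad_revenue(10_000_000, 0.004, 1.2, 3)      # accurate headline
    clickbait = monthly_ad_revenue(10_000_000, 0.020, 1.2, 3)  # sensational headline
    print(f"sober: ${sober:,.2f}  clickbait: ${clickbait:,.2f}")
    # Same traffic source, same ad load: the sensational framing earns five
    # times as much, and accuracy appears nowhere in the formula.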

The complexity of the online advertising marketplace, involving multiple firms in the ad placement process, complicates the oversight of ad content. Brands often find their advertisements placed in contexts they might not endorse, including websites that propagate misinformation. This lack of control over ad placement can inadvertently fund sites that undermine credible information46.

The scrutiny over the role of ad tech in monetizing fake news has intensified, particularly following the 2016 U.S. presidential election. While the industry has proposed various solutions, the effectiveness of these measures, such as accreditation standards and brand-safety tools, remains under debate. The dominant roles of Google and Facebook in the industry pose additional challenges in implementing effective standardization47.

Verification services are used by advertisers to avoid hoax news websites, but their effectiveness varies. The responsibility for detecting and avoiding misinformation often falls on individual ad tech companies. This task is particularly challenging due to the scale of programmatic exchanges and the competitive nature of the ad tech industry48.
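
The oversight problem has a simple structural core that can be sketched in a few lines. Verification vendors’ internals are proprietary, so the fragment below is an assumption-laden caricature of list-based filtering rather than a description of any real product; its purpose is only to show why such filtering fails open.

    # A necessarily incomplete blocklist of known misinformation domains
    # (hypothetical names).
    BLOCKLIST = {"known-hoax-site.example", "fake-news-mill.example"}

    def is_brand_safe(bid_domain: str) -> bool:
        # Allow the ad placement unless the domain is already on the list.
        return bid_domain.lower() not in BLOCKLIST

    for domain in ("known-hoax-site.example", "freshly-minted-hoax.example"):
        print(domain, "->", "allow" if is_brand_safe(domain) else "block")
    # The newly created hoax domain passes the check: a list can only block
    # what someone has already noticed, which is the weakness exploited at
    # the scale of programmatic exchanges.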

In examining the multifaceted nature of misinformation, it is essential to consider the theoretical perspectives on the inherent flaws of social media platforms and the efforts being made to address these issues. Scholars have identified key areas where social media platforms fall short in preventing the spread of misinformation and have proposed various interventions to mitigate these shortcomings.

The lack of effective tools for users to challenge misinformation on platforms is a significant concern. Most platforms do not provide sufficient mechanisms for users to question or correct misleading content, contributing to the unchecked spread of false information. This absence of user empowerment in the digital space has been a focal point of recent studies, which suggest that providing users with more direct ways to engage and challenge misinformation could be a pivotal step in combating its spread49.

Further complicating this issue is the role of algorithms in reinforcing echo chambers. Platforms often prioritize content that maximizes user engagement, which can lead to the promotion of sensationalist and divisive content, further entrenching misinformation. To counteract this, researchers have explored the implementation of features that promote diverse viewpoints and fact-checking. Initiatives like warning labels and fact-check links on posts identified as potentially containing misinformation have been introduced, but their effectiveness often depends on users’ willingness to engage with corrective content, which can be influenced by their pre-existing beliefs and biases50.

Moreover, the interplay of user characteristics with persuasive design elements is crucial. For instance, older users might find certain persuasive strategies more appealing than younger users51. Individual differences in personality traits can also influence the effectiveness of these strategies. This understanding is vital for designing platforms that can effectively motivate a broad range of users to challenge misinformation52.

In response to these challenges, there have been calls for platforms to take a more proactive role in countering misinformation53. This includes not only implementing more effective design features and algorithms but also fostering a digital environment that encourages critical thinking and fact-checking among users. The success of these efforts will require a concerted approach, combining technological innovation, user education, and robust policy frameworks to create a more informed and resilient digital public sphere54.

Having explored the theoretical perspectives on the inherent flaws in social media platform designs and the various proposed improvements, we now shift our focus to practical applications. This next section delves into case studies that illustrate how these theories and strategies have been implemented in real-world scenarios. These case studies provide valuable insights into the effectiveness of different approaches in combating misinformation. They offer a glimpse into the challenges and successes experienced by social media platforms, policymakers, and other stakeholders in their efforts to create a more truthful and reliable digital information landscape. By examining these real-life examples, we can assess the impact of both technological innovations and policy interventions, gaining a clearer understanding of what strategies are most effective in mitigating the spread of misinformation in our increasingly digital world55.

Taiwan’s approach to combating Chinese misinformation during the 2020 elections serves as an intriguing case study. The country employed a comprehensive “whole-of-society” strategy, which involved government, civil society, and technology companies like Facebook and LINE collaborating to detect, debunk, and block fake news online.

China’s sharp power tactics in its foreign information warfare are well-documented56. Their methods include exacerbating societal divides, exploiting informational system weaknesses, financially controlling traditional media, using a cyber army, and obfuscating attack sources, aiming to destabilize democracies and weaken governance in target countries57. Taiwan’s successful response to these tactics during its 2020 elections offers valuable insights. Policy recommendations based on Taiwan’s experiences have been suggested for the US State Department’s Global Engagement Center (GEC), emphasizing steps like adopting Taiwan’s debunking strategy, conducting workshops on combating foreign propaganda, requiring diplomatic missions to submit response SOPs, and hosting conferences to discuss and exchange best practices.

Taiwan’s case illustrates the effectiveness of a coordinated, multi-faceted approach in countering disinformation and propaganda. Their experience provides a blueprint that other nations could adapt and implement in their fight against misinformation, especially in critical times like election periods. This approach reflects the need for a global, collaborative effort to tackle the complex challenge of misinformation in our increasingly digital and interconnected world. However, as a centralized approach, it can be less effective when deployed in an environment where the public has little trust in the authorities, and it operates less fairly when the misinformation is domestic in nature and does not involve strong foreign powers58.

Conversely, solutions emphasizing decentralized approaches, such as enhanced community moderation and user-friendly design, are gaining prominence. A notable example is Twitter (currently known as X), which has made significant efforts to curb misinformation, particularly in the contexts of pandemic-related communication and political divisions.

  1. Context of Platform Design:

    Twitter, in response to the rapid proliferation of misleading narratives, especially around the COVID-19 pandemic, introduced a system of soft moderation. This system includes two primary forms: a warning cover that appears before the Tweet is displayed and a warning tag placed below the Tweet. This initiative mirrors earlier steps taken by platforms like Facebook, which began labeling disputed content and fact-checking stories around 2016. Twitter’s approach offers a compromise between outright content removal and preserving free discourse, a critical aspect given the platform’s role in public communication59. (A schematic sketch of this two-tier labeling logic follows this list.)
  2. Theoretical Basis for Combatting Misinformation:

    The theoretical underpinning of these warning labels hinges on the assumption that alerting users to potential misinformation will decrease the perceived accuracy of such information, thereby reducing its spread and impact. This theory is supported by studies showing that general warnings, as well as specific tags like “disputed” or “rated false,” can diminish the perceived accuracy of information.
  3. Deployment and Performance:

    When it comes to performance, there’s a nuanced picture. While Twitter reported a decrease in engagement (such as retweets and likes) for Tweets with warning labels, other studies found that Tweets with warning labels received more engagement than those without. This indicates that warning labels might not be effective in all contexts, as they can sometimes inadvertently increase the visibility of the very content they aim to moderate.
  4. Belief Echoes and Misinformation:

    An interesting phenomenon that complicates the effectiveness of warning labels is the concept of “belief echoes.” These occur when exposure to misleading or negative political information continues to shape attitudes even after the information is discredited. Warning labels, especially those phrased negatively or containing exclamation marks, may not always generate a strong enough response to counter these belief echoes. Thus, despite the presence of a warning label, misinformation can continue to influence perceptions and beliefs.
  5. Impact on COVID-19 Vaccine Perceptions:

    In the context of the COVID-19 pandemic, misinformation on social media, including Twitter, has been particularly polarizing and influential in shaping public attitudes toward vaccines. Misleading information about the safety, efficacy, and necessity of COVID-19 vaccines contributes significantly to vaccine hesitancy. Therefore, understanding and improving the effectiveness of Twitter’s warning labels is crucial for promoting public health and combating the pandemic effectively.
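
To make the cover-versus-tag distinction concrete, the sketch below models soft moderation as a two-tier rule. Twitter/X has not published its internal decision logic, so the classifier fields and thresholds here are hypothetical assumptions used only to illustrate the idea of graduated intervention, not the platform’s actual method.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModerationSignal:
        misinfo_confidence: float  # assumed classifier confidence, 0..1
        harm_severity: float       # assumed estimate of real-world harm, 0..1

    def soft_moderation_action(sig: ModerationSignal) -> Optional[str]:
        # Hypothetical thresholds: the strongest intervention (an
        # interstitial cover) is reserved for confident, high-harm cases.
        if sig.misinfo_confidence > 0.9 and sig.harm_severity > 0.7:
            return "warning cover (shown before the Tweet is displayed)"
        if sig.misinfo_confidence > 0.6:
            return "warning tag (placed below the Tweet)"
        return None  # no soft moderation applied

    print(soft_moderation_action(ModerationSignal(0.95, 0.80)))
    print(soft_moderation_action(ModerationSignal(0.70, 0.20)))
    print(soft_moderation_action(ModerationSignal(0.30, 0.90)))

Note that the belief-echo findings above cut against any such rule: even a correctly applied label does not guarantee that the discredited content stops shaping attitudes.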

In summary, while Twitter’s warning labels represent an important step in fighting misinformation, their effectiveness is influenced by several factors including user engagement, the phenomenon of belief echoes, and the complex dynamics of public perception, especially in relation to critical issues like COVID-19 vaccines. Further study and refinement of these moderation strategies are necessary to enhance their efficacy.

Chapter 4: Conclusion

In this master’s thesis, we have embarked on a comprehensive journey through the evolving landscape of misinformation in the digital age, exploring its intricate dynamics, impact, and the multitude of strategies employed to combat it. The thesis commenced with a historical perspective, tracing misinformation from early propaganda to the digital misinformation of today. We delved into the sociological and psychological underpinnings of misinformation, highlighting how it thrives in a post-truth society. The exploration then shifted to scrutinize the roles of policies and technical mechanisms in propagating or mitigating information disorder, alongside a series of case studies that illuminated the practical aspects and effectiveness of various combating strategies, especially in politically charged and health-related contexts.

As we conclude, it’s crucial to address the paths forward for better governance against misinformation. Governments and private platforms each play a pivotal role. Governments must consider policies that foster transparency and accountability in information dissemination. This could involve enacting stricter regulations for digital platforms, mandating robust fact-checking mechanisms, and promoting digital literacy among the populace. On the other hand, private platforms need to refine their moderation strategies, ensuring that algorithms prioritize factual content and engage more actively with fact-checkers and academic institutions to develop more sophisticated misinformation countermeasures.

Looking to the future, research in this field stands at a critical juncture with promising new avenues to explore. The rise of decentralized social media models like Mastodon and Bluesky60 presents an intriguing prospect. These platforms, with their decentralized moderation responsibilities, offer a novel approach to content governance, potentially enabling more democratic and community-focused strategies against misinformation. Moreover, there is a pressing need to advocate for more open social media environments in terms of data sharing. The ability to conduct thorough quantitative analyses to evaluate the performance and impact of misinformation in critical communication is hindered by limited data access. Encouraging platforms to share more data with researchers can pave the way for more effective strategies and a deeper understanding of misinformation dynamics61.
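
As a concrete illustration of what more open, researcher-facing access looks like, the snippet below queries the public-timeline endpoint documented in Mastodon’s REST API on a single instance. The endpoint and response fields are part of the published API, but instance policies, rate limits, and research-ethics review still govern real data collection, so this should be read as a minimal sketch rather than a collection pipeline.

    import requests  # third-party HTTP client (pip install requests)

    # Fetch recent public posts from one Mastodon instance.
    resp = requests.get(
        "https://mastodon.social/api/v1/timelines/public",
        params={"limit": 20, "local": "true"},
        timeout=10,
    )
    resp.raise_for_status()

    for status in resp.json():
        # Each status carries timestamps, account info, and HTML content
        # that a researcher could archive for quantitative analysis.
        print(status["created_at"], status["account"]["acct"])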

In summary, this thesis underscores the multifaceted nature of misinformation and the diverse approaches required to tackle it. The battle against misinformation is ongoing and demands a concerted effort involving government regulation, platform responsibility, and proactive academic research. As digital landscapes continue to evolve, so must our strategies to uphold information integrity. By embracing innovative methodologies and fostering collaborations across various sectors, we can hope to effectively mitigate the challenges posed by misinformation, ensuring the preservation of a well-informed public discourse. This endeavor is not just a technical or policy challenge but a fundamental aspect of maintaining the health of our democratic societies.

  1. David Shedden, “New Media Timeline (2001).” Poynter (blog), December 16, 2004. https://www.poynter.org/archive/2004/new-media-timeline-2001/.
  2. Condé Nast, “Click Here for Conspiracy.” Vanity Fair, October 10, 2006. https://www.vanityfair.com/news/2006/08/loosechange200608.
  3. Kevin Roose, “How a Viral Video Bent Reality.” The New York Times, September 8, 2021, sec. Technology. https://www.nytimes.com/2021/09/08/technology/loose-change-9-11-video.html.
  4. Ani Petrosyan, “Number of internet and social media users worldwide as of October 2023 (in billions).” Statista, October 5, 2023. https://www.statista.com/statistics/617136/digital-population-worldwide/#:~:text=As%20of%20October%202023%2C%20there,population%2C%20were%20social%20media%20users.
  5. Burkhardt, Joanna M. “Combating Fake News in the Digital Age,” Chapter 1. Library Technology Reports 53, no. 8. Chicago, IL, USA: American Library Association, 2017: 6-7.
  6. Akram, Waseem, and Rekesh Kumar. “A study on positive and negative effects of social media on society.” International journal of computer sciences and engineering 5, no. 10 (2017): 353
  7. Bakshy, Eytan, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. “Everyone’s an Influencer: Quantifying Influence on Twitter.” In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, 65–74. WSDM ’11. New York, NY, USA: Association for Computing Machinery, 2011. https://doi.org/10.1145/1935826.1935845.
  8. Lunz Trujillo, Kristin, and Matthew Motta. “How internet access drives global vaccine skepticism.” International Journal of Public Opinion Research 33, no. 3 (2021): 551-570.
  9. Lunz Trujillo and Motta, “How internet access drives global vaccine skepticism.”
  10. Nsoesie, Elaine Okanyene, Nina Cesare, Martin Müller, and Al Ozonoff. “COVID-19 Misinformation Spread in Eight Countries: Exponential Growth Modeling Study.” Journal of Medical Internet Research 22, no. 12 (December 15, 2020): e24425. https://doi.org/10.2196/24425.
  11. Ganesan, Muthiah, Suma Prashant, and Ashok Jhunjhunwala. “A Review on Challenges in Implementing Mobile Phone Based Data Collection in Developing Countries.” Journal of Health Informatics in Developing Countries 6, no. 1 (April 18, 2012). https://www.jhidc.org/index.php/jhidc/article/view/77.
  12. Francia, Peter L. “Free Media and Twitter in the 2016 Presidential Election: The Unconventional Campaign of Donald Trump.” Social Science Computer Review 36, no. 4 (August 1, 2018): 440–55. https://doi.org/10.1177/0894439317730302.
  13. Wardle, Claire, and Hossein Derakhshan. “Information disorder: Toward an interdisciplinary framework for research and policymaking.” Vol. 27. Strasbourg: Council of Europe, 2017: 20
  14. Sunstein, Cass R., and Adrian Vermeule. “Conspiracy theories: Causes and cures.” Journal of political philosophy 17, no. 2 (2009): 202-227.
  15. Wardle, Claire, and Hossein Derakhshan. “Information disorder: Toward an interdisciplinary framework for research and policymaking”: 10
  16. Posetti, Julie, and Alice Matthews. “A Short Guide to the History of ‘Fake News’ and Disinformation,” n.d.: 2-3.
  17. Vida, István Kornél. “The ‘Great Moon Hoax’ of 1835.” Hungarian Journal of English and American Studies (HJEAS) (2012): 431-441.
  18. Uberti, D. “The Real History of Fake News.” Columbia Journalism Review, December 15, 2016. https://www.cjr.org/special_report/fake_news_history.php
  19. Flood, Alison. “‘Post-Truth’ Named Word of the Year by Oxford Dictionaries.” The Guardian, November 15, 2016, sec. Books. https://www.theguardian.com/books/2016/nov/15/post-truth-named-word-of-the-year-by-oxford-dictionaries.
  20. Hartley, John. The politics of pictures: the creation of the public in the age of the popular media. Routledge, 2017.
  21. Oxford English Dictionary, s.v. “post-truth (adj.),” July 2023, https://doi.org/10.1093/OED/3755961867.
  22. McIntyre, Lee. Post-truth. MIT Press, 2018.
  23. Kelkar, Shreeharsh. “Post-Truth and the Search for Objectivity: Political Polarization and the Remaking of Knowledge Production.” Engaging Science, Technology, and Society 5 (April 3, 2019): 86–106. https://doi.org/10.17351/ests2019.268.
  24. Prior, Markus. “Media and political polarization.” Annual Review of Political Science 16 (2013): 101-127.
  25. Oxford English Dictionary, s.v. “post-truth (adj.),” July 2023, https://doi.org/10.1093/OED/3755961867.
  26. McIntyre, Lee. Post-truth. MIT Press, 2018.
  27. Munroe, Wade. “Echo Chambers, Polarization, and ‘Post-Truth’: In Search of a Connection.” Philosophical Psychology 0, no. 0 (2023): 1–32. https://doi.org/10.1080/09515089.2023.2174426.
  28. Iyengar, Shanto, and Douglas S. Massey. “Scientific communication in a post-truth society.” Proceedings of the National Academy of Sciences 116, no. 16 (2019).
  29. Neupane, Madhusudan. “Post-Truth as Ethical Crisis with the Misuse of Social Media.” (2020).
  30. Oxford Research Encyclopedia of Communication. “Post-truth and Critical Communication.” 2018. https://oxfordre.com/communication.
  31. Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (March 9, 2018): 1146–1151. https://doi.org/10.1126/science.aap9559.
  32. Lewandowsky, Stephan, Ullrich K. H. Ecker, and John Cook. “Beyond Misinformation: Understanding and Coping with the ‘Post-Truth’ Era.” Journal of Applied Research in Memory and Cognition 6, no. 4 (2017): 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008.
  33. Bradshaw, Samantha, and Philip N. Howard. “The Global Organization of Social Media Disinformation Campaigns.” Journal of International Affairs 71, no. 1.5 (2018): 23–32. https://www.jstor.org/stable/26588388.
  34. Harsin, Jayson. “Regimes of Posttruth, Postpolitics, and Attention Economies.” Communication, Culture and Critique 8, no. 2 (2015): 327–333. https://doi.org/10.1111/cccr.12097.
  35. Wardle, Claire, and Hossein Derakhshan. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking.” Council of Europe report DGI(2017)09. Strasbourg: Council of Europe, 2017. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
  36. Fallis, Don. “What Is Disinformation?” Library Trends 63, no. 3 (2015): 401–426. https://doi.org/10.1353/lib.2015.0004.
  37. Harsin. “Regimes of Posttruth, Postpolitics, and Attention Economies.”
  38. Ruane, Kathleen Ann. “Fairness doctrine: History and constitutional issues.” J. Curr. Issues Crime Law Law Enforc 2, no. 1 (2009).
  39. Goodman, Ellen P., and Ryan Whittington. “Section 230 of the Communications Decency Act and the Future of Online Speech.” Rutgers Law School Legal Studies Research Paper Series. Available at: https://papers.ssrn.com/sol3/papers.cfm (2019).
  40. Economides, Nicholas. “The Telecommunications Act of 1996 and its impact.” Japan and the World Economy 11, no. 4 (1999)
  41. Lewandowsky, Stephan, Ullrich K. H. Ecker, and John Cook. “Beyond Misinformation: Understanding and Coping with the ‘Post-Truth’ Era.” Journal of Applied Research in Memory and Cognition 6, no. 4 (2017): 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008.
  42. Lazer, David M.J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, et al. “The Science of Fake News.” Science 359, no. 6380 (2018): 1094–1096. https://doi.org/10.1126/science.aao2998.
  43. Iyengar, Shanto, and Douglas S. Massey. “Scientific Communication in a Post-Truth Society.” Proceedings of the National Academy of Sciences 116, no. 16 (2019): 7656–7661. https://doi.org/10.1073/pnas.1805868115.
  44. Braun, Joshua A., and Jessica L. Eklund. “Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Misinformation.” Digital Journalism 7, no. 1 (2019): 1–21. https://doi.org/10.1080/21670811.2018.1556314.
  45. Nelson, Jacob L., and Harsh Taneja. “The Small, Disloyal Fake News Audience: The Role of Audience Availability in Fake News Consumption.” New Media & Society 20, no. 10 (2018): 3720–3737. https://doi.org/10.1177/1461444818758715.
  46. Woolley, Samuel C., and Douglas R. Guilbeault. “Computational Propaganda in the United States of America: Manufacturing Consensus Online.” Project on Computational Propaganda (2017).
  47. Braun and Eklund, “Fake News, Real Money.”
  48. Lewis, Rebecca. “Alternative Influence: Broadcasting the Reactionary Right on YouTube.” Data & Society Research Institute (2018).
  49. Gurgun, Selin, Emily Arden-Close, John McAlaney, Keith Phalp, and Raian Ali. “Persuasive Design Techniques for Challenging Misinformation in Social Media.” 2023.
  50. Lewandowsky, Stephan, Ullrich K. H. Ecker, and John Cook. “The Debunking Handbook 2020.” Center for Climate Change Communication, George Mason University, 2020.
  51. Pennycook, Gordon, and David G. Rand. “The psychology of fake news.” Trends in Cognitive Sciences 25, no. 5 (2021): 388-402.
  52. Gurgun et al., “Persuasive Design Techniques for Challenging Misinformation in Social Media.”
  53. Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (2018).
  54. Woolley, Samuel C., and Philip N. Howard. “Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media.” Oxford University Press, 2018.
  55. Huang, Aaron. “Combatting and Defeating Chinese Propaganda and Disinformation.” Belfer Center for Science and International Affairs (2020).
  56. Chien, Li-chung, et al., “China using fake news to divide Taiwan,” Taipei Times, 16 September 2018, http://www.taipeitimes.com/News/front/archives/2018/09/16/2003700513.
  57. Cardenal, Juan P., et al., “Sharp Power: Rising Authoritarian Influence,” National Endowment for Democracy, December 2017.
  58. Aspinwall, Nick, “Taiwan Shaken by Concerns Over Chinese Influence in Media, Press Freedom,” The Diplomat, 27 July 2019, https://thediplomat.com/2019/07/taiwan-shaken-by-concerns-over-chinese-influence-in-media-press-freedom/.
  59. Sharevski, Filipo, Raniem Alsaadi, Peter Jachim, and Emma Pieroni. “Misinformation Warning Labels: Twitter’s Soft Moderation Effects on COVID-19 Vaccine Belief Echoes.” arXiv preprint arXiv:2104.00779 (2021).
  60. Zulli, Diana, Miao Liu, and Robert Gehl. “Rethinking the ‘social’ in ‘social media’: Insights into topology, abstraction, and scale on the Mastodon social network.” New Media & Society 22, no. 7 (2020): 1188-1205.
  61. Calma, Justine. “Scientists Say They Can’t Rely on Twitter Anymore.” The Verge, May 31, 2023. https://www.theverge.com/2023/5/31/23739084/twitter-elon-musk-api-policy-chilling-academic-research.
