Internet censorship: Diagnosing a global threat

Dr. Navid Yousefian analyzes who is behind global censorship measures, and what their goals are: Part 1

22 mins read

Nym is proud to present its latest research on the state of global censorship. The extensive report "Censorship without borders: Deconstructing the myth of West vs. East" will be published in two parts. Read the full report: coming soon.


In the early days of the internet, many scholars and policymakers believed that digital networks would dissolve borders and facilitate a global free flow of information, ultimately diminishing the power of state censorship and democratizing information. Cyber-libertarian thinkers imagined the web as a borderless realm where information transcends national boundaries and authoritarian constraints. By contrast, cyber-paternalist and -realist perspectives predicted that states and powerful private interests would find ways to control and manipulate this medium, just as they had done with traditional print and broadcast media. Over the past three decades, experience has vindicated the latter view: online censorship has not only persisted but evolved, adopting new forms and tools that differ substantially from traditional offline methods.

Traditional censorship often involved pre-publication reviews, book bans, newspaper shutdowns, physical intimidation, or the confiscation of printing presses. Today’s digital censorship expands far beyond such overt and localized tactics. It leverages the unique characteristics of the internet – its speed, scale, global reach, and algorithmic sorting mechanisms – to enforce information control. State actors can now employ centralized “firewalls” and technical filtering systems to block entire swaths of the global web, as seen in models like China’s “Great Firewall” and, more recently, Russia’s internet sovereignty measures. Non-state actors, from extremist groups to corporate lobbyists, can exert more subtle forms of influence, shaping what users see through tactics like orchestrated disinformation campaigns or quiet pressures on platforms to de-rank specific topics. For example, between 2012 and 2019, Russian authorities blocked over 4.1 million Internet resources without a warrant, demonstrating how easily a state can shape digital access. Similarly, Turkey’s 2017 ban on Wikipedia forced users seeking basic information to confront sudden, legally enforced barriers, underscoring how entire knowledge domains can be abruptly sealed off.

In recent years, as authoritarian-leaning states have refined their censorship playbooks, these methods have begun to diffuse internationally. Contemporary censorship no longer relies solely on brute force, such as mass arrests or wholesale newspaper shutdowns; it also emerges through elaborate surveillance infrastructures and subtle algorithmic tweaking. For instance, surveillance equipment from Chinese firms like Hikvision and Huawei now appears in dozens of countries worldwide, while Russia’s SORM-compatible technology spreads quietly across former Soviet states and beyond. This global dispersal of information control techniques, ranging from keyword-based blocking to strategic downranking of dissenting voices, shows that censorship’s frontier has become not only transnational but also deeply embedded in the political economies of technology export and influence.

Yet this narrative overlooks the fact that Western democracies also regulate misinformation and push for platform accountability, efforts framed as crucial for addressing harms to democratic processes and marginalized groups. These initiatives, however, raise questions about unintended impacts on free expression, challenging the simple dichotomy between a “free West” and an “authoritarian East.” As recent debates about banning TikTok in the United States demonstrate, compliance and access to user data are not exclusively Eastern concerns: Western states also leverage economic and security arguments to influence platform behavior and potentially curb free expression online. In Western democracies, mechanisms of censorship have evolved to become more implicit and technologically driven. Germany’s NetzDG law, which took full effect in 2018, exemplifies this shift. Designed to combat hate speech and illegal content on social media, NetzDG imposes strict deadlines for content removal and hefty fines for non-compliance. While intended to protect users from harmful content, critics argue that it incentivizes over-removal and stifles legitimate free expression, particularly of minority or dissenting voices.

[Image: A simplistic index overlooking the complexities of soft censorship]

Algorithmic manipulation – such as shadow bans, curated feeds, and content downranking – has become a key form of censorship in democratic societies. Although framed as “quality control” or “user experience optimization,” these covert interventions can systematically marginalize dissenting voices and critical content, especially when guided by government “requests” or policy signals from regulatory bodies. Moreover, the lack of transparency in algorithmic moderation means that citizens in liberal democracies may never know that their speech has been throttled or their posts buried, effectively creating hidden speech hierarchies that silently replicate authoritarian-style control.
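To make this mechanism concrete, here is a deliberately toy sketch of how a hidden multiplicative penalty in a feed-ranking function can bury a post without ever removing it. Everything in it – the field names, the weights, the scoring rule – is invented for illustration and does not describe any real platform’s algorithm:

```python
# Toy illustration of covert downranking: a quiet "policy penalty"
# multiplies the ranking score, so a post is never deleted, merely
# made invisible. All names and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement: float            # normalized likes/shares/comments
    policy_penalty: float = 1.0  # 1.0 = neutral; < 1.0 = quietly downranked

def rank(posts: list[Post]) -> list[Post]:
    # A 0.2 penalty pushes even the most engaging post far down the feed.
    return sorted(posts, key=lambda p: p.engagement * p.policy_penalty, reverse=True)

feed = rank([
    Post("mainstream-news", engagement=0.7),
    Post("dissenting-oped", engagement=0.9, policy_penalty=0.2),
    Post("cat-video", engagement=0.5),
])
print([p.id for p in feed])
# ['mainstream-news', 'cat-video', 'dissenting-oped']
# The dissenting post has the highest raw engagement yet ranks last --
# an intervention invisible to both its author and its audience.
```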

Evolving nature of censorship in the digital age

Unlike traditional censorship, which could be blunt and easily identifiable, online censorship can be opaque and elusive. The globalized structure of the internet allows for diverse and layered interventions:

  • Technical Censorship: This involves direct interference with the underlying infrastructure. It includes DNS blocking, IP filtering, and the use of Deep Packet Inspection (DPI) to identify and halt traffic associated with politically or culturally sensitive content (a simple detection sketch follows this list). While classical censorship involved physically seizing a newspaper print run, today’s regimes can “pull a virtual plug” or slow traffic to certain regions, as observed in incidents where Russia, China, or Iran have regionally suspended mobile data service to quell unrest.
  • Algorithmic and Platform-Level Controls: Private companies running social media platforms, search engines, and app stores deploy moderation policies that determine what is visible, trending, or “recommended.” Sometimes this is voluntary, as with corporate “community guidelines” that reflect corporate values and brand protection strategies. Other times, it results from state pressure – such as demanding that foreign technology companies comply with local laws on what can or cannot be posted – or from subtle market incentives that favor certain viewpoints. For instance, the Russian model relies on a combination of legal mandates and extralegal pressure on ISPs and platforms. Similarly, the Chinese case shows platforms employing armies of “content supervisors” or “50 cent party” commentators to shape discourse in line with state preferences. In Western contexts, algorithmic interference often operates more implicitly. For example, Facebook's decision to deprioritize political content in its News Feed in 2021 led to reduced visibility for both activist campaigns and independent media outlets, raising concerns about the platform’s ability to quietly shape public discourse under the guise of content moderation.
  • Self-Censorship and Behavioral Influence: Unlike older systems where a citizen’s silence might be forced by the tangible threat of arrest or violence, in the digital era users may voluntarily censor themselves due to the chilling effect of omnipresent surveillance, identity registration requirements, or platform moderation uncertainty. The breadth of hate speech, fake news, or “extremist” speech laws – such as Russia’s legal provisions on “extremist materials” or China’s ambiguous but strictly enforced guidelines on subversive content – creates an environment where users internalize restrictions. Similarly, in Western contexts, the fear of being shadow banned or having content algorithmically deprioritized for sharing non-mainstream, critical, or political voices can deter users from engaging in open discussions. This implicit form of censorship leads individuals to self-regulate their speech, avoiding controversial topics to maintain visibility and reach, effectively mirroring the suppressive effects seen in more overtly authoritarian regimes.
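Of these three layers, technical censorship is the most directly measurable. The sketch below shows, in simplified form, the kind of test that measurement projects such as OONI run at scale: resolve a domain through the local (ISP-assigned) resolver and through an independent public resolver, then flag divergent or empty answers. It is a minimal illustration, not OONI’s actual methodology; it assumes the third-party dnspython package, and benign factors such as CDNs can also produce differing answers:

```python
# Minimal sketch of a DNS-tampering probe. Assumes `pip install dnspython`.
import dns.resolver

def resolve_a_records(domain: str, nameserver: str | None = None) -> set[str]:
    """Return the set of A-record addresses for `domain`, or an empty set on failure."""
    resolver = dns.resolver.Resolver()
    if nameserver is not None:
        resolver.nameservers = [nameserver]  # override the system/ISP resolver
    try:
        return {rr.address for rr in resolver.resolve(domain, "A")}
    except Exception:
        # NXDOMAIN, timeout, or a refused query are all potential blocking signals.
        return set()

def check_domain(domain: str, reference_ns: str = "1.1.1.1") -> str:
    local = resolve_a_records(domain)                    # system (ISP) resolver
    reference = resolve_a_records(domain, reference_ns)  # independent resolver
    if not local and reference:
        return "no local answer: possible DNS blocking"
    if local and local != reference:
        return "divergent answers: possible DNS hijacking (or just a CDN)"
    return "consistent answers"

if __name__ == "__main__":
    for domain in ["wikipedia.org", "example.com"]:
        print(f"{domain}: {check_domain(domain)}")
```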

In the last decade, increasing global connectivity has paradoxically been accompanied by more sophisticated censorship tactics. Governments claim national security and cultural integrity justifications to expand their toolkit, while private platforms argue for content moderation as a way to combat disinformation or hate speech. The result is a highly complex environment where multiple actors – states, corporations, civil society groups – push and pull on the boundaries of permissible discourse.

Studying online censorship at this moment is crucial for several reasons. First, the growing convergence of digital communication technologies with every aspect of social, economic, and political life magnifies the impact of information controls. Second, as democracies struggle with the challenges of disinformation and extremist content, the lines between legitimate moderation and subtle censorship blur, making it imperative to understand the nuance of these interventions. Third, as networked authoritarian regimes perfect their techniques – sometimes exporting them or inspiring similar controls in other states – understanding these methods becomes essential to preserving an open global information environment.

The analytical framework developed in this report will consider censorship as a continuum. On one end lies “hard censorship,” exemplified by total blackouts, banned websites, and criminal liability for accessing forbidden knowledge. On the other end, “soft censorship” includes algorithmic downranking of certain topics, shadow-banning of activists, and quiet removal of apps from stores at a government’s request. Between these poles lie hybrid forms: content flagged as “extremist” and placed on public blacklists, forced data localization that encourages corporate compliance with local censorship laws, and platform policies that are continually adjusted in response to political pressure and market incentives.

The concept of censorship in the digital age cannot be divorced from its historical roots. The transition from analog to digital did not erase earlier strategies; it transformed them, making censorship cheaper, more flexible, and more easily concealed. Early internet governance scholarship noted that efforts like China’s “Golden Shield” (Great Firewall) would serve as global precedents. Subsequent scholarship – on Russia’s internet laws, Europe’s attempts to counter “fake news,” and U.S. debates over platform liability – has shown that states and corporations alike adapt previous modes of influence into the digital domain for security and geopolitical interests. Thus, evolving definitions acknowledge that while censorship once required heavy-handed, resource-intensive action, today’s methods exploit algorithmic curation, platform policies, and social norms to achieve similar ends with less visibility.

While much scholarship focuses on the explicit censorship apparatuses in places like China and Russia, Western states increasingly rely on “soft” censorship tools, including algorithms that downrank certain content and disinformation laws that incentivize platforms to remove borderline material, effectively limiting the range of permissible speech without leaving overt legal footprints.

Drivers of online censorship

Political motivations

At the core of many censorship regimes are distinct political agendas. Authoritarian governments leverage online censorship to maintain power, quell dissent, and shape public perception. In China, the party-state’s extensive firewall, coupled with a combination of overt (website blocking, keyword filtering) and subtle (algorithms that downrank politically sensitive content) tools, exemplifies a top-down approach to controlling the narrative. Russia, evolving more recently into a model of digital authoritarianism, employs a mixture of legislative mandates, public-private arrangements with ISPs, and co-optation of major platforms to achieve similar ends – preventing mobilization efforts like the Bolotnaya Square protests and deterring criticism of political leadership. This increasingly assertive digital authoritarianism rests on laws used to prosecute hundreds of people for “extremist” online posts: between 2011 and 2017, 604 such charges were brought under Articles 280 and 282 of Russia’s criminal code, providing a legal foundation for broad censorship.

These practices are not confined to an authoritarian axis. Some scholars highlight more subtle forms of political censorship in ostensibly democratic contexts. Western democracies and transitional states often use content moderation policies to combat extremist material or foreign disinformation, raising questions about transparency and due process. While these measures might be justified by national security or public safety concerns, they can also slide into politically opportunistic suppression of particular viewpoints. This tension is echoed by scholarship on global internet governance institutions, which shows that domestic politics shape how states respond to perceived digital threats. For instance, electoral interference – in the form of blocking certain news sources or manipulating content circulation – occurs in both authoritarian and semi-competitive regimes, confirming that political motivations for censorship transcend clear-cut categories.

Iran’s online censorship framework offers another lens through which to understand political drivers. Like China and Russia, Iran heavily monitors and restricts content for national security purposes and regime stability. The state’s approach includes blocking foreign platforms, imposing strict controls on messaging apps, and developing a state-sanctioned national intranet. Although Iran’s motivations overlap with those of other authoritarian states – preventing online coordination of protests and suppressing dissident voices – its censorship paradigm is also influenced by religious and cultural factors that amplify the political calculus.

As these political agendas scale outward, we observe governments like Egypt or Tanzania, inspired by China’s sophisticated firewall and Russia’s legislative frameworks, adopting parallel regulations and technologies. For example, China’s Belt and Road Initiative serves as a strategic conduit, allowing Beijing to export censorship know-how – from facial recognition systems and AI-enabled traffic management tools to data localization mandates – directly to states willing to replicate aspects of its model. Similarly, Russia’s reach through the Commonwealth of Independent States ensures that SORM-based monitoring and interception laws are mirrored in countries like Belarus, Kazakhstan, and Uzbekistan. This transnational replication confirms that censorship regimes often travel alongside economic partnerships and diplomatic ties, transforming political influence into a blueprint for controlling information beyond a single nation’s borders.

In Western democracies, political motivations for censorship may cloak themselves in disinformation-countering strategies. Laws targeting “fake news” or “terrorist propaganda” risk creating an environment where political elites and well-connected actors can shape platform policies and push platforms to remove content that is politically inconvenient or challenging to dominant interests. This subtle form of content steering in the West is no less political than the more explicit blocking tactics observed in authoritarian states. It simply operates under a veneer of legality and corporate compliance, complicating the assumption that open societies inherently maintain truly free digital spaces.

Cultural, religious, and social factors

Cultural and religious norms often interact with political priorities to reinforce censorship. China’s restrictions on religious or ethnically oriented content – such as information related to Falun Gong or minority groups – and Russia’s legal provisions against “offending religious believers” demonstrate that the line between cultural values and political enforcement is thin. Content deemed “extremist” can include religious satire, historical reinterpretations, or commentary on political leaders aligned with the national faith.

Iran provides a vivid example of how religious doctrine can underpin digital censorship. The country’s Islamic Republic status informs its approach to online restrictions: content deemed contrary to Islamic moral standards or threatening to religious authority is systematically filtered out. Here, cultural preservation, religious legitimacy, and national identity are deeply intertwined. The blocking of websites promoting “un-Islamic” values or the suppression of content that offends religious sentiments blends the cultural with the political, ensuring that censorship is not just top-down control but also resonates with certain segments of the population’s moral expectations. The government has repeatedly blocked messaging apps like Telegram and WhatsApp, citing moral and religious guidelines to justify silencing politically sensitive content, merging cultural norms with political imperatives. These platforms, along with Facebook, Instagram, X, and YouTube, have been completely blocked in recent years and are accessible only through VPNs. Ironically, millions of Iranians continue to use them via VPNs, many of which are reportedly sold or tacitly permitted by the government, raising questions about potential behind-the-scenes collaborations or mutual interests in maintaining partial access.

The blending of cultural and religious imperatives with state censorship is not limited to any single region. Iran’s “National Information Network,” for instance, materialized partly through consultation with China, aligning Islamic principles with a tightly managed information sphere patterned on the Chinese model. In more secular contexts, like parts of Latin America, cultural values might manifest through narratives legitimizing surveillance under the guise of public safety. Here, Chinese-provided “safe city” solutions are installed to monitor crowds, identify “unruly” behavior, and thus quietly enforce norms, suggesting that where universal religious laws are absent, shared cultural or moral frameworks can still shape what content must vanish or be discouraged. Beyond these explicitly authoritarian contexts, cultural and social pressures can also manifest in more open societies. Social media platforms themselves, spurred by user reports or media backlash, may remove content considered hateful or offensive. While such moderation can be well-intentioned, it also raises questions about cultural biases embedded in algorithmic decision-making and content reviewer guidelines.

Economic and corporate influences

Economic drivers also play a crucial role in shaping online censorship. Surveillance capitalism, where corporations monetize user data, enables subtle forms of censorship by tailoring content recommendations that maximize engagement and ad revenues, often at the expense of diverse or dissenting viewpoints. Many scholars highlight the interplay between corporate actors and state regulations: technology firms often comply with local censorship laws to maintain market access. In China, domestic companies like Tencent, Baidu, and Alibaba have long internalized censorship as part of their operational environment. International firms such as Google or Facebook have sometimes contemplated compromises – data localization, compliance with content takedowns – to enter lucrative markets. LinkedIn initially complied with Chinese rules by censoring profiles, including those of Western journalists, in 2021; however, mounting regulatory challenges and limited scope for operations led the platform to fully exit the Chinese market by August 2023, following the closure of its social networking features in 2021 and the eventual shutdown of its job-focused app, InCareer.

Russia’s recent pushes for data localization laws and the blocking of platforms that refuse compliance similarly underline that economic incentives – access to large user bases and advertising revenues – can lead corporations to tailor their content policies in line with government demands. In democratic contexts, the necessity to comply with local regulations is equally compelling. For example, Brazil’s 2024 nationwide ban on X (formerly Twitter), imposed after the platform defied court orders tied to political misinformation, forced it to comply or cease operations entirely, highlighting that companies cannot afford to defy state mandates without significant repercussions, even in the West. Similarly, Telegram’s founder Pavel Durov was arrested in France over the platform’s alleged failure to moderate illegal content and cooperate with authorities, illustrating that compliance pressures extend beyond authoritarian regimes. Yet, economic incentives complicate the picture further. Chinese tech giants like ZTE and Alibaba and Russian firms like Protei and VAS Experts rely on foreign markets for growth. As they export not only hardware – such as CCTV cameras and DPI (Deep Packet Inspection) systems – but also consultancy and training, they expand their global footprint while normalizing more stringent content controls. Consider the digital forensics workshops arranged by China’s Meiya Pico, which has trained law enforcement from Argentina to Uzbekistan. These sessions highlight corporate-state synergy: governments receive advanced control tools and practical know-how, corporations secure lucrative deals and market influence, and the boundary between legitimate cybercrime policing and politically motivated suppression grows increasingly porous.

In sum, the political, cultural, and economic factors behind online censorship form a multi-dimensional matrix. Governments rely on censorship to safeguard regime legitimacy and neutralize opposition, while religious and cultural imperatives shape what is considered permissible speech. Economic incentives compel corporations to adapt their policies or risk losing market share, but censorship is not solely a tool of state power. Beyond regulatory compliance, purely commercial motivations also drive censorship practices. For instance, ISPs in Western countries may throttle foreign content or prioritize more profitable platforms not due to legal obligations but to reduce operational costs or favor lucrative partnerships. Additionally, platform-level decisions to shadow-ban certain media outlets or deprioritize specific content can stem from commercial interests aimed at maximizing user engagement and advertising revenues.

Key actors and their interactions

The ecosystem of online censorship is shaped by a web of interacting actors – ranging from governments and state security services to international technology giants, local ISPs, and grassroots organizations. These stakeholders do not operate in isolation; their relationships are dynamic, evolving in response to shifts in policy, technology, market forces, and user behavior. By examining their roles and the forms of censorship they employ, we gain a holistic understanding of how digital information flows are regulated, suppressed, or manipulated across different regimes and cultural contexts.

State-level actors

At the apex of most censorship architectures are state institutions – governments, regulatory bodies, and security agencies – which formulate and enforce legal and technical constraints. The Cyberspace Administration of China (CAC) and Russia’s Roskomnadzor provide clear illustrations of how centralized authorities impose top-down censorship mandates. In China, the CAC orchestrates a sophisticated model of “hard” censorship – direct blocking, keyword filtering, full-network isolation in critical moments – as well as “soft” censorship, such as algorithmic adjustments to search rankings or news feeds. Russia’s Roskomnadzor, while historically less technologically centralized than the Chinese firewall system, has increasingly aimed to consolidate its control, implementing blacklists and data localization laws and testing isolation exercises to cut off the country’s internet infrastructure from the global web.

As Chinese and Russian systems of control travel abroad, the dynamic between local ISPs, platform providers, and government agencies intensifies. Countries along China’s Belt and Road, for instance, may adopt local versions of China’s vague cyber laws, encouraging ISPs to practice proactive filtering. Meanwhile, Russian-designed analytics suites – such as those sold by Analytical Business Solutions – help authorities in post-Soviet states swiftly identify perceived threats in social media chatter. Similarly, U.S.-made technologies have also been implicated in enabling censorship abroad. Companies like Cisco and Blue Coat Systems have reportedly sold network filtering and surveillance tools later used by governments in countries such as Saudi Arabia, Bahrain, and Syria to block content and monitor activists. This demonstrates that censorship-enabling technologies are not exclusive to authoritarian states like China and Russia but are also exported from democratic countries under the guise of commercial transactions.

[Image: US surveillance and censorship tools used around the world]

Non-state actors, including foreign journalists trained at China’s Baise Executive Leadership Academy, re-enter their home environments armed with insights that could reshape their editorial decisions. Simultaneously, activists and civil society groups face fresh challenges: not only must they learn about local constraints, but they must also navigate infrastructures and legal frameworks originally perfected in faraway states. In this interplay, censorship agendas and resistance strategies become transnational puzzles, each influenced by foreign templates and technologies.

Infrastructure & platform providers

Beyond the state, Internet Service Providers (ISPs), telecommunications firms, and global platform providers (social media networks, search engines, content delivery networks) function as key chokepoints. Technically, ISPs can enforce “hard” censorship by blocking domain names or IP addresses at the state’s behest. Large telecom operators in Russia and Iran, often with partial state ownership or under strict licensing requirements, have limited room to resist such instructions. They must install surveillance and filtering equipment, as seen with Russia’s SORM or Iran’s state-controlled network infrastructure.

Throttling by ISPs often serves not only regulatory or political objectives but also economic motivations, where commercial savings and network efficiency take precedence. ISPs deliberately slow down foreign or high-bandwidth content, such as streaming platforms or international data-heavy services, to reduce operational costs and manage congestion. This practice frequently prioritizes domestic or partnered platforms through mechanisms like zero-rating deals, where selected services do not count against data caps, subtly steering users toward favored content. For example, video streaming services or cloud-based applications might face slower speeds during peak usage times, while local alternatives or platforms with exclusive ISP agreements perform seamlessly. Such practices, though framed as fair bandwidth management or congestion control, introduce a commercially driven layer of censorship that influences user access to diverse global content, often mirroring more explicit controls seen in authoritarian contexts.
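Throttling of this kind is, in principle, measurable. The sketch below illustrates the basic idea behind differential throughput testing: download comparable files from two services and compare observed speeds. The URLs are hypothetical placeholders, and a single measurement proves nothing – dedicated tools such as Wehe replay matched traffic to control for server-side differences – but the sketch conveys the shape of the method:

```python
# Minimal sketch of a differential throughput test (Python stdlib only).
# The URLs below are hypothetical placeholders, not real test endpoints.
import time
import urllib.request

def measure_mbps(url: str, max_bytes: int = 5_000_000) -> float:
    """Download up to max_bytes and return observed throughput in Mbit/s."""
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url, timeout=30) as response:
        while received < max_bytes:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received * 8 / 1_000_000) / elapsed

if __name__ == "__main__":
    endpoints = {
        "foreign-streaming": "https://cdn.example.com/100MB.bin",  # placeholder
        "domestic-partner": "https://cdn.example.net/100MB.bin",   # placeholder
    }
    for name, url in endpoints.items():
        print(f"{name}: {measure_mbps(url):.1f} Mbit/s")
    # A large, repeatable gap between otherwise comparable endpoints is a
    # signal worth investigating - not proof of throttling on its own.
```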

Meanwhile, major platforms – Facebook (Meta), Twitter (X), Google, TikTok, and WeChat – act as gatekeepers of information flow. These platforms implement their own content moderation policies, which can amount to “soft” censorship. For example, algorithmic filtering may disproportionately downrank political dissent or favor state-aligned narratives, whether deliberately or as an unintended outcome of opaque recommendation engines. In authoritarian markets, these companies face stark choices: comply with censorship demands for market access or withdraw services altogether. Google’s weighing of a censored Chinese search engine under the “Dragonfly” project and LinkedIn’s acceptance of localized censorship rules to operate in China illustrate the influence governments wield over global platforms. Similarly, Russia’s attempts to block Telegram underscore how states pressure messaging and social media services to grant access to user data or adhere to content removal requests. Infrastructure providers and platforms, from major ISPs to Apple’s App Store, sometimes comply with censorship demands – like Apple removing dozens of VPN apps from its Chinese App Store in 2017 and from its Russian one in September 2024 – showing how global corporations become enablers of local censorship regimes.

However, platforms are not always passive. Some choose limited pushback or transparency measures – e.g., Twitter’s past public “transparency reports” on takedown requests – offering partial resistance. When corporate self-interest aligns with free expression or brand reputation, platforms might refuse compliance or relocate services. The tension between platforms and governments thus becomes a negotiation shaped by economic incentives, public opinion, and reputational costs.

Non-state and transnational actors

Non-state actors also shape the censorship landscape. On the one hand, civil society groups, NGOs, and advocacy organizations – such as Freedom House, which produces censorship indices, or the Open Observatory of Network Interference (OONI), which runs open-source network measurement – fight censorship by documenting, monitoring, and exposing hidden content removal (criticism of these indices follows in Section V). They assist users with circumvention tools, pressuring platforms and states toward greater accountability and less arbitrary enforcement. Their influence is global and networked, helping to raise awareness of censorship in places like Russia, China, and Iran, as well as highlighting subtler censorship forms in democracies.

On the other hand, extremist organizations, “troll farms,” and disinformation networks complicate the picture. Far from simply being victims of censorship, these groups also benefit from information manipulation. Russian “troll farms” and disinformation agents exploit platforms’ moderation challenges, forcing states and tech companies to overcorrect and potentially censor legitimate discourse. In these instances, non-state actors provoke an environment where governments justify more authoritarian measures. Similarly, extremist content online leads to calls from activist groups and the public to remove hateful or harmful speech, giving platforms and states a moral rationale for censorship – though this can create a slippery slope.

Additionally, some lobbyists, industry coalitions, or professional associations influence how moderation policies develop. By lobbying for stricter or looser regulations, they can tilt the balance between free expression and censorship. In certain cultural contexts, religious institutions or community leaders exert pressure on platforms and ISPs to remove “offensive” cultural or religious content. These actors represent the diverse tapestry of interest groups that can collectively push censorship norms in different directions – sometimes aligning with state objectives, other times opposing them.


Read on in Part 2 to learn how censorship works and how to resist it.
