Censorship technologies and resistance: A global arms race
Dr. Navid Yousefian investigates how censorship technologies work, and how people can resist them: Part 2
Nym is proud to present its latest research on the state of global censorship. The extensive report "Censorship without borders: Deconstructing the myth of West vs. East" will be published in two parts. Check out Part 1 here. Read the full report: coming soon.
Techniques and mechanisms of online censorship
Online censorship employs a diverse toolkit of technical and policy-based methods. Some approaches are overt, such as domain-level blocks or firewalls that users can easily detect. Others are more subtle – algorithmic adjustments to content ranking systems or quiet requests for certain posts to be taken down. Understanding these mechanisms is crucial because their sophistication and opacity often make it challenging to assess how, when, and why certain content disappears or becomes harder to find. Moreover, as censorship evolves alongside technology, both sides continually innovate: states and platforms deploy new tools to tighten information control, while users devise new ways to circumvent existing blocks.
Direct technical controls
The most visible and traditional forms of online censorship rely on direct technical intervention at the network or infrastructure level. China’s “Great Firewall” exemplifies this approach, blending DNS tampering, IP blocking, keyword filtering, and Deep Packet Inspection (DPI) technology. These tools enable Chinese authorities to inspect the contents of data packets and block or throttle traffic containing proscribed terms or domains. Such measures effectively create a “walled garden” where users remain confined to a filtered version of the global Internet. China’s firewall blocks foreign social media giants like Facebook and Twitter, and its censorship apparatus can target up to 100,000 posts per day with the help of automated scanners and human censors.
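To make these mechanics concrete, here is a minimal sketch of the keyword matching at the heart of DPI-based filtering. The keywords and verdict logic are purely illustrative; production systems match far richer signatures at line speed in dedicated hardware, and typically enforce a block by dropping packets or injecting TCP resets toward both endpoints.

```python
# Minimal sketch of keyword-based packet inspection, the core idea behind
# DPI filtering. Keywords here are illustrative placeholders.

BLOCKED_KEYWORDS = [b"example-banned-term", b"proscribed-domain.org"]

def inspect_packet(payload: bytes) -> str:
    """Return a verdict for one reassembled packet payload."""
    lowered = payload.lower()
    for keyword in BLOCKED_KEYWORDS:
        if keyword in lowered:
            # Censors typically enforce a block by dropping the packet or
            # injecting TCP resets to tear the connection down.
            return "BLOCK"
    return "ALLOW"

print(inspect_packet(b"GET /news HTTP/1.1\r\nHost: proscribed-domain.org\r\n"))  # BLOCK
print(inspect_packet(b"GET /weather HTTP/1.1\r\nHost: allowed.example\r\n"))     # ALLOW
```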
Russia has also advanced its censorship capabilities, increasingly relying on DPI and network isolation laws. Its system was originally less centralized than China’s, but Russian authorities now require ISPs to install DPI equipment and enforce blacklists maintained by Roskomnadzor. Periodic tests of “RuNet” isolation and mandates for local data storage further expand the government’s capacity to block or restrict sites and services. Iran’s “National Information Network” employs similar strategies, including keyword-based blocking and regional slowdowns or outages during politically sensitive moments, demonstrating how these tools are often deployed during elections, protests, or other crises to suppress dissent.
[Figure. Source: Alexander Master’s dissertation]
However, direct technical controls are not limited to authoritarian states. Western democracies also utilize these tools, albeit on a narrower scale and often under the guise of legal compliance or national security. In the UK, the Online Safety Act empowers the regulator Ofcom to enforce a broad range of safety measures aimed at protecting users – particularly children – from harmful or illegal content. This includes the authority to require ISPs to limit or block access to services that fail to comply with mandated safety standards. For instance, platforms hosting illegal content, such as child exploitation material, or those deemed to facilitate hate speech, harassment, or terrorism, may face legal demands to restrict access or implement stringent content moderation systems. Non-compliance could result in significant penalties, such as fines of up to £18 million or 10% of global revenue, and, in severe cases, ISPs may be directed to disrupt access entirely through “business disruption measures.”
This regulatory framework adds a layer of control at the infrastructure level. ISPs could be compelled to block non-compliant platforms, effectively introducing site-level restrictions akin to censorship mechanisms in more authoritarian contexts. While these measures are framed as safeguarding public interest, they carry potential risks for freedom of expression and access to information, especially when criteria for "harmful" content are vague or inconsistently applied.
Similarly, traffic shaping – a method by which ISPs intentionally slow or delay access to certain websites – has been employed in Western countries, including the UK, to prioritize domestic platforms or manage network congestion. High-bandwidth activities, such as peer-to-peer file-sharing or streaming from foreign platforms, are often deprioritized, creating an uneven playing field for online content. This practice, while motivated by network management or commercial concerns, effectively limits users’ access to certain types of content or services. For example, streaming platforms competing with local broadcasters or social media apps requiring substantial bandwidth may experience slower loading speeds, subtly discouraging their use. These practices, while often less visible or dramatic than their counterparts in China or Russia, reveal how direct technical controls are employed globally to shape online behavior. Whether through overt state mandates or commercial arrangements, such interventions underscore how ubiquitous infrastructure-level censorship is in serving national policies and commercial priorities.
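A minimal sketch of the token-bucket mechanism that underlies most traffic shaping: packets from a deprioritized flow pass only while enough "tokens" have accumulated, capping that flow's effective bandwidth. The rates and the class of traffic being shaped are illustrative.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper, the mechanism behind most throttling.

    Packets from a deprioritized flow pass only while tokens remain,
    capping that flow's effective bandwidth at `rate` bytes per second.
    """

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # refill rate, bytes per second
        self.capacity = burst     # maximum burst size, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, up to the burst cap.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True   # forward immediately
        return False      # queue or drop: the flow is shaped down to `rate`

# e.g. cap a deprioritized foreign video stream at ~1 Mbit/s:
shaper = TokenBucket(rate=125_000, burst=250_000)
```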
Platform-level moderation and algorithmic biases
As user experiences are increasingly mediated through centralized platforms – social media networks, search engines, and messaging apps – censorship often occurs at the content layer. Instead of blocking entire domains, states or corporations can push platforms to remove or downrank specific content, resulting in a more opaque form of censorship. For example, in China, platforms employ tens of thousands of content supervisors to align discourse with state preferences, while in Western contexts, algorithmic downranking or YouTube’s “restricted mode” can quietly marginalize controversial topics. Shadow-banning – where posts are demoted or hidden without user awareness – exemplifies how content can be functionally silenced without overt bans. Automated moderation systems, driven by AI, further complicate the landscape, as they may over-remove content, inadvertently silencing legitimate debates or minority voices due to cultural or political biases embedded in training data.
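A toy sketch of how downranking works in practice: flagged posts are never deleted, merely multiplied down in the feed-ranking score, so they quietly fall out of most users' view. The flags and weights here are invented for illustration; real ranking systems combine hundreds of signals.

```python
from typing import Optional

# Toy sketch of algorithmic downranking ("shadow-banning" in effect).
# Flags and weights are invented for illustration.

DEMOTION_MULTIPLIERS = {
    "borderline_policy": 0.3,  # quietly demoted, no notice to the author
    "unverified_claim": 0.5,
    None: 1.0,                 # unflagged: ranked normally
}

def feed_score(engagement: float, recency: float, flag: Optional[str]) -> float:
    return engagement * recency * DEMOTION_MULTIPLIERS.get(flag, 1.0)

posts = [
    {"id": 1, "engagement": 900.0, "recency": 1.0, "flag": None},
    {"id": 2, "engagement": 900.0, "recency": 1.0, "flag": "borderline_policy"},
]
ranked = sorted(posts,
                key=lambda p: feed_score(p["engagement"], p["recency"], p["flag"]),
                reverse=True)
# Identical engagement, yet post 2 ranks far below post 1: functionally
# silenced, with nothing visible for the author to appeal.
```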
In nominally democratic societies or hybrid regimes, state control often operates through indirect mechanisms. Governments in the European Union or the United States often collaborate with platforms to remove “extremist content” or address harmful misinformation. While these efforts are vital for mitigating real threats, and far less overt than the Chinese or Russian approaches, they still risk suppressing dissenting viewpoints, particularly when transparency and due process are lacking.
Economic motivations further amplify these dynamics. Big Tech platforms, driven by profit motives, design algorithms and moderation policies that prioritize user engagement, minimize legal risks, and protect their brands. This leads to forms of "soft" censorship, such as shadow bans, demonetization, or selective de-ranking of content deemed controversial or unprofitable. The political economy of censorship in democracies thus becomes a complex interplay of state pressure, public scrutiny, and corporate self-regulation. For example, recent shadow bans on Meta users posting about Palestine demonstrate how platforms’ actions can echo authoritarian-style suppression under the guise of maintaining community standards or reducing legal exposure.
In countries in the Global South or those undergoing political transitions, censorship practices are often more fluid and reactive. Temporary internet shutdowns during protests, throttling bandwidth during elections, or arresting bloggers under vaguely defined laws illustrate ad-hoc measures frequently employed in politically turbulent environments. India, for instance, resorted to over 100 internet shutdowns in a single year to quell unrest, demonstrating how temporary disruptions can become de facto censorship tools.
Platforms operating in these regions face unique pressures to align with local cultural and religious norms. They may be compelled to remove content deemed blasphemous or politically sensitive, or negotiate data localization and content guidelines on a case-by-case basis. In such contexts, multinational tech companies often defer to local laws or cultural expectations to maintain market access, while local ISPs, lacking political leverage, comply with government directives without resistance. Users are caught in a dynamic tug-of-war between state control, corporate interests, and local activism.
On a global scale, censorship frameworks migrate along geopolitical pathways. China’s Belt and Road Initiative exports its comprehensive surveillance and platform alignment models to Africa, Central Asia, and parts of Eastern Europe. Russia’s influence, though regionally concentrated in the CIS, extends as far as Latin America, where tools like Russian audio forensics are adopted in countries like Mexico. These exported models exemplify how censorship practices are no longer confined by geography, embedding themselves in political, economic, and cultural networks worldwide.
Whether through search result manipulations, shadow bans, or direct takedown requests, platforms increasingly act as gatekeepers of discourse. Their algorithms and moderation policies, shaped by state pressure, legal concerns, and commercial interests, dictate the visibility and accessibility of content. This subtle interplay between state and corporate agendas underscores how even democracies contribute to a global ecosystem of censorship, one that is often invisible in dominant indices portraying the West as a bastion of net neutrality and free expression.
Emerging censorship technologies
The censorship arms race is ongoing. New technologies – such as AI-enhanced surveillance tools – promise more precise and comprehensive filtering. Governments may use machine learning to identify patterns of subversive speech, while platforms deploy AI to detect and remove harmful content at scale. Yet these same tools can be repurposed, blurring lines between acceptable moderation and politically motivated suppression.
On the other side, blockchain-based protocols and decentralized architectures challenge the feasibility of large-scale censorship. Content hosted via peer-to-peer networks or on censorship-resistant platforms can evade conventional DNS-level blocks. States respond by strengthening traffic analysis methods, pressuring app stores to remove circumvention tools (e.g., VPNs or privacy-focused applications), and possibly exploring quantum-resistant cryptographic interception strategies.
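A brief sketch of the content-addressing idea behind IPFS-style distribution, and why it frustrates DNS-level blocks: content is requested by the hash of its bytes rather than by a domain name, so any peer can serve a verifiable copy. (Real IPFS uses multihash-encoded CIDs; plain SHA-256 stands in here for simplicity.)

```python
import hashlib

# Sketch of content addressing: blocking any one host's DNS entry does
# not remove the content, because the content is named by its hash.

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

article = b"Report the censors tried to bury."
cid = content_id(article)

# Any peer holding the bytes can serve them; the fetcher re-hashes the
# reply, so it does not matter who served the copy or from where.
swarm = {cid: article}  # stand-in for a peer-to-peer network

def fetch(requested_cid: str) -> bytes:
    data = swarm[requested_cid]
    if content_id(data) != requested_cid:
        raise ValueError("tampered copy")
    return data

assert fetch(cid) == article
```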
Often, these techniques and mechanisms are layered. For instance, during politically sensitive events, a government might simultaneously deploy network-level throttling, send legal notices to platforms demanding content removal, and rely on AI moderation tools controlled or influenced by local intermediaries. Companies facing contradictory legal regimes – strict censorship rules at home, freer speech norms abroad – must craft complex solutions, sometimes geofencing content or applying “differential enforcement,” where a video or article remains visible in one country but is blocked in another.
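In code, differential enforcement can be as simple as a per-country takedown table consulted at request time. A minimal sketch, with invented content IDs and country codes:

```python
# Minimal sketch of differential enforcement: one catalogue, filtered per
# request by the viewer's inferred country. The takedown table is invented.

TAKEDOWN_ORDERS = {
    "video-123": {"XX"},         # blocked only where a legal notice applies
    "article-9": {"XX", "YY"},
}

def visible(content_id: str, viewer_country: str) -> bool:
    return viewer_country not in TAKEDOWN_ORDERS.get(content_id, set())

# The same request succeeds from one jurisdiction and is refused in another:
assert visible("video-123", "US") is True
assert visible("video-123", "XX") is False
```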
This interplay of hard (direct, infrastructural) and soft (algorithmic, policy-driven) methods creates an environment in which users struggle to discern the forces behind content suppression. Consequently, trust in platforms and information ecosystems erodes. Activists and journalists may self-censor, anticipating punitive responses before posting sensitive material, thereby making censorship partially self-enforcing. In addition, many governments now require real-name registration for online accounts, ensuring that every post or search query can be traced back to a specific individual. By making anonymity more difficult, these identity verification mandates heighten the stakes of speaking out, reinforce self-censorship, and integrate with surveillance tools to streamline the enforcement of censorship laws. The subtlety of some measures also allows states and companies to deflect blame, framing takedowns or visibility reductions as mere “terms of service” enforcement or benign quality control.
Challenging the Global North vs. Global South binary
While it might be tempting to assume that censorship is more pervasive or draconian in the Global South and more refined or limited in the Global North, many scholars caution against such simplistic binaries. Liberal democracies can still enable censorship through corporate-run filters or overly broad hate speech regulations. Conversely, some Global South countries might foster more open information ecosystems due to less technical capacity or stronger reliance on foreign platforms that are less easily controlled. Likewise, trade-offs vary: A state may justify censorship for “national security,” a tech platform may claim it’s filtering “misinformation” in the name of social responsibility, and both could be influenced by cultural expectations for decorum or morality. Corporate compliance or resistance is context-dependent. Some firms take a hard stand, refusing to bend to certain censorious regimes, while others quietly comply, citing profit motives and shareholder interests.
Recommendation algorithms, language filters, and automated moderation tools initially developed for one environment may be repurposed or misapplied in another. Corporate ethics policies, profit motives, and supply chain dependencies intersect with local regulatory frameworks. Over time, these global corporate practices influence local discourse in subtle ways, potentially aligning even disparate regions under a convergent censorship logic – one that blends political, cultural, and commercial interests.
Ultimately, what emerges from a comparative view is that censorship is not monolithic. It does not easily split into “authoritarian” vs. “democratic,” “North” vs. “South.” Instead, it is best understood as an ecosystem of competing pressures: states wishing to protect their narrative and authority, platforms navigating global markets and diverse legal systems, and users caught in a crossfire of policies, algorithms, and cultural mores. From China’s highly centralized, advanced infrastructural model to the more fragmented, market-driven censorship in liberal democracies and on to the hybrid or transitional states where ad-hoc and unpredictable interventions prevail, the global picture defies simple categories. Each context presents unique trade-offs and aligns censorship practices along different axes of power – some privileging national sovereignty, others corporate discretion, and still others local cultural consensus.
In all cases, the political economy framework helps to highlight that censorship is always about trade-offs – between security and freedom, profit and principle, compliance and resistance – and that the distribution of power among states, corporations, and civil society determines the shape and outcome of online information control across regions.
Measuring censorship and identifying blind spots
[See section V of the full report for a detailed consideration of problems in measuring censorship strategies worldwide, and how scholarly methods can be improved.]
Resistance, circumvention, and adaptation
Despite the intensity and sophistication of online censorship regimes, users, activists, and civil society groups across the globe have consistently found ways to circumvent restrictions and reclaim their agency in digital spaces. The existing literature stresses that online information control is not a static, one-sided affair but rather a dynamic ecosystem where each new censorship technique eventually encounters a corresponding workaround. This “cat-and-mouse” dynamic underscores the resilience of user communities and the importance of technological innovation, grassroots mobilization, and digital literacy in pushing back against information control.
Technical workarounds and circumvention tools
One of the most visible forms of resistance arises from the development and widespread adoption of technical circumvention tools. Virtual Private Networks (VPNs), for example, allow users to route their traffic through servers located in more open internet jurisdictions, effectively bypassing local firewalls and IP-based blocks. Proxies and encrypted messaging platforms like Signal further enable secure communication channels that are less vulnerable to surveillance and tampering. Tor, a well-known anonymizing network, masks user identities and protects them from certain forms of traffic analysis, making it more difficult for censors to target specific individuals or content sources.
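At the byte level, many of these tools are reached through a local proxy interface. The sketch below negotiates a SOCKS5 tunnel (RFC 1928), the same interface Tor exposes locally; the proxy address and destination are placeholders.

```python
import socket
import struct

# How a circumvention tunnel is negotiated at the byte level: SOCKS5
# (RFC 1928), the local interface that Tor and many proxy tools expose.

def socks5_connect(proxy: tuple, dest_host: str, dest_port: int) -> socket.socket:
    s = socket.create_connection(proxy)
    s.sendall(b"\x05\x01\x00")                      # SOCKS5, offer "no auth"
    if s.recv(2) != b"\x05\x00":
        raise ConnectionError("proxy refused handshake")
    host = dest_host.encode()
    # CONNECT request: ver=5, cmd=1 (connect), rsv=0, atyp=3 (domain name)
    s.sendall(b"\x05\x01\x00\x03" + bytes([len(host)]) + host
              + struct.pack(">H", dest_port))
    reply = s.recv(10)
    if len(reply) < 2 or reply[1] != 0x00:
        raise ConnectionError("proxy could not reach destination")
    return s  # from here on, traffic flows through the relay, opaque to local filters

# e.g. through Tor's default local SOCKS port:
# sock = socks5_connect(("127.0.0.1", 9050), "example.com", 80)
```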
Beyond these established solutions, newer tools and decentralized protocols are emerging. InterPlanetary File System (IPFS) provides a distributed method of storing and retrieving information, making it harder for any single authority to block or remove specific content. Similarly, projects like Nym add an extra layer of privacy and anonymity by using mixnets, which route data through multiple nodes to obfuscate patterns of communication. By resisting centralized points of control, these technologies challenge the fundamental architecture on which most censorship practices rely.
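A toy illustration of the layered ("onion-style") encryption mixnets build on: the sender wraps a message once per hop, and each mix peels exactly one layer, so no single node can link sender to content. Real mixnets such as Nym's also use the Sphinx packet format, cover traffic, and per-hop batching and reordering; symmetric Fernet keys from the third-party cryptography package stand in here for real public-key cryptography.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Toy illustration of layered encryption: only the layering idea is shown.

node_keys = [Fernet.generate_key() for _ in range(3)]  # one key per mix node
nodes = [Fernet(key) for key in node_keys]

message = b"meet at the usual place"

# The sender wraps the message once per hop; the first node's layer ends
# up outermost.
packet = message
for node in reversed(nodes):
    packet = node.encrypt(packet)

# Each mix peels exactly one layer. No single node sees both the sender's
# input and the final plaintext, which is what frustrates traffic analysis.
for node in nodes:
    packet = node.decrypt(packet)

assert packet == message
```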
The evolving nature of technical workarounds is evident in how communities respond to state-level interventions. For example, when governments attempt to crack down on VPN usage through deep packet inspection or VPN signature detection, developers of circumvention tools respond by employing “stealth” protocols or integrating more advanced obfuscation techniques. This iterative process mirrors an arms race: each new censorship tactic sparks a corresponding refinement in anti-censorship technology.
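A toy sketch of the obfuscation idea used by stealth transports such as the obfs family: XOR the framed payload with a keystream derived from a shared secret, and append random-length padding, so packets expose no fixed byte signature or length fingerprint for DPI to match. Real transports negotiate the secret via an authenticated handshake; the hard-coded secret here is a placeholder.

```python
import hashlib
import os

# Toy obfuscation sketch: keystream XOR plus random-length padding.

def keystream(secret: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def obfuscate(payload: bytes, secret: bytes) -> bytes:
    padding = os.urandom(os.urandom(1)[0] % 32)          # random-length tail
    framed = len(payload).to_bytes(2, "big") + payload + padding
    return bytes(a ^ b for a, b in zip(framed, keystream(secret, len(framed))))

def deobfuscate(blob: bytes, secret: bytes) -> bytes:
    framed = bytes(a ^ b for a, b in zip(blob, keystream(secret, len(blob))))
    return framed[2:2 + int.from_bytes(framed[:2], "big")]

secret = b"placeholder-shared-secret"
wire = obfuscate(b"GET /blocked HTTP/1.1", secret)
assert deobfuscate(wire, secret) == b"GET /blocked HTTP/1.1"
```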
The availability and effectiveness of technological tools alone are not sufficient. Studies emphasize the importance of user agency, community resilience, and grassroots innovation. Local groups, digital rights NGOs, and volunteer collectives often produce and distribute guides – sometimes in multiple languages – on how to set up and maintain secure communications, use circumvention tools effectively, and recognize signs of manipulation in online information. In regions experiencing severe repression, activists might create community mesh networks: localized communication infrastructures that bypass state-controlled ISPs altogether. Such grassroots initiatives highlight that technical literacy and awareness are key components of resistance. In Iran, for example, activists have experimented with mesh networking tools – Bluetooth-based applications and local Wi-Fi sharing – to distribute protest videos and guides to secure messaging, bypassing state-filtered ISPs entirely. And when the Russian government ran its RuNet isolation exercises in 2019, users responded by updating VPN protocols and circulating fresh IP addresses on encrypted chats, transforming each crackdown into a rallying point for technical ingenuity.
Crowdsourcing knowledge about censorship circumvention is also common. Online forums, social media groups, and diaspora communities regularly share updated lists of unblocked proxies, fresh VPN server addresses, or alternative URLs that mirror censored content. When digital rights organizations run workshops or create helplines where users can learn about anti-censorship solutions, they foster a culture of digital self-defense. This user-to-user knowledge transfer ensures that even as states attempt to clamp down on technical countermeasures, new cohorts of users gain the know-how to navigate controlled spaces.
The landscape of censorship and its resistance is a perpetual arms race. States and platforms evolve their methods, deploying more sophisticated AI-based takedowns, stronger encryption-breaking capabilities, and even proactive blocking measures against known circumvention protocols. In response, tool developers experiment with novel approaches: implementing domain fronting (where traffic is disguised as if coming from popular CDNs), leveraging blockchain-based content storage, or employing AI to guess and adapt to new filtering patterns in real time.
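Domain fronting exploits the gap between what the censor can see (DNS lookups and the TLS SNI, which name a popular CDN) and what travels encrypted inside the tunnel (the HTTP Host header naming the real service). A minimal sketch, with placeholder hostnames; note that several major CDNs have since disabled this Host/SNI mismatch.

```python
import http.client

# Sketch of domain fronting: the censor sees only the CDN's hostname,
# while the real (blocked) service rides in the encrypted Host header.

def fronted_get(front: str, hidden: str, path: str = "/") -> int:
    conn = http.client.HTTPSConnection(front, timeout=10)   # SNI = front
    conn.request("GET", path, headers={"Host": hidden})     # routed to hidden
    return conn.getresponse().status

# Both hostnames below are placeholders:
# status = fronted_get("cdn-front.example.com", "blocked-service.example")
```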
While high-tech tools and complex protocols often dominate discussions on anti-censorship efforts, critics remind us that the human dimension remains central. Without trust, communities of practice, and widespread digital literacy, even the most sophisticated circumvention tools will fail to achieve their potential. Digital literacy campaigns, “how-to” guides, and easily accessible educational resources help ensure that lay users – who may not be tech experts – still have ways to safeguard their access to information. When teachers, journalists, students, and everyday citizens understand basic operational security, encryption usage, and the logic behind decentralized networks, they become more resilient to various forms of online repression.
A multi-layered approach is crucial here: combining political advocacy that demands transparency and accountability from platforms and governments, technological development that outpaces censorious innovation, and community-level education that empowers everyday users. Together, these efforts help create an environment where censorship is never unchallenged and where new barriers to free expression are continually tested and undermined by informed, tech-savvy, and organized communities. This landscape of resistance and adaptation ensures that online censorship is neither inevitable nor absolute. Instead, it becomes a series of tactical moves and counter-moves where ingenuity, collaboration, and resilience can preserve spaces for free expression, open communication, and the continued exchange of ideas.
Conclusion & future directions
The landscape of online censorship is a complex and evolving assemblage of power, technology, and policy, shaped by political, economic, and cultural forces across the globe. While China’s Great Firewall, Russia’s “Internet sovereignty,” and Iran’s National Information Network exemplify overt, state-managed models of censorship, this report demonstrates that information control is not confined to autocracies. In democratic contexts, market-driven moderation, algorithmic biases, shadow-banning, and opaque takedown processes shape public discourse, often under the stated aim of combating extremism, misinformation, and hate speech. These efforts highlight the challenge of balancing information control with transparency and accountability.
A key takeaway is the convergence of technical sophistication across regimes. Tools like deep packet inspection and algorithmic suppression, once hallmarks of authoritarian states, are increasingly adopted in democracies, raising questions about transparency and accountability. At the same time, the divergence between explicit control in authoritarian systems and tacit mechanisms in democratic contexts shows how Western states, while claiming adherence to free expression, deploy "soft" censorship tactics that often escape scrutiny. This duality sustains the false binary of the “free West” versus the “authoritarian East,” masking the global nature of censorship.
The existing indices used to measure censorship are inadequate for capturing these nuanced dynamics. Future research must integrate qualitative assessments, algorithmic audits, and user-centered metrics to reflect the full spectrum of information control. Understanding practices like algorithmic downranking in democratic systems or the influence of geopolitical alliances on censorship exportation can provide a more balanced and accurate picture.
Private platforms and ISPs are central to this dynamic, acting as both enablers of state-driven censorship and architects of their own content control regimes. Transparent moderation policies, standardized reporting, and independent oversight are critical for mitigating their impact. However, these efforts must be coupled with broader initiatives to empower users through digital literacy, anti-censorship toolkits, and public awareness of platform biases.
The global ecosystem of censorship reflects a shared struggle over the control of information, with states, corporations, and users pulling in different directions. Democratic states must confront their own complicity in shaping discourse through indirect controls, even as they critique more explicit authoritarian practices. The framing of censorship as a strictly “Eastern” problem is both inaccurate and counterproductive, obscuring the pervasive and often invisible mechanisms that operate globally. As digital censorship evolves, so does resistance. Collaborative efforts among policymakers, technologists, civil society, and users are vital to counter these trends. By recognizing the shared dynamics of censorship across regimes and rejecting simplistic binaries, we can work toward a digital future that upholds pluralism, transparency, and the right to free expression for all.