How Blockchain Can Help Fight Biased Political Censorship

Dmitriy Petryakov
Jul 7, 2021 · 6 min read

Major social networks (Facebook, Twitter, YouTube, etc.) have long been powerful players on the political scene. Not only do they provide platforms to express opinions and share information, but they also shape opinions themselves by applying filtering policies, content promotion algorithms, and other tools that determine which information gains wider distribution and how that information is perceived by the audience.

The 2020 US presidential campaign revealed the extent of social media's power when the Biden vs. Trump rivalry reached its peak on the battlegrounds of Facebook and Twitter, resulting in the suspension of Donald Trump's accounts on both platforms.

Let us imagine a hypothetical scenario in which two approximately equal political forces compete for the electorate's favor. One of them (call them the Totalitarians) decides to use the administrative capabilities of the owners of major social platforms to reduce the social presence and approval of the other party (call them the Monarchists).

Totalitarian representatives meet with the executives of major social platforms and claim that the Monarchists openly express pernicious opinions that conflict with Totalitarian views, and that the Monarchists' posts should therefore be removed as inappropriate content.

Since content filtering is an ambiguous matter, many statements can be considered either acceptable or abusive depending on the point of view. This opens up an opportunity to suppress the Monarchists and shift the balance of social coverage toward the Totalitarians, greatly benefiting their electoral campaign.

Can We Create an Unbiased Content Filtering System?

We can. We need, however, to solve a couple of major problems on our way to the stated goal.

1. We need a censorship-resistant account base.

All social networks store their account bases on servers maintained by the networks' administrations. Even if we use decentralized content filters, the links between all generated content and the accounts of its creators are established and resolved via these databases. This allows the administration to prevent any user from publishing content under their account, or to break the link between existing content and the account that produced it, in effect banning the user.

That's where blockchain comes to the rescue. Blockchains are immune to data modification attempts: once a transaction is added to the ledger, it cannot be altered or removed. If we put a user's credentials into a transaction, thereby creating an account on the blockchain, we can arrange an irreversible registration process.
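As an illustration, here is a minimal sketch of what such an on-chain registration could look like. Everything in it is an assumption for the sake of illustration: the RegistrationTx shape is hypothetical and not part of any specific blockchain's API; the only real dependency is Node's built-in crypto module, used to generate and sign the user's key pair.

```typescript
import { generateKeyPairSync, sign, randomBytes } from "node:crypto";

// Hypothetical shape of a registration transaction: the user's public key
// and display name become the permanent, on-chain account record.
interface RegistrationTx {
  type: "register";
  publicKey: string;   // hex-encoded Ed25519 public key
  displayName: string;
  nonce: string;       // prevents replay of the same registration
  signature: string;   // proves the registrant controls the private key
}

function buildRegistrationTx(displayName: string): { tx: RegistrationTx; privateKeyPem: string } {
  // The key pair is generated client-side; only the public half goes on-chain.
  const { publicKey, privateKey } = generateKeyPairSync("ed25519");
  const publicKeyHex = publicKey
    .export({ type: "spki", format: "der" })
    .toString("hex");
  const nonce = randomBytes(16).toString("hex");

  // Sign the registration payload so block producers can verify authorship.
  const payload = Buffer.from(JSON.stringify({ publicKey: publicKeyHex, displayName, nonce }));
  const signature = sign(null, payload, privateKey).toString("hex");

  return {
    tx: { type: "register", publicKey: publicKeyHex, displayName, nonce, signature },
    privateKeyPem: privateKey.export({ type: "pkcs8", format: "pem" }).toString(),
  };
}

// Usage: the resulting transaction would be broadcast to the network; once
// included in a block, the account exists and no administrator can delete it.
const { tx } = buildRegistrationTx("ronald.ace");
console.log(tx);
```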

There is, however, one problem: although blockchains are resistant to after-the-fact censorship, the decision to include any particular transaction in a block is made at the sole discretion of the block producer (miner, validator, etc.). Even though a transaction rejected by one or several block producers is expected to be accepted by someone eventually, this is not a truly reliable guarantee for the purpose at hand.

Fortunately, this problem has already been solved as well: the Dynemix blockchain features a novel block proposal algorithm that makes the system immune to any kind of transaction censorship, allowing us to implement a completely censorship-free registration process.

2. We need an unbiased censor.

That's the main point of our concern and the most difficult goal to achieve. We can take three fundamentally different approaches, but, getting a bit ahead of ourselves, only one of them can be implemented in practice:

1) AI

The most unbiased censor (at least if we teach it to be such) and the least realistic solution. Current-generation AI technologies are still far from being able to analyze sophisticated texts and fully understand their meaning. Even if we manage to make it work to a certain extent, the patterns used by the AI will be easily recognized and circumvented by malicious actors, not to mention the constant false positives that will irritate users.

2) Consensus-based decisions

A more realistic but, unfortunately, too slow solution. Given that we already use blockchain tech, it would be logical to engage its underlying consensus mechanism and build a voting scheme for filtering. This, however, would likely increase the decision-making delay to an unacceptable degree and would also reduce everything to majoritarianism: if the majority of participants support the Totalitarians, for instance, they can censor the Monarchists in the same way Big Tech can.
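To make the majoritarianism concern concrete, here is a toy sketch; the names, counts, and threshold are made up for illustration. With a simple majority rule, whichever faction holds more than half of the voting participants can have any post from the other side removed.

```typescript
type Vote = "keep" | "remove";

// Toy majority-vote filter: a post is removed if more than half of the
// voters say so, regardless of how legitimate the minority's content is.
function majorityVerdict(votes: Vote[]): Vote {
  const removeCount = votes.filter((v) => v === "remove").length;
  return removeCount * 2 > votes.length ? "remove" : "keep";
}

// 51 Totalitarian-aligned voters vs. 49 Monarchist-aligned voters:
// every Monarchist post loses the vote, just like a Big Tech ban.
const votes: Vote[] = [
  ...Array<Vote>(51).fill("remove"), // majority faction votes to censor
  ...Array<Vote>(49).fill("keep"),   // minority faction is outvoted
];
console.log(majorityVerdict(votes)); // "remove"
```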

3) Multiple filtering oracles

As we have already figured out, the developer of the app, being the exclusive censor, can have his own subjective attitude toward what counts as undesired content, or be incentivized to adopt the attitude of a third party, which can eventually result in bias. But what if we create a separate protocol for filtering via arbitrarily chosen exogenous oracles, so that users can switch their filtering provider at any time?

Filtering Solution Based on Exogenous Oracles

Suppose we have a microblogging platform called Chirp. Unfortunately, its owner happens to be a Totalitarian supporter, and he starts censoring any content consistent with the Monarchists' views. This culminates in the suspension of the account of the Monarchists' leader, Ronald Ace.

As long as content filtering is an internal affair of Chirp, nothing can be done about it. But if we create a separate protocol that allows external filtering services to be attached, users who disagree with Chirp's censorship policies can simply migrate to any Monarchist-friendly filtering oracle and no longer be hostages of Chirp's political preferences.

Here is how it works:

1) The protocol provides specifications for interaction between user clients and filtering oracles, so that any third party can create filtering services regardless of its relations with Chirp (a rough sketch of such an interface follows this list).

2) When the user downloads the Chirp app, it comes with an embedded default oracle (as a rule, administered by the owner of the app) or a set of well-known trusted oracles. In some ways this resembles web browsers, which offer the user a choice of default search engine upon installation. Instead of a search engine, in our case the user is offered a choice of a guardian oracle (or several oracles) that will provide protection from undesired content.

3) Oracles can follow different political views, have different tolerance toward specific violations, focus on certain types of content, and so on. The user can choose a set of oracles that covers all of his preferences simultaneously. If the user notices that a certain oracle has started behaving unexpectedly, he can reconfigure the filtering set and swap that oracle for a more appropriate one.

4) More importantly, filtering actions are not irreversible. Remember that Chirp banned Ronald Ace permanently? In a conventional system, such an action would mean the loss of the account and all content generated by it. In our system, however, the ban only holds as long as the user follows the respective oracle. If the user deactivates the default oracle, Ace's account instantly becomes visible again. All Ronald Ace needs to do is tell his supporters to switch to a loyal guardian oracle, which renders the actions of the app's administration pointless and thus solves the issue of biased censorship. Meanwhile, Ace can keep posting to his account, and his content becomes available to more and more people as they abandon Chirp's guardian oracle.
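To give the protocol a more concrete shape, here is a minimal client-side sketch of the scheme described above. All names (Post, FilteringOracle, GuardianSet, the verdict values) are hypothetical and exist only for illustration; a real protocol specification would also have to define the wire format, authentication, and caching rules.

```typescript
// A post as the client sees it: content plus the on-chain account that signed it.
interface Post {
  id: string;
  authorAccount: string; // blockchain account, cannot be deleted by Chirp
  text: string;
}

type Verdict = "show" | "hide";

// Any third party can implement this interface and run an oracle,
// regardless of its relationship with the app's owner.
interface FilteringOracle {
  name: string;
  endpoint: string; // e.g. "https://oracle.example/filter" (hypothetical)
  classify(post: Post): Promise<Verdict>;
}

// The user's own configuration: which oracles they currently trust.
// Removing an oracle from this set instantly "unbans" everything it hid.
class GuardianSet {
  constructor(private oracles: FilteringOracle[]) {}

  activate(oracle: FilteringOracle): void {
    this.oracles.push(oracle);
  }

  deactivate(name: string): void {
    this.oracles = this.oracles.filter((o) => o.name !== name);
  }

  // A post is hidden only if at least one currently active oracle says so;
  // filtering happens purely on the client and is therefore fully reversible.
  async isVisible(post: Post): Promise<boolean> {
    const verdicts = await Promise.all(this.oracles.map((o) => o.classify(post)));
    return verdicts.every((v) => v === "show");
  }
}
```

In this arrangement, Chirp's own oracle is just one entry in the user's guardian set; deactivating it is all it takes to make Ronald Ace's posts visible again on that user's client.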

Can We Use the Proposed Solution on Existing Platforms?

No. Unfortunately, the architecture of popular social platforms is incompatible with the proposed solution, which requires a certain degree of decentralization to realize its full potential. We have already mentioned that we need a censorship-resistant account base, which can be achieved with the help of blockchain tech, but that is not the only technical requirement.

If Ronald Ace creates a Chirp of his own in answer to the Totalitarian influence on the original Chirp, he will not be able to implement unbiased filtering and will risk ending up with a mirror-image Chirp: a platform loyal to the Monarchists but hostile to the Totalitarians. This would simply split the user base into two groups that cannot interact, destroying the possibility of genuine political discussion, which is only effective when conducted as an exchange of plural opinions.

If we truly intend to build a platform that stands for freedom of speech, we need specific technical solutions that ensure unbiased filtering without relying on anyone's subjective attitude, so that both Totalitarians and Monarchists feel free to exchange opinions.

To implement the ideas described above, we need to design a social network of a new generation that relies on P2P technologies rather than the client-server design. Fortunately, blockchain tech and hardware have developed to a stage where we finally have the possibility to implement these ideas in practice, but that is a different story.
