Web 3.0 promises a more decentralized, immutable and censorship-resistant version of the web.
Lately, the controversial topic of censorship on Big Tech platforms reached a turning point when U.S. President Donald Trump’s campaign account was banned on both Twitter and Facebook for “spreading coronavirus misinformation.”
The conversation over who controls which information reaches which audiences is not new. As we move toward Web 3.0, many believe this future version of the internet will be a more decentralized, immutable and censorship-resistant version of the web.
The decentralized data storage solution InterPlanetary File System, or IPFS, is a peer-to-peer hypermedia protocol designed to make the web “faster, safer and more open.” It allows users to download webpages and content stored across multiple nodes instead of from a central server. Under the current paradigm, if content is changed or blocked at its source, there is no reliable way to access the original again. IPFS aims to address deficiencies like this and more.
Security, privacy, scalability and efficiency limitations of Web 2.0
As mentioned, because data is currently stored on centralized servers, it can be accessed, altered or removed by any party that has control of the server. In terms of security and privacy, this is problematic, as control of the server equals control of the data. This could be a legitimate party, but it could also be a hacker or a political authority.
When Turkey decided to ban Wikipedia, IPFS technology was utilized to host a mirror version of Wikipedia so that the site could still be accessed. The Catalan Pirate Party has used it to bypass a block ordered by the High Court of Justice of Catalonia on websites related to the Catalan independence referendum. A Chinese news source, Matters.news, has also utilized IPFS to publish articles to bypass censorship.
The current internet protocol relies on location-based addressing, which identifies data by its location rather than by its content. Even if an identical copy of the data is available nearby, a request still travels all the way to one specific address to retrieve it, which is a limitation in terms of efficiency.
This has served us satisfactorily so far, but only because the average web page was relatively small: average page size grew from about 2 KB to 2 MB over the first two decades of the internet. Now, with big data and on-demand HD video streaming, people have started consuming and producing more and more data. The capacity to scale is more important than ever.
Distributed hash tables enable efficient content access and lookup
Using Kademlia distributed hash tables, or DHTs, the IPFS P2P file-sharing system spreads data across a network of computers that coordinate to enable efficient access and lookup between nodes. This kind of data structure is decentralized and fault-tolerant: it functions reliably even when nodes fail or leave the network.
Instead of location-based addressing, IPFS addresses a file by a content identifier: a cryptographic hash of the content at that address. This unique hash lets any node verify that the content it receives is exactly the content it asked for.
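The core of content addressing can be sketched in a few lines of Python. The helper names `content_id` and `verify` are hypothetical, and a bare SHA-256 hex digest stands in for a real IPFS CID, which additionally wraps the hash in multihash/multibase metadata:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an address from the content itself, not from its location."""
    return hashlib.sha256(data).hexdigest()

def verify(cid: str, data: bytes) -> bool:
    """Anyone holding the address can check the content is untampered."""
    return content_id(cid_data := data) == cid

block = b"Hello, Web 3.0"
cid = content_id(block)
assert verify(cid, block)            # genuine content verifies
assert not verify(cid, b"tampered")  # altered content is rejected
```

Because the address is derived from the bytes, the same content always has the same address no matter which node serves it, and a mismatched hash immediately exposes tampering.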
The DHT provides a decentralized data structure through which IPFS peers can locate other peers and the content they request. Because peers operate without central coordination, the system is fault-tolerant and can scale to accommodate millions of peers, and its decentralized structure makes content censorship difficult.
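The lookup idea behind Kademlia can be shown with a toy sketch: the “distance” between two IDs is their bitwise XOR, and a request is routed toward whichever known peer is XOR-closest to the target key. Small integer IDs are used here purely for illustration; real Kademlia uses 160-bit IDs organized into k-buckets:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR of two IDs."""
    return a ^ b

def closest_peer(peers: list[int], target: int) -> int:
    """Route toward the known peer whose ID is XOR-closest to the key."""
    return min(peers, key=lambda p: xor_distance(p, target))

known_peers = [0b0011, 0b0101, 0b1100, 0b1110]
content_key = 0b1111
assert closest_peer(known_peers, content_key) == 0b1110
```

Repeating this step, each hop landing at a peer closer to the key, lets any node find the peers responsible for a piece of content in a logarithmic number of steps without any central index.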
Decentralized marketplace to incentivize data storage and retrieval
Now that we know how IPFS technology uses the DHT to locate peers and content, we can move on to how content is requested and retrieved. Blocks of data are exchanged over the IPFS network through its data trading module called Bitswap. As a message-based protocol, Bitswap’s primary roles are to acquire data blocks requested by client peers and to send blocks to the peers that request them.
While these tasks are straightforward, the complexity arises from the actual exchange between peers, where “strategies” are required to decide when and to whom to send blocks of data. Unlike BitTorrent, where the blocks being exchanged come from a single torrent (usually a single file), IPFS forms one large swarm in which peers can pull blocks from any peer that has them.
Modeling the block exchange as a data exchange marketplace, each peer participant has an internal strategy used to decide if content will be exchanged with any other participant. Strategies could include incentivization, bartering, rewarding uptime, punishing downtime or other approaches.
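One such strategy can be sketched along the lines of the debt-ratio idea described in the IPFS whitepaper: a peer tracks bytes sent to and received from each partner and becomes less willing to serve partners who only take and never give. The constants and helper names below are illustrative, not the protocol’s actual parameters:

```python
import math

def debt_ratio(bytes_sent: float, bytes_received: float) -> float:
    """How much this partner owes us: what we sent over what we got back."""
    return bytes_sent / (bytes_received + 1)

def probability_of_sending(r: float) -> float:
    """Near 1 when exchange is balanced, falling toward 0 as debt grows."""
    return 1 - 1 / (1 + math.exp(6 - 3 * r))

balanced = probability_of_sending(debt_ratio(1_000, 1_000))
freeloader = probability_of_sending(debt_ratio(100_000, 10))
assert balanced > 0.9 > 0.1 > freeloader
```

A probabilistic cutoff like this tolerates temporary imbalance (a new peer has sent nothing yet) while making sustained freeloading unprofitable, which is exactly the kind of incentive the marketplace framing calls for.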
The developers of IPFS have rolled out an incentivized, bartering, uptime-rewarding and downtime-penalizing block exchange protocol called Filecoin.
The idea is to allow anyone with unused hard-drive storage space to participate as a storage provider in a decentralized marketplace whereby prices are set based on supply and demand. This is a departure from centralized cloud storage with fixed pricing, such as Amazon Web Services, Microsoft and Google.
The network is market-driven, using economic incentives to encourage participation, strong end-to-end encryption, cryptographic deletion and more. Miners do not compete on cost alone; other factors, such as reputation, reliability and data availability, come into play to ensure that the network operates fairly and continues to improve.
The block exchange protocol relies on proof-of-replication, which proves that data is safely stored and accessible somewhere, and proof-of-spacetime, which proves that data has been stored continuously over a period of time.
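As a loose intuition only (not Filecoin’s actual construction, which involves sealed replicas and succinct zero-knowledge proofs), proving continued possession of data can be sketched as a repeated challenge-response: a challenger who knows the data sends a fresh random nonce, and only a provider actually holding the bytes can answer correctly every time:

```python
import hashlib
import os

def respond(stored_data: bytes, nonce: bytes) -> str:
    """Provider proves possession by hashing the data with the challenge."""
    return hashlib.sha256(nonce + stored_data).hexdigest()

data = b"archived block"
for _ in range(3):                 # repeated over time, per proof-of-spacetime
    nonce = os.urandom(16)         # unpredictable, so answers can't be cached
    expected = hashlib.sha256(nonce + data).hexdigest()
    assert respond(data, nonce) == expected
    assert respond(b"different", nonce) != expected
```

This toy version requires the challenger to hold the data too; the real constructions remove that requirement and additionally prove that a distinct physical replica exists, which is what makes them suitable for a trustless storage market.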
A more resilient, efficient, censorship-resistant and robust internet
These protocols all work together to allow IPFS to form an extensive P2P system for distributing, storing and retrieving blocks of data quickly and robustly.
Resilience, efficiency, censorship-resistance and robustness will be the markers of this future model of the internet.
Its decentralized, fault-tolerant design will drive its capacity to scale and, hopefully, enable millions of users to participate in its global information network.
The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.