The Free Internet We Want is a series of research papers:
- The Free Internet We Want (1): How the Internet became centralized, and what that means
- The Free Internet We Want (2): How Internet centralization affects us
- The Free Internet We Want (3): What do we mean by “Free Internet”?
- The Free Internet We Want (4): Centralized Network Alternatives
On this page:
- Types of Networks
- Decentralized Networks for a Free Internet
- Examples of Decentralized Network Technologies and Applications
- How Does the Decentralized Internet Become a Reality?
The Internet is not centralized by nature. At bottom, the network is millions of devices connected to one another by wired or wireless links, and no single node controls the communications among all of them. Internet centralization concerns its virtual networks, i.e., the networks created by the different applications and services running on the Internet. When our relationship with the Internet passes through Google’s services, we become part of a virtual network whose center is Google’s services. But this is not the only possible form of virtual network. The Internet itself does not impose one form on the virtual networks running on it; big technology companies do. They offer their services over the Internet and choose to offer them in a centralized manner that gives them full control over users’ communications with other parts of the network, since all of these pass through them. But let us first examine what it means for a virtual network to be centralized, and what other types of networks exist.
Any network is a set of nodes connected to each other. The nodes of a virtual network on the Internet are not devices but applications running on your computer. When you use the Facebook application on your smartphone, that application running on your device is a node on a virtual network, the Facebook network, and it connects to that network through an application running on a server owned by Facebook. The browser you use is the node representing you on the web. When you use Google’s services for search, email, editing documents and spreadsheets, storing your photos and files, and so on, you are a node on Google’s virtual network. In all these cases the application you use works as a client of an application running on the network, which is the server. In a virtual network, the server counts as the same server as long as it has the same domain name, such as “google.com”, even if it actually runs on thousands of devices distributed all around the world.
A Centralized Network has one (virtual) server that processes all users’ requests and delivers data to them in response. The other applications connected to the network are clients of this server, and each is directly connected to it alone. Every service available through this network, including communication with another client, is processed by the server: the communication passes through it, and the server manages it for its entire duration. From the client’s point of view, a centralized network is closed. A client cannot reach another network through it, and even if the network makes it possible to reach nodes outside it, that access passes through the server and remains under its control. WhatsApp’s messaging service, for example, has a server at its center. It allows you to communicate with the rest of its clients (the network’s users). When you communicate with any of them, you send your message to the server, which stores it in the inbox of the person you are writing to. On the other side, the other user connects to the server to read the messages stored in his or her inbox, including yours. When he or she replies, the server stores the reply in your inbox, and so forth. This means that communicating with anybody through WhatsApp requires its server to be working. If it stops working or has any issue, you will not be able to communicate with anybody through it, nor to access the messages you have exchanged.
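The store-and-forward pattern described above can be sketched in a few lines. This is a minimal, hypothetical model (the class and method names are illustrative, not WhatsApp’s actual design): every message passes through one server object, and when that object is down, both sending and reading become impossible.

```python
# Toy model of a centralized messenger: all traffic passes through one server.
class CentralServer:
    def __init__(self):
        self.inboxes = {}   # recipient -> list of (sender, text) messages
        self.online = True  # the single point of failure

    def send(self, sender, recipient, text):
        if not self.online:
            raise ConnectionError("server down: no communication possible")
        self.inboxes.setdefault(recipient, []).append((sender, text))

    def fetch(self, user):
        if not self.online:
            raise ConnectionError("server down: stored messages unreachable")
        return self.inboxes.pop(user, [])   # deliver and clear the inbox

server = CentralServer()
server.send("alice", "bob", "hi")
print(server.fetch("bob"))      # [('alice', 'hi')]

server.online = False           # the server fails...
try:
    server.send("alice", "bob", "still there?")
except ConnectionError as err:  # ...and the whole network fails with it
    print(err)
```

Note that the clients never talk to each other; the outage of one node silences everyone, which is exactly the fragility the paragraph describes.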
A Decentralized Network is in fact a number of networks connected to each other: there are several servers, each with a number of clients connected to it, while the servers are connected to one another. A user of an application may not notice that there are several servers and feels he or she is dealing with one network. This model offers many advantages, depending on the type of services it offers and how they are distributed among the servers. One server might offer a single service; in that case, if the server stops working, only that service becomes unavailable, while the rest remain available and the network keeps working. Alternatively, the servers might cooperate in offering the same suite of services while distributing the workload among themselves; in that case, if one server stops working, its workload is redistributed among the rest, and all services remain available to users. Another advantage of this model is that, depending on how it is implemented, a user can access the network through any server connected to it, so connecting to the network does not necessarily depend on a specific server.
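The failover behavior described here — one server stops, the rest absorb its load — can be sketched as follows. The names are hypothetical; this is only an illustration of the principle, not any real system’s architecture.

```python
# Toy model of a decentralized network: several servers, any of which
# can answer a request, so no single failure takes the service down.
class Server:
    def __init__(self, name):
        self.name = name
        self.up = True

    def handle(self, request):
        return f"{self.name} handled {request}"

class DecentralizedNetwork:
    def __init__(self, servers):
        self.servers = servers

    def request(self, payload):
        for s in self.servers:          # any live server can serve the client
            if s.up:
                return s.handle(payload)
        raise ConnectionError("all servers down")

net = DecentralizedNetwork([Server("s1"), Server("s2"), Server("s3")])
print(net.request("profile"))   # s1 handled profile
net.servers[0].up = False       # one server fails...
print(net.request("profile"))   # s2 handled profile -- service continues
```

The contrast with the centralized case is the loop: the client is not bound to one server, so connecting to the network does not depend on any specific node.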
A Federated Network is one form of multi-centered network: it consists of a number of independent networks connected to each other. What a federated network adds is a unified protocol that imposes consistency across the networks tightly enough that, in practice, the user does not feel he or she is dealing with several networks. The unified management protocol allows federated networks to pool their resources more effectively and makes redistributing loads smoother when needed. This model combines the flexibility and freedom offered by decentralization with the integrity and ease of coordination offered by a unified application-management protocol. The networks joined within the federation retain the right to set their own rules as they wish, and to manage their communications with the larger network. At the same time, nodes across the network as a whole can communicate without difficulty, since they need no bridges between different applications, nor complicated communication protocols.
A Distributed Network is a network with no servers, or one in which servers play only a secondary role that can be taken over by the applications connected to the network. In such a network, any node can communicate with any other node directly, without passing through a server that enables and manages the communication. The services available on this network are offered by all connected applications cooperating with one another: each application both receives the service and takes part in offering it, through direct communication with other nodes.
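A serverless network of this kind can be sketched with nodes that hold peer references and answer each other’s requests directly. This is a deliberately simplified illustration (real systems add routing, discovery, and replication); all names are invented for the example.

```python
# Toy model of a distributed network: every node is both client and
# provider, and lookups go directly to peers with no server in between.
class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []   # direct connections to other nodes
        self.data = {}    # the share of the service this node provides

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)   # links are bidirectional

    def lookup(self, key):
        if key in self.data:       # serve from the local store first
            return self.data[key]
        for peer in self.peers:    # otherwise ask connected peers directly
            if key in peer.data:
                return peer.data[key]
        return None                # not reachable from this node

a, b = Node("a"), Node("b")
a.connect(b)
b.data["greeting"] = "hello"
print(a.lookup("greeting"))   # hello -- fetched peer-to-peer, no server
```

There is no privileged node here: removing any single node removes only its own data and links, not the network’s ability to function.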
How do decentralized networks give us more freedom in dealing with the Internet? The answer: by distributing the procedures required to make any service available, instead of concentrating them in one virtual location. This can range from distributing these procedures among a number of different servers within a multi-centered network, to distributing them among the devices of the users who cooperate in using the service and making it available. This opens the field to a limitless number of alternatives and scenarios, all of which depend on separating the different services rather than getting every service from a single provider. The success of such scenarios depends on unifying the communication protocols among the different applications while the codebases of those applications remain many and varied. This is simply the idea on which all networks are already based: we all connect to the Internet using different devices, operating systems, and browsers with different codebases, yet they all connect to a single network using a set of unified protocols. Nothing prevents a set of services from cooperating and integrating in the same way, which is already being done.
In a scenario built on decentralized and distributed networks, you can control the security and privacy of your data through several options. These start with keeping your data on your own device, whose security settings you fully control in the way that suits you, while the data remains accessible to you from wherever you are, and to whomever you allow to access it, through services that keep no copy of the data but only mediate access to it, i.e., they transfer data from its source (its storage) to whoever requests to view it. You can still use a service to store a backup copy of your data, but this will be independent of making the data accessible. If you want to publish content through the network, the services for creating and editing that content can be separate from those for storing it (it can also stay on your own device) and from those for publishing it.
Unified protocols, or the ability of each application to work with a number of different protocols, mean that content creation, storage, and publishing services can all work together without problems, and each of them can be replaced or multiplied. You can use more than one content-creation service as your needs change, and you can keep your content on more than one service to guarantee that a backup always exists. You may also publish your content through more than one service or platform, or depend on no platform at all, so that your followers access it directly, each using an application that curates content from different sources on a distributed network.
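The separation of creation, storage, and publishing described above rests on one thing: services agreeing on a message format rather than sharing a codebase. The following sketch is purely illustrative (the wire format and service names are invented); it shows two independently written services handling the same content because both understand the same agreed format.

```python
import json

# A hypothetical agreed wire format: any conforming application can
# produce or consume it, regardless of who wrote the application.
def make_post(author, text):
    return json.dumps({"type": "post", "author": author, "text": text})

class StorageService:                 # one provider stores the content
    def __init__(self):
        self.store = []
    def save(self, message):
        self.store.append(json.loads(message))

class PublishingService:              # a different provider publishes it
    def render(self, message):
        post = json.loads(message)
        return f"{post['author']}: {post['text']}"

msg = make_post("alice", "decentralize!")
storage, publisher = StorageService(), PublishingService()
storage.save(msg)                     # stored by one service...
print(publisher.render(msg))          # ...published by another, independently
```

Either service can be swapped for a competing implementation without touching the other, which is exactly the mix-and-match property the text argues for.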
On a decentralized network, there is no need for big companies each offering a package of services tightly integrated with one another. Many companies, for-profit and non-profit alike, as well as various foundations, can offer varied services that integrate into packages whose parts the user can mix and match at will, replacing any part at any time without damaging consequences. Most importantly, the user will not be forced to use any particular part just because it has the largest number of users. In the end, everybody can reach any content on the network using any service they choose, as long as all these services are connected to each other. These scenarios are not dreams awaiting some far-off future. The technologies required to implement them are already available, and some of them power applications we use daily.
Mastodon (Federated Network) is a federation of networks together with a social media application for short posts, similar to Twitter. Mastodon depends on an open-source server application that anybody can use to create an independent social media network, which can optionally connect to the rest of the Mastodon networks. By registering on any Mastodon network, a user can publish short posts and view the posts of other people connected to the same network or to any other Mastodon network. While the application’s interface allows following the timeline of the user’s own network separately from the combined timeline of the other networks, the user’s experience remains unified and generally feels like dealing with one network. Mastodon’s operating model, which is non-profit and depends on donations from its users and patrons, allows it to be ad-free. Generally speaking, Mastodon networks display posts chronologically, with no priorities or rules limiting which posts a user can view. Additionally, each network has complete autonomy when it comes to content moderation rules.
Peer-to-Peer Networks (Distributed Network). Perhaps the most used service offered by distributed networks is file downloading using the BitTorrent technology, which works over peer-to-peer networks. When you want to download a file using torrent, you first download a small torrent file that your application uses to contact a server (the tracker) whose role is to provide you with the addresses of other users who share parts of the file. The application then downloads the file by connecting directly to those users and fetching different parts of the file from each of them. After you have downloaded a certain amount of the file, other users can in turn download parts of it from you. No copy of the file exists on the tracker, so nobody downloads the file from it; its role is limited to keeping track of the peers in the swarm.
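The piece-exchange idea can be sketched as follows. This is a toy model, not the BitTorrent wire protocol: the file is reduced to numbered pieces, and a peer fills its gaps by fetching each missing piece directly from whichever peer holds it.

```python
PIECES = 4   # the file, reduced to 4 numbered pieces for the sketch

# Toy model of torrent-style exchange: peers trade pieces directly;
# no complete copy needs to live on any server.
class Peer:
    def __init__(self, have=()):
        self.pieces = set(have)            # piece indices this peer holds

    def missing(self):
        return set(range(PIECES)) - self.pieces

    def download_from(self, others):
        for idx in sorted(self.missing()):
            for other in others:
                if idx in other.pieces:    # fetch the piece peer-to-peer
                    self.pieces.add(idx)
                    break

seeder = Peer(have=range(PIECES))          # holds the complete file
leecher = Peer()                           # starts with nothing
leecher.download_from([seeder])
print(sorted(leecher.pieces))   # [0, 1, 2, 3] -- file assembled from peers
```

Once the leecher holds pieces, it can appear in another peer’s `others` list and serve them onward, which is how the swarm scales without any central copy.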
Blockchain (Distributed Network) is a protocol for recording data in a reliable and secure way, without the need for a central entity to monitor the process of creating and recording that data. Blockchain started as the solution underpinning the Bitcoin digital currency, which needed a decentralized means of recording exchanges in a way that guarantees their reliability and preserves coin owners’ rights, without centralized banking services. The concept of blockchain is quite simple: it is a chain of records, each of which has a unique hash code computed from its contents and from the hash of the record before it. This means that tampering with any record requires modifying all subsequent records, which is practically impossible. The chain of records is stored and verified by a system distributed among users. Besides supporting the collective ledger of Bitcoin and other digital currencies, there are several projects developing distributed services based on the blockchain protocol.
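The hash-chain property can be demonstrated directly. The sketch below is the bare mechanism only (no proof-of-work, no distribution): each record’s hash covers its data and the previous record’s hash, so altering an early record breaks verification of everything after it.

```python
import hashlib

# Hash of one record: depends on its data AND its predecessor's hash,
# which is what chains the records together.
def record_hash(data, prev_hash):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64                 # fixed "genesis" predecessor
    for data in entries:
        h = record_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        # both the link and the hash must match for every record
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["data"], prev):
            return False
        prev = rec["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
print(verify(chain))                  # True
chain[0]["data"] = "alice->bob:500"   # tamper with an early record...
print(verify(chain))                  # False -- the chain no longer verifies
```

In a real blockchain the chain is replicated across many users and extended by consensus, so a tamperer would have to rewrite not one copy but most copies simultaneously.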
If theoretical models of decentralized networks exist, and many of the technologies required to implement them are already in everyday use, why have we not yet reached the decentralized free Internet? There are many reasons; perhaps the first among them is the tension between ease of use and knowledge. This is a very old tension in personal computing. The classic example is Windows, an operating system whose philosophy assumes that ease of use means the user needs to know nothing about how the machine or the OS works, so it is better to hide such details from him or her. This philosophy, which became prevalent in application development, has left us dealing with closed boxes whose workings and components most of us know nothing about. Most users of Internet applications know only how to use an application to get certain tasks done. This leads to a kind of prevailing knowledge laziness. As a result, most users have no idea that alternatives to the applications they use exist, or that there might be other ways to get the same services without becoming captives of one application managed by one company.
At a time when each of our lives is becoming ever more dependent on the Internet, the harm of such knowledge laziness grows, and so there is a need to spread more information about how Internet applications work and about alternatives that might better suit our needs. Growing interest in and demand for alternatives makes it possible to develop more of them. There are, of course, technical issues that need to be dealt with, but making resources available to address them again depends on rising demand for different types of services; hence the key to a free Internet remains in the hands of its users. We cannot expect the big companies to implement solutions that give their users more freedom, including the freedom to replace some of their services with others. Neither can we wait for states to intervene: each, in the end, has only limited authority over multinational companies, and states are more interested in keeping those companies operating in their territories. There is thus no alternative to users themselves working to make the decentralized Internet a reality, by encouraging existing alternatives, taking part in group initiatives that promote new ones, and opening discussions about the need for such alternatives. In conclusion, if we want a free Internet whose users hold its steering wheel, we must take the lead in creating it ourselves.