Tuesday, March 1, 2016

HTTP vs IPFS: is Peer-to-Peer Sharing the Future of the Web?

The InterPlanetary File System (IPFS) is a revolutionary model that could change the way we use the Internet. Unlike the typical client–server model we’re accustomed to, IPFS works more like BitTorrent. Does that grab your attention? Then read on!

[Image: the IPFS logo]

The Problems With Today’s Web

The Hypertext Transfer Protocol (HTTP) is the backbone of the World Wide Web; nearly every website we visit is delivered over it. HTTP follows a client–server model: our computer sends requests to the server hosting a website, and the server sends back responses.

HTTP, though, lends itself naturally to centralization. It’s natural for a handful of large services to come to dominate the structure of the Web, but that kind of centralized environment can be dangerous. If any of the large hosting companies or service providers – such as Google, Microsoft, Amazon, Dropbox, Rackspace, and the like – were to suddenly falter, the short-term results for the Web would be disastrous. And herein lies the problem (or at least one of them).

In addition to this natural process of centralization, there’s also a troubling reliability issue with today’s Web. Most websites and applications are hosted on a single server, or perhaps a redundant array of load-balanced servers. If the owner of those servers, the datacenter’s management, or even a natural disaster takes those machines out, will the application continue to run? Organizations with enough resources can put backups and redundancy in place, but even those can’t stop a company that simply decides to take down its website or application.

Reliance on Hosts

If and when the server hosting a site goes down, we’re reliant on the hosting company to have failsafes, redundant systems, backups, and so on. They must recognize that your service is out and assist you in restoring it. If it’s a hardware issue, they should have alternative systems they can port your setup onto. They should have backup networking, and they should be keeping at least one backup of your data – whether they advertise it or not – in case a data-loss incident is their fault.

What if they don’t?

Reliance on Site Administrators

Now the onus falls on site administrators to keep a service going and its data backed up. If you’ve ever been an avid user of an application that was suddenly shut down, you know this feeling.

Movements toward open source help tremendously, allowing multiple forks of a project to take off, and allowing more static material – like documentation – to be preserved in multiple locations and formats. But the fact remains that the majority of the Web is controlled by individual people like you or me, maintaining servers.

Some freelance developers even manage the hosting and maintenance of their smaller clients’ sites. What if they forget to pay the bill? Get angry with a client and lock them out of their own site? Get hit by a truck? Yes, the site owner may have legal options in any of these cases, but none of those options will help while the site is completely inaccessible.

Reliance on Users

Yet another problem involves the users of any web application. Content often needs a critical mass of users or visitors to merit hosting at all; low-traffic applications and static sites are frequently shuttered simply because they aren’t cost effective to run. The reverse problem is just as real: users of the modern Internet keep clustering together. Facebook – a single social network – reports roughly one in five people on Earth as active users, and countless businesses depend entirely on Facebook to exist. What if it shut down tomorrow?

Of course, Facebook won’t shut down tomorrow, and neither will most of the apps you love and use. But some will. And the more users who have flocked to them before that happens, the more damage it will cause – to everyday workflows, or even to personal and business finances, depending on what kind of applications you use and for what.

The Answer is IPFS

So, you may be asking, how does IPFS solve these problems? IPFS is a relatively new attempt to address them using a peer-to-peer distributed file system. The project is still light on documentation, and it’s perhaps the first of many such solutions.

IPFS Nodes

First and foremost, you should understand a few things about IPFS. IPFS is decentralized. Without a typical server providing web pages to every client that arrives at a website’s domain, a different infrastructure must be imagined. Every machine running IPFS is a node in a swarm.
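The mechanism that makes this possible is content addressing: IPFS identifies a file by a cryptographic hash of its contents, so it doesn’t matter which node actually serves it – the requester can verify that the bytes it received match the address it asked for. Here’s a minimal sketch of that idea in Python. It’s a toy in-memory store, not IPFS’s actual format (real IPFS wraps hashes in multihash-encoded content identifiers, or CIDs):

    import hashlib

    # A toy content-addressed store: the key is derived from the data itself,
    # so the same bytes always live at the same address, on any node.
    store = {}

    def put(data: bytes) -> str:
        """Store data under the hash of its contents; return that address."""
        address = hashlib.sha256(data).hexdigest()
        store[address] = data
        return address

    def get(address: str) -> bytes:
        """Fetch data, verifying it really matches the requested address."""
        data = store[address]
        assert hashlib.sha256(data).hexdigest() == address, "content mismatch"
        return data

    addr = put(b"Hello, distributed web!")
    print(addr)       # the same bytes always yield this same address
    print(get(addr))  # b'Hello, distributed web!'

Because the address is the hash, a node that downloads content can immediately turn around and serve it to others, without anyone having to trust that node.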

Consider the way torrents work today. When you choose a file to download, your torrent client essentially sends a request to all of the computers attached to the same swarm as you. Any of them that hold pieces of the file you’re requesting, and are able to upload at the moment, begin sending those pieces to your computer. That’s the condensed version.
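To make that concrete, here’s a toy Python sketch of the piece-gathering step – hypothetical peers and no real networking, just the idea that a complete file can be assembled from whichever peers happen to hold its pieces (a real client would also verify each piece against a known hash):

    # Toy swarm: each peer holds only some numbered pieces of the file.
    peers = {
        "peer_a": {0: b"Peer", 2: b"shar"},
        "peer_b": {1: b"-to-peer ", 3: b"ing"},
    }

    def download(num_pieces: int) -> bytes:
        pieces = {}
        for peer, held in peers.items():
            for index, data in held.items():
                pieces.setdefault(index, data)  # take each piece from whoever has it
        assert len(pieces) == num_pieces, "swarm is missing pieces"
        return b"".join(pieces[i] for i in range(num_pieces))

    print(download(4))  # b'Peer-to-peer sharing'

No single peer needs the whole file, and no single peer going offline breaks the download, as long as every piece exists somewhere in the swarm.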

So how do IPFS nodes work? Each machine running IPFS can select which files it wants its node to serve.
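In IPFS, this selection mechanism is called pinning: a node keeps (and keeps serving) content it has pinned, while unpinned cached content can be reclaimed by garbage collection (ipfs pin add <hash> and ipfs repo gc in the actual CLI). Extending the toy store from earlier, a sketch of that behavior might look like this:

    # A toy node that "pins" the addresses it wants to keep serving;
    # anything unpinned can be garbage-collected to reclaim disk space.
    class Node:
        def __init__(self):
            self.store = {}    # address -> data (cached content)
            self.pins = set()  # addresses this node promises to keep

        def pin(self, address: str):
            self.pins.add(address)

        def garbage_collect(self):
            self.store = {a: d for a, d in self.store.items() if a in self.pins}

    node = Node()
    node.store["abc123"] = b"a cached page"  # hypothetical address
    node.garbage_collect()                   # "abc123" was never pinned...
    print(node.store)                        # {} ...so it's gone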



by Jeff Smith via SitePoint
