Especially since the advent of the World Wide Web in the second half of the 1990s, the Internet has evolved toward a more centralized organization. Consider: under the scheme of the World Wide Web, each computer (node) in the network is either a client or a server. A typical client-server scheme thus creates a kind of dependency: clients depend on servers, and servers therefore have a special importance within the network. If the server is not functioning, none of the clients can use the network service that the server normally provides. This kind of centralized organization, of course, runs counter to the original design of the Internet.

Client-server, however, is not the only scheme for network services. In a peer-to-peer ("P2P") network, each computer (node) can function as either a server or a client at any given time. In a very real sense, this marks a return to the original Internet scheme of a distributed network, in which no computer (node) is any more important than any other.

In 1999, a Northeastern University student named Shawn Fanning launched his software invention, Napster. Designed to facilitate the sharing of mp3 music files, Napster repopularized peer-to-peer networking on the Internet. Napster, however, relied on a central server to arrange the peer-to-peer exchange of files; follow-ups to Napster such as Gnutella have featured truer peer-to-peer networking.

What makes such peer-to-peer file-sharing schemes so controversial, and so difficult to control, is precisely what made the ARPANET such a resilient network: distributed computing. Just as the ARPANET was able to keep operating when catastrophe struck a portion of the network, a peer-to-peer file-sharing network can continue to operate even if one portion of it is shut down.
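To make the peer-to-peer idea concrete, here is a minimal sketch in Python of a node that plays both roles at once: a server thread answers requests for the files the node holds, while the client side requests files from other peers. Everything in it (the port 9000, the one-line request "protocol", the serve and fetch functions, and the sample file list) is a hypothetical illustration for this page, not the actual protocol of Napster, Gnutella, or any real file-sharing system.

    import socket
    import threading
    import time

    # Hypothetical in-memory "shared files" for this node; a real P2P
    # system would share files from disk and speak a richer protocol.
    SHARED_FILES = {"song.mp3": b"...mp3 bytes..."}

    def serve(host: str, port: int) -> None:
        """Server role: answer one-line requests for files this node holds."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind((host, port))
            sock.listen()
            while True:
                conn, _addr = sock.accept()
                with conn:
                    name = conn.recv(1024).decode().strip()
                    conn.sendall(SHARED_FILES.get(name, b"NOT FOUND"))

    def fetch(host: str, port: int, name: str) -> bytes:
        """Client role: request a file from some other peer."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.connect((host, port))
            sock.sendall(name.encode())
            return sock.recv(65536)  # enough for this tiny demo payload

    if __name__ == "__main__":
        # The same program plays both roles: a server thread shares files...
        threading.Thread(target=serve, args=("127.0.0.1", 9000),
                         daemon=True).start()
        time.sleep(0.5)  # give the server a moment to start listening
        # ...while the client side fetches a file from a peer.
        print(fetch("127.0.0.1", 9000, "song.mp3"))

Run many copies of a program like this on different machines and you have, in miniature, the resilience described above: any one node can disappear, and the remaining nodes can go on serving and fetching files from one another.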
On the next page, we turn to some of the other issues surrounding the Internet.