The Internet didn't just 'happen'. Hundreds of discoveries and achievements, and a great deal of deep thinking, had to come first before the Internet could come into existence.
As the manager of the New York/Pennsauken Network Access Point at the turn of the millennium, InetDaemon has a slightly different spin on the birth and evolution of the Internet, having witnessed its creation first hand and personally participated in the Dot-Com boom of the mid-to-late 1990s. The personal impact of massive government funding of research (over $1.5 billion) into seemingly frivolous ideas cannot be overstated. Nor can the economic impact of creating whole new commercial markets and industries through the deregulation of Federally owned and controlled resources.
BIRTH OF THE ARPAnet
The United States watched as the Russians tested an atom bomb (1949), tested their first hydrogen bomb (1953), and then launched Sputnik into orbit (1957). Though the United States had developed both atomic and hydrogen bomb technologies before the Russians, the Russians had beaten the Americans into space. The fact that the Russians had comparable weapons technology and knew how to build rockets that could reach the United States raised serious security concerns. Nuclear explosions are huge and can damage large installations, and most communications systems of the day were centralized and circuit-based, meaning total loss of communications if the central communications point were destroyed.
Clearly the United States had fallen behind in the technology race. The United States Government therefore commissioned the creation of an organization called the "Advanced Research Projects Agency" or ARPA for short.
In 1962 (the year of the Cuban Missile Crisis), the United States Air Force and ARPA began research and development in response to the Russian advances in space and nuclear technology. It became clear that in the event of a nuclear war, the United States would need a Command and Control (C&C) system that could survive one or more nuclear 'hits'. Work began on researching a decentralized system that would be robust enough to survive and function even if most of the network were destroyed.
The project of designing the new C&C system was granted to the RAND Corporation. Paul Baran of RAND first conceived the idea of a distributed, packet-switching network, built on the premise that communication on the network would be unreliable (see Paul Baran's "On Distributed Communications" series at RAND's website). The network was designed to be able to operate after a nuclear attack had wiped out large portions of it. After extensive statistical analysis, Baran determined that by breaking messages up into pieces and sending them via various redundant paths to the destination, messages would be difficult to destroy and hard to intercept. A system with no centralized control point would be difficult to target, let alone destroy.
Even if some of the data were to be destroyed, as well as some of the communications points, the message would still get through, and the network would continue to function even when crippled.
After Paul Baran presented his findings, a testbed network was set up. The first machines connected to this experimental communications system (without packet switches between them) were a TX-2 located at MIT, an AN/FSQ-32 at System Development Corporation in Santa Monica, CA, and a DEC computer at ARPA. The devices were attached via 1200 bps connections (circa 1965). This formed the first 'Experimental Network'.
The government awarded a packet switch contract to Bolt Beranek and Newman (BBN) to build Interface Message Processors (IMPs) in 1968. BBN chose Honeywell DDP-516s with 12K of memory as the connection and interface device. The IMPs they built were placed at each of the four designated research sites, universities that had won research grants from the US government. UCLA, Stanford, UCSB and the University of Utah were the first universities to interconnect their computers (hosts) via the new ARPAnet IMPs. BBN purchased AT&T 50 Kbps dedicated lines for the connections between sites.
DARPAnet: The Defense Department Takeover
The Internet began as a military command and control systems research project. As the network was deployed and more government and research institutions were connected to it, the Defense Department took over the project from ARPA. The Defense Department administered the network for several years, and so the name was changed to DARPAnet (Defense Advanced Research Projects Network) in the early-to-mid 70's.
The DARPAnet eventually expanded beyond the Defense Department's willingness to sponsor it. More than half the connected sites were Universities receiving government research grants; however, the networks were in use by more than just the researchers.
Around 1971, Ray Tomlinson, originally of BBN, wrote an application to send electronic mail back and forth and later modified it to use the @ symbol (user@host). By 1973, 75% of the traffic on the ARPANET network was private and personal e-mail communication. Many of the Defense Department connections were thus dismantled, and the network was handed over to the National Science Foundation (NSF).
DAWN OF TCP/IP
Prior to the 1980's, the Network Control Protocol (NCP) was used to move packets over the ARPANET. This protocol was eventually split into two protocols to isolate functions in separate pieces of software, simplifying future software development efforts. The first of the two new protocols, Internet Protocol (IP), handled addressing. The second, Transmission Control Protocol (TCP), rode over IP and handled transport, making it reliable. Thus was born TCP over IP (TCP/IP). The release of an operating system called BSD Unix, which included a complete TCP/IP software 'stack', allowed colleges to connect individual workstations directly to the Internet without investing in their own Interface Message Processor (IMP). This caused a direct increase in the number of hosts.
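That division of labor is still visible in modern socket APIs: the application names an IP endpoint (an address and port), and TCP supplies the reliable byte stream on top of it. A minimal sketch in Python, using the loopback address and a simple echo exchange (both illustrative choices, not part of the historical protocols):

```python
import socket
import threading

def run_server(srv):
    # Accept one connection and echo back whatever arrives.
    # TCP handles retransmission and ordering; the application
    # only sees a reliable stream of bytes.
    conn, _addr = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()
    srv.close()

# SOCK_STREAM over AF_INET is literally "TCP over IP(v4)".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = srv.getsockname()[1]       # IP layer concern: the endpoint address
srv.listen(1)

t = threading.Thread(target=run_server, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello, ARPAnet")
reply = cli.recv(1024)            # TCP layer concern: reliable delivery
cli.close()
t.join()
print(reply.decode())
```

Note how the code never deals with packets, loss, or retransmission; that separation of addressing (IP) from reliable transport (TCP) is exactly the split described above.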
BIRTH OF THE NSFNET
The National Science Foundation (NSF) was chartered to continue research and manage the Internet. This expansion began by connecting colleges and universities using 56 Kbps dedicated circuits from MCI in 1985. Later, the NSF contracted Merit to manage the NSFnet. Merit upgraded the NSFnet to 448 Kbps MCI circuits and used IBM PC-RTs as routers. In 1987 Merit upgraded to 1.544 Mbps T1s. In 1990 MCI, Merit, and IBM formed a non-profit corporation called Advanced Network and Services (ANS) to manage the NSFnet. ANS upgraded the network infrastructure to DS3s (45 Mbps).
CREATION OF THE NAPs
The National Science Foundation built Network Access Points (NAPs) in Chicago, IL; Pennsauken, NJ; Vienna, VA; and San Jose, CA. These NAPs allowed designated regional access providers to connect to the National Science Foundation Network (NSFnet). To simplify management, access and routing issues, the NSF designated several companies as regional access providers (Argonne, BARRnet, CERFnet, NYSERnet, etc.).
According to the NSF's charter, only the government and organizations receiving government research funding were allowed to connect to the NSFnet. The NSFnet, as it was now called, flourished under the management of the National Science Foundation and later Merit Networks. This, in combination with the proliferation of a low-cost network operating system called UNIX, brought the Internet to its pre-web state. However, the Internet was still an entirely 'private' and non-commercial network. Many commercial companies saw business opportunities in what had become an international computer network.
The Internet's growth and expansion were taken over in 1991 by the three major commercial long distance networks, MCI, Sprint and AT&T, after the Internet privatization initiatives proposed by the United States President. Also in 1991, Congress authorized the foundation of the National Research and Education Network (NREN).
EMERGENCE OF DOMAIN NAME SERVICE (DNS)
Around 1984, a method was established for using names that humans could remember instead of numeric Internet Protocol addresses, because it had become impossible to remember the ever-increasing number of IP addresses of all the computers on the Internet. The list of IP addresses was originally kept in a single file ( /etc/hosts.txt ) stored on a central server that everyone downloaded. This hosts file is the origin of the UNIX and Linux /etc/hosts files and the %SYSTEM%\drivers\etc\hosts file in Windows. As the number of hosts on the Internet increased, so did the size of the file and the time it took to download, and the traffic accessing the central server began to overload network and server capacity. A distributed system was needed, and a protocol describing the method of performing name lookups was developed, documented by Paul Mockapetris and standardized in November 1987 as the Domain Name System (RFC 1035 / STD 13).
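The flat-file scheme that DNS replaced is easy to sketch: every host kept a full copy of one name-to-address table and looked names up locally. A minimal illustration in Python of the hosts-file format (the addresses and host names below are made up for the example):

```python
# A sketch of the pre-DNS hosts-file scheme: one shared table, copied to
# every machine, mapping names to addresses. The format mirrors /etc/hosts:
# an address, whitespace, then one or more names (the first is canonical,
# the rest are aliases). Comments start with '#'.

SAMPLE_HOSTS = """\
# example entries (hypothetical hosts, for illustration only)
10.0.0.1   gateway
10.0.0.2   mailhost mail
"""

def parse_hosts(text):
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            table[name] = addr                 # every alias maps to the address
    return table

hosts = parse_hosts(SAMPLE_HOSTS)
print(hosts["mail"])
```

Every new host meant a new line in this one file for the whole Internet, which is exactly the scaling problem (file size, download time, central-server load) that pushed the community toward the distributed lookups of DNS.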
Around 1993, a centralized set of DNS root servers was put together with database resources from AT&T, registration services from Network Solutions, and information services from General Atomics/CERFnet. This set of name-resolving servers, or 'nameservers', would provide top-level domain name resolution and IP address administration, and delegate authority over domain names to those responsible for the networks the domain names represent. The organization responsible for maintaining these services was called the Internet Network Information Center (InterNIC).
COMMERCIALIZATION OF THE INTERNET
COMMERCIALIZATION OF THE NAPs (Commercial Internet Exchanges)
Though common folklore holds that Vice President (then Senator) Al Gore took credit for the 'invention' of the Internet, the Internet existed before Al Gore was considering a career in politics. Gore was simply one of the co-authors of the High-Performance Computing Act of 1991 (Public Law 102-194), which opened the Internet for commercial use by establishing the National Research and Education Network (NREN). Thanks to Vice President Gore, today you not only have e-mail and web sites, you also have viruses, worms, BOTs and BOTnets, SPAM, pop-up ads, porn sites, and browser hijacking, along with a rich diversity of get-rich-quick fraud, all delivered conveniently to your mailbox daily.
By May of 1993, the National Science Foundation had written and released a solicitation to accommodate and promote the commercialization/privatization of the Internet (NSF93-52). This document mandated the creation of four Network Access Points (NAPs), which were sold via closed bid to the following providers:
| Provider | NAP | Location |
|---|---|---|
| Sprint | Sprint NAP / NY-NAP | Pennsauken, NJ |
| Pacific Bell | PacBell NAP | San Francisco, CA |
| Ameritech | AADS NAP | Chicago, IL |
| MFS | MAE-E NAP | Vienna, VA |
As a condition of sale, the commercial providers purchasing NAPs were required to:
- Connect to the other NAPs.
- Establish policies and fees for connectivity at their own NAP.
- Provide a Route Server.
- Provide a Routing Arbiter Database.
- Provide Network Management Services for managed equipment.
- Manage physical site access and security for Network Provider engineers.
- Provide for the upgrade and expansion of the NAP.
- Provide and maintain interconnects to the other three NAPs.
These companies, also referred to as the NAP Managers, used these exchanges to create public Commercial Internet Exchanges (CIXs) to sell connectivity to their networks and to exchange data to improve connectivity. The government had already built Federal Internet Exchanges (FIXes) in Maryland and California, maintained by NASA Ames.
The Vienna CIX was sold by the NSF via closed bid to Metropolitan Fiber Systems in late 1992. Metro Fiber renamed it the "Metropolitan Area Exchange" (MAE). Metro Fiber Systems was later assimilated by UUnet and installed as MAE-West; UUnet was in turn assimilated by WorldCom in September 1996. MAE Dallas, MAE Houston, and MAE Los Angeles were added later.
Advanced Network and Services (ANS) was originally formed by a partnership between MCI, IBM, Merit and the state of Michigan. ANS was later purchased by America Online in 1995, and in 1997 America Online traded ANS to WorldCom in exchange for the American and overseas CompuServe ventures. WorldCom merged with MCI in 1998, with MCI's Internet backbone being sold to Cable & Wireless, which is how C&W got into the Internet backbone business.
Work on the very-high-speed Backbone Network Service (vBNS) was started in 1995 by MCI and the NSF to provide SONET OC-3 high-speed connectivity to supercomputers at research and educational facilities, fostering the growth of the next group of Internet technologies, such as fiber optics, multimedia and other high-bandwidth services.
As Metro Fiber Systems (MFS) raised prices for connectivity at their MAEs, and MAE East and MAE West became ever more congested by the explosive growth of Internet traffic, private telecommunications companies began creating additional private access points to their networks called Network Access Points (NAPs). These NAPs became the new exchange points between telecommunications and Internet service providers, giving them greater control of the flow of data into and out of their networks. These NAPs are located all over the United States, but tend to be concentrated where there is a large amount of telecommunications equipment.
In 1993, the United States government awarded a private, for-profit organization called Network Solutions, Inc. (NSI) an exclusive license for domain name registration services in the .com, .net and .org top-level domains for a five-year period, giving Network Solutions a monopoly on registration in those top-level domains.
Over the course of a year's discussion (circa 1996-7) between the Internet Assigned Numbers Authority (IANA), the Internet Engineering Task Force (IETF), the National Science Foundation (NSF), the Federal Networking Council (FNC), and the European and Asian registries, a new authority called the American Registry for Internet Numbers (ARIN) was formed as a non-profit organization with authority to manage the IP address space in use in North and South America, the Caribbean and sub-Saharan Africa. ARIN began operations on December 22, 1997.
In October 1998, the US Government appointed a private organization called the Internet Corporation for Assigned Names and Numbers (ICANN) to oversee the opening of the domain name registration system to competing companies. This responsibility technically still resides with InterNIC; however, the InterNIC name has since been transferred to the United States Department of Commerce. Network Solutions still maintains the root DNS server equipment, and this equipment still resides on Network Solutions property. The U.S. Government expects that competition in domain name registration will provide the global Internet community with a number of benefits, including greater choice in services and prices.
The Modern Internet
Today, the 'modern' Internet is made up of several very large commercial and government-run telecommunications carriers whose networks span the globe, or serve an entire country. No single telecommunications carrier owns the Internet. There is no single point of control and there is no single place in which all Internet traffic flows. Indeed, the entire point of the design of the Internet and TCP/IP was to distribute the nodes and decentralize network control so that no single attack or natural disaster could disable the communications network.
This decentralization has continued as service providers need to pass ever more data while minimizing expenditures on equipment and cable plant.
The Internet, as structured in the United States, consists of large commercial telecommunications carriers forming a high-speed core internetwork, with smaller regional telephone companies and carriers providing the "last mile" connections from the core providers to residences, businesses and other organizations. This is primarily because the telephone and long distance telecommunications carriers were the first to have digital networks capable of carrying IP data.
Outside the United States, things get a little fuzzy. Some of the developed countries have nationalized their telecommunications systems and those nationalized (government owned) systems provide the Internet access. One benefit of this approach is that all services are integrated into one provider. The downside of this approach is that expansion only happens when there is enough tax revenue generated, so these networks tend to grow much more slowly than commercial networks.
Developing countries tend to have a variety of things going on. Some nations have commercial providers who work much the same way as they do elsewhere in the world. Others have a hodge-podge, anything-goes situation where anyone who has a sizeable network, needs additional connectivity, can afford it, and knows how to build it puts in Internet connections or private peering connections to whomever they can, wherever they can. This can be someone who knows how to build a satellite dish and keep it running in the middle of a jungle with no power or infrastructure, someone who knows how to hook up a T1 between two buildings by stringing it across clothesline, or someone who can set up radio transmitters and configure them to carry IP data.
The Last Mile
The last mile is a term used to refer to the physical communications path to individual households and commercial buildings, which was controlled by the local phone company. The phone company had a monopoly on that service up until anti-trust lawsuits broke up the American phone company, and cable TV and satellite providers entered the telephony industry. Today, many homes have higher bandwidth and greater speeds than many business websites. Each home served by cable TV-based Internet service has a 2-5 Mbps download capability, while most businesses have a single T1 running at 1.5 Mbps to serve all requests from every user connecting to them. This makes for a lopsided situation where the "speed" is where you need it least and the bottlenecks are where you need speed the most. Thus, many providers such as the cable TV providers are caching popular sites, thereby reducing load at the corporate sites and speeding the end user's access and experience.
The number of estimated Internet users is growing rapidly. At one point it was doubling every 8-9 months. To make the present-day Internet function, major carriers build multiple dedicated private peering points to exchange data between their networks and provide transit connectivity to Internet users outside their own networks. A growing number of new start-ups are building networks to serve more than just data traffic. These new providers seek to merge data, voice, video and distributed computing services into a single service offering.
Users get their access from cable television providers, cellular providers, WiFi hotspots and more.
All providers make a business of building out the Internet backbone and selling access to it. By upgrading the infrastructure of their own networks, they upgrade the capacity of the Internet, making money by providing access to smaller regional providers, telephone companies, universities and schools, businesses and other organizations. The regional providers still provide connectivity for public school systems, colleges and other public and private institutions, and are often funded by states. Still other companies connect either to these regional providers or directly to one or more of the backbone providers, and provide direct-dial access to the Internet for individual users. Such direct-dial companies are called Internet Service Providers, though the term is often extended to more than just dial-up providers. Earthlink, Netcom and dozens of others provide such access for home users. New technologies such as DSL and cable modems now allow private users ever-higher-speed access to the Internet, allowing local phone companies, cable television and satellite companies to cash in on the Internet.
As the Internet grows and matures, other services are being offered. Today, it is possible to make long distance calls over the Internet for free, and video conferencing is a reality for anyone with a computer and the right equipment. Other applications such as multicasting (sending data from a single source to multiple locations simultaneously) and Voice over IP guarantee that the Internet will continue to expand and merge with other technologies and media.
Data, Voice and Video Convergence
Indeed, the biggest telecommunications carriers have already merged, or are in the process of merging, their separate voice and data networks. The first step in this process is usually to deploy Multi-Protocol Label Switching (MPLS), a network protocol that allows the network provider to create end-to-end tunnels over which any type of data can flow and which, with the addition of DiffServ, can manage quality of service at the same level as ATM or other more expensive layer 2 technologies. Carriers are now building Voice over IP networks carried on the new MPLS networks, truly merging data and voice in the same network and using a single suite of technologies and protocols.
The benefit of convergence is that money is spent on a single network, instead of disparate IP data, voice, video and dial-up networks. This produces a considerable cost savings and allows the provider to target money expenditures at one network's bottlenecks and problems.